| id | url | title | text | topic | section | sublist |
|---|---|---|---|---|---|---|
5676 | https://en.wikipedia.org/wiki/Californium | Californium | Californium is a synthetic chemical element; it has symbol Cf and atomic number 98. It was first synthesized in 1950 at Lawrence Berkeley National Laboratory (then the University of California Radiation Laboratory) by bombarding curium with alpha particles (helium-4 ions). It is an actinide element, the sixth transuranium element to be synthesized, and has the second-highest atomic mass of all elements that have been produced in amounts large enough to see with the naked eye (after einsteinium). It was named after the university and the U.S. state of California.
Two crystalline forms exist at normal pressure: one below and one above a transition temperature of roughly 600–800 °C. A third form exists at high pressure. Californium slowly tarnishes in air at room temperature. Californium compounds are dominated by the +3 oxidation state. The most stable of californium's twenty known isotopes is californium-251, with a half-life of 898 years. This short half-life means the element is not found in significant quantities in the Earth's crust. Californium-252, with a half-life of about 2.645 years, is the most common isotope used and is produced at Oak Ridge National Laboratory (ORNL) in the United States and at the Research Institute of Atomic Reactors in Russia.
Californium is one of the few transuranium elements with practical uses. Most of these applications exploit the fact that certain isotopes of californium emit neutrons. For example, californium can be used to help start up nuclear reactors, and it is used as a source of neutrons when studying materials using neutron diffraction and neutron spectroscopy. It can also be used in nuclear synthesis of higher mass elements; oganesson (element 118) was synthesized by bombarding californium-249 atoms with calcium-48 ions. Users of californium must take into account radiological concerns and the element's ability to disrupt the formation of red blood cells by bioaccumulating in skeletal tissue.
Characteristics
Physical properties
Californium is a silvery-white actinide metal with a melting point of about 900 °C and an estimated boiling point of about 1470 °C. The pure metal is malleable and is easily cut with a knife. Californium metal starts to vaporize above about 300 °C when exposed to a vacuum. Below about 51 K, californium metal is either ferromagnetic or ferrimagnetic (it acts like a magnet); between 48 and 66 K it is antiferromagnetic (an intermediate state); and above about 160 K it is paramagnetic (external magnetic fields can make it magnetic). It forms alloys with lanthanide metals, but little is known about the resulting materials.
The element has two crystalline forms at standard atmospheric pressure: a double-hexagonal close-packed form dubbed alpha (α) and a face-centered cubic form designated beta (β). The α form exists below 600–800 °C with a density of 15.10 g/cm3 and the β form exists above 600–800 °C with a density of 8.74 g/cm3. At 48 GPa of pressure the β form changes into an orthorhombic crystal system due to delocalization of the atom's 5f electrons, which frees them to bond.
The bulk modulus of a material is a measure of its resistance to uniform pressure. Californium's bulk modulus is about 50 GPa, which is similar to that of trivalent lanthanide metals but smaller than that of more familiar metals such as aluminium (70 GPa).
Chemical properties and compounds
Californium exhibits oxidation states of +4, +3, or +2. It typically forms eight or nine bonds to surrounding atoms or ions. Its chemical properties are predicted to be similar to those of other primarily trivalent actinide elements and of dysprosium, the lanthanide directly above californium in the periodic table. Compounds in the +4 oxidation state are strong oxidizing agents and those in the +2 state are strong reducing agents.
The element slowly tarnishes in air at room temperature, with the rate increasing when moisture is added. Californium reacts when heated with hydrogen, nitrogen, or a chalcogen (oxygen family element); reactions with dry hydrogen and aqueous mineral acids are rapid.
Californium is only water-soluble as the californium(III) cation. Attempts to reduce or oxidize the +3 ion in solution have failed. The element forms a water-soluble chloride, nitrate, perchlorate, and sulfate and is precipitated as a fluoride, oxalate, or hydroxide. Californium is the heaviest actinide to exhibit covalent properties, as is observed in the californium borate.
Isotopes
Twenty isotopes of californium are known (mass numbers ranging from 237 to 256); the most stable are californium-251 with a half-life of 898 years, californium-249 with a half-life of 351 years, californium-250 at 13.08 years, and californium-252 at 2.645 years. All other isotopes have half-lives shorter than a year, and most of those are shorter than 20 minutes.
Californium-249 is formed by the beta decay of berkelium-249, and most other californium isotopes are made by subjecting berkelium to intense neutron radiation in a nuclear reactor. Though californium-251 has the longest half-life, its production yield is only 10% due to its tendency to collect neutrons (high neutron capture) and its tendency to interact with other particles (high neutron cross section).
Californium-252 is a very strong neutron emitter, which makes it extremely radioactive and harmful. Californium-252 undergoes alpha decay to curium-248 96.9% of the time; the other 3.1% of decays are spontaneous fission. One microgram (μg) of californium-252 emits 2.3 million neutrons per second, an average of 3.7 neutrons per spontaneous fission. Most other isotopes of californium alpha decay to isotopes of curium (atomic number 96).
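These figures are mutually consistent. As a rough check, a minimal calculation (a sketch assuming only the values quoted above: a 2.645-year half-life, a 3.1% spontaneous-fission branch, and 3.7 neutrons per fission) reproduces the quoted emission rate per microgram:

```python
import math

# Values taken from the surrounding text (californium-252)
HALF_LIFE_S = 2.645 * 365.25 * 24 * 3600   # half-life in seconds
SF_BRANCH = 0.031                          # fraction of decays that are spontaneous fission
NEUTRONS_PER_FISSION = 3.7                 # average neutrons emitted per spontaneous fission
AVOGADRO = 6.022e23
MOLAR_MASS_G = 252.0                       # grams per mole of Cf-252

atoms_per_microgram = 1e-6 / MOLAR_MASS_G * AVOGADRO      # ~2.4e15 atoms
decay_constant = math.log(2) / HALF_LIFE_S                # ~8.3e-9 per second
decays_per_second = atoms_per_microgram * decay_constant  # total activity per microgram

neutrons_per_second = decays_per_second * SF_BRANCH * NEUTRONS_PER_FISSION
print(f"{neutrons_per_second:.2e} neutrons per second per microgram")
# ~2.3e6, matching the 2.3 million neutrons per second quoted above
```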
History
Californium was first made at University of California Radiation Laboratory, Berkeley, by physics researchers Stanley Gerald Thompson, Kenneth Street Jr., Albert Ghiorso, and Glenn T. Seaborg, about February 9, 1950. It was the sixth transuranium element to be discovered; the team announced its discovery on March 17, 1950.
To produce californium, a microgram-sized target of curium-242 (²⁴²Cm) was bombarded with 35 MeV alpha particles (⁴He) in the cyclotron at Berkeley, which produced californium-245 (²⁴⁵Cf) plus one free neutron (¹n):
²⁴²Cm + ⁴He → ²⁴⁵Cf + ¹n
To identify and separate out the element, ion exchange and adsorption methods were used. Only about 5,000 atoms of californium were produced in this experiment, and these atoms had a half-life of 44 minutes.
The discoverers named the new element after the university and the state. This was a break from the convention used for elements 95 to 97, which drew inspiration from how the elements directly above them in the periodic table were named. However, the element directly above element 98 in the periodic table, dysprosium, has a name that means "hard to get at", so the researchers decided to set aside the informal naming convention. They added that "the best we can do is to point out [that] ... searchers a century ago found it difficult to get to California".
Weighable amounts of californium were first produced by the irradiation of plutonium targets at the Materials Testing Reactor at the National Reactor Testing Station in eastern Idaho; these findings were reported in 1954. The high spontaneous fission rate of californium-252 was observed in these samples. The first experiment with californium in concentrated form occurred in 1958. The isotopes californium-249 to californium-252 were isolated that same year from a sample of plutonium-239 that had been irradiated with neutrons in a nuclear reactor for five years. Two years later, in 1960, Burris Cunningham and James Wallman of the Lawrence Radiation Laboratory of the University of California created the first californium compounds—californium trichloride, californium(III) oxychloride, and californium oxide—by treating californium with steam and hydrochloric acid.
The High Flux Isotope Reactor (HFIR) at ORNL in Oak Ridge, Tennessee, started producing small batches of californium in the 1960s. By 1995, HFIR nominally produced about 500 milligrams of californium annually. Plutonium supplied by the United Kingdom to the United States under the 1958 US–UK Mutual Defence Agreement was used for making californium.
The Atomic Energy Commission sold californium-252 to industrial and academic customers in the early 1970s for $10 per microgram, and shipments of californium-252 were made to customers each year from 1970 to 1990. Californium metal was first prepared in 1974 by Haire and Baybarz, who reduced californium(III) oxide with lanthanum metal to obtain microgram amounts of sub-micrometer-thick films.
Occurrence
Traces of californium can be found near facilities that use the element in mineral prospecting and in medical treatments. The element is fairly insoluble in water, but it adheres well to ordinary soil, and concentrations of it in the soil can be 500 times higher than in the water surrounding the soil particles.
Nuclear fallout from atmospheric nuclear weapons testing prior to 1980 contributed a small amount of californium to the environment. Californium-249, -252, -253, and -254 have been observed in the radioactive dust collected from the air after a nuclear explosion. Californium is not a major radionuclide at United States Department of Energy legacy sites since it was not produced in large quantities.
Californium was once believed to be produced in supernovas, as the decay of their light curves seemed to match the 60-day half-life of californium-254. However, subsequent studies failed to demonstrate any californium spectra, and supernova light curves are now thought to follow the decay of nickel-56.
The transuranic elements americium to fermium, including californium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
Spectral lines of californium, along with those of several other non-primordial elements, were detected in Przybylski's Star in 2008.
Production
Californium is produced in nuclear reactors and particle accelerators. Californium-250 is made by bombarding berkelium-249 (²⁴⁹Bk) with neutrons, forming berkelium-250 (²⁵⁰Bk) via neutron capture (n,γ); this in turn quickly beta decays (β−) to californium-250 (²⁵⁰Cf) in the following reaction:
²⁴⁹Bk (n,γ) → ²⁵⁰Bk → ²⁵⁰Cf + β−
Bombardment of californium-250 with neutrons produces californium-251 and californium-252.
Prolonged irradiation of americium, curium, and plutonium with neutrons produces milligram amounts of californium-252 and microgram amounts of californium-249. As of 2006, curium isotopes 244 to 248 are irradiated by neutrons in special reactors to produce mainly californium-252, with lesser amounts of the isotopes 249 to 255.
Microgram quantities of californium-252 are available for commercial use through the U.S. Nuclear Regulatory Commission. Only two sites produce californium-252: Oak Ridge National Laboratory in the United States and the Research Institute of Atomic Reactors in Dimitrovgrad, Russia. As of 2003, the two sites produce 0.25 grams and 0.025 grams of californium-252 per year, respectively.
Three californium isotopes with significant half-lives are produced, requiring a total of 15 neutron captures by uranium-238 without nuclear fission or alpha decay occurring during the process. Californium-253 is at the end of a production chain that starts with uranium-238 and includes several isotopes of plutonium, americium, curium, and berkelium, as well as the californium isotopes 249 to 253.
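The 15-capture figure follows from simple nucleon bookkeeping: each neutron capture raises the mass number by one, while each beta decay raises the atomic number by one, so reaching californium-253 (Z = 98, A = 253) from uranium-238 (Z = 92, A = 238) requires 253 - 238 = 15 captures and 98 - 92 = 6 beta decays. A minimal sketch of that bookkeeping (not a full decay scheme):

```python
# Nucleon bookkeeping for the U-238 -> Cf-253 production chain described above.
# Neutron capture adds 1 to the mass number A; beta decay adds 1 to the atomic
# number Z while leaving A unchanged.
start = {"element": "U", "Z": 92, "A": 238}
end = {"element": "Cf", "Z": 98, "A": 253}

neutron_captures = end["A"] - start["A"]   # mass number must rise by 15
beta_decays = end["Z"] - start["Z"]        # atomic number must rise by 6

print(f"{neutron_captures} neutron captures and {beta_decays} beta decays")
# -> 15 neutron captures and 6 beta decays, consistent with the text
```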
Applications
Californium-252 has a number of specialized uses as a strong neutron emitter; each microgram produces 139 million neutrons per minute. This property makes it useful as a start-up neutron source for some nuclear reactors and as a portable (non-reactor-based) neutron source for neutron activation analysis to detect trace amounts of elements in samples. Neutrons from californium are used as a treatment for certain cervical and brain cancers where other radiation therapy is ineffective. It has been used in educational applications since 1969, when the Georgia Institute of Technology received a loan of 119 μg of californium-252 from the Savannah River Site. It is also used in on-line elemental coal analyzers and bulk material analyzers in the coal and cement industries.
Neutron penetration into materials makes californium useful in detection instruments such as fuel rod scanners; in neutron radiography of aircraft and weapons components to detect corrosion, bad welds, cracks and trapped moisture; and in portable metal detectors. Californium-252 is used in neutron moisture gauges to find water and petroleum layers in oil wells, as a portable neutron source for on-the-spot analysis in gold and silver prospecting, and to detect ground water movement. The main uses of californium-252 in 1982 were reactor start-up (48.3%), fuel rod scanning (25.3%), and activation analysis (19.4%). By 1994, most californium-252 was used in neutron radiography (77.4%), with fuel rod scanning (12.1%) and reactor start-up (6.9%) as important but secondary uses. In 2021, fast neutrons from californium-252 were used for wireless data transmission.
Californium-251 has a very small calculated critical mass of about 5 kg, high lethality, and a relatively short period of toxic environmental irradiation. The low critical mass of californium led to some exaggerated claims about possible uses for the element.
In October 2006, researchers announced that three atoms of oganesson (element 118) had been identified at the Joint Institute for Nuclear Research in Dubna, Russia, from bombarding californium-249 with calcium-48, making it the heaviest element ever made. The target contained about 10 mg of californium-249 deposited on a titanium foil of 32 cm2 area. Californium has also been used to produce other transuranic elements; for example, lawrencium was first synthesized in 1961 by bombarding californium with boron nuclei.
Precautions
Californium that bioaccumulates in skeletal tissue releases radiation that disrupts the body's ability to form red blood cells. The element plays no natural biological role in any organism due to its intense radioactivity and low concentration in the environment.
Californium can enter the body by ingesting contaminated food or drink or by breathing air with suspended particles of the element. Once in the body, only 0.05% of the californium will reach the bloodstream. About 65% of that californium will be deposited in the skeleton, 25% in the liver, and the rest in other organs or excreted, mainly in urine. Half of the californium deposited in the skeleton and the liver is gone in 50 and 20 years, respectively. Californium in the skeleton adheres to bone surfaces before slowly migrating throughout the bone.
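As an illustrative model only (a sketch assuming simple exponential clearance with the biological half-lives quoted above, and ignoring radioactive decay of the isotope itself), the retained fraction in each organ can be estimated as follows:

```python
def retained_fraction(years: float, biological_half_life_years: float) -> float:
    """Fraction of deposited californium still present after `years`,
    assuming simple exponential clearance (radioactive decay ignored)."""
    return 0.5 ** (years / biological_half_life_years)

# Biological half-lives quoted in the text: 50 years (skeleton), 20 years (liver)
for organ, half_life in [("skeleton", 50.0), ("liver", 20.0)]:
    print(organ, round(retained_fraction(25.0, half_life), 2))
# After 25 years, roughly 0.71 of the skeletal deposit and 0.42 of the liver
# deposit would remain under this simplified model.
```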
The element is most dangerous if taken into the body. In addition, californium-249 and californium-251 can cause tissue damage externally, through gamma ray emission. Ionizing radiation emitted by californium on bone and in the liver can cause cancer.
| Physical sciences | Chemical elements_2 | null |
5691 | https://en.wikipedia.org/wiki/Codex | Codex | The codex (plural: codices) was the historical ancestor format of the modern book. Technically, the vast majority of modern books use the codex format of a stack of pages bound at one edge, along the side of the text. But the term "codex" is now reserved for older manuscript books, which mostly used sheets of vellum, parchment, or papyrus, rather than paper.
By convention, the term is also used for any Aztec codex (although the earlier examples do not actually use the codex format), Maya codices and other pre-Columbian manuscripts. Library practices have led to many European manuscripts having "codex" as part of their usual name, as with the Codex Gigas, while most do not.
Modern books are divided into paperback (or softback) and those bound with stiff boards, called hardbacks. Elaborate historical bindings are called treasure bindings. At least in the Western world, the main alternative to the paged codex format for a long document was the continuous scroll, which was the dominant form of document in the ancient world. Some codices are continuously folded like a concertina, in particular the Maya codices and Aztec codices, which are actually long sheets of paper or animal skin folded into pages. In Japan, concertina-style codices made of paper and called orihon were developed during the Heian period (794–1185).
The ancient Romans developed the form from wax tablets. The gradual replacement of the scroll by the codex has been called the most important advance in book making before the invention of the printing press. The codex transformed the shape of the book itself, and offered a form that has lasted ever since. The spread of the codex is often associated with the rise of Christianity, which early on adopted the format for the Bible. First described in the 1st century of the Common Era, when the Roman poet Martial praised its convenient use, the codex achieved numerical parity with the scroll around 300 CE, and had completely replaced it throughout what was by then a Christianized Greco-Roman world by the 6th century.
Etymology and origins
The word codex comes from the Latin word caudex, meaning "trunk of a tree", "block of wood" or "book". The codex began to replace the scroll almost as soon as it was invented, although new finds add three centuries to its history (see below). In Egypt, by the fifth century, the codex outnumbered the scroll by ten to one based on surviving examples. By the sixth century, the scroll had almost vanished as a medium for literature. The change from rolls to codices roughly coincides with the transition from papyrus to parchment as the preferred writing material, but the two developments are unconnected. In fact, any combination of codices and scrolls with papyrus and parchment is technically feasible and common in the historical record.
Technically, even modern notebooks and paperbacks are codices, but publishers and scholars reserve the term for manuscript (hand-written) books produced from late antiquity until the Middle Ages. The scholarly study of these manuscripts is sometimes called codicology. The study of ancient documents in general is called paleography.
The codex provided considerable advantages over other book formats, primarily its compactness, sturdiness, economic use of materials by using both sides (recto and verso), and ease of reference (a codex accommodates random access, as opposed to a scroll, which uses sequential access).
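The contrast maps onto a familiar programming distinction between indexed and sequential access; the sketch below (with invented page contents, not from the source) makes the analogy concrete:

```python
# Illustrative contrast between codex-style random access and scroll-style
# sequential access. Page contents and numbers are invented for the example.
pages = [f"text of page {i}" for i in range(1, 201)]

def read_codex(pages, page_number):
    """Open the book directly at any page (constant-time indexing)."""
    return pages[page_number - 1]

def read_scroll(pages, page_number):
    """Unroll from the beginning until the wanted column/page is reached."""
    for position, content in enumerate(pages, start=1):
        if position == page_number:
            return content

assert read_codex(pages, 150) == read_scroll(pages, 150)
```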
History
The Romans used precursors made of reusable wax-covered tablets of wood for taking notes and other informal writings. Two ancient polyptychs, a pentaptych and an octoptych excavated at Herculaneum, used a unique connecting system that presages later sewing on of thongs or cords. The first evidence of the use of papyrus in codex form comes from the Ptolemaic period in Egypt, as a find at the University of Graz shows.
Julius Caesar may have been the first Roman to reduce scrolls to bound pages in the form of a note-book, possibly even as a papyrus codex. At the turn of the 1st century AD, a kind of folded parchment notebook called pugillares membranei in Latin became commonly used for writing in the Roman Empire. Theodore Cressy Skeat theorized that this form of notebook was invented in Rome and then spread rapidly to the Near East.
Codices are described in certain works by the Classical Latin poet, Martial. He wrote a series of five couplets meant to accompany gifts of literature that Romans exchanged during the festival of Saturnalia. Three of these books are specifically described by Martial as being in the form of a codex; the poet praises the compendiousness of the form (as opposed to the scroll), as well as the convenience with which such a book can be read on a journey. In another poem by Martial, the poet advertises a new edition of his works, specifically noting that it is produced as a codex, taking less space than a scroll and being more comfortable to hold in one hand. According to Theodore Cressy Skeat, this might be the first recorded known case of an entire edition of a literary work (not just a single copy) being published in codex form, though it was likely an isolated case and was not a common practice until a much later time.
In his discussion of one of the earliest parchment codices to survive from Oxyrhynchus in Egypt, Eric Turner seems to challenge Skeat's notion when stating, "its mere existence is evidence that this book form had a prehistory", and that "early experiments with this book form may well have taken place outside of Egypt." Early codices of parchment or papyrus appear to have been widely used as personal notebooks, for instance in recording copies of letters sent (Cicero Fam. 9.26.1). Early codices were not always cohesive. They often contained multiple languages, various topics and even multiple authors. "Such codices formed libraries in their own right." The parchment notebook pages were "more durable, and could withstand being folded and stitched to other sheets". Parchments whose writing was no longer needed were commonly washed or scraped for re-use, creating a palimpsest; the erased text, which can often be recovered, is older and usually more interesting than the newer text which replaced it. Consequently, writings in a codex were often considered informal and impermanent. Parchment (animal skin) was expensive, and therefore it was used primarily by the wealthy and powerful, who were also able to pay for textual design and color. "Official documents and deluxe manuscripts [in the late Middle Ages] were written in gold and silver ink on parchment...dyed or painted with costly purple pigments as an expression of imperial power and wealth."
As early as the early 2nd century, there is evidence that a codex—usually of papyrus—was the preferred format among Christians. In the library of the Villa of the Papyri, Herculaneum (buried in AD 79), all the texts (of Greek literature) are scrolls (see Herculaneum papyri). However, in the Nag Hammadi library, hidden about AD 390, all texts (Gnostic) are codices. Despite this comparison, a fragment of a non-Christian parchment codex of Demosthenes' De Falsa Legatione from Oxyrhynchus in Egypt demonstrates that the surviving evidence is insufficient to conclude whether Christians played a major or central role in the development of early codices—or if they simply adopted the format to distinguish themselves from Jews.
The earliest surviving fragments from codices come from Egypt, and are variously dated (always tentatively) towards the end of the 1st century or in the first half of the 2nd. This group includes the Rylands Library Papyrus P52, containing part of St John's Gospel, and perhaps dating from between 125 and 160.
In Western culture, the codex gradually replaced the scroll. Between the 4th century, when the codex gained wide acceptance, and the Carolingian Renaissance in the 8th century, many works that were not converted from scroll to codex were lost. The codex improved on the scroll in several ways. It could be opened flat at any page for easier reading, pages could be written on both front and back (recto and verso), and the protection of durable covers made it more compact and easier to transport.
The ancients stored codices with spines facing inward, and not always vertically. The spine could be used for the incipit, before the concept of a proper title developed in medieval times. Though most early codices were made of papyrus, the material was fragile and supplied from Egypt, the only place where papyrus grew. The more durable parchment and vellum gained favor, despite the cost.
The codices of pre-Columbian Mesoamerica (Mexico and Central America) had a similar appearance when closed to the European codex, but were instead made with long folded strips of either fig bark (amatl) or plant fibers, often with a layer of whitewash applied before writing. New World codices were written as late as the 16th century (see Maya codices and Aztec codices). Those written before the Spanish conquests seem all to have been single long sheets folded concertina-style, sometimes written on both sides of the amatl paper. There are significant codices produced in the colonial era, with pictorial and alphabetic texts in Spanish or an indigenous language such as Nahuatl.
In East Asia, the scroll remained standard for far longer than in the Mediterranean world. There were intermediate stages, such as scrolls folded concertina-style and pasted together at the back and books that were printed only on one side of the paper. This replaced traditional Chinese writing mediums such as bamboo and wooden slips, as well as silk and paper scrolls. The evolution of the codex in China began with folded-leaf pamphlets in the 9th century, during the late Tang dynasty (618–907), improved by the 'butterfly' bindings of the Song dynasty (960–1279), the wrapped back binding of the Yuan dynasty (1271–1368), the stitched binding of the Ming (1368–1644) and Qing dynasties (1644–1912), and finally the adoption of Western-style bookbinding in the 20th century. The initial phase of this evolution, the accordion-folded palm-leaf-style book, most likely came from India and was introduced to China via Buddhist missionaries and scriptures.
Judaism still retains the Torah scroll, at least for ceremonial use.
From scrolls to codices
Among the experiments of earlier centuries, scrolls were sometimes unrolled horizontally, as a succession of columns. The Dead Sea Scrolls are a famous example of this format, and it is the standard format for Jewish Torah scrolls made to this day for ritual use. This made it possible to fold the scroll as an accordion. The next evolutionary step was to cut the folios and sew and glue them at their centers, making it easier to use the papyrus or vellum recto-verso as with a modern book.
Traditional bookbinders would call one of these assembled, trimmed and bound folios (that is, the "pages" of the book as a whole, comprising the front matter and contents) a codex in contradistinction to the cover or case, producing the format of book now colloquially known as a hardcover. In the hardcover bookbinding process, the procedure of binding the codex is very different to that of producing and attaching the case.
Preparation
The first stage in creating a codex is to prepare the animal skin. The skin is washed with water and lime but not together. The skin is soaked in the lime for a couple of days. The hair is removed, and the skin is dried by attaching it to a frame, called a herse. The parchment maker attaches the skin at points around the circumference. The skin attaches to the herse by cords. To prevent it from being torn, the maker wraps the area of the skin attached to the cord around a pebble called a pippin. After completing that, the maker uses a crescent shaped knife called a lunarium or lunellum to remove any remaining hairs. Once the skin completely dries, the maker gives it a deep clean and processes it into sheets. The number of sheets from a piece of skin depends on the size of the skin and the final product dimensions. For example, the average calfskin can provide three-and-a-half medium sheets of writing material, which can be doubled when they are folded into two conjoint leaves, also known as a bifolium. Historians have found evidence of manuscripts in which the scribe wrote down the medieval instructions now followed by modern membrane makers. Defects can often be found in the membrane, whether they are from the original animal, human error during the preparation period, or from when the animal was killed. Defects can also appear during the writing process. Unless the manuscript is kept in perfect condition, defects can also appear later in its life.
Preparation of pages for writing
Firstly, the membrane must be prepared. The first step is to set up the quires. The quire is a group of several sheets put together. Raymond Clemens and Timothy Graham point out, in "Introduction to Manuscript Studies", that "the quire was the scribe's basic writing unit throughout the Middle Ages":
Pricking is the process of making holes in a sheet of parchment (or membrane) in preparation for its ruling. The lines were then made by ruling between the prick marks. ... Ruling is the process of entering ruled lines on the page to serve as a guide for entering text. Most manuscripts were ruled with horizontal lines that served as the baselines on which the text was entered and with vertical bounding lines that marked the boundaries of the columns.
Forming quire
From the Carolingian period to the end of the Middle Ages, different styles of folding the quire came about. For example, in continental Europe throughout the Middle Ages, the quire was put into a system in which each side folded on to the same style. The hair side met the hair side and the flesh side to the flesh side. This was not the same style used in the British Isles, where the membrane was folded so that it turned out an eight-leaf quire, with single leaves in the third and sixth positions. The next stage was tacking the quire. Tacking is when the scribe would hold together the leaves in quire with thread. Once threaded together, the scribe would then sew a line of parchment up the "spine" of the manuscript to protect the tacking.
Materials
The materials a codex is made from are known as its support; they include papyrus, parchment (sometimes referred to as membrane or vellum), and paper. Codices are written and drawn on with metals, pigments, and ink. The quality, size, and choice of support determine the status of a codex. Papyrus is found only in late antiquity and the Early Middle Ages. Codices intended for display were bound with more durable materials than vellum. Parchment varied widely due to animal species and finish, and identification of the animals used to make it has only begun to be studied in the 21st century. How manufacturing influenced the final products, technique, and style is little understood, though changes in style are underpinned more by variation in technique. Before the 14th and 15th centuries, paper was expensive, and its use may mark off a deluxe copy.
Structure
The structure of a codex includes its size, its format/ordinatio, and its quires or gatherings (consisting of sheets folded a number of times, often twice to form a bifolio), as well as its sewing, bookbinding, and rebinding. A quire consisted of a number of folded sheets inserted into one another: at least three, but most commonly four bifolia, that is, eight leaves and sixteen pages (Latin quaternio or Greek tetradion, terms which became synonyms for quires). Unless an exemplar (the text to be copied) was copied exactly, the format differed. In preparation for writing codices, ruling patterns were used that determined the layout of each page. Holes were pricked with a spiked lead wheel and a circle. Ruling was then applied separately on each page or once through the top folio. Ownership markings, decorations, and illumination are also part of a codex's structure; they are specific to the scriptoria, or any production center, and to libraries of codices.
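A small sketch of the arithmetic behind a quire (the four-bifolia case described above; the leaf and page labels are illustrative, not a cataloguing convention):

```python
# A quire built from folded bifolia: each bifolium is one sheet folded once,
# giving two leaves, and each leaf carries a recto and a verso page.
def quire_layout(bifolia: int = 4):
    leaves = bifolia * 2   # 4 bifolia -> 8 leaves
    pages = leaves * 2     # each leaf has a recto and a verso -> 16 pages
    # Illustrative leaf/side labels in reading order: 1r, 1v, 2r, 2v, ...
    labels = [f"{leaf}{side}" for leaf in range(1, leaves + 1) for side in ("r", "v")]
    return leaves, pages, labels

leaves, pages, labels = quire_layout()
print(leaves, pages)   # 8 leaves, 16 pages: a quaternio
print(labels[:4])      # ['1r', '1v', '2r', '2v']
```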
Pages
Watermarks may provide dates, although often only approximate ones, for when the copying occurred. The layout (the size of the margins and the number of lines) was then determined. There may be textual articulations, running heads, openings, chapters, and paragraphs. Space was reserved for illustrations and decorated guide letters. The apparatus of books for scholars became more elaborate during the 13th and 14th centuries, when chapter, verse, and page numbering, marginalia finding guides, indexes, glossaries, and tables of contents were developed.
The libraire
By a close examination of the physical attributes of a codex, it is sometimes possible to match up long-separated elements originally from the same book. In 13th-century book publishing, due to secularization, stationers or libraires emerged. They would receive commissions for texts, which they would contract out to scribes, illustrators, and binders, to whom they supplied materials. Because of the systematic format the libraire used for assembly, the structure can be used to reconstruct the original order of a manuscript. However, complications can arise in the study of a codex. Manuscripts were frequently rebound, and this resulted in a particular codex incorporating works of different dates and origins, and thus different internal structures. Additionally, a binder could alter or unify these structures to ensure a better fit for the new binding. Completed quires or books of quires might constitute independent book units (booklets), which could be returned to the stationer or combined with other texts to make anthologies or miscellanies. Exemplars were sometimes divided into quires for simultaneous copying and loaned out to students for study. To facilitate this, catchwords were used: a word at the end of a page giving the first word of the following page.
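Catchwords make the intended sequence mechanically recoverable, since each unit ends with the opening word of the unit that should follow it. A minimal sketch of reconstructing an order from catchwords (the quire names and Latin words are invented for illustration):

```python
# Reconstruct the order of scattered quires using catchwords: each quire ends
# with the first word of the quire that should follow it. Data is invented.
quires = {
    "A": {"first_word": "regnum", "catchword": "gloria"},
    "B": {"first_word": "gloria", "catchword": "amen"},
    "C": {"first_word": "deus",   "catchword": "regnum"},
    "D": {"first_word": "amen",   "catchword": None},   # final quire: no catchword
}

# The opening quire is the one whose first word is no other quire's catchword.
catchwords = {q["catchword"] for q in quires.values()}
current = next(name for name, q in quires.items() if q["first_word"] not in catchwords)

order = [current]
while quires[current]["catchword"] is not None:
    target = quires[current]["catchword"]
    current = next(name for name, q in quires.items() if q["first_word"] == target)
    order.append(current)

print(order)  # ['C', 'A', 'B', 'D']
```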
| Technology | Printing | null |
5702 | https://en.wikipedia.org/wiki/Channel%20Tunnel | Channel Tunnel | The Channel Tunnel (French: Tunnel sous la Manche), sometimes referred to by the portmanteau Chunnel, is a 50.46 km (31.35 mi) undersea railway tunnel, opened in 1994, that connects Folkestone (Kent, England) with Coquelles (Pas-de-Calais, France) beneath the English Channel at the Strait of Dover. It is the only fixed link between the island of Great Britain and the European mainland.
At its lowest point, it is 75 m (246 ft) below the sea bed and 115 m (377 ft) below sea level. At 37.9 km (23.5 mi), it has the longest underwater section of any tunnel in the world, and it is the third-longest railway tunnel in the world. The speed limit for trains through the tunnel is 160 km/h (100 mph). The tunnel is owned and operated by Getlink, formerly Groupe Eurotunnel.
The tunnel carries high-speed Eurostar passenger trains, LeShuttle services for road vehicles and freight trains. It connects end-to-end with high-speed railway lines: the LGV Nord in France and High Speed 1 in England. In 2017, rail services carried 10.3 million passengers and 1.22 million tonnes of freight, and the Shuttle carried 10.4 million passengers, 2.6 million cars, 51,000 coaches, and 1.6 million lorries (equivalent to 21.3 million tonnes of freight), compared with 11.7 million passengers, 2.6 million lorries and 2.2 million cars by sea through the Port of Dover.
Plans to build a cross-Channel tunnel were proposed as early as 1802, but British political and media criticism motivated by fears of compromising national security had disrupted attempts to build one. An early unsuccessful attempt was made in the late 19th century, on the English side, "in the hope of forcing the hand of the English Government". The eventual successful project, organised by Eurotunnel, began construction in 1988 and opened in 1994. Estimated to cost £5.5 billion in 1985, it was at the time the most expensive construction project ever proposed. The cost finally amounted to £9 billion.
Since its opening, the tunnel has experienced occasional mechanical problems. Both fires and cold weather have temporarily disrupted its operation. Since at least 1997, aggregations of migrants around Calais seeking entry to the United Kingdom, such as through the tunnel, have prompted deterrence and countermeasures.
History
Earlier proposals
In 1802, Albert Mathieu-Favier, a French mining engineer, proposed a tunnel under the English Channel, with illumination from oil lamps, horse-drawn coaches, and an artificial island positioned mid-Channel for changing horses. His design envisaged a bored two-level tunnel with the top tunnel used for transport and the bottom one for groundwater flows.
In 1839, Aimé Thomé de Gamond, a Frenchman, performed the first geological and hydrographical surveys on the Channel between Calais and Dover. He explored several schemes and, in 1856, presented a proposal to Napoleon III for a mined railway tunnel from Cap Gris-Nez to East Wear Point with a port/airshaft on the Varne sandbank at a cost of 170 million francs, or less than £7 million.
In 1865, a deputation led by George Ward Hunt proposed the idea of a tunnel to the Chancellor of the Exchequer of the day, William Ewart Gladstone.
In 1866, Henry Marc Brunel made a survey of the floor of the Strait of Dover. By his results, he proved that the floor was composed of chalk, like the adjoining cliffs, and thus a tunnel was feasible. For this survey, he invented the gravity corer, which is still used in geology.
Around 1866, William Low and Sir John Hawkshaw promoted tunnel ideas, but apart from preliminary geological studies, none were implemented.
An official Anglo-French protocol was established in 1876 for a cross-Channel railway tunnel.
In 1881, British railway entrepreneur Sir Edward Watkin and Alexandre Lavalley, a French Suez Canal contractor, were in the Anglo-French Submarine Railway Company that conducted exploratory work on both sides of the Channel. From June 1882 to March 1883, the British tunnel boring machine bored a pilot tunnel through the chalk, while Lavalley used a similar machine to drill from Sangatte on the French side. However, the cross-Channel tunnel project was abandoned in 1883, despite this success, after fears raised by the British military that an underwater tunnel might be used as an invasion route. Nevertheless, in 1883, this TBM was used to bore a railway ventilation tunnel between Birkenhead and Liverpool, England, through sandstone under the Mersey River. These early works were encountered more than a century later during the TransManche Link (TML) project.
A 1907 film, Tunnelling the English Channel by pioneer filmmaker Georges Méliès, depicts King Edward VII and President Armand Fallières dreaming of building a tunnel under the English Channel.
In 1919, during the Paris Peace Conference, British prime minister David Lloyd George repeatedly brought up the idea of a Channel tunnel as a way of reassuring France about British willingness to defend against another German attack. The French did not take the idea seriously, and nothing came of the proposal.
In the 1920s, Winston Churchill advocated for the Channel Tunnel, using that exact name in his essay "Should Strategists Veto The Tunnel?" It was published on 27 July 1924 in the Weekly Dispatch, and argued vehemently against the idea that the tunnel could be used by a Continental enemy in an invasion of Britain. Churchill expressed his enthusiasm for the project again in an article for the Daily Mail on 12 February 1936, "Why Not A Channel Tunnel?"
There was another proposal in 1929, but nothing came of this discussion and the idea was abandoned. Proponents estimated the construction cost at US$150 million. The engineers had addressed the concerns of both nations' military leaders by designing two sumps – one near the coast of each country – that could be flooded at will to block the tunnel, but this did not appease the military, or dispel concerns about hordes of tourists who would disrupt English life.
A British film from Gaumont Studios, The Tunnel (also known as TransAtlantic Tunnel), was released in 1935 as a science-fiction project concerning the creation of a transatlantic tunnel. It referred briefly to its protagonist, a Mr. McAllan, as having completed a British Channel tunnel successfully in 1940, five years into the future of the film's release.
Military fears continued during World War II. After the surrender of France, as Britain prepared for an expected German invasion, a Royal Navy officer in the Directorate of Miscellaneous Weapons Development calculated that Hitler could use slave labour to build two Channel tunnels in 18 months. The estimate caused rumours that Germany had already begun digging.
By 1955, defence arguments had become less relevant due to the dominance of air power, and both the British and French governments supported technical and geological surveys. In 1958 the 1881 workings were cleared in preparation for a £100,000 geological survey by the Channel Tunnel Study Group. 30% of the funding came from Channel Tunnel Co Ltd, the largest shareholder of which was the British Transport Commission, as successor to the South Eastern Railway. A detailed geological survey was carried out in 1964 and 1965.
Although the two countries agreed to build a tunnel in 1964, the phase 1 initial studies and signing of a second agreement to cover phase 2 took until 1973. The plan described a government-funded project to create two tunnels to accommodate car shuttle wagons on either side of a service tunnel. Construction started on both sides of the Channel in 1974.
On 20 January 1975, to the dismay of their French partners, the then-governing Labour Party in Britain cancelled the project due to uncertainty about the UK's membership of the European Economic Community, doubling cost estimates amid the general economic crisis at the time. By this time the British tunnel boring machine was ready and the Ministry of Transport had performed an experimental drive. (This short tunnel, named Adit A1, was eventually reused as the starting and access point for tunnelling operations from the British side, and remains an access point to the service tunnel.) The cancellation costs were estimated at £17 million. On the French side, a tunnel-boring machine had been installed underground in a stub tunnel. It lay there for 14 years until 1988, when it was sold, dismantled, refurbished and shipped to Turkey, where it was used to drive the Moda tunnel for the Istanbul Sewerage Scheme.
Initiation of project
In 1979, the "Mouse-hole Project" was suggested when the Conservatives came to power in Britain. The concept was a single-track rail tunnel with a service tunnel but without shuttle terminals. The British government took no interest in funding the project, but British Prime Minister Margaret Thatcher did not object to a privately funded project, although she said she assumed it would be for cars rather than trains. In 1981, Thatcher and French president François Mitterrand agreed to establish a working group to evaluate a privately funded project. In June 1982 the Franco-British study group favoured a twin tunnel to accommodate conventional trains and a vehicle shuttle service. In April 1985 promoters were invited to submit scheme proposals. Four submissions were shortlisted:
Channel Tunnel, a rail proposal based on the 1975 scheme presented by Channel Tunnel Group/France–Manche (CTG/F–M).
Eurobridge, a suspension bridge with a series of spans with a roadway in an enclosed tube.
Euroroute, a tunnel between artificial islands approached by bridges.
Channel Expressway, a set of large-diameter road tunnels with mid-Channel ventilation towers.
The cross-Channel ferry industry protested using the name "Flexilink". In 1975 there was no campaign protesting a fixed link, with one of the largest ferry operators (Sealink) being state-owned. Flexilink continued rousing opposition throughout 1986 and 1987. Public opinion strongly favoured a drive-through tunnel, but concerns about ventilation, accident management and driver mesmerisation resulted in the only shortlisted rail submission, CTG/F-M, being awarded the project in January 1986. Reasons given for the selection included that it caused least disruption to shipping in the Channel and least environmental disruption, was the best protected against terrorism, and was the most likely to attract sufficient private finance.
Arrangement
The British Channel Tunnel Group consisted of two banks and five construction companies, while their French counterparts, France–Manche, consisted of three banks and five construction companies. The banks' role was to advise on financing and secure loan commitments. On 2 July 1985, the groups formed Channel Tunnel Group/France–Manche (CTG/F–M). Their submission to the British and French governments was drawn from the 1975 project, including 11 volumes and a substantial environmental impact statement.
The Anglo-French Treaty on the Channel Tunnel was signed by both governments in Canterbury Cathedral. The Treaty of Canterbury (1986) prepared the Concession for the construction and operation of the Fixed Link by privately owned companies and outlined arbitration methods to be used in the event of disputes. It established the Intergovernmental Commission (IGC), responsible for monitoring all matters associated with the Tunnel's construction and operation on behalf of the British and French governments, and a Safety Authority to advise the IGC.
It drew a land frontier between the two countries in the middle of the Channel tunnel—the first of its kind.
Design and construction were done by the ten construction companies in the CTG/F-M group. The French terminal and boring from Sangatte were done by the five French construction companies in the joint venture group GIE Transmanche Construction. The English Terminal and boring from Shakespeare Cliff were done by the five British construction companies in the Translink Joint Venture. The two partnerships were linked by a bi-national project organisation, TransManche Link (TML). The Maître d'Oeuvre was a supervisory engineering body employed by Eurotunnel under the terms of the concession that monitored the project and reported to the governments and banks.
In France, with its long tradition of infrastructure investment, the project had widespread approval. The French National Assembly approved it unanimously in April 1987, and after a public inquiry, the Senate approved it unanimously in June. In Britain, select committees examined the proposal, making history by holding hearings away from Westminster, in Kent. In February 1987, the third reading of the Channel Tunnel Bill took place in the House of Commons, and passed by 94 votes to 22. The Channel Tunnel Act gained Royal assent and passed into law in July. Parliamentary support for the project came partly from provincial members of Parliament on the basis of promises of regional Eurostar through train services that never materialised; the promises were repeated in 1996 when the contract for construction of the Channel Tunnel Rail Link was awarded.
Cost
The tunnel is a build-own-operate-transfer (BOOT) project with a concession. TML would design and build the tunnel, but financing was through a separate legal entity, Eurotunnel. Eurotunnel absorbed CTG/F-M and signed a construction contract with TML, but the British and French governments controlled final engineering and safety decisions, now managed by the Channel Tunnel Safety Authority. The British and French governments gave Eurotunnel a 55-year operating concession (from 1987; extended by 10 years to 65 years in 1993) to repay loans and pay dividends. A Railway Usage Agreement was signed between Eurotunnel, British Rail and SNCF guaranteeing future revenue in exchange for the railways obtaining half of the tunnel's capacity.
Private funding for such a complex infrastructure project was of unprecedented scale. Initial equity of £45 million was raised by CTG/F-M and increased by a £206 million private institutional placement; £770 million was raised in a public share offer that included press and television advertisements; and a syndicated bank loan and letter of credit arranged £5 billion. Privately financed, the total investment costs at 1985 prices were £2.6 billion. At the 1994 completion, actual costs were, in 1985 prices, £4.65 billion: an 80% cost overrun. The cost overrun was partly due to enhanced safety, security, and environmental demands. Financing costs were 140% higher than forecast.
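The 80% figure follows from the two 1985-price totals quoted above; a one-line check:

```python
planned_cost_bn = 2.6   # 1985-price estimate, £ billions
actual_cost_bn = 4.65   # cost at 1994 completion, expressed in 1985 prices, £ billions

overrun = (actual_cost_bn - planned_cost_bn) / planned_cost_bn
print(f"{overrun:.0%}")  # ~79%, i.e. the roughly 80% overrun cited in the text
```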
Construction
Working from both the English and French sides of the Channel, eleven tunnel boring machines (TBMs) cut through chalk marl to construct two rail tunnels and a service tunnel. The vehicle shuttle terminals are at Cheriton (part of Folkestone) and Coquelles, and are connected to the English M20 and French A16 motorways respectively.
Tunnelling commenced in 1988, and the tunnel began operating in 1994. At the peak of construction 15,000 people were employed with daily expenditure over £3 million. Ten workers, eight of them British, were killed during construction between 1987 and 1993, most in the first few months of boring.
Completion
A 50 mm (2 in) diameter pilot hole allowed the service tunnel to break through without ceremony on 30 October 1990. On 1 December 1990, Englishman Graham Fagg and Frenchman Philippe Cozette broke through the service tunnel with the media watching. Eurotunnel completed the tunnel on time. (A BBC television commentator called Graham Fagg "the first man to cross the Channel by land for 8000 years".) The two tunnelling efforts met each other with only a small offset. A Paddington Bear soft toy was chosen by British tunnellers as the first item to pass through to their French counterparts when the two sides met.
The tunnel was officially opened, one year later than originally planned, by the French president François Mitterrand and Queen Elizabeth II, at a ceremony in Calais on 6 May 1994. The Queen travelled through the tunnel to Calais on a Eurostar train, which stopped nose to nose with the train that carried President Mitterrand from Paris. After the ceremony, President Mitterrand and the Queen travelled on Le Shuttle to a similar ceremony in Folkestone. A full public service did not start for several months. The first freight train, however, ran on 1 June 1994 and carried Rover and Mini cars being exported to Italy.
The Channel Tunnel Rail Link (CTRL), now called High Speed 1, runs from St Pancras railway station in London to the tunnel portal at Folkestone in Kent. It cost £5.8 billion. On 16 September 2003 the prime minister, Tony Blair, opened the first section of High Speed 1, from Folkestone to north Kent. On 6 November 2007, the Queen officially opened High Speed 1 and St Pancras International station, replacing the original slower link to Waterloo International railway station. High Speed 1 trains travel at up to 300 km/h (186 mph), the journey from London to Paris taking 2 hours 15 minutes and to Brussels 1 hour 51 minutes.
In 1994, the American Society of Civil Engineers elected the tunnel as one of the seven modern Wonders of the World. In 1995, the American magazine Popular Mechanics published the results.
Opening dates
The opening was phased by service: the IGC, advised by the Channel Tunnel Safety Authority, gave permission for the various services to begin on several dates over 1994–1995, with actual start-up dates following a few days later.
Engineering
Site investigation undertaken in the 20 years before construction confirmed earlier speculations that a tunnel could be bored through a chalk marl stratum. The chalk marl is conducive to tunnelling, with impermeability, ease of excavation and strength. The chalk marl runs along the entire length of the English side of the tunnel, but on the French side a length of about 5 km (3 mi) has variable and difficult geology. The tunnel consists of three bores: two 7.6 m (25 ft) diameter rail tunnels, 30 m (98 ft) apart and 50 km (31 mi) in length, with a 4.8 m (16 ft) diameter service tunnel in between. The three bores are connected by cross-passages and piston relief ducts.
The service tunnel was used as a pilot tunnel, boring ahead of the main tunnels to determine the conditions. English access was provided at Shakespeare Cliff and French access from a shaft at Sangatte. The French side used five tunnel boring machines (TBMs), and the English side six. The service tunnel uses Service Tunnel Transport System (STTS) and Light Service Tunnel Vehicles (LADOGS). Fire safety was a critical design issue.
Between the portals at Beussingue and Castle Hill the tunnel is 50.5 km (31.4 mi) long, with 3.3 km (2.1 mi) under land on the French side, 9.3 km (5.8 mi) under land on the UK side, and 37.9 km (23.5 mi) under sea. It is the third-longest rail tunnel in the world, behind the Gotthard Base Tunnel in Switzerland and the Seikan Tunnel in Japan, but has the longest under-sea section. The average depth is 45 m (148 ft) below the seabed. On the UK side, part of the spoil was used for fill at the terminal site, and the remainder was deposited at Lower Shakespeare Cliff behind a seawall, reclaiming land that was then made into the Samphire Hoe Country Park. Environmental assessment did not identify any major risks for the project, and further studies into safety, noise, and air pollution were overall positive. However, environmental objections were raised concerning a high-speed link to London.
Geology
Successful tunnelling required a sound understanding of topography and geology and the selection of the best rock strata through which to dig. The geology of this site generally consists of northeasterly dipping Cretaceous strata, part of the northern limb of the Wealden-Boulonnais dome. It has:
Continuous chalk in the cliffs on either side of the Channel, with no major faulting, as observed by Verstegan in 1605.
Four geological strata, marine sediments laid down 90–100 million years ago; pervious Upper and Middle Chalk above slightly pervious Lower Chalk and finally impermeable Gault Clay. There is a sandy stratum of Glauconitic marl (tortia), between the chalk marl and the gault clay.
A layer of chalk marl (French: craie bleue) in the lower third of the Lower Chalk appeared to present the best tunnelling medium. The chalk marl has a clay content of 30–40%, which provides impermeability to groundwater yet allows relatively easy excavation, with enough strength to need only minimal support. Ideally, the tunnel would be bored in the bottom of the chalk marl, allowing water inflow from fractures and joints to be minimised, but above the gault clay, which would increase stress on the tunnel lining and would swell and soften when wet.
On the English side, the stratum dip is less than 5°; on the French side, this increases to 20°. Jointing and faulting are present on both sides. On the English side, only minor faults with displacements of less than 2 m (6.6 ft) exist; on the French side, displacements of up to 15 m (49 ft) are present owing to the Quenocs anticlinal fold. The faults are of limited width, filled with calcite, pyrite and remoulded clay. The increased dip and faulting restricted the selection of routes on the French side. To avoid confusion, microfossil assemblages were used to classify the chalk marl. On the French side, particularly near the coast, the chalk was harder, more brittle and more fractured than on the English side. This led to the adoption of different tunnelling techniques on the two sides.
The Quaternary undersea valley Fosse Dangeard, and Castle Hill landslip at the English portal, caused concerns. Identified by the 1964–1965 geophysical survey, the Fosse Dangeard is an infilled valley system extending below the seabed, south of the tunnel route in mid-channel. A 1986 survey showed that a tributary crossed the path of the tunnel, and so the tunnel route was made as far north and deep as possible. The English terminal had to be located in the Castle Hill landslip, which consists of displaced and tipping blocks of lower chalk, glauconitic marl and gault debris. Thus the area was stabilised by buttressing and inserting drainage adits. The service tunnel acted as a pilot preceding the main ones, so that the geology, areas of crushed rock, and zones of high water inflow could be predicted. Exploratory probing was done in the service tunnel, in the form of extensive forward probing, vertical downward probes and sideways probing.
Site investigation
Marine soundings and samplings were made by Thomé de Gamond in 1833–67, establishing the seabed depth at a maximum of 55 m (180 ft) and the continuity of geological strata (layers). Surveying continued for many years, with 166 marine and 70 land-deep boreholes being drilled and more than 4,000 line kilometres of marine geophysical survey completed. Surveys were undertaken in 1958–1959, 1964–1965, 1972–1974 and 1986–1988.
The surveying in 1958–1959 catered for immersed tube and bridge designs, as well as a bored tunnel, and thus a wide area was investigated. At that time, marine geophysics surveying for engineering projects was in its infancy, with poor positioning and resolution from seismic profiling. The 1964–1965 surveys concentrated on a northerly route that left the English coast at Dover harbour; using 70 boreholes, an area of deeply weathered rock with high permeability was located just south of Dover harbour.
Given the previous survey results and access constraints, a more southerly route was investigated in the 1972–1973 survey, and the route was confirmed to be feasible. Information for the tunnelling project also came from work before the 1975 cancellation. On the French side at Sangatte, a deep shaft with adits was made. On the English side at Shakespeare Cliff, the government allowed a short length of tunnel to be driven. The actual tunnel alignment, method of excavation and support were essentially the same as in the 1975 attempt. In the 1986–1987 survey, previous findings were reinforced, and the characteristics of the gault clay and of the tunnelling medium (chalk marl, which made up 85% of the route) were investigated. Geophysical techniques from the oil industry were employed.
Tunnelling
Tunnelling was a major engineering challenge; the only precedent was the undersea Seikan Tunnel in Japan, which opened in 1988. A serious health and safety risk with building tunnels under water is major water inflow due to the high hydrostatic pressure from the sea above, under weak ground conditions. The tunnel also had the challenge of timescale: being privately funded, an early financial return was paramount.
The objective was to construct two rail tunnels, apart, in length; a service tunnel between the two main ones; pairs of -diameter cross-passages linking the rail tunnels to the service tunnel at spacing; piston relief ducts in diameter connecting the rail tunnels apart; two undersea crossover caverns to connect the rail tunnels, with the service tunnel always preceding the main ones by at least to ascertain the ground conditions. There was plenty of experience with excavating through chalk in the mining industry, while the undersea crossover caverns were a complex engineering problem. The French one was based on the Mount Baker Ridge freeway tunnel in Seattle; the UK cavern was dug from the service tunnel ahead of the main ones, to avoid delay.
Precast segmental linings were used in the main tunnel boring machine (TBM) drives, but with two different solutions. On the French side, neoprene- and grout-sealed bolted linings made of cast iron or high-strength reinforced concrete were used; on the English side, the main requirement was for speed, so bolting of cast-iron lining segments was only done in areas of poor geology. In the UK rail tunnels, eight lining segments plus a key segment were used; on the French side, five segments plus a key. On the French side, a diameter deep grout-curtained shaft at Sangatte was used for access. On the English side, a marshalling area was below the top of Shakespeare Cliff, and the New Austrian Tunnelling Method (NATM) was first applied in the chalk marl here. On the English side, the land tunnels were driven from Shakespeare Cliff—the same place as the marine tunnels—not from Folkestone. The platform at the base of the cliff was not large enough for all of the drives and, despite environmental objections, tunnel spoil was placed behind a reinforced concrete seawall, on condition that the chalk be placed in an enclosed lagoon to avoid wide dispersal of chalk fines. Owing to limited space, the precast lining factory was on the Isle of Grain in the Thames estuary; it used Scottish granite aggregate delivered by ship from the Foster Yeoman coastal superquarry at Glensanda in Loch Linnhe on the west coast of Scotland.
On the French side, owing to the greater permeability to water, earth pressure balance TBMs with open and closed modes were used. The TBMs were used in the closed mode for the first , but then operated in open mode, boring through the chalk marl stratum. This minimised the impact on the ground, allowed high water pressures to be withstood, and alleviated the need to grout ahead of the tunnel. The French effort required five TBMs: two main marine machines, one mainland machine (the short land drives of allowed one TBM to complete the first drive, then reverse direction and complete the other), and two service tunnel machines.
On the English side, the simpler geology allowed faster open-faced TBMs. Six machines were used; all commenced digging from Shakespeare Cliff, three marine-bound and three for the land tunnels. Towards the completion of the undersea drives, the UK TBMs were driven steeply downwards and buried clear of the tunnel. These buried TBMs were then used to provide an electrical earth. The French TBMs then completed the tunnel and were dismantled. A gauge railway was used on the English side during construction.
In contrast to the English machines, which were given technical names, the French tunnelling machines were all named after women: Brigitte, Europa, Catherine, Virginie, Pascaline, Séverine.
After the tunnelling, one machine was on display at the side of the M20 motorway in Folkestone until Eurotunnel sold it on eBay for £39,999 to a scrap metal merchant. Another machine (T4 "Virginie") still survives on the French side, adjacent to Junction 41 on the A16, in the middle of the D243E3/D243E4 roundabout. On it are the words "hommage aux bâtisseurs du tunnel", meaning "tribute to the builders of the tunnel".
Tunnel boring machines
The eleven tunnel boring machines were designed and manufactured through a joint venture between the Robbins Company of Kent, Washington, United States; Markham & Co. of Chesterfield, England; and Kawasaki Heavy Industries of Japan. The TBMs for the service tunnels and main tunnels on the UK side were designed and manufactured by James Howden & Company Ltd, Scotland.
Railway design
Loading gauge
The loading gauge height is .
Communications
There are three communication systems:
Concession radio – for the tunnel operator's personnel and vehicles within the concession area (terminals, tunnels, coastal shafts)
Track-to-train radio – secure speech and data between trains and the railway control centre
Shuttle internal radio – communication among shuttle crew, and to passengers over car radios
Power supply
Power is delivered to the locomotives via an overhead line at with a normal overhead clearance of . All tunnel services run on electricity, shared equally from English and French sources. There are two substations fed at 400 kV at each terminal, but in an emergency, the tunnel's lighting (about 20,000 light fittings) and the plant can be powered solely from either England or France.
The traditional railway south of London uses a 750 V DC third rail to deliver electricity, but since the opening of High Speed 1 there is no longer any need for tunnel trains to use it. High Speed 1, the tunnel and the LGV Nord all have power provided via overhead catenary at 25 kV 50 Hz AC. The railways on "classic" lines in Belgium are also electrified by overhead wires, but at 3,000 V DC.
Signalling
A cab signalling system gives information directly to train drivers on a display. There is a train protection system that stops the train if the speed exceeds that indicated on the in-cab display. TVM430, as used on LGV Nord and High Speed 1, is used in the tunnel. The TVM signalling is interconnected with the signalling on the high-speed lines on either side, allowing trains to enter and exit the tunnel system without stopping. The maximum speed is .
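The basic supervision rule described above—intervene if the measured speed exceeds the speed currently authorised on the in-cab display—can be sketched as follows. This is a minimal illustration only, not TVM430's actual logic; the function name, tolerance margin and brake command are hypothetical.

```python
def supervise(measured_speed_kmh: float, authorised_speed_kmh: float,
              tolerance_kmh: float = 5.0) -> str:
    """Return the train-protection decision for one supervision cycle.

    measured_speed_kmh   -- speed reported by the train's tachometry
    authorised_speed_kmh -- target speed shown on the in-cab display
    tolerance_kmh        -- hypothetical overspeed margin before intervention
    """
    if measured_speed_kmh > authorised_speed_kmh + tolerance_kmh:
        return "EMERGENCY_BRAKE"   # train protection intervenes automatically
    return "NO_INTERVENTION"       # driver retains control

# Example: authorised 160 km/h, train running at 170 km/h -> intervention
print(supervise(170.0, 160.0))  # EMERGENCY_BRAKE
```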
Signalling in the tunnel is coordinated from a control centre at the Folkestone terminal. A backup facility at the Calais terminal is staffed at all times and can take over all operations in the event of a breakdown or emergency.
Track system
Conventional ballasted tunnel track was ruled out owing to the difficulty of maintenance and lack of stability and precision. The Sonneville International Corporation's track system was chosen because it was reliable and also cost-effective. The type of track used is known as Low Vibration Track (LVT), which is held in place by gravity and friction. Reinforced concrete blocks of support the rails every and are held by thick closed-cell polymer foam pads placed at the bottom of rubber boots. The latter separates the blocks' mass movements from the concrete. The track provides extra overhead clearance for larger trains. UIC60 (60 kg/m) rails of 900A grade rest on rail pads, which fit the RN/Sonneville bolted dual leaf-springs. The rails, LVT-blocks and their boots with pads were assembled outside the tunnel, in a fully automated process developed by the LVT inventor, Roger Sonneville. About 334,000 Sonneville blocks were made on the Sangatte site.
Maintenance activities are less than projected. The rails had initially been ground on a yearly basis or after approximately 100 MGT of traffic. Maintenance is facilitated by the existence of two tunnel junctions or crossover facilities, allowing for two-way operation in each of the six tunnel segments, and providing safe access for maintenance of one isolated tunnel segment at a time. The two crossovers are the largest artificial undersea caverns ever built, at long, high and wide. The English crossover is from Shakespeare Cliff, and the French crossover is from Sangatte.
Ventilation, cooling and drainage
The ventilation system maintains greater air pressure in the service tunnel than in the rail tunnels, so that in the event of a fire, smoke does not enter the service tunnel from the rail tunnels. There is a normal ventilating system and a supplementary system. Twin fans are mounted in vertical shafts where digging for the tunnel began, on both sides of the channel: two in Sangatte, France, and two more at Shakespeare Cliff, UK. The normal ventilating system is connected directly to the service tunnel and provides fresh air through the cross-passages into the running tunnels, where it is dispersed by the piston effect of the train and shuttle movements. Only one fan on each side is ever running, the second being available as a backup. The supplementary ventilating system is a separate emergency system and can be used to control smoke or supply emergency air within the tunnels. On both systems, the fans are normally run in supply mode, pulling in air from the outside, but they can also be used in extraction mode to remove smoke or fumes from the tunnels.
Two cooling water pipes in each rail tunnel circulate chilled water to remove heat generated by the rail traffic. Pumping stations remove water in the tunnels from rain, seepage, and so on.
During the design stage of the tunnel, engineers found that its aerodynamic properties and the heat generated by high-speed trains as they passed through it would raise the temperature inside the tunnel to . As well as making the trains "unbearably warm" for passengers, this also presented a risk of equipment failure and track distortion. To cool the tunnel to below , engineers installed of diameter cooling pipes carrying of water. The network—Europe's largest cooling system—was supplied by eight York Titan chillers running on R22, a hydrochlorofluorocarbon (HCFC) refrigerant gas.
Due to R22's ozone depletion potential and high global warming potential, its use is being phased out in developed countries. Since 1 January 2015, it has been illegal in Europe to use HCFCs to service air-conditioning equipment; broken equipment that used HCFCs must be replaced with equipment that does not use it. In 2016, Trane was selected to provide replacement chillers for the tunnel's cooling network. The York chillers were decommissioned and four "next generation" Trane Series E CenTraVac large-capacity (2,600 kW to 14,000 kW) chillers were installed—two in Sangatte, France, and two at Shakespeare Cliff, UK. The energy-efficient chillers, using Honeywell's non-flammable, ultra-low GWP R1233zd(E) refrigerant, maintain temperatures at , and in their first year of operation generated savings of 4.8 GWh—approximately 33%, equating to €500,000 ($585,000)—for tunnel operator Getlink.
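A rough cross-check of the quoted figures (an illustrative calculation using only the numbers in the paragraph above; the derived values are not from the source) suggests pre-upgrade cooling consumption of roughly 14–15 GWh per year and an effective electricity cost of about €0.10 per kWh.

```python
# Back-of-the-envelope check of the quoted chiller savings (illustrative only).
savings_gwh = 4.8          # energy saved in the first year of operation
savings_fraction = 0.33    # "approximately 33%"
savings_eur = 500_000      # quoted cost saving

implied_total_gwh = savings_gwh / savings_fraction         # ~14.5 GWh/year before the upgrade
implied_price_per_kwh = savings_eur / (savings_gwh * 1e6)  # ~0.10 EUR per kWh

print(f"Implied pre-upgrade cooling energy: {implied_total_gwh:.1f} GWh/year")
print(f"Implied electricity price: {implied_price_per_kwh:.3f} EUR/kWh")
```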
Rolling stock
Rolling stock used previously
Operators
LeShuttle
Getlink operates the LeShuttle, a vehicle shuttle service, through the tunnel.
Car shuttle sets have two separate halves: single and double deck. Each half has two loading/unloading wagons and 12 carrier wagons. Eurotunnel's original order was for nine car shuttle sets.
Heavy goods vehicle (HGV) shuttle sets also have two halves, with each half containing one loading wagon, one unloading wagon and 14 carrier wagons. There is a club car behind the leading locomotive, where drivers must stay during the journey. Eurotunnel originally ordered six HGV shuttle sets.
Initially 38 LeShuttle locomotives were commissioned, with one at each end of a shuttle train.
Freight locomotives
Forty-six Class 92 locomotives for hauling freight trains and overnight passenger trains (the Nightstar project, which was abandoned) were commissioned, running on both overhead AC and third-rail DC power. However, RFF does not let these run on French railways, so there are plans to certify Alstom Prima II locomotives for use in the tunnel.
International passenger
Thirty-one Eurostar trains, based on the French TGV, built to UK loading gauge with many modifications for safety within the tunnel, were commissioned, with ownership split between British Rail, French national railways (SNCF) and Belgian national railways (NMBS/SNCB). British Rail ordered seven more for services north of London. Around 2010, Eurostar ordered ten trains from Siemens based on its Velaro product. The Class 374 entered service in 2016 and has been operating through the Channel Tunnel ever since alongside the current Class 373.
Germany (DB) tried from about 2005 to get permission to run train services to London. At the end of 2009, extensive fire-proofing requirements were dropped and DB received permission to run German Intercity-Express (ICE) test trains through the tunnel. In June 2013 DB was granted access to the tunnel, but these plans were ultimately terminated.
In October 2021, Renfe, the Spanish state railway company, expressed interest in operating a cross-Channel route between Paris and London using some of their existing trains with the intention of competing with Eurostar. No details have been revealed as to which trains would be used.
Between October and November 2023, three more companies expressed interest in potentially running services between London and various European cities:
"Evolyn", a start-up company based in Spain announced plans that they intended to run services between London and Paris by 2026. The company stated that orders had been placed for the newly developed "Avelia" high speed trains built by Alstom for international operations. Alstom however, noted that no firm order for any rolling stock had been placed, but that there were ongoing discussions with the start-up over potential procurements.
Virgin Group founder Richard Branson had reportedly hired the former managing director of Virgin Trains to initiate infrastructure talks on a potential international service to rival Eurostar, running services between London, Paris, Brussels and Amsterdam.
Dutch start-up "Heuro" announced plans to start running services from Amsterdam to both Paris and London. Heuro is said to have officially applied for timetable slots beginning in December 2027 and is reportedly raising investment funds in Europe and the USA.
Service locomotives
Diesel locomotives for rescue and shunting work are Eurotunnel Class 0001 and Eurotunnel Class 0031.
Operation
The following chart presents the estimated number of passengers and tonnes of freight, respectively, annually transported through the Channel Tunnel since 1994 (M = million).
Usage and services
Transport services offered by the tunnel are as follows:
Eurotunnel Le Shuttle roll-on roll-off shuttle service for road vehicles and their drivers and passengers,
Eurostar passenger trains,
through freight trains.
Both the freight and passenger traffic forecasts made before the construction of the tunnel were overestimated; in particular, Eurotunnel's commissioned forecasts were over-predictions. Although the captured share of Channel crossings was forecast correctly, high competition (especially from budget airlines which expanded rapidly in the 1990s and 2000s) and reduced tariffs led to low revenue. Overall cross-Channel traffic was overestimated.
With the European Union's liberalisation of international rail services, the tunnel and High Speed 1 have been open to competition since 2010. There have been a number of operators interested in running trains through the tunnel and along High Speed 1 to London. In June 2013, after several years, Deutsche Bahn obtained a license to operate Frankfurt – London trains, not expected to run before 2016 because of delivery delays of the custom-made trains.
Plans for the service to Frankfurt seem to have been shelved in 2018.
Passenger traffic volumes
Cross-tunnel passenger traffic volumes peaked at 18.4 million in 1998, decreased to 14.9 million in 2003, and have increased substantially since then.
At the time of the decision about building the tunnel, 15.9 million passengers were predicted for Eurostar trains during the first year. In 1995, the first full year, actual numbers were a little over 2.9 million, growing to 7.1 million in 2000, then decreasing to 6.3 million in 2003. Eurostar was initially limited by the lack of a high-speed connection on the British side. After the completion of High Speed 1 in two stages in 2003 and 2007, traffic increased. In 2008, Eurostar carried 9,113,371 passengers, a 10% increase over the previous year, despite traffic limitations due to the 2008 Channel Tunnel fire. Eurostar passenger numbers continued to increase.
Freight traffic volumes
Freight volumes have been erratic, with a major decrease during 1997 due to a closure caused by a fire in a freight shuttle. Freight crossings increased over the period, indicating the substitutability of the tunnel by sea crossings. The tunnel has achieved a market share close to or above Eurotunnel's 1980s predictions but Eurotunnel's 1990 and 1994 predictions were overestimates.
For through freight trains, the first year prediction was 7.2 million tonnes; the actual 1995 figure was 1.3 m tonnes. Through freight volumes peaked in 1998 at 3.1 m tonnes. This fell back to 1.21 m tonnes in 2007, increasing slightly to 1.24 m tonnes in 2008. Together with that carried on freight shuttles, freight volumes have grown since opening, with 6.4 m tonnes carried in 1995, 18.4 m tonnes recorded in 2003 and 19.6 m tonnes in 2007. Numbers fell back in the wake of the 2008 fire.
Eurotunnel's freight subsidiary is Europorte 2. In September 2006 EWS, the UK's largest rail freight operator, announced that owing to the cessation of UK-French government subsidies of £52 million per annum to cover the tunnel "Minimum User Charge" (a subsidy of around £13,000 per train, at a traffic level of 4,000 trains per annum), freight trains would stop running after 30 November.
Economic performance
Shares in Eurotunnel were issued at £3.50 per share on 9 December 1987. By mid-1989 their price had risen to £11.00. Delays and cost overruns resulted in the price falling; during demonstration runs in October 1994, it reached an all-time low. Eurotunnel suspended payment on its debt in September 1995 to avoid bankruptcy. In December 1997 the British and French governments extended Eurotunnel's operating concession by 34 years, to 2086. There was a financial restructuring of Eurotunnel in mid-1998, reducing debt and financial charges. Despite this, The Economist reported in 1998 that to break even Eurotunnel would have to increase fares, traffic and market share for sustainability. A cost-benefit analysis of the tunnel indicated that there were few effects on the wider economy and few developments associated with the project and that the British economy would have been better off if it had not been constructed.
Under the terms of the Concession, Eurotunnel was obliged to investigate a cross-Channel road tunnel. In December 1999 road and rail tunnel proposals were presented to the British and French governments, but it was stressed that there was not enough demand for a second tunnel. A three-way treaty between the United Kingdom, France and Belgium governs border controls, with the establishment of control zones within which the officers of the other nation may exercise limited customs and law enforcement powers. For most purposes, these are at either end of the tunnel, with the French border controls on the UK side of the tunnel and vice versa. For some city-to-city trains, the train is a control zone. A binational emergency plan coordinates UK and French emergency activities.
In 1999 Eurotunnel posted its first net profit, having made a loss of £925m in 1995. In 2005 Eurotunnel was described as being in a serious situation. In 2013, operating profits rose 4 per cent from 2012, to £54 million.
Security
There is a need for full passport controls, as the tunnel acts as a border between the Schengen Area and the Common Travel Area. There are juxtaposed controls, meaning that passports are checked before boarding by officials of the departing country and by officials of the destination country. These control points are only at the main Eurostar stations: French officials operate at London St Pancras, while British officials operate at Lille-Europe, Brussels-South, Paris-Gare du Nord, Rotterdam CS, and Amsterdam CS. During the winter ski season, they also operate at Gare de Bourg-Saint-Maurice and Moûtiers-Salins-Brides-les-Bains station. Eurostar passengers pass through airport-style security screening. For the shuttle road-vehicle trains, there are juxtaposed passport controls before boarding the trains.
When Eurostar trains ran south of Paris, such as from Marseille, there were no passport and security checks before departure, and those trains had to stop in Lille for at least 30 minutes to allow all passengers to be checked. No checks are performed on board. There have been plans for services from Amsterdam, Frankfurt and Cologne to London, but a major reason for cancelling them was the need for a stop in Lille. Direct service from London to Amsterdam started on 4 April 2018; following the building of check-in terminals at Amsterdam and Rotterdam and the intergovernmental agreement, a direct service from the two Dutch cities to London started on 30 April 2020.
Terminals
The terminals' sites are at Cheriton (near Folkestone in the United Kingdom) and Coquelles (near Calais in France). The UK site uses the M20 motorway for access. The terminals are organised with the frontier controls juxtaposed with the entry to the system to allow travellers to go onto the motorway at the destination country immediately after leaving the shuttle.
To achieve design output at the French terminal, the shuttles accept cars on double-deck wagons; for flexibility, ramps were placed inside the shuttles to provide access to the top decks. At Folkestone there are of the main-line track, 45 turnouts and eight platforms. At Calais there are of track and 44 turnouts. At the terminals, the shuttle trains traverse a figure eight to reduce uneven wear on the wheels. There is a freight marshalling yard west of Cheriton at Dollands Moor Freight Yard.
Regional effect
A 1996 report from the European Commission predicted that Kent and Nord-Pas de Calais would have increased traffic volumes due to the general growth of cross-Channel traffic and traffic attracted by the tunnel. In Kent, a high-speed rail line to London would transfer traffic from road to rail. Kent's regional development would benefit from the tunnel, but being so close to London restricts the benefits. Gains are in the traditional industries and are largely dependent on the development of Ashford International railway station, without which Kent would be dependent totally on London's expansion. Nord-Pas-de-Calais enjoys a strong internal symbolic effect of the Tunnel which results in significant gains in manufacturing.
The removal of a bottleneck by means like the tunnel does not necessarily induce economic gains in all adjacent regions. The image of a region being connected to European high-speed transport and active political response is more important for regional economic development. Some small-medium enterprises located in the immediate vicinity of the terminal have used the opportunity to re-brand the profile of their business with positive effects, such as The New Inn at Etchinghill which was able to commercially exploit its unique selling point as being 'the closest pub to the Channel Tunnel'. Tunnel-induced regional development is small compared to general economic growth. The South East of England is likely to benefit developmentally and socially from faster and cheaper transport to continental Europe, but the benefits are unlikely to be distributed equally throughout the region. The overall environmental effect is almost certainly negative.
Since the opening of the tunnel, small positive effects on the wider economy have been felt, but it is difficult to identify major economic successes attributed directly to the tunnel. The Eurotunnel does operate profitably, offering an alternative transportation mode unaffected by poor weather. The high cost of construction delayed profitability, however, and early in the tunnel's operation the companies involved in its construction and operation relied on government aid to deal with the accumulated debt.
Illegal immigration
Illegal immigrants and would-be asylum seekers have used the tunnel to attempt to enter Britain. By 1997, the problem had attracted international press attention, and by 1999, the French Red Cross opened the first migrant centre at Sangatte, using a warehouse once used for tunnel construction; by 2002, it housed up to 1,500 people at a time, most of them trying to get to the UK. In 2001, most came from Afghanistan, Iraq, and Iran, but African countries were also represented.
Eurotunnel, the company that operates the crossing, said that more than 37,000 migrants were intercepted between January and July 2015. Approximately 3,000 migrants, mainly from Ethiopia, Eritrea, Sudan and Afghanistan, were living in the temporary camps erected in Calais at the time of an official count in July 2015. An estimated 3,000 to 5,000 migrants were waiting in Calais for a chance to get to England.
Britain and France operate a system of juxtaposed controls on immigration and customs, where investigations happen before travel. France is part of the Schengen immigration zone, removing border checks in normal times between most EU member states; Britain and Ireland form their own separate Common Travel Area immigration zone.
Most illegal immigrants and would-be asylum seekers who got into Britain found some way to ride a freight train. Trucks are loaded onto freight trains. In a few separate instances, migrants stowed away in a liquid chocolate tanker and managed to survive. Although the facilities were fenced, total security was deemed impossible; migrants would even jump from bridges onto moving trains. In several incidents people were injured during the crossing; others tampered with railway equipment, causing delays and requiring repairs. Eurotunnel said it was losing £5m per month because of the problem.
In 2001 and 2002, several riots broke out at Sangatte, and groups of migrants (as many as 550 in a December 2001 incident) stormed the fences and attempted to enter en masse.
Other migrants seeking permanent UK settlement use the Eurostar passenger train. They may purport to be visitors (either to be issued with a required visit visa, or to conceal and falsify their true intentions so as to obtain the maximum six-months-in-a-year at-port stamp); purport to be someone else whose documents they hold; or use forged or counterfeit passports. Such breaches result in refusal of permission to enter the UK, effected by Border Force once such a person's identity is fully established, assuming they persist in their application to enter the UK.
Increased security measures around the tunnel have resulted in much of the migration moving to small boats instead.
Diplomatic efforts
Local authorities in both France and the UK called for the closure of the Sangatte migrant camp, and Eurotunnel twice sought an injunction against it. As of 2006 the United Kingdom blamed France for allowing Sangatte to open, and France blamed both the UK for its then lax asylum rules/law, and the EU for not having a uniform immigration policy. The problem's cause célèbre nature even resulted in journalists being detained as they followed migrants onto railway property.
In 2002, the European Commission told France that it was in breach of European Union rules on the free transfer of goods because of the delays and closures as a result of its poor security. The French government built a double fence, at a cost of £5 million, reducing the numbers of migrants detected each week reaching Britain on goods trains from 250 to almost none. Other measures included CCTV cameras and increased police patrols. At the end of 2002, the Sangatte centre was closed after the UK agreed to absorb some migrants.
On 23 and 30 June 2015, striking workers associated with MyFerryLink damaged sections of track by burning car tires, cancelling all trains and creating a backlog of vehicles. Hundreds seeking to reach Britain attempted to stow away inside and underneath transport trucks destined for the UK. Extra security measures included a £2 million upgrade of detection technology, £1 million extra for dog searches, and £12 million (over three years) towards a joint fund with France for security surrounding the Port of Calais.
Illegal attempts to cross and deaths
In 2002, a dozen migrants died in crossing attempts. In the two months from June to July 2015, ten migrants died near the French tunnel terminal, during a period when 1,500 attempts to evade security precautions were being made each day.
On 6 July 2015, a migrant died while attempting to climb onto a freight train while trying to reach Britain from the French side of the Channel. The previous month an Eritrean man was killed under similar circumstances.
During the night of 28 July 2015, one person, aged 25–30, was found dead after a night in which 1,500–2,000 migrants had attempted to enter the Eurotunnel terminal. The body of a Sudanese migrant was subsequently found inside the tunnel. On 4 August 2015, another Sudanese migrant walked nearly the entire length of one of the tunnels. He was arrested close to the British side, after having walked about through the tunnel.
Mechanical incidents
Fires
There have been three fires in the tunnel, all on the heavy goods vehicle (HGV) shuttles, that were significant enough to close the tunnel, as well as other minor incidents.
On 9 December 1994, during an "invitation only" testing phase, a fire broke out in a Ford Escort car while its owner was loading it onto the upper deck of a tourist shuttle. The fire started at about 10:00, with the shuttle train stationary in the Folkestone terminal, and was put out about 40 minutes later with no passenger injuries.
On 18 November 1996, a fire broke out on an HGV shuttle wagon in the tunnel, but nobody was hurt seriously. The exact cause is unknown, although it was neither a Eurotunnel equipment nor rolling stock problem; it may have been due to arson of a heavy goods vehicle. It is estimated that the heart of the fire reached , with the tunnel severely damaged over , with some affected to some extent. Full operation recommenced six months after the fire.
On 21 August 2006, the tunnel was closed for several hours when a truck on an HGV shuttle train caught fire.
On 11 September 2008, a fire occurred in the Channel Tunnel at 13:57 GMT. The incident started on an HGV shuttle train travelling towards France. The event occurred from the French entrance to the tunnel. No one was killed but several people were taken to hospitals suffering from smoke inhalation, and minor cuts and bruises. The tunnel was closed to all traffic, with the undamaged South Tunnel reopening for limited services two days later. Full service resumed on 9 February 2009 after repairs costing €60 million.
On 29 November 2012, the tunnel was closed for several hours after a truck on an HGV shuttle caught fire.
On 17 January 2015, both tunnels were closed after a lorry fire that filled the midsection of Running Tunnel North with smoke. Eurostar cancelled all services. The shuttle train had been heading from Folkestone to Coquelles and stopped adjacent to cross-passage CP 4418 just before 12:30 UTC. 38 passengers and four members of Eurotunnel staff were evacuated into the service tunnel and transported to France in special STTS road vehicles. They were taken to the Eurotunnel Fire/Emergency Management Centre close to the French portal.
Train failures
On the night of 19/20 February 1996, about 1,000 passengers became trapped in the Channel Tunnel when Eurostar trains from London broke down owing to failures of electronic circuits caused by snow and ice being deposited and then melting on the circuit boards.
On 3 August 2007, an electrical failure lasting six hours caused passengers to be trapped in the tunnel on a shuttle.
On the evening of 18 December 2009, during the December 2009 European snowfall, five London-bound Eurostar trains failed inside the tunnel, trapping 2,000 passengers for approximately 16 hours, during the coldest temperatures in eight years. A Eurotunnel spokesperson explained that snow had evaded the train's winterisation shields, and the transition from cold air outside to the tunnel's warm atmosphere had melted the snow, resulting in electrical failures. One train was turned back before reaching the tunnel; two trains were hauled out of the tunnel by Eurotunnel Class 0001 diesel locomotives. The blocking of the tunnel led to the implementation of Operation Stack, the transformation of the M20 motorway into a linear car park.
The occasion was the first time that a Eurostar train was evacuated inside the tunnel; the failing of four at once was described as "unprecedented". The Channel Tunnel reopened the following morning. Nirj Deva, Member of the European Parliament for South East England, had called for Eurostar chief executive Richard Brown to resign over the incidents. An independent report by Christopher Garnett (former CEO of Great North Eastern Railway) and Claude Gressier (a French transport expert) on the 18/19 December 2009 incidents was issued in February 2010, making 21 recommendations.
On 7 January 2010, a Brussels–London Eurostar broke down in the tunnel. The train had 236 passengers on board and was towed to Ashford; other trains that had not yet reached the tunnel were turned back.
Safety
The Channel Tunnel Safety Authority is responsible for some aspects of safety regulation in the tunnel; it reports to the Intergovernmental Commission (IGC).
The service tunnel is used for access to technical equipment in cross-passages and equipment rooms, to provide fresh-air ventilation and for emergency evacuation. The Service Tunnel Transport System (STTS) allows fast access to all areas of the tunnel. The service vehicles are rubber-tired with a buried wire guidance system.
The 24 STTS vehicles are used mainly for maintenance but also for firefighting and emergencies. "Pods" with different purposes, up to a payload of , are inserted into the side of the vehicles. The vehicles cannot turn around within the tunnel and are driven from either end. The maximum speed is when the steering is locked. A fleet of 15 Light Service Tunnel Vehicles (LADOGS) was introduced to supplement the STTSs. The LADOGS has a short wheelbase with a turning circle, allowing two-point turns within the service tunnel. Steering cannot be locked like the STTS vehicles, and maximum speed is . Pods up to can be loaded onto the rear of the vehicles. Drivers in the tunnel sit on the right, and the vehicles drive on the left. Owing to the risk of French personnel driving on their native right side of the road, sensors in the vehicles alert the driver if the vehicle strays to the right side.
The three tunnels contain of air that needs to be conditioned for comfort and safety. Air is supplied from ventilation buildings at Shakespeare Cliff and Sangatte, with each building capable of providing 100% standby capacity. Supplementary ventilation also exists on either side of the tunnel. In the event of a fire, ventilation is used to keep smoke out of the service tunnel and move smoke in one direction in the main tunnel to give passengers clean air. The tunnel was the first main-line railway tunnel to have special cooling equipment. Heat is generated from traction equipment and drag. The design limit was set at , using a mechanical cooling system with refrigeration plants on both sides that run chilled water circulating in pipes within the tunnel.
Trains travelling at high speed create piston effect pressure changes that can affect passenger comfort, ventilation systems, tunnel doors, fans and the structure of the trains, and which drag on the trains. Piston relief ducts of diameter were chosen to solve the problem, with 4 ducts per kilometre to give close to optimum results. However, this design led to extreme lateral forces on the trains, so a reduction in train speed was required and restrictors were installed in the ducts.
The safety issue of a possible fire on a passenger-vehicle shuttle garnered much attention, with Eurotunnel noting that fire was the risk attracting the most attention in a 1994 safety case for three reasons: the opposition of ferry companies to passengers being allowed to remain with their cars; Home Office statistics indicating that car fires had doubled in ten years; and the long length of the tunnel. Eurotunnel commissioned the UK Fire Research Station—now part of the Building Research Establishment—to give reports of vehicle fires, and liaised with Kent Fire Brigade to gather vehicle fire statistics over one year. Fire tests took place at the French Mines Research Establishment with a mock wagon used to investigate how cars burned. The wagon door systems are designed to withstand fire inside the wagon for 30 minutes, longer than the transit time of 27 minutes. Wagon air conditioning units help to purge dangerous fumes from inside the wagon before travel. Each wagon has a fire detection and extinguishing system, with sensing of ions or ultraviolet radiation, smoke and gases that can trigger halon gas to quench a fire.
Since the HGV wagons are not covered, fire sensors are located on the loading wagon and in the tunnel. A water main in the service tunnel provides water to the main tunnels at intervals. The ventilation system can control smoke movement. Special arrival sidings accept a train that is on fire, as the train is not allowed to stop whilst on fire in the tunnel unless continuing its journey would lead to a worse outcome. Two STTS (Service Tunnel Transportation System) vehicles with firefighting pods are on duty at all times, with a maximum delay of 10 minutes before they reach a burning train.
Eurotunnel has banned a wide range of hazardous goods from travelling in the tunnel.
Unusual traffic
Trains
In 1999, the Kosovo Train for Life passed through the tunnel en route to Pristina, in Kosovo.
Other
In 2009, former F1 racing champion John Surtees drove a Ginetta G50 EV electric sports car prototype from England to France, using the service tunnel, as part of a charity event. He was required to keep to the speed limit. To celebrate the 2014 Tour de France's transfer from its opening stages in Britain to France in July of that year, Chris Froome of Team Sky rode a bicycle through the service tunnel, becoming the first solo rider to do so. The crossing took under an hour, reaching speeds of —faster than most cross-channel ferries.
Mobile network coverage
Since 2012, French operators Bouygues Telecom, Orange and SFR have covered Running Tunnel South, the tunnel bore normally used for travel from France to Britain.
In January 2014, UK operators EE and Vodafone signed ten-year contracts with Eurotunnel for Running Tunnel North. The agreements will enable both operators' subscribers to use 2G and 3G services. Both EE and Vodafone planned to offer LTE services on the route; EE said it expected to cover the route with LTE connectivity by the summer of 2014. EE and Vodafone will offer Channel Tunnel network coverage for travellers from the UK to France. Eurotunnel said it also held talks with Three UK but had yet to reach an agreement with the operator.
In May 2014, Eurotunnel announced that they had installed equipment from Alcatel-Lucent to cover Running Tunnel North and simultaneously to provide mobile service (GSM 900/1800 MHz and UMTS 2100 MHz) by EE, O2 and Vodafone. The service of EE and Vodafone commenced on the same date as the announcement. O2 service was expected to be available soon afterwards.
In November 2014, EE announced that it had switched on LTE in September 2014. O2 turned on 2G, 3G and 4G services in November 2014, whilst Vodafone's 4G was due to go live later.
Other (non-transport) services
The tunnel also houses the 1,000 MW ElecLink interconnector to transfer power between the British and French electricity networks. During the night of 31 August/1 September 2021, the 51 km-long 320 kV DC cable was switched into service for the first time.
| Technology | Transport infrastructure | null |
5705 | https://en.wikipedia.org/wiki/Continuum%20hypothesis | Continuum hypothesis | In mathematics, specifically set theory, the continuum hypothesis (abbreviated CH) is a hypothesis about the possible sizes of infinite sets. It states: there is no set whose cardinality is strictly between that of the integers and that of the real numbers.
Or equivalently: any set of real numbers is either countable or has the same cardinality as the entire set of real numbers.
In Zermelo–Fraenkel set theory with the axiom of choice (ZFC), this is equivalent to the following equation in aleph numbers: $2^{\aleph_0} = \aleph_1$, or even shorter with beth numbers: $\beth_1 = \aleph_1$.
The continuum hypothesis was advanced by Georg Cantor in 1878, and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent. This independence was proved in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940.
The name of the hypothesis comes from the term the continuum for the real numbers.
History
Cantor believed the continuum hypothesis to be true and for many years tried in vain to prove it. It became the first on David Hilbert's list of important open questions that was presented at the International Congress of Mathematicians in the year 1900 in Paris. Axiomatic set theory was at that point not yet formulated.
Kurt Gödel proved in 1940 that the negation of the continuum hypothesis, i.e., the existence of a set with intermediate cardinality, could not be proved in standard set theory. The second half of the independence of the continuum hypothesis – i.e., unprovability of the nonexistence of an intermediate-sized set – was proved in 1963 by Paul Cohen.
Cardinality of infinite sets
Two sets are said to have the same cardinality or cardinal number if there exists a bijection (a one-to-one correspondence) between them. Intuitively, for two sets S and T to have the same cardinality means that it is possible to "pair off" elements of S with elements of T in such a fashion that every element of S is paired off with exactly one element of T and vice versa. Hence, the set {banana, apple, pear} has the same cardinality as {yellow, red, green}.
With infinite sets such as the set of integers or rational numbers, the existence of a bijection between two sets becomes more difficult to demonstrate. The rational numbers seemingly form a counterexample to the continuum hypothesis: the integers form a proper subset of the rationals, which themselves form a proper subset of the reals, so intuitively, there are more rational numbers than integers and more real numbers than rational numbers. However, this intuitive analysis is flawed; it does not take proper account of the fact that all three sets are infinite. It turns out the rational numbers can actually be placed in one-to-one correspondence with the integers, and therefore the set of rational numbers is the same size (cardinality) as the set of integers: they are both countable sets.
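One way to see that the positive rationals are countable is to list them by increasing sum of numerator and denominator, skipping duplicates; the sketch below (an illustration added here, not taken from the source) enumerates them in such an order, pairing each positive rational with a natural number.

```python
from fractions import Fraction
from math import gcd

def rationals():
    """Yield every positive rational exactly once, by diagonals p + q = 2, 3, 4, ..."""
    total = 2
    while True:
        for p in range(1, total):
            q = total - p
            if gcd(p, q) == 1:          # skip duplicates such as 2/4 = 1/2
                yield Fraction(p, q)
        total += 1

# The first few terms of the enumeration: 1, 1/2, 2, 1/3, 3, 1/4, 2/3, 3/2, 4, ...
gen = rationals()
print([next(gen) for _ in range(9)])
```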
Cantor gave two proofs that the cardinality of the set of integers is strictly smaller than that of the set of real numbers (see Cantor's first uncountability proof and Cantor's diagonal argument). His proofs, however, give no indication of the extent to which the cardinality of the integers is less than that of the real numbers. Cantor proposed the continuum hypothesis as a possible solution to this question.
The continuum hypothesis states that the set of real numbers has minimal possible cardinality which is greater than the cardinality of the set of integers. That is, every set, S, of real numbers can either be mapped one-to-one into the integers or the real numbers can be mapped one-to-one into S. As the real numbers are equinumerous with the power set of the integers, i.e. $|\mathbb{R}| = 2^{\aleph_0}$, the continuum hypothesis can be restated as follows: there is no set $S$ such that $\aleph_0 < |S| < 2^{\aleph_0}$.
Assuming the axiom of choice, there is a unique smallest cardinal number $\aleph_1$ greater than $\aleph_0$, and the continuum hypothesis is in turn equivalent to the equality $2^{\aleph_0} = \aleph_1$.
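In symbols (a brief recapitulation added for clarity, assuming the axiom of choice): Cantor's theorem gives $\aleph_0 < 2^{\aleph_0}$, and since $\aleph_1$ is the least cardinal above $\aleph_0$, one always has the chain below; CH asserts that its second inequality is an equality.

```latex
\aleph_0 \;<\; \aleph_1 \;\le\; 2^{\aleph_0},
\qquad\text{and CH states that}\qquad 2^{\aleph_0} = \aleph_1 .
```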
Independence from ZFC
The independence of the continuum hypothesis (CH) from Zermelo–Fraenkel set theory (ZF) follows from combined work of Kurt Gödel and Paul Cohen.
Gödel showed that CH cannot be disproved from ZF, even if the axiom of choice (AC) is adopted (making ZFC). Gödel's proof shows that CH and AC both hold in the constructible universe L, an inner model of ZF set theory, assuming only the axioms of ZF. The existence of an inner model of ZF in which additional axioms hold shows that the additional axioms are consistent with ZF, provided ZF itself is consistent. The latter condition cannot be proved in ZF itself, due to Gödel's incompleteness theorems, but is widely believed to be true and can be proved in stronger set theories.
Cohen showed that CH cannot be proven from the ZFC axioms, completing the overall independence proof. To prove his result, Cohen developed the method of forcing, which has become a standard tool in set theory. Essentially, this method begins with a model of ZF in which CH holds, and constructs another model which contains more sets than the original, in a way that CH does not hold in the new model. Cohen was awarded the Fields Medal in 1966 for his proof.
The independence proof just described shows that CH is independent of ZFC. Further research has shown that CH is independent of all known large cardinal axioms in the context of ZFC. Moreover, it has been shown that the cardinality of the continuum can be any cardinal consistent with König's theorem. A result of Solovay, proved shortly after Cohen's result on the independence of the continuum hypothesis, shows that in any model of ZFC, if $\kappa$ is a cardinal of uncountable cofinality, then there is a forcing extension in which $2^{\aleph_0} = \kappa$. However, per König's theorem, it is not consistent to assume $2^{\aleph_0}$ is $\aleph_\omega$ or any other cardinal with cofinality $\omega$.
The continuum hypothesis is closely related to many statements in analysis, point set topology and measure theory. As a result of its independence, many substantial conjectures in those fields have subsequently been shown to be independent as well.
The independence from ZFC means that proving or disproving the CH within ZFC is impossible. However, Gödel and Cohen's negative results are not universally accepted as disposing of all interest in the continuum hypothesis. The continuum hypothesis remains an active topic of research; see Woodin and Peter Koellner for an overview of the current research status.
The continuum hypothesis and the axiom of choice were among the first genuinely mathematical statements shown to be independent of ZF set theory, although the existence of some statements independent of ZFC had already been known more than two decades earlier: for example, assuming good soundness properties and the consistency of ZFC, Gödel's incompleteness theorems, which were published in 1931, establish that there is a formal statement (one for each appropriate Gödel numbering scheme) expressing the consistency of ZFC that is also independent of it. The latter independence result indeed holds for many theories.
Arguments for and against the continuum hypothesis
Gödel believed that CH is false, and that his proof that CH is consistent with ZFC only shows that the Zermelo–Fraenkel axioms do not adequately characterize the universe of sets. Gödel was a Platonist and therefore had no problems with asserting the truth and falsehood of statements independent of their provability. Cohen, though a formalist, also tended towards rejecting CH.
Historically, mathematicians who favored a "rich" and "large" universe of sets were against CH, while those favoring a "neat" and "controllable" universe favored CH. Parallel arguments were made for and against the axiom of constructibility, which implies CH. More recently, Matthew Foreman has pointed out that ontological maximalism can actually be used to argue in favor of CH, because among models that have the same reals, models with "more" sets of reals have a better chance of satisfying CH.
Another viewpoint is that the conception of set is not specific enough to determine whether CH is true or false. This viewpoint was advanced as early as 1923 by Skolem, even before Gödel's first incompleteness theorem. Skolem argued on the basis of what is now known as Skolem's paradox, and it was later supported by the independence of CH from the axioms of ZFC since these axioms are enough to establish the elementary properties of sets and cardinalities. In order to argue against this viewpoint, it would be sufficient to demonstrate new axioms that are supported by intuition and resolve CH in one direction or another. Although the axiom of constructibility does resolve CH, it is not generally considered to be intuitively true any more than CH is generally considered to be false.
At least two other axioms have been proposed that have implications for the continuum hypothesis, although these axioms have not currently found wide acceptance in the mathematical community. In 1986, Chris Freiling presented an argument against CH by showing that the negation of CH is equivalent to Freiling's axiom of symmetry, a statement derived by arguing from particular intuitions about probabilities. Freiling believes this axiom is "intuitively clear" but others have disagreed.
A difficult argument against CH developed by W. Hugh Woodin has attracted considerable attention since the year 2000. Foreman does not reject Woodin's argument outright but urges caution. Woodin proposed a new hypothesis that he labeled the $(*)$-axiom, or "Star axiom". The Star axiom would imply that $2^{\aleph_0}$ is $\aleph_2$, thus falsifying CH. The Star axiom was bolstered by an independent May 2021 proof showing the Star axiom can be derived from a variation of Martin's maximum. However, Woodin stated in the 2010s that he now instead believes CH to be true, based on his belief in his new "ultimate L" conjecture.
Solomon Feferman argued that CH is not a definite mathematical problem. He proposed a theory of "definiteness" using a semi-intuitionistic subsystem of ZF that accepts classical logic for bounded quantifiers but uses intuitionistic logic for unbounded ones, and suggested that a proposition is mathematically "definite" if the semi-intuitionistic theory can prove . He conjectured that CH is not definite according to this notion, and proposed that CH should, therefore, be considered not to have a truth value. Peter Koellner wrote a critical commentary on Feferman's article.
Joel David Hamkins proposes a multiverse approach to set theory and argues that "the continuum hypothesis is settled on the multiverse view by our extensive knowledge about how it behaves in the multiverse, and, as a result, it can no longer be settled in the manner formerly hoped for". In a related vein, Saharon Shelah wrote that he does "not agree with the pure Platonic view that the interesting problems in set theory can be decided, that we just have to discover the additional axiom. My mental picture is that we have many possible set theories, all conforming to ZFC".
Generalized continuum hypothesis
The generalized continuum hypothesis (GCH) states that if an infinite set's cardinality lies between that of an infinite set $S$ and that of the power set $\mathcal{P}(S)$ of $S$, then it has the same cardinality as either $S$ or $\mathcal{P}(S)$. That is, for any infinite cardinal $\lambda$ there is no cardinal $\kappa$ such that $\lambda < \kappa < 2^{\lambda}$. GCH is equivalent to:
$\aleph_{\alpha+1} = 2^{\aleph_\alpha}$ for every ordinal $\alpha$ (occasionally called Cantor's aleph hypothesis).
The beth numbers provide an alternative notation for this condition: $\beth_\alpha = \aleph_\alpha$ for every ordinal $\alpha$. The continuum hypothesis is the special case for the ordinal $\alpha = 1$. GCH was first suggested by Philip Jourdain. For the early history of GCH, see Moore.
Like CH, GCH is also independent of ZFC, but Sierpiński proved that ZF + GCH implies the axiom of choice (AC) (and therefore the negation of the axiom of determinacy, AD), so choice and GCH are not independent in ZF; there are no models of ZF in which GCH holds and AC fails. To prove this, Sierpiński showed GCH implies that every cardinality n is smaller than some aleph number, and thus can be ordered. This is done by showing that n is smaller than which is smaller than its own Hartogs number—this uses the equality ; for the full proof, see Gillman.
Kurt Gödel showed that GCH is a consequence of ZF + V=L (the axiom that every set is constructible relative to the ordinals), and is therefore consistent with ZFC. As GCH implies CH, Cohen's model in which CH fails is a model in which GCH fails, and thus GCH is not provable from ZFC. W. B. Easton used the method of forcing developed by Cohen to prove Easton's theorem, which shows it is consistent with ZFC for arbitrarily large cardinals $\aleph_\alpha$ to fail to satisfy $2^{\aleph_\alpha} = \aleph_{\alpha+1}$. Much later, Foreman and Woodin proved that (assuming the consistency of very large cardinals) it is consistent that $2^{\kappa} > \kappa^{+}$ holds for every infinite cardinal $\kappa$. Later Woodin extended this by showing the consistency of $2^{\kappa} = \kappa^{++}$ for every $\kappa$. Carmi Merimovich showed that, for each $n \ge 1$, it is consistent with ZFC that for each infinite cardinal $\kappa$, $2^{\kappa}$ is the $n$th successor of $\kappa$ (assuming the consistency of some large cardinal axioms). On the other hand, László Patai proved that if $\gamma$ is an ordinal and for each infinite cardinal $\kappa$, $2^{\kappa}$ is the $\gamma$th successor of $\kappa$, then $\gamma$ is finite.
For any infinite sets $A$ and $B$, if there is an injection from $A$ to $B$ then there is an injection from subsets of $A$ to subsets of $B$. Thus for any infinite cardinals $\kappa$ and $\lambda$, $\kappa < \lambda \implies 2^{\kappa} \le 2^{\lambda}$. If $\kappa$ and $\lambda$ are finite, the stronger inequality $\kappa < \lambda \implies 2^{\kappa} < 2^{\lambda}$ holds. GCH implies that this strict, stronger inequality holds for infinite cardinals as well as finite cardinals.
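The reasoning behind the last claim can be made explicit (a short derivation added for clarity, assuming the axiom of choice): if $\kappa < \lambda$ are infinite cardinals, GCH turns each power set into a successor, so

```latex
2^{\kappa} \;=\; \kappa^{+} \;\le\; \lambda \;<\; \lambda^{+} \;=\; 2^{\lambda},
\qquad\text{hence}\qquad \kappa < \lambda \implies 2^{\kappa} < 2^{\lambda}.
```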
Implications of GCH for cardinal exponentiation
Although the generalized continuum hypothesis refers directly only to cardinal exponentiation with 2 as the base, one can deduce from it the values of cardinal exponentiation $\aleph_\alpha^{\aleph_\beta}$ in all cases. GCH implies that for ordinals $\alpha$ and $\beta$:
$\aleph_\alpha^{\aleph_\beta} = \aleph_{\beta+1}$ when $\alpha \le \beta+1$;
$\aleph_\alpha^{\aleph_\beta} = \aleph_\alpha$ when $\beta+1 < \alpha$ and $\aleph_\beta < \operatorname{cf}(\aleph_\alpha)$, where cf is the cofinality operation; and
$\aleph_\alpha^{\aleph_\beta} = \aleph_{\alpha+1}$ when $\beta+1 < \alpha$ and $\aleph_\beta \ge \operatorname{cf}(\aleph_\alpha)$.
The first equality (when $\alpha \le \beta+1$) follows from:
$\aleph_\alpha^{\aleph_\beta} \le \aleph_{\beta+1}^{\aleph_\beta} = (2^{\aleph_\beta})^{\aleph_\beta} = 2^{\aleph_\beta \cdot \aleph_\beta} = 2^{\aleph_\beta} = \aleph_{\beta+1}$
while:
$\aleph_{\beta+1} = 2^{\aleph_\beta} \le \aleph_\alpha^{\aleph_\beta}$
The third equality (when $\beta+1 < \alpha$ and $\aleph_\beta \ge \operatorname{cf}(\aleph_\alpha)$) follows from:
$\aleph_\alpha^{\aleph_\beta} \ge \aleph_\alpha^{\operatorname{cf}(\aleph_\alpha)} > \aleph_\alpha$
by König's theorem, while:
$\aleph_\alpha^{\aleph_\beta} \le (2^{\aleph_\alpha})^{\aleph_\beta} = 2^{\aleph_\alpha \cdot \aleph_\beta} = 2^{\aleph_\alpha} = \aleph_{\alpha+1}$
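For a concrete illustration of these rules (a worked example added here, not part of the source): take $\alpha = 5$ and $\beta = 2$. Since $\aleph_5$ is a successor cardinal it is regular, so $\operatorname{cf}(\aleph_5) = \aleph_5 > \aleph_2$; the second rule then gives one value, and swapping the roles of base and exponent the first rule gives the other.

```latex
\aleph_5^{\aleph_2} = \aleph_5
\quad\text{(since } \beta+1 = 3 < 5 = \alpha \text{ and } \aleph_2 < \operatorname{cf}(\aleph_5)\text{)},
\qquad
\aleph_2^{\aleph_5} = \aleph_6
\quad\text{(since } \alpha = 2 \le \beta+1 = 6\text{)}.
```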
| Mathematics | Set theory | null |
5715 | https://en.wikipedia.org/wiki/Cryptanalysis | Cryptanalysis | Cryptanalysis (from the Greek kryptós, "hidden", and analýein, "to analyze") refers to the process of analyzing information systems in order to understand hidden aspects of the systems. Cryptanalysis is used to breach cryptographic security systems and gain access to the contents of encrypted messages, even if the cryptographic key is unknown.
In addition to mathematical analysis of cryptographic algorithms, cryptanalysis includes the study of side-channel attacks that do not target weaknesses in the cryptographic algorithms themselves, but instead exploit weaknesses in their implementation.
Even though the goal has been the same, the methods and techniques of cryptanalysis have changed drastically through the history of cryptography, adapting to increasing cryptographic complexity, ranging from the pen-and-paper methods of the past, through machines like the British Bombes and Colossus computers at Bletchley Park in World War II, to the mathematically advanced computerized schemes of the present. Methods for breaking modern cryptosystems often involve solving carefully constructed problems in pure mathematics, the best-known being integer factorization.
Overview
In encryption, confidential information (called the "plaintext") is sent securely to a recipient by the sender first converting it into an unreadable form ("ciphertext") using an encryption algorithm. The ciphertext is sent through an insecure channel to the recipient. The recipient decrypts the ciphertext by applying an inverse decryption algorithm, recovering the plaintext. To decrypt the ciphertext, the recipient requires a secret knowledge from the sender, usually a string of letters, numbers, or bits, called a cryptographic key. The concept is that even if an unauthorized person gets access to the ciphertext during transmission, without the secret key they cannot convert it back to plaintext.
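To make this vocabulary concrete, the sketch below shows a toy symmetric cipher in which the sender and recipient share a secret key. It is purely illustrative (a repeating-key XOR, which is easy to break) and is not an algorithm discussed in this article.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte of the data with the repeating key.

    Because XOR is its own inverse, the same function both encrypts and decrypts.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"SECRET"                           # shared secret key
plaintext = b"attack at dawn"
ciphertext = xor_cipher(plaintext, key)   # sent over the insecure channel
recovered = xor_cipher(ciphertext, key)   # recipient applies the inverse (same) operation

print(ciphertext.hex())
print(recovered)                          # b'attack at dawn'
```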
Encryption has been used throughout history to send important military, diplomatic and commercial messages, and today is very widely used in computer networking to protect email and internet communication.
The goal of cryptanalysis is for a third party, a cryptanalyst, to gain as much information as possible about the original ("plaintext"), attempting to "break" the encryption to read the ciphertext and to learn the secret key so that future messages can be decrypted and read. A mathematical technique to do this is called a cryptographic attack. Cryptographic attacks can be characterized in a number of ways:
Amount of information available to the attacker
Cryptanalytical attacks can be classified based on what type of information the attacker has available. As a basic starting point it is normally assumed that, for the purposes of analysis, the general algorithm is known; this is Shannon's Maxim "the enemy knows the system" – in its turn, equivalent to Kerckhoffs's principle. This is a reasonable assumption in practice – throughout history, there are countless examples of secret algorithms falling into wider knowledge, variously through espionage, betrayal and reverse engineering. (And on occasion, ciphers have been broken through pure deduction; for example, the German Lorenz cipher and the Japanese Purple code, and a variety of classical schemes):
Ciphertext-only: the cryptanalyst has access only to a collection of ciphertexts or codetexts.
Known-plaintext: the attacker has a set of ciphertexts to which they know the corresponding plaintext.
Chosen-plaintext (chosen-ciphertext): the attacker can obtain the ciphertexts (plaintexts) corresponding to an arbitrary set of plaintexts (ciphertexts) of their own choosing.
Adaptive chosen-plaintext: like a chosen-plaintext attack, except the attacker can choose subsequent plaintexts based on information learned from previous encryptions, similarly to the Adaptive chosen ciphertext attack.
Related-key attack: like a chosen-plaintext attack, except the attacker can obtain ciphertexts encrypted under two different keys. The keys are unknown, but the relationship between them is known; for example, two keys that differ in one bit.
Computational resources required
Attacks can also be characterised by the resources they require. Those resources include:
Time – the number of computation steps (e.g., test encryptions) which must be performed.
Memory – the amount of storage required to perform the attack.
Data – the quantity and type of plaintexts and ciphertexts required for a particular approach.
It is sometimes difficult to predict these quantities precisely, especially when the attack is not practical to actually implement for testing. But academic cryptanalysts tend to provide at least the estimated order of magnitude of their attacks' difficulty, saying, for example, "SHA-1 collisions now 2^52."
Bruce Schneier notes that even computationally impractical attacks can be considered breaks: "Breaking a cipher simply means finding a weakness in the cipher that can be exploited with a complexity less than brute force. Never mind that brute-force might require 2^128 encryptions; an attack requiring 2^110 encryptions would be considered a break...simply put, a break can just be a certificational weakness: evidence that the cipher does not perform as advertised."
Partial breaks
The results of cryptanalysis can also vary in usefulness. Cryptographer Lars Knudsen (1998) classified various types of attack on block ciphers according to the amount and quality of secret information that was discovered:
Total break – the attacker deduces the secret key.
Global deduction – the attacker discovers a functionally equivalent algorithm for encryption and decryption, but without learning the key.
Instance (local) deduction – the attacker discovers additional plaintexts (or ciphertexts) not previously known.
Information deduction – the attacker gains some Shannon information about plaintexts (or ciphertexts) not previously known.
Distinguishing algorithm – the attacker can distinguish the cipher from a random permutation.
Academic attacks are often against weakened versions of a cryptosystem, such as a block cipher or hash function with some rounds removed. Many, but not all, attacks become exponentially more difficult to execute as rounds are added to a cryptosystem, so it is possible for the full cryptosystem to be strong even though reduced-round variants are weak. Nonetheless, partial breaks that come close to breaking the original cryptosystem may mean that a full break will follow; the successful attacks on DES, MD5, and SHA-1 were all preceded by attacks on weakened versions.
In academic cryptography, a weakness or a break in a scheme is usually defined quite conservatively: it might require impractical amounts of time, memory, or known plaintexts. It also might require the attacker be able to do things many real-world attackers can't: for example, the attacker may need to choose particular plaintexts to be encrypted or even to ask for plaintexts to be encrypted using several keys related to the secret key. Furthermore, it might only reveal a small amount of information, enough to prove the cryptosystem imperfect but too little to be useful to real-world attackers. Finally, an attack might only apply to a weakened version of cryptographic tools, like a reduced-round block cipher, as a step towards breaking the full system.
History
Cryptanalysis has coevolved with cryptography, and the contest can be traced through the history of cryptography—new ciphers being designed to replace old broken designs, and new cryptanalytic techniques invented to crack the improved schemes. In practice, they are viewed as two sides of the same coin: secure cryptography requires design against possible cryptanalysis.
Classical ciphers
Although the actual word "cryptanalysis" is relatively recent (it was coined by William Friedman in 1920), methods for breaking codes and ciphers are much older. David Kahn notes in The Codebreakers that Arab scholars were the first people to systematically document cryptanalytic methods.
The first known recorded explanation of cryptanalysis was given by Al-Kindi (c. 801–873, also known as "Alkindus" in Europe), a 9th-century Arab polymath, in Risalah fi Istikhraj al-Mu'amma (A Manuscript on Deciphering Cryptographic Messages). This treatise contains the first description of the method of frequency analysis. Al-Kindi is thus regarded as the first codebreaker in history. His breakthrough work was influenced by Al-Khalil (717–786), who wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels.
Frequency analysis is the basic tool for breaking most classical ciphers. In natural languages, certain letters of the alphabet appear more often than others; in English, "E" is likely to be the most common letter in any sample of plaintext. Similarly, the digraph "TH" is the most likely pair of letters in English, and so on. Frequency analysis relies on a cipher failing to hide these statistics. For example, in a simple substitution cipher (where each letter is simply replaced with another), the most frequent letter in the ciphertext would be a likely candidate for "E". Frequency analysis of such a cipher is therefore relatively easy, provided that the ciphertext is long enough to give a reasonably representative count of the letters of the alphabet that it contains.
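To make the idea concrete, here is a minimal Python sketch that guesses the key of a Caesar-style shift cipher purely from letter statistics; the function name is illustrative, and the heuristic assumes English plaintext.

from collections import Counter

def guess_caesar_shift(ciphertext: str) -> int:
    # Assume the most frequent ciphertext letter stands for 'E',
    # the most common letter in English plaintext.
    letters = [ch for ch in ciphertext.upper() if ch.isalpha()]
    most_common = Counter(letters).most_common(1)[0][0]
    return (ord(most_common) - ord('E')) % 26

The heuristic only works when the ciphertext is long enough for its letter counts to resemble typical English, which is exactly the caveat noted above; in a short message some letter other than 'E' may dominate.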
Al-Kindi's invention of the frequency analysis technique for breaking monoalphabetic substitution ciphers was the most significant cryptanalytic advance until World War II. Al-Kindi's Risalah fi Istikhraj al-Mu'amma described the first cryptanalytic techniques, including some for polyalphabetic ciphers, cipher classification, Arabic phonetics and syntax, and most importantly, gave the first descriptions of frequency analysis. He also covered methods of encipherment, cryptanalysis of certain encipherments, and statistical analysis of letters and letter combinations in Arabic. An important contribution of Ibn Adlan (1187–1268) was on sample size for use of frequency analysis.
In Europe, Italian scholar Giambattista della Porta (1535–1615) was the author of a seminal work on cryptanalysis, De Furtivis Literarum Notis.
Successful cryptanalysis has undoubtedly influenced history; the ability to read the presumed-secret thoughts and plans of others can be a decisive advantage. For example, in England in 1587, Mary, Queen of Scots was tried and executed for treason as a result of her involvement in three plots to assassinate Elizabeth I of England. The plans came to light after her coded correspondence with fellow conspirators was deciphered by Thomas Phelippes.
In Europe during the 15th and 16th centuries, the idea of a polyalphabetic substitution cipher was developed, among others by the French diplomat Blaise de Vigenère (1523–96). For some three centuries, the Vigenère cipher, which uses a repeating key to select different encryption alphabets in rotation, was considered to be completely secure (le chiffre indéchiffrable—"the indecipherable cipher"). Nevertheless, Charles Babbage (1791–1871) and later, independently, Friedrich Kasiski (1805–81) succeeded in breaking this cipher. During World War I, inventors in several countries developed rotor cipher machines such as Arthur Scherbius' Enigma, in an attempt to minimise the repetition that had been exploited to break the Vigenère system.
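A minimal sketch of the Vigenère scheme itself may help here (illustrative Python, not a historical reconstruction): each plaintext letter is shifted by the alphabet position of the corresponding letter of a repeating key.

def vigenere_encrypt(plaintext: str, key: str) -> str:
    # The i-th letter is shifted by the i-th key letter, the key repeating
    # as often as needed; non-letters pass through unchanged.
    out = []
    i = 0
    for ch in plaintext.upper():
        if ch.isalpha():
            shift = ord(key[i % len(key)].upper()) - ord('A')
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
            i += 1
        else:
            out.append(ch)
    return ''.join(out)

The repetition of the key is precisely the weakness Babbage and Kasiski exploited: identical plaintext fragments enciphered at the same key position yield identical ciphertext fragments, and the distances between such repeats are multiples of the key length.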
Ciphers from World War I and World War II
In World War I, the breaking of the Zimmermann Telegram was instrumental in bringing the United States into the war. In World War II, the Allies benefitted enormously from their joint success in cryptanalysis of the German ciphers – including the Enigma machine and the Lorenz cipher – and Japanese ciphers, particularly 'Purple' and JN-25. 'Ultra' intelligence has been credited with everything from shortening the European war by up to two years to determining its eventual result. The war in the Pacific was similarly helped by 'Magic' intelligence.
Cryptanalysis of enemy messages played a significant part in the Allied victory in World War II. F. W. Winterbotham quoted the western Supreme Allied Commander, Dwight D. Eisenhower, at the war's end as describing Ultra intelligence as having been "decisive" to Allied victory. Sir Harry Hinsley, official historian of British Intelligence in World War II, made a similar assessment about Ultra, saying that it shortened the war "by not less than two years and probably by four years"; moreover, he said that in the absence of Ultra, it is uncertain how the war would have ended.
In practice, frequency analysis relies as much on linguistic knowledge as it does on statistics, but as ciphers became more complex, mathematics became more important in cryptanalysis. This change was particularly evident before and during World War II, where efforts to crack Axis ciphers required new levels of mathematical sophistication. Moreover, automation was first applied to cryptanalysis in that era with the Polish Bomba device, the British Bombe, the use of punched card equipment, and in the Colossus computers – the first electronic digital computers to be controlled by a program.
Indicator
With reciprocal machine ciphers such as the Lorenz cipher and the Enigma machine used by Nazi Germany during World War II, each message had its own key. Usually, the transmitting operator informed the receiving operator of this message key by transmitting some plaintext and/or ciphertext before the enciphered message. This is termed the indicator, as it indicates to the receiving operator how to set his machine to decipher the message.
Poorly designed and implemented indicator systems allowed first Polish cryptographers and then the British cryptographers at Bletchley Park to break the Enigma cipher system. Similar poor indicator systems allowed the British to identify depths that led to the diagnosis of the Lorenz SZ40/42 cipher system, and the comprehensive breaking of its messages without the cryptanalysts seeing the cipher machine.
Depth
Sending two or more messages with the same key is an insecure process. To a cryptanalyst the messages are then said to be "in depth." This may be detected by the messages having the same indicator by which the sending operator informs the receiving operator about the key generator initial settings for the message.
Generally, the cryptanalyst may benefit from lining up identical enciphering operations among a set of messages. For example, the Vernam cipher enciphers by bit-for-bit combining plaintext with a long key using the "exclusive or" operator, which is also known as "modulo-2 addition" (symbolized by ⊕ ):
Plaintext ⊕ Key = Ciphertext
Deciphering combines the same key bits with the ciphertext to reconstruct the plaintext:
Ciphertext ⊕ Key = Plaintext
(In modulo-2 arithmetic, addition is the same as subtraction.) When two such ciphertexts are aligned in depth, combining them eliminates the common key, leaving just a combination of the two plaintexts:
Ciphertext1 ⊕ Ciphertext2 = Plaintext1 ⊕ Plaintext2
The individual plaintexts can then be worked out linguistically by trying probable words (or phrases), also known as "cribs," at various locations; a correct guess, when combined with the merged plaintext stream, produces intelligible text from the other plaintext component:
(Plaintext1 ⊕ Plaintext2) ⊕ Plaintext1 = Plaintext2
The recovered fragment of the second plaintext can often be extended in one or both directions, and the extra characters can be combined with the merged plaintext stream to extend the first plaintext. Working back and forth between the two plaintexts, using the intelligibility criterion to check guesses, the analyst may recover much or all of the original plaintexts. (With only two plaintexts in depth, the analyst may not know which one corresponds to which ciphertext, but in practice this is not a large problem.) When a recovered plaintext is then combined with its ciphertext, the key is revealed:
Plaintext1 ⊕ Ciphertext1 = Key
Knowledge of a key then allows the analyst to read other messages encrypted with the same key, and knowledge of a set of related keys may allow cryptanalysts to diagnose the system used for constructing them.
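The following Python sketch walks through the depth attack described above on a toy Vernam-style system. The key bytes and messages are invented for the example; a real one-time key would be random and, crucially, never reused.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Bitwise XOR (modulo-2 addition) of two byte strings,
    # truncated to the shorter of the two.
    return bytes(x ^ y for x, y in zip(a, b))

key = b"\x13\x37\x42\x99\x21\xaa\xbc\x01\x55\x10\x0e\x77\x63\x2f"
c1 = xor_bytes(b"MEET AT TEN AM", key)    # two messages enciphered
c2 = xor_bytes(b"MEET AT SIX PM", key)    # with the same key: a depth

merged = xor_bytes(c1, c2)                # the common key cancels: P1 xor P2
crib = b"MEET AT "                        # a probable phrase tried at position 0
assert xor_bytes(crib, merged) == b"MEET AT "    # reveals the other plaintext
# Once one full plaintext has been worked out, plaintext xor ciphertext
# yields the key, as in the equations above:
recovered_key = xor_bytes(c1, b"MEET AT TEN AM")
assert recovered_key == key

Only the leading characters are recovered directly by the crib; extending the guess in either direction, as described above, recovers the rest.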
Development of modern cryptography
Governments have long recognized the potential benefits of cryptanalysis for intelligence, both military and diplomatic, and established dedicated organizations devoted to breaking the codes and ciphers of other nations, for example, GCHQ and the NSA, organizations which are still very active today.
Even though computation was used to great effect in the cryptanalysis of the Lorenz cipher and other systems during World War II, it also made possible new methods of cryptography orders of magnitude more complex than ever before. Taken as a whole, modern cryptography has become much more impervious to cryptanalysis than the pen-and-paper systems of the past, and now seems to have the upper hand against pure cryptanalysis. The historian David Kahn has observed that, in this sense, traditional cryptanalysis is dead.
Kahn goes on to mention increased opportunities for interception, bugging, side channel attacks, and quantum computers as replacements for the traditional means of cryptanalysis. In 2010, former NSA technical director Brian Snow said that both academic and government cryptographers are "moving very slowly forward in a mature field."
However, any postmortems for cryptanalysis may be premature. While the effectiveness of cryptanalytic methods employed by intelligence agencies remains unknown, many serious attacks against both academic and practical cryptographic primitives have been published in the modern era of computer cryptography:
The block cipher Madryga, proposed in 1984 but not widely used, was found to be susceptible to ciphertext-only attacks in 1998.
FEAL-4, proposed as a replacement for the DES standard encryption algorithm but not widely used, was demolished by a spate of attacks from the academic community, many of which are entirely practical.
The A5/1, A5/2, CMEA, and DECT systems used in mobile and wireless phone technology can all be broken in hours, minutes or even in real-time using widely available computing equipment.
Brute-force keyspace search has broken some real-world ciphers and applications, including single-DES (see EFF DES cracker), 40-bit "export-strength" cryptography, and the DVD Content Scrambling System.
In 2001, Wired Equivalent Privacy (WEP), a protocol used to secure Wi-Fi wireless networks, was shown to be breakable in practice because of a weakness in the RC4 cipher and aspects of the WEP design that made related-key attacks practical. WEP was later replaced by Wi-Fi Protected Access.
In 2008, researchers conducted a proof-of-concept break of SSL using weaknesses in the MD5 hash function and certificate issuer practices that made it possible to exploit collision attacks on hash functions. The certificate issuers involved changed their practices to prevent the attack from being repeated.
Thus, while the best modern ciphers may be far more resistant to cryptanalysis than the Enigma, cryptanalysis and the broader field of information security remain quite active.
Symmetric ciphers
Boomerang attack
Brute-force attack
Davies' attack
Differential cryptanalysis
Harvest now, decrypt later
Impossible differential cryptanalysis
Improbable differential cryptanalysis
Integral cryptanalysis
Linear cryptanalysis
Meet-in-the-middle attack
Mod-n cryptanalysis
Related-key attack
Sandwich attack
Slide attack
XSL attack
Asymmetric ciphers
Asymmetric cryptography (or public-key cryptography) is cryptography that relies on using two (mathematically related) keys; one private, and one public. Such ciphers invariably rely on "hard" mathematical problems as the basis of their security, so an obvious point of attack is to develop methods for solving the problem. The security of two-key cryptography depends on mathematical questions in a way that single-key cryptography generally does not, and conversely links cryptanalysis to wider mathematical research in a new way.
Asymmetric schemes are designed around the (conjectured) difficulty of solving various mathematical problems. If an improved algorithm can be found to solve the problem, then the system is weakened. For example, the security of the Diffie–Hellman key exchange scheme depends on the difficulty of calculating the discrete logarithm. In 1983, Don Coppersmith found a faster way to find discrete logarithms (in certain groups), thereby requiring cryptographers to use larger groups (or different types of groups). RSA's security depends (in part) upon the difficulty of integer factorization – a breakthrough in factoring would impact the security of RSA.
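As a toy illustration of why the discrete logarithm matters, the Python sketch below runs a Diffie–Hellman exchange with parameters far too small for real security, chosen only so the arithmetic is visible; all the numbers are invented for the example.

# Toy Diffie-Hellman exchange; p, g and the secret exponents are illustrative.
p, g = 23, 5                      # public modulus and generator
a, b = 6, 15                      # Alice's and Bob's secret exponents
A = pow(g, a, p)                  # Alice publishes 5^6 mod 23 = 8
B = pow(g, b, p)                  # Bob publishes 5^15 mod 23 = 19
assert pow(B, a, p) == pow(A, b, p) == 2    # the shared secret

# The cryptanalyst's task is the discrete logarithm: recover a from (g, p, A).
# In a 23-element group brute force is instant; in a large, well-chosen group
# no known classical algorithm is fast enough.
recovered_a = next(x for x in range(1, p) if pow(g, x, p) == A)
assert recovered_a == a

An algorithmic improvement such as Coppersmith's effectively shrinks the work of this search, forcing the group (here, the size of p) to grow until the attack is again infeasible.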
In 1980, one could factor a difficult 50-digit number at an expense of 10^12 elementary computer operations. By 1984 the state of the art in factoring algorithms had advanced to a point where a 75-digit number could be factored in 10^12 operations. Advances in computing technology also meant that the operations could be performed much faster. Moore's law predicts that computer speeds will continue to increase. Factoring techniques may continue to improve as well, but will most likely depend on mathematical insight and creativity, neither of which has ever been successfully predictable. 150-digit numbers of the kind once used in RSA have been factored. The effort was greater than the above figures, but was not unreasonable on fast modern computers. By the start of the 21st century, 150-digit numbers were no longer considered a large enough key size for RSA. Numbers with several hundred digits were still considered too hard to factor in 2005, though methods will probably continue to improve over time, requiring key sizes to keep pace or other methods such as elliptic curve cryptography to be used.
Another distinguishing feature of asymmetric schemes is that, unlike attacks on symmetric cryptosystems, any cryptanalysis has the opportunity to make use of knowledge gained from the public key.
Attacking cryptographic hash systems
Birthday attack
Hash function security summary
Rainbow table
Side-channel attacks
Black-bag cryptanalysis
Man-in-the-middle attack
Power analysis
Replay attack
Rubber-hose cryptanalysis
Timing analysis
Quantum computing applications for cryptanalysis
Quantum computers, which are still in the early phases of research, have potential use in cryptanalysis. For example, Shor's algorithm could factor large numbers in polynomial time, in effect breaking some commonly used forms of public-key encryption.
By using Grover's algorithm on a quantum computer, brute-force key search can be made quadratically faster. However, this could be countered by doubling the key length.
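A standard back-of-the-envelope calculation shows why doubling suffices. Classical brute force on a $k$-bit key examines on the order of $2^k$ candidates, while Grover's algorithm needs only on the order of the square root of that:

$$\sqrt{2^{k}} = 2^{k/2}$$

So a 128-bit key, classically requiring about $2^{128}$ trials, falls to roughly $2^{64}$ quantum evaluations; moving to a 256-bit key restores the attacker's cost to about $2^{128}$.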
Coma

A coma is a deep state of prolonged unconsciousness in which a person cannot be awakened, fails to respond normally to painful stimuli, light, or sound, lacks a normal wake–sleep cycle and does not initiate voluntary actions. The person may experience respiratory and circulatory problems due to the body's inability to maintain normal bodily functions. People in a coma often require extensive medical care to maintain their health and prevent complications such as pneumonia or blood clots. Coma patients exhibit a complete absence of wakefulness and are unable to consciously feel, speak or move. Comas can be the result of natural causes, or can be medically induced.
Clinically, a coma can be defined as the consistent inability to follow a one-step command. For a patient to maintain consciousness, the components of wakefulness and awareness must be maintained. Wakefulness is a quantitative assessment of the degree of consciousness, whereas awareness is a qualitative assessment of the functions mediated by the cortex, including cognitive abilities such as attention, sensory perception, explicit memory, language, the execution of tasks, temporal and spatial orientation and reality judgment. Neurologically, consciousness is maintained by the activation of the cerebral cortex—the gray matter that forms the brain's outermost layer—and by the reticular activating system (RAS), a structure in the brainstem.
Etymology
The term 'coma', from the Greek koma, meaning deep sleep, had already been used in the Hippocratic corpus (Epidemica) and later by Galen (second century AD). Subsequently, it was hardly used in the known literature up to the middle of the 17th century. The term is found again in Thomas Willis' (1621–1675) influential De anima brutorum (1672), where lethargy (pathological sleep), 'coma' (heavy sleeping), carus (deprivation of the senses) and apoplexy (into which carus could turn and which he localized in the white matter) are mentioned. The term carus is also derived from Greek, where it can be found in the roots of several words meaning soporific or sleepy. It can still be found in the root of the term 'carotid'. Thomas Sydenham (1624–89) mentioned the term 'coma' in several cases of fever (Sydenham, 1685).
Signs and symptoms
General symptoms of a person in a comatose state are:
Inability to voluntarily open the eyes
A nonexistent sleep–wake cycle
Lack of response to physical (painful) or verbal stimuli
Depressed brainstem reflexes, such as pupils not responding to light
Abnormal, difficult, or irregular breathing, or no breathing at all when the coma was caused by cardiac arrest
Scores between 3 and 8 on the Glasgow Coma Scale
Causes
Many types of problems can cause a coma. Forty percent of comatose states result from drug poisoning. Certain drug use under certain conditions can damage or weaken the synaptic functioning in the ascending reticular activating system (ARAS) and keep the system from properly functioning to arouse the brain. Secondary effects of drugs, which include abnormal heart rate and blood pressure, as well as abnormal breathing and sweating, may also indirectly harm the functioning of the ARAS and lead to a coma. Given that drug poisoning is the cause for a large portion of patients in a coma, hospitals first test all comatose patients by observing pupil size and eye movement, through the vestibular–ocular reflex. (See Diagnosis below.)
The second most common cause of coma, which makes up about 25% of cases, is lack of oxygen, generally resulting from cardiac arrest. The central nervous system (CNS) requires a great deal of oxygen for its neurons. Oxygen deprivation in the brain, or cerebral hypoxia, causes sodium and calcium from outside of the neurons to decrease and intracellular calcium to increase, which harms neuron communication. Lack of oxygen in the brain also causes ATP exhaustion and cellular breakdown from cytoskeleton damage and nitric oxide production.
Twenty percent of comatose states result from an ischemic stroke, brain hemorrhage, or brain tumor. During a stroke, blood flow to part of the brain is restricted or blocked. An ischemic stroke, brain hemorrhage, or brain tumor may cause restriction of blood flow. Lack of blood to cells in the brain prevents oxygen from getting to the neurons, and consequently causes cells to become disrupted and die. As brain cells die, brain tissue continues to deteriorate, which may affect the functioning of the ARAS, causing unconsciousness and coma.
Comatose cases can also result from traumatic brain injury, excessive blood loss, malnutrition, hypothermia, hyperthermia, hyperammonemia, abnormal glucose levels, and many other biological disorders. Furthermore, studies show that 1 out of 8 patients with traumatic brain injury experience a comatose state.
Heart-related causes of coma include cardiac arrest, ventricular fibrillation, ventricular tachycardia, atrial fibrillation, myocardial infarction, heart failure, arrhythmia when severe, cardiogenic shock, myocarditis, and pericarditis. Respiratory arrest is the only lung condition to cause coma; many other lung conditions can cause a decreased level of consciousness without reaching coma.
Other causes of coma include severe or persistent seizures, kidney failure, liver failure, hyperglycemia, hypoglycemia, and infections involving the brain, like meningitis and encephalitis.
Pathophysiology
Injury to either or both of the cerebral cortex or the reticular activating system (RAS) is sufficient to cause a person to enter coma.
The cerebral cortex is the outer layer of neural tissue of the cerebrum of the brain. The cerebral cortex is composed of gray matter, which consists of the nuclei of neurons, whereas the inner portion of the cerebrum is composed of white matter, made up of the axons of neurons. White matter is responsible for perception, relay of the sensory input via the thalamic pathway, and many other neurological functions, including complex thinking.
The RAS, on the other hand, is a more primitive structure in the brainstem which includes the reticular formation (RF). The RAS has two tracts, the ascending and descending tract. The ascending tract, or ascending reticular activating system (ARAS), is made up of a system of acetylcholine-producing neurons, and works to arouse and wake up the brain. Arousal of the brain begins from the RF, through the thalamus, and then finally to the cerebral cortex. Any impairment in ARAS functioning (a neuronal dysfunction along the arousal pathway just described) prevents the body from being aware of its surroundings. Without the arousal and consciousness centers, the body cannot awaken, remaining in a comatose state.
The severity and mode of onset of coma depend on the underlying cause. There are two main subdivisions of a coma: structural and diffuse neuronal. A structural cause is produced by a mechanical force that brings about cellular damage, such as physical pressure or a blockage in neural transmission. By contrast, a diffuse cause is limited to aberrations of cellular function which fall under a metabolic or toxic subgroup. Toxin-induced comas are caused by extrinsic substances, whereas metabolic-induced comas are caused by intrinsic processes, such as body thermoregulation or ionic imbalances (e.g. sodium). For instance, severe hypoglycemia (low blood sugar) or hypercapnia (increased carbon dioxide levels in the blood) are examples of a metabolic diffuse neuronal dysfunction. Hypoglycemia or hypercapnia initially cause mild agitation and confusion, but progress to obtundation, stupor, and finally, complete unconsciousness. In contrast, coma resulting from a severe traumatic brain injury or subarachnoid hemorrhage can be instantaneous. The mode of onset may therefore be indicative of the underlying cause.
Structural and diffuse causes of coma are not isolated from one another, as one can lead to the other in some situations. For instance, coma induced by a diffuse metabolic process, such as hypoglycemia, can result in a structural coma if it is not resolved. Another example is if cerebral edema, a diffuse dysfunction, leads to ischemia of the brainstem, a structural issue, due to the blockage of the circulation in the brain.
Diagnosis
Although diagnosis of coma is simple, investigating the underlying cause of onset can be rather challenging. As such, after stabilizing the patient's airway, breathing and circulation (the basic ABCs), various diagnostic tests, such as physical examinations and imaging tools (CT scan, MRI, etc.), are employed to assess the underlying cause of the coma.
When an unconscious person enters a hospital, the hospital utilizes a series of diagnostic steps to identify the cause of unconsciousness. According to Young, the following steps should be taken when dealing with a patient possibly in a coma:
Perform a general examination and medical history check
Make sure the patient is in an actual comatose state and is not in a locked-in state or experiencing psychogenic unresponsiveness. Patients with locked-in syndrome present with voluntary movement of their eyes, whereas patients with psychogenic comas demonstrate active resistance to passive opening of the eyelids, with the eyelids closing abruptly and completely when the lifted upper eyelid is released (rather than slowly, asymmetrically and incompletely as seen in comas due to organic causes).
Find the site of the brain that may be causing coma (e.g., brainstem, back of brain...) and assess the severity of the coma with the Glasgow Coma Scale
Take blood work to see if drugs were involved or if it was a result of hypoventilation/hyperventilation
Check for levels of serum glucose, calcium, sodium, potassium, magnesium, phosphate, urea, and creatinine
Perform brain scans to observe any abnormal brain functioning using either CT or MRI scans
Continue to monitor brain waves and identify seizures of patient using EEGs
Initial evaluation
In the initial assessment of coma, it is common to gauge the level of consciousness on the AVPU (alert, vocal stimuli, painful stimuli, unresponsive) scale by observing the patient's spontaneous actions and assessing their response to vocal and painful stimuli. More elaborate scales, such as the Glasgow Coma Scale, quantify an individual's reactions such as eye opening, movement and verbal response in order to indicate their extent of brain injury. The patient's score can vary from a score of 3 (indicating severe brain injury or death) to 15 (indicating mild or no brain injury).
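The arithmetic of the Glasgow Coma Scale total is a simple sum of three component scores. The Python sketch below is illustrative only (the function name and example values are not from any clinical software), though the component ranges used are the standard ones.

def glasgow_coma_score(eye: int, verbal: int, motor: int) -> int:
    # Standard GCS component ranges: eye opening 1-4, verbal 1-5, motor 1-6.
    assert 1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6
    return eye + verbal + motor           # total runs from 3 to 15

score = glasgow_coma_score(eye=1, verbal=2, motor=4)   # = 7
in_coma_range = score <= 8                # totals of 3-8 are consistent with coma

The threshold in the last line matches the range given under Signs and symptoms above, where scores between 3 and 8 are listed as typical of a comatose state.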
In those with deep unconsciousness, there is a risk of asphyxiation as the control over the muscles in the face and throat is diminished. As a result, those presenting to a hospital with coma are typically assessed for this risk ("airway management"). If the risk of asphyxiation is deemed high, doctors may use various devices (such as an oropharyngeal airway, nasopharyngeal airway or endotracheal tube) to safeguard the airway.
Imaging and testing
Imaging encompasses computed tomography (CAT or CT) scan of the brain, or MRI for example, and is performed to identify specific causes of the coma, such as hemorrhage in the brain or herniation of the brain structures. Special tests such as an EEG can also show a lot about the activity level of the cortex such as semantic processing, presence of seizures, and are important available tools not only for the assessment of the cortical activity but also for predicting the likelihood of the patient's awakening. Autonomic responses such as the skin conductance response may also provide further insight on the patient's emotional processing.
In the treatment of traumatic brain injury (TBI), there are 4 examination methods that have proved useful: skull x-ray, angiography, computed tomography (CT), and magnetic resonance imaging (MRI). The skull x-ray can detect linear fractures, impression fractures (expression fractures) and burst fractures. Angiography is used on rare occasions for TBIs i.e. when there is suspicion of an aneurysm, carotid sinus fistula, traumatic vascular occlusion, and vascular dissection. A CT can detect changes in density between the brain tissue and hemorrhages like subdural and intracerebral hemorrhages. MRIs are not the first choice in emergencies because of the long scanning times and because fractures cannot be detected as well as CT. MRIs are used for the imaging of soft tissues and lesions in the posterior fossa which cannot be found with the use of CT.
Body movements
Brainstem and cortical function are assessed through special reflex tests such as the oculocephalic reflex test (doll's eyes test), oculovestibular reflex test (cold caloric test), corneal reflex, and the gag reflex. Reflexes are a good indicator of which cranial nerves are still intact and functioning, and are an important part of the physical exam. Due to the unconscious status of the patient, only a limited number of the nerves can be assessed. These include cranial nerve 2 (CN II), 3 (CN III), 5 (CN V), 7 (CN VII), and cranial nerves 9 and 10 (CN IX, CN X).
Assessment of posture and physique is the next step. It involves general observation about the patient's positioning. There are often two stereotypical postures seen in comatose patients. Decorticate posturing is a stereotypical posturing in which the patient has arms flexed at the elbow, and arms adducted toward the body, with both legs extended. Decerebrate posturing is a stereotypical posturing in which the legs are similarly extended (stretched), but the arms are also stretched (extended at the elbow). The posturing is critical since it indicates where the damage is in the central nervous system. A decorticate posturing indicates a lesion (a point of damage) at or above the red nucleus, whereas a decerebrate posturing indicates a lesion at or below the red nucleus. In other words, a decorticate lesion is closer to the cortex, as opposed to a decerebrate posturing which indicates that the lesion is closer to the brainstem.
Pupil size
Pupil assessment is often a critical portion of a comatose examination, as it can give information as to the cause of the coma; clinical guidelines tabulate common pupil findings and their possible interpretations.
Severity
A coma can be classified as (1) supratentorial (above the tentorium cerebelli), (2) infratentorial (below the tentorium cerebelli), (3) metabolic or (4) diffuse. This classification depends merely on the position of the original damage that caused the coma, and does not correlate with severity or prognosis. The severity of coma impairment, however, is categorized into several levels. Patients may or may not progress through these levels. In the first level, the brain's responsiveness lessens, normal reflexes are lost, the patient no longer responds to pain and cannot hear.
The Rancho Los Amigos Scale is a complex scale that has eight separate levels, and is often used in the first few weeks or months of coma while the patient is under closer observation, and when shifts between levels are more frequent.
Treatment
Treatment for people in a coma will depend on the severity and cause of the comatose state. Upon admittance to an emergency department, coma patients will usually be placed in an Intensive Care Unit (ICU) immediately, where maintenance of the patient's respiration and circulation becomes a first priority. Stability of their respiration and circulation is sustained through the use of intubation, ventilation, administration of intravenous fluids or blood and other supportive care as needed.
Continued care
Once a patient is stable and no longer in immediate danger, there may be a shift of priority from stabilizing the patient to maintaining the state of their physical wellbeing. Moving patients every 2–3 hours by turning them side to side is crucial to avoiding bed sores as a result of being confined to a bed. Moving patients through the use of physical therapy also aids in preventing atelectasis, contractures or other orthopedic deformities which would interfere with a coma patient's recovery.
Pneumonia is also common in coma patients due to their inability to swallow which can then lead to aspiration. A coma patient's lack of a gag reflex and use of a feeding tube can result in food, drink or other solid organic matter being lodged within their lower respiratory tract (from the trachea to the lungs). This trapping of matter in their lower respiratory tract can ultimately lead to infection, resulting in aspiration pneumonia.
Coma patients may also deal with restlessness or seizures. As such, soft cloth restraints may be used to prevent them from pulling on tubes or dressings and side rails on the bed should be kept up to prevent patients from falling.
Caregivers
Coma elicits a wide variety of emotional reactions from the family members of affected patients, as well as from the primary caregivers taking care of the patients. Research has shown that the severity of the injury causing the coma has no significant impact on these reactions compared with how much time has passed since the injury occurred. Common reactions, such as desperation, anger, frustration, and denial, are possible. The focus of patient care should be on creating an amicable relationship with the family members or dependents of a comatose patient, as well as creating a rapport with the medical staff. Although the primary caregiver carries most of the responsibility, secondary caregivers can play a supporting role to temporarily relieve the primary caregiver's burden of tasks.
Prognosis
Comas can last from several days to, in particularly extreme cases, years. Some patients eventually gradually come out of the coma, some progress to a vegetative state or a minimally conscious state, and others die. Some patients who have entered a vegetative state go on to regain a degree of awareness; and in some cases may remain in vegetative state for years or even decades such as the Aruna Shanbaug case or Edwarda O'Bara.
Predicted chances of recovery will differ depending on which techniques were used to measure the patient's severity of neurological damage. Predictions of recovery are based on statistical rates, expressed as the level of chance the person has of recovering. Time is the best general predictor of a chance of recovery. For example, after four months of coma caused by brain damage, the chance of partial recovery is less than 15%, and the chance of full recovery is very low.
The outcome for coma and vegetative state depends on the cause, location, severity and extent of neurological damage. A deeper coma alone does not necessarily mean a slimmer chance of recovery; similarly, a milder coma does not indicate a higher chance of recovery. The most common cause of death for a person in a vegetative state is secondary infection such as pneumonia, which can occur in patients who lie still for extended periods.
Recovery
People may emerge from a coma with a combination of physical, intellectual, and psychological difficulties that need special attention. It is common for coma patients to awaken in a profound state of confusion and experience dysarthria, the inability to articulate any speech. Recovery is usually gradual. In the first days, the patient may only awaken for a few minutes, with increased duration of wakefulness as their recovery progresses, and they may eventually recover full awareness. That said, some patients may never progress beyond very basic responses.
There are reports of people coming out of a coma after long periods of time. After 19 years in a minimally conscious state, Terry Wallis spontaneously began speaking and regained awareness of his surroundings.
A man with brain damage and trapped in a coma-like state for six years was brought back to consciousness in 2003 by doctors who planted electrodes deep inside his brain. The method, called deep brain stimulation (DBS), successfully roused communication, complex movement and eating ability in the man with a traumatic brain injury. His injuries left him in a minimally conscious state, a condition akin to a coma but characterized by occasional, but brief, evidence of environmental and self-awareness that coma patients lack.
Society and culture
Research by Eelco Wijdicks on the depiction of comas in movies was published in Neurology in May 2006. Wijdicks studied 30 films (made between 1970 and 2004) that portrayed actors in prolonged comas, and he concluded that only two films accurately depicted the state of a coma patient and the agony of waiting for a patient to awaken: Reversal of Fortune (1990) and The Dreamlife of Angels (1998). The remaining 28 were criticized for portraying miraculous awakenings with no lasting side effects, unrealistic depictions of treatments and equipment required, and comatose patients remaining muscular and tanned.
Bioethics
A person in a coma is said to be in an unconscious state. Perspectives on personhood, identity and consciousness come into play when discussing the metaphysical and bioethical views on comas.
It has been argued that unawareness should be just as ethically relevant and important as a state of awareness and that there should be metaphysical support of unawareness as a state.
In the ethical discussions about disorders of consciousness (DOCs), two abilities are usually considered as central: experiencing well-being and having interest. Well-being can broadly be understood as the positive effect related to what makes life good (according to specific standards) for the individual in question. The only condition for well-being broadly considered is the ability to experience its 'positiveness'. That said, because experiencing positiveness is a basic emotional process with phylogenetic roots, it is likely to occur at a completely unaware level and, therefore, introduces the idea of an unconscious well-being. As such, the ability of having interests is crucial for describing two abilities which those with comas are deficient in. Having an interest in a certain domain can be understood as having a stake in something that can affect what makes our life good in that domain. An interest is what directly and immediately improves life from a certain point of view or within a particular domain, or greatly increases the likelihood of life improvement enabling the subject to realize some good. That said, sensitivity to reward signals is a fundamental element in the learning process, both consciously and unconsciously. Moreover, the unconscious brain is able to interact with its surroundings in a meaningful way and to produce meaningful information processing of stimuli coming from the external environment, including other people.
According to Hawkins, "1. A life is good if the subject is able to value, or more basically if the subject is able to care. Importantly, Hawkins stresses that caring has no need for cognitive commitment, i.e. for high-level cognitive activities: it requires being able to distinguish something, track it for a while, recognize it over time, and have certain emotional dispositions vis-à-vis something. 2. A life is good if the subject has the capacity for relationship with others, i.e. for meaningfully interacting with other people." This suggests that unawareness may (at least partly) fulfill both conditions identified by Hawkins for life to be good for a subject, thus making the unconscious ethically relevant.
Cervix

The cervix (plural: cervices) or cervix uteri is a dynamic fibromuscular sexual organ of the female reproductive system that connects the vagina with the uterine cavity. The human female cervix has been documented anatomically since at least the time of Hippocrates, over 2,000 years ago. The cervix is approximately 4 cm long with a diameter of approximately 3 cm and tends to be described as a cylindrical shape, although the front and back walls of the cervix are contiguous. The size of the cervix changes throughout a woman's life cycle. For example, during the fertile years of a woman's reproductive cycle, females tend to have a larger cervix in comparison to postmenopausal females; likewise, females who have produced offspring have a larger sized cervix than females who have not produced offspring.
In relation to the vagina, the part of the cervix that opens to the uterus is called the internal os and the opening of the cervix in the vagina is called the external os, while between them is the conduit of the cervix, commonly referred to as the cervical canal. The lower part of the cervix, known as the vaginal portion of the cervix (or ectocervix), bulges into the top of the vagina. The endocervix borders the uterus. The cervical canal has at least two types of epithelia (lining): the endocervical lining is glandular epithelium that lines the endocervix with a single layer of column-shaped cells, while the ectocervical part of the conduit contains squamous epithelium, which lines the conduit with multiple layers of cells topped with flat cells. These two linings converge at the squamocolumnar junction (SCJ). This junction changes location dynamically throughout a woman's life.
Cervical infections with the human papillomavirus (HPV) can cause changes in the epithelium, which can lead to cancer of the cervix. Cervical cytology tests can detect cervical cancer and its precursors, and enable early successful treatment. Ways to avoid HPV include avoiding heterosexual sex, utilising penile condoms, and receiving the HPV vaccination. HPV vaccines, developed in the early 21st century, reduce the risk of developing cervical cancer by preventing infections from the main cancer-causing strains of HPV.
The cervical canal is the conduit that allows blood to flow from a woman's uterus and out through her vagina at menstruation, which occurs in the absence of pregnancy.
Several methods of contraception aim to prevent fertilization by blocking this conduit, including cervical caps and cervical diaphragms, preventing the passage of sperm through the cervix. Other approaches aiming to prevent conception include methods that observe cervical mucus, such as the Creighton Model and Billings method. Cervical mucus changes in consistency throughout a woman's menstrual cycle, which may signal ovulation.
During vaginal childbirth, the cervix must flatten and dilate to allow the foetus to progress along the birth canal. Midwives and doctors use the extent of the dilation of the cervix to assist decision-making during childbirth.
Structure
The cervix is part of the female reproductive system. Around 4 cm in length, it is the lower narrower part of the uterus continuous above with the broader upper part—or body—of the uterus. The lower end of the cervix bulges through the anterior wall of the vagina, and is referred to as the vaginal portion of cervix (or ectocervix) while the rest of the cervix above the vagina is called the supravaginal portion of cervix. A central canal, known as the cervical canal, runs along its length and connects the cavity of the body of the uterus with the lumen of the vagina. The openings are known as the internal os and external orifice of the uterus (or external os), respectively. The mucosa lining the cervical canal is known as the endocervix, and the mucosa covering the ectocervix is known as the exocervix. The cervix has an inner mucosal layer, a thick layer of smooth muscle, and posteriorly the supravaginal portion has a serosal covering consisting of connective tissue and overlying peritoneum.
In front of the upper part of the cervix lies the bladder, separated from it by cellular connective tissue known as parametrium, which also extends over the sides of the cervix. To the rear, the supravaginal cervix is covered by peritoneum, which runs onto the back of the vaginal wall and then turns upwards and onto the rectum, forming the recto-uterine pouch. The cervix is more tightly connected to surrounding structures than the rest of the uterus.
The cervical canal varies greatly in length and width between women or over the course of a woman's life, and it can measure 8 mm (0.3 inch) at its widest diameter in premenopausal adults. It is wider in the middle and narrower at each end. The anterior and posterior walls of the canal each have a vertical fold, from which ridges run diagonally upwards and laterally. These are known as palmate folds, due to their resemblance to a palm leaf. The anterior and posterior ridges are arranged in such a way that they interlock with each other and close the canal. They are often effaced after pregnancy.
The ectocervix (also known as the vaginal portion of the cervix) has a convex, elliptical shape and projects into the vagina between the anterior and posterior vaginal fornices. On the rounded part of the ectocervix is a small, depressed external opening, connecting the cervix with the vagina. The size and shape of the ectocervix and the external opening (external os) can vary according to age, hormonal state, and whether childbirth has taken place. In women who have not had a vaginal delivery, the external opening is small and circular, and in women who have had a vaginal delivery, it is slit-like. On average, the ectocervix is long and wide.
Blood is supplied to the cervix by the descending branch of the uterine artery and drains into the uterine vein. The pelvic splanchnic nerves, emerging as S2–S3, transmit the sensation of pain from the cervix to the brain. These nerves travel along the uterosacral ligaments, which pass from the uterus to the anterior sacrum.
Three channels facilitate lymphatic drainage from the cervix. The anterior and lateral cervix drains to nodes along the uterine arteries, travelling along the cardinal ligaments at the base of the broad ligament to the external iliac lymph nodes and ultimately the paraaortic lymph nodes. The posterior and lateral cervix drains along the uterine arteries to the internal iliac lymph nodes and ultimately the paraaortic lymph nodes, and the posterior section of the cervix drains to the obturator and presacral lymph nodes. However, there are variations as lymphatic drainage from the cervix travels to different sets of pelvic nodes in some people. This has implications in scanning nodes for involvement in cervical cancer.
After menstruation and directly under the influence of estrogen, the cervix undergoes a series of changes in position and texture. During most of the menstrual cycle, the cervix remains firm, and is positioned low and closed. However, as ovulation approaches, the cervix becomes softer and rises to open in response to the higher levels of estrogen present. These changes are also accompanied by changes in cervical mucus, described below.
Development
As a component of the female reproductive system, the cervix is derived from the two paramesonephric ducts (also called Müllerian ducts), which develop around the sixth week of embryogenesis. During development, the outer parts of the two ducts fuse, forming a single urogenital canal that will become the vagina, cervix and uterus. The cervix grows in size at a smaller rate than the body of the uterus, so the relative size of the cervix decreases over time: it is much larger than the body of the uterus in fetal life, twice as large during childhood, and reaches its adult size, smaller than the uterus, after puberty. Previously, it was thought that during fetal development, the original squamous epithelium of the cervix is derived from the urogenital sinus and the original columnar epithelium is derived from the paramesonephric duct. The point at which these two original epithelia meet is called the original squamocolumnar junction. New studies show, however, that all the cervical as well as large part of the vaginal epithelium are derived from Müllerian duct tissue and that phenotypic differences might be due to other causes.
Histology
The endocervical mucosa is about thick and lined with a single layer of columnar mucous cells. It contains numerous tubular mucous glands, which empty viscous alkaline mucus into the lumen. In contrast, the ectocervix is covered with nonkeratinized stratified squamous epithelium, which resembles the squamous epithelium lining the vagina. The junction between these two types of epithelia is called the squamocolumnar junction. Underlying both types of epithelium is a tough layer of collagen. The mucosa of the endocervix is not shed during menstruation. The cervix has more fibrous tissue, including collagen and elastin, than the rest of the uterus.
In prepubertal girls, the functional squamocolumnar junction is present just within the cervical canal. Upon entering puberty, due to hormonal influence, and during pregnancy, the columnar epithelium extends outward over the ectocervix as the cervix everts. Hence, this also causes the squamocolumnar junction to move outwards onto the vaginal portion of the cervix, where it is exposed to the acidic vaginal environment. The exposed columnar epithelium can undergo physiological metaplasia and change to tougher metaplastic squamous epithelium in days or weeks, which is very similar to the original squamous epithelium when mature. The new squamocolumnar junction is therefore internal to the original squamocolumnar junction, and the zone of unstable epithelium between the two junctions is called the transformation zone of the cervix. Histologically, the transformation zone is generally defined as surface squamous epithelium with surface columnar epithelium or stromal glands/crypts, or both.
After menopause, the uterine structures involute and the functional squamocolumnar junction moves into the cervical canal.
Nabothian cysts (or Nabothian follicles) form in the transformation zone where the lining of metaplastic epithelium has replaced mucous epithelium and caused a strangulation of the outlet of some of the mucous glands. A buildup of mucus in the glands forms Nabothian cysts, usually less than about in diameter, which are considered physiological rather than pathological. Both gland openings and Nabothian cysts are helpful to identify the transformation zone.
Function
Fertility
The cervical canal is a pathway through which sperm enter the uterus after being induced by estradiol after penile-vaginal intercourse, and in some forms of artificial insemination. Some sperm remain in cervical crypts, infoldings of the endocervix, which act as a reservoir, releasing sperm over several hours and maximising the chances of fertilisation. A theory states that cervical and uterine contractions during orgasm draw semen into the uterus. Although the "upsuck theory" has been generally accepted for some years, it has been disputed due to lack of evidence, small sample size, and methodological errors.
Some methods of fertility awareness, such as the Creighton model and the Billings method involve estimating a woman's periods of fertility and infertility by observing physiological changes in her body. Among these changes are several involving the quality of her cervical mucus: the sensation it causes at the vulva, its elasticity (Spinnbarkeit), its transparency, and the presence of ferning.
Cervical mucus
Several hundred glands in the endocervix produce 20–60 mg of cervical mucus a day, increasing to 600 mg around the time of ovulation. It is viscous because it contains large proteins known as mucins. The viscosity and water content varies during the menstrual cycle; mucus is composed of around 93% water, reaching 98% at midcycle. These changes allow it to function either as a barrier or a transport medium to spermatozoa. It contains electrolytes such as calcium, sodium, and potassium; organic components such as glucose, amino acids, and soluble proteins; trace elements including zinc, copper, iron, manganese, and selenium; free fatty acids; enzymes such as amylase; and prostaglandins. Its consistency is determined by the influence of the hormones estrogen and progesterone. At midcycle around the time of ovulation—a period of high estrogen levels— the mucus is thin and serous to allow sperm to enter the uterus and is more alkaline and hence more hospitable to sperm. It is also higher in electrolytes, which results in the "ferning" pattern that can be observed in drying mucus under low magnification; as the mucus dries, the salts crystallize, resembling the leaves of a fern. The mucus has a stretchy character described as Spinnbarkeit most prominent around the time of ovulation.
At other times in the cycle, the mucus is thick and more acidic due to the effects of progesterone. This "infertile" mucus acts as a barrier to keep sperm from entering the uterus. Women taking an oral contraceptive pill also have thick mucus from the effects of progesterone. Thick mucus also prevents pathogens from interfering with a nascent pregnancy.
A cervical mucus plug, called the operculum, forms inside the cervical canal during pregnancy. This provides a protective seal for the uterus against the entry of pathogens and against leakage of uterine fluids. The mucus plug is also known to have antibacterial properties. This plug is released as the cervix dilates, either during the first stage of childbirth or shortly before. It is visible as a blood-tinged mucous discharge.
Childbirth
The cervix plays a major role in childbirth. As the fetus descends within the uterus in preparation for birth, the presenting part, usually the head, rests on and is supported by the cervix. As labour progresses, the cervix becomes softer and shorter, begins to dilate, and withdraws to face the anterior of the body. The support the cervix provides to the fetal head starts to give way when the uterus begins its contractions. During childbirth, the cervix must dilate to a diameter of more than to accommodate the head of the fetus as it descends from the uterus to the vagina. In becoming wider, the cervix also becomes shorter, a phenomenon known as effacement.
Along with other factors, midwives and doctors use the extent of cervical dilation to assist decision-making during childbirth. Generally, the active first stage of labour, when the uterine contractions become strong and regular, begins when the cervical dilation is more than . The second stage of labour begins when the cervix has dilated to , which is regarded as full dilation, and is when active pushing and contractions move the baby along the birth canal, leading to its birth. The number of past vaginal deliveries strongly influences how rapidly the cervix is able to dilate in labour. The time taken for the cervix to dilate and efface is one factor used in reporting systems such as the Bishop score, used to recommend whether interventions such as a forceps delivery, induction, or Caesarean section should be used in childbirth.
Cervical incompetence is a condition in which the cervix shortens, through dilation and thinning, before term. Short cervical length is the strongest predictor of preterm birth.
Contraception
Several methods of contraception involve the cervix. Cervical diaphragms are reusable, firm-rimmed plastic devices inserted by a woman prior to intercourse that cover the cervix. Pressure against the walls of the vagina maintains the position of the diaphragm, and it acts as a physical barrier to prevent the entry of sperm into the uterus, preventing fertilisation. Cervical caps are a similar method, although they are smaller and adhere to the cervix by suction. Diaphragms and caps are often used in conjunction with spermicides. In one year, 12% of women using the diaphragm will undergo an unintended pregnancy, and with optimal use this falls to 6%. Efficacy rates are lower for the cap, with 18% of women undergoing an unintended pregnancy, and 10–13% with optimal use. Most types of progestogen-only pills are effective as a contraceptive because they thicken cervical mucus, making it difficult for sperm to pass along the cervical canal. They may also sometimes prevent ovulation. In contrast, contraceptive pills that contain both oestrogen and progesterone, the combined oral contraceptive pills, work mainly by preventing ovulation. They also thicken cervical mucus and thin the lining of the uterus, enhancing their effectiveness.
Clinical significance
Cancer
In 2008, cervical cancer was the third-most common cancer in women worldwide, with rates varying geographically from less than one to more than 50 cases per 100,000 women. It is a leading cause of cancer-related death in poor countries, where delayed diagnosis leading to poor outcomes is common. The introduction of routine screening has resulted in fewer cases of (and deaths from) cervical cancer; however, this has mainly taken place in developed countries. Most developing countries have limited or no screening, with 85% of the global burden occurring there.
Cervical cancer nearly always involves human papillomavirus (HPV) infection. HPV is a virus with numerous strains, several of which predispose to precancerous changes in the cervical epithelium, particularly in the transformation zone, which is the most common area for cervical cancer to start. HPV vaccines, such as Gardasil and Cervarix, reduce the incidence of cervical cancer by inoculating against the viral strains involved in cancer development.
Potentially precancerous changes in the cervix can be detected by cervical screening, using methods including a Pap smear (also called a cervical smear), in which epithelial cells are scraped from the surface of the cervix and examined under a microscope. The colposcope, an instrument used to see a magnified view of the cervix, was invented in 1925. The Pap smear was developed by Georgios Papanikolaou in 1928. A LEEP procedure using a heated loop of platinum to excise a patch of cervical tissue was developed by Aurel Babes in 1927. In some parts of the developed world, including the UK, the Pap test has been superseded by liquid-based cytology.
An inexpensive, cost-effective and practical alternative in poorer countries is visual inspection with acetic acid (VIA). Instituting and sustaining cytology-based programs in these regions can be difficult, due to the need for trained personnel, equipment and facilities and difficulties in follow-up. With VIA, results and treatment can be available on the same day. As a screening test, VIA is comparable to cervical cytology in accurately identifying precancerous lesions.
A result of dysplasia is usually further investigated, such as by taking a cone biopsy, which may also remove the cancerous lesion. Cervical intraepithelial neoplasia is a possible result of the biopsy and represents dysplastic changes that may eventually progress to invasive cancer. Most cases of cervical cancer are detected in this way, without having caused any symptoms. When symptoms occur, they may include vaginal bleeding, discharge, or discomfort.
Inflammation
Inflammation of the cervix is referred to as cervicitis. This inflammation may be of the endocervix or ectocervix. When associated with the endocervix, it is associated with a mucous vaginal discharge and sexually transmitted infections such as chlamydia and gonorrhoea. As many as half of pregnant women with a gonorrhoeal infection of the cervix are asymptomatic. Other causes include overgrowth of the commensal flora of the vagina. When associated with the ectocervix, inflammation may be caused by the herpes simplex virus. Inflammation is often investigated by directly visualising the cervix using a speculum (the cervix may appear whitish due to exudate) and by taking a Pap smear and examining for causal bacteria. Special tests may be used to identify particular bacteria. If the inflammation is due to a bacterium, antibiotics may be given as treatment.
Anatomical abnormalities
Cervical stenosis is an abnormally narrow cervical canal, typically associated with trauma caused by removal of tissue for investigation or treatment of cancer, or cervical cancer itself. Diethylstilbestrol, used from 1938 to 1971 to prevent preterm labour and miscarriage, is also strongly associated with the development of cervical stenosis and other abnormalities in the daughters of the exposed women. Other abnormalities include: vaginal adenosis, in which the squamous epithelium of the ectocervix becomes columnar; cancers such as clear cell adenocarcinomas; cervical ridges and hoods; and development of a cockscomb cervix, in which, as the name suggests, the cervix takes on the shape of a cockscomb. About one third of women born to diethylstilbestrol-treated mothers (i.e. with in-utero exposure) develop a cockscomb cervix.
Enlarged folds or ridges of cervical stroma (fibrous tissue) and epithelium constitute a cockscomb cervix. Similarly, cockscomb polyps lining the cervix are usually grouped under the same overarching description. It is in itself a benign abnormality; its presence, however, is usually indicative of DES exposure, and as such, women who experience these abnormalities should be aware of their increased risk of associated pathologies.
Cervical agenesis is a rare congenital condition in which the cervix completely fails to develop, often associated with the concurrent failure of the vagina to develop. Other congenital cervical abnormalities exist, often associated with abnormalities of the vagina and uterus. The cervix may be duplicated in situations such as bicornuate uterus and uterine didelphys.
Cervical polyps, which are benign overgrowths of endocervical tissue arising in the cervical canal, may cause bleeding if present. Cervical ectropion refers to the horizontal overgrowth of the endocervical columnar lining in a one-cell-thick layer over the ectocervix.
Animals
Female marsupials have paired uteri and cervices. Most eutherian (placental) mammal species have a single cervix and a single, bipartite or bicornuate uterus. Lagomorphs, rodents, aardvarks and hyraxes have a duplex uterus and two cervices. Lagomorphs and rodents share many morphological characteristics and are grouped together in the clade Glires. Anteaters of the family Myrmecophagidae are unusual in that they lack a defined cervix; they are thought to have lost the characteristic, rather than other mammals having evolved a cervix independently in more than one lineage. In domestic pigs, the cervix contains a series of five interdigitating pads that hold the boar's corkscrew-shaped penis during copulation.
Etymology and pronunciation
The word cervix came to English from the Latin cervīx, where it means "neck", and like its Germanic counterpart, it can refer not only to the neck [of the body] but also to an analogous narrowed part of an object. The cervix uteri (neck of the uterus) is thus the uterine cervix, but in English the word cervix used alone usually refers to it. Thus the adjective cervical may refer either to the neck (as in cervical vertebrae or cervical lymph nodes) or to the uterine cervix (as in cervical cap or cervical cancer).
Latin cervix came from the Proto-Indo-European root ker-, referring to a "structure that projects". Thus, the word cervix is linguistically related to the English word "horn", the Persian word for "head" (sar), the Greek word for "head" (koruphe), and the Welsh and Romanian words for "deer" (Romanian: cerb).
The cervix was documented in anatomical literature at least as early as the time of Hippocrates; cervical cancer was first described more than 2,000 years ago, with descriptions provided by both Hippocrates and Aretaeus. However, there was some variation in word sense among early writers, who used the term to refer to both the cervix and the internal uterine orifice. The first attested use of the word to refer to the cervix of the uterus was in 1702.
| Biology and health sciences | Reproductive system | Biology |
5739 | https://en.wikipedia.org/wiki/Compiler | Compiler | In computing, a compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language (e.g. assembly language, object code, or machine code) to create an executable program.
There are many different types of compilers which produce output in different useful forms. A cross-compiler produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A bootstrap compiler is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language.
Related software include decompilers, programs that translate from low-level languages to higher level ones; programs that translate between high-level languages, usually called source-to-source compilers or transpilers; language rewriters, usually programs that translate the form of expressions without a change of language; and compiler-compilers, compilers that produce compilers (or parts of them), often in a generic and reusable way so as to be able to produce many differing compilers.
A compiler is likely to perform some or all of the following operations, often called phases: preprocessing, lexical analysis, parsing, semantic analysis (syntax-directed translation), conversion of input programs to an intermediate representation, code optimization and machine specific code generation. Compilers generally implement these phases as modular components, promoting efficient design and correctness of transformations of source input to target output. Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementers invest significant effort to ensure compiler correctness.
Compilers are not the only language processor used to transform source programs. An interpreter is computer software that transforms and then executes the indicated operations. The translation process influences the design of computer languages, which leads to a preference of compilation or interpretation. In theory, a programming language can have both a compiler and an interpreter. In practice, programming languages tend to be associated with just one (a compiler or an interpreter).
History
Theoretical computing concepts developed by scientists, mathematicians, and engineers formed the basis of modern digital computing during World War II. Primitive binary languages evolved because digital devices only understand ones and zeros and the circuit patterns in the underlying machine architecture. In the late 1940s, assembly languages were created to offer a more workable abstraction of the computer architectures. The limited memory capacity of early computers led to substantial technical challenges when the first compilers were designed. Therefore, the compilation process needed to be divided into several small programs. The front end programs produce the analysis products used by the back end programs to generate target code. As computer technology provided more resources, compiler designs could align better with the compilation process.
It is usually more productive for a programmer to use a high-level language, so the development of high-level languages followed naturally from the capabilities offered by digital computers. High-level languages are formal languages that are strictly defined by their syntax and semantics which form the high-level language architecture. Elements of these formal languages include:
Alphabet, any finite set of symbols;
String, a finite sequence of symbols;
Language, any set of strings on an alphabet.
The sentences in a language may be defined by a set of rules called a grammar.
Backus–Naur form (BNF) describes the syntax of "sentences" of a language. It was developed by John Backus and used for the syntax of Algol 60. The ideas derive from the context-free grammar concepts by linguist Noam Chomsky. "BNF and its extensions have become standard tools for describing the syntax of programming notations. In many cases, parts of compilers are generated automatically from a BNF description."
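As an illustration (this toy grammar is invented for exposition and is not drawn from Algol 60 or any particular language report), a BNF description of arithmetic expressions might read:

    <expression> ::= <expression> "+" <term> | <expression> "-" <term> | <term>
    <term>       ::= <term> "*" <factor> | <term> "/" <factor> | <factor>
    <factor>     ::= <number> | "(" <expression> ")"

Each rule defines a syntactic category in terms of terminal symbols and other categories; tools such as parser generators can produce a parser directly from such a description.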
Between 1942 and 1945, Konrad Zuse designed the first (algorithmic) programming language for computers, Plankalkül ("Plan Calculus"). Zuse also envisioned a Planfertigungsgerät ("plan assembly device") to automatically translate the mathematical formulation of a program into machine-readable punched film stock. While no actual implementation occurred until the 1970s, it presented concepts later seen in APL, designed by Ken Iverson in the late 1950s. APL is a language for mathematical computations.
Between 1949 and 1951, Heinz Rutishauser proposed Superplan, a high-level language and automatic translator. His ideas were later refined by Friedrich L. Bauer and Klaus Samelson.
High-level language design during the formative years of digital computing provided useful programming tools for a variety of applications:
FORTRAN (Formula Translation) for engineering and science applications is considered to be one of the first actually implemented high-level languages, and its compiler one of the first optimizing compilers.
COBOL (Common Business-Oriented Language) evolved from A-0 and FLOW-MATIC to become the dominant high-level language for business applications.
LISP (List Processor) for symbolic computation.
Compiler technology evolved from the need for a strictly defined transformation of the high-level source program into a low-level target program for the digital computer. The compiler could be viewed as a front end to deal with the analysis of the source code and a back end to synthesize the analysis into the target code. Optimization between the front end and back end could produce more efficient target code.
Some early milestones in the development of compiler technology:
May 1952: Grace Hopper's team at Remington Rand wrote the compiler for the A-0 programming language (and coined the term compiler to describe it), although the A-0 compiler functioned more as a loader or linker than the modern notion of a full compiler.
1952, before September: An Autocode compiler developed by Alick Glennie for the Manchester Mark I computer at the University of Manchester is considered by some to be the first compiled programming language.
1954–1957: A team led by John Backus at IBM developed FORTRAN which is usually considered the first high-level language. In 1957, they completed a FORTRAN compiler that is generally credited as having introduced the first unambiguously complete compiler.
1959: The Conference on Data Systems Language (CODASYL) initiated development of COBOL. The COBOL design drew on A-0 and FLOW-MATIC. By the early 1960s COBOL was compiled on multiple architectures.
1958–1960: Algol 58 was the precursor to ALGOL 60. It introduced code blocks, a key advance in the rise of structured programming. ALGOL 60 was the first language to implement nested function definitions with lexical scope. It included recursion. Its syntax was defined using BNF. ALGOL 60 inspired many languages that followed it. Tony Hoare remarked: "... it was not only an improvement on its predecessors but also on nearly all its successors."
1958–1962: John McCarthy at MIT designed LISP. The symbol processing capabilities provided useful features for artificial intelligence research. In 1962, the LISP 1.5 release included some tools: an interpreter written by Stephen Russell and Daniel J. Edwards, and a compiler and assembler written by Tim Hart and Mike Levin.
Early operating systems and software were written in assembly language. In the 1960s and early 1970s, the use of high-level languages for system programming was still controversial due to resource limitations. However, several research and industry efforts began the shift toward high-level systems programming languages, for example, BCPL, BLISS, B, and C.
BCPL (Basic Combined Programming Language), designed in 1966 by Martin Richards at the University of Cambridge, was originally developed as a compiler writing tool. Several compilers have been implemented; Richards' book provides insights into the language and its compiler. BCPL was not only an influential systems programming language that is still used in research but also provided a basis for the design of the B and C languages.
BLISS (Basic Language for Implementation of System Software) was developed for a Digital Equipment Corporation (DEC) PDP-10 computer by W. A. Wulf's Carnegie Mellon University (CMU) research team. The CMU team went on to develop the BLISS-11 compiler one year later, in 1970.
Multics (Multiplexed Information and Computing Service), a time-sharing operating system project, involved MIT, Bell Labs, General Electric (later Honeywell) and was led by Fernando Corbató from MIT. Multics was written in the PL/I language developed by IBM and the IBM User Group. IBM's goal was to satisfy business, scientific, and systems programming requirements. There were other languages that could have been considered, but PL/I offered the most complete solution even though it had not been implemented. For the first few years of the Multics project, a subset of the language could be compiled to assembly language with the Early PL/I (EPL) compiler by Doug McIlroy and Bob Morris from Bell Labs. EPL supported the project until a boot-strapping compiler for the full PL/I could be developed.
Bell Labs left the Multics project in 1969, and developed a system programming language B based on BCPL concepts, written by Dennis Ritchie and Ken Thompson. Ritchie created a boot-strapping compiler for B and wrote the Unics (Uniplexed Information and Computing Service) operating system for a PDP-7 in B. Unics eventually came to be spelled Unix.
Bell Labs started the development and expansion of C based on B and BCPL. The BCPL compiler had been transported to Multics by Bell Labs and BCPL was a preferred language at Bell Labs. Initially, a front-end program to Bell Labs' B compiler was used while a C compiler was developed. In 1971, a new PDP-11 provided the resource to define extensions to B and rewrite the compiler. By 1973 the design of C language was essentially complete and the Unix kernel for a PDP-11 was rewritten in C. Steve Johnson started development of Portable C Compiler (PCC) to support retargeting of C compilers to new machines.
Object-oriented programming (OOP) offered some interesting possibilities for application development and maintenance. OOP concepts go further back, having appeared earlier in LISP and Simula. Bell Labs became interested in OOP with the development of C++. C++ was first used in 1980 for systems programming. The initial design leveraged C language systems programming capabilities with Simula concepts. Object-oriented facilities were added in 1983. The Cfront program implemented a C++ front-end for the C84 language compiler. In subsequent years several C++ compilers were developed as C++ popularity grew.
In many application domains, the idea of using a higher-level language quickly caught on. Because of the expanding functionality supported by newer programming languages and the increasing complexity of computer architectures, compilers became more complex.
DARPA (Defense Advanced Research Projects Agency) sponsored a compiler project with Wulf's CMU research team in 1970. The Production Quality Compiler-Compiler (PQCC) design would produce a Production Quality Compiler (PQC) from formal definitions of the source language and the target. PQCC tried to extend the term compiler-compiler beyond the traditional meaning as a parser generator (e.g., Yacc) without much success. PQCC might more properly be referred to as a compiler generator.
PQCC research into code generation process sought to build a truly automatic compiler-writing system. The effort discovered and designed the phase structure of the PQC. The BLISS-11 compiler provided the initial structure. The phases included analyses (front end), intermediate translation to virtual machine (middle end), and translation to the target (back end). TCOL was developed for the PQCC research to handle language specific constructs in the intermediate representation. Variations of TCOL supported various languages. The PQCC project investigated techniques of automated compiler construction. The design concepts proved useful in optimizing compilers and compilers for the (since 1995, object-oriented) programming language Ada.
The Ada STONEMAN document formalized the program support environment (APSE) along with the kernel (KAPSE) and minimal (MAPSE). An Ada interpreter NYU/ED supported development and standardization efforts with the American National Standards Institute (ANSI) and the International Standards Organization (ISO). Initial Ada compiler development by the U.S. Military Services included the compilers in a complete integrated design environment along the lines of the STONEMAN document. Army and Navy worked on the Ada Language System (ALS) project targeted to DEC/VAX architecture while the Air Force started on the Ada Integrated Environment (AIE) targeted to IBM 370 series. While the projects did not provide the desired results, they did contribute to the overall effort on Ada development.
Other Ada compiler efforts got underway in Britain at the University of York and in Germany at the University of Karlsruhe. In the U.S., Verdix (later acquired by Rational) delivered the Verdix Ada Development System (VADS) to the Army. VADS provided a set of development tools including a compiler. Unix/VADS could be hosted on a variety of Unix platforms such as DEC Ultrix and the Sun 3/60 Solaris targeted to Motorola 68020 in an Army CECOM evaluation. There were soon many Ada compilers available that passed the Ada Validation tests. The Free Software Foundation GNU project developed the GNU Compiler Collection (GCC) which provides a core capability to support multiple languages and targets. The Ada version GNAT is one of the most widely used Ada compilers. GNAT is free, but there is also commercial support; for example, AdaCore was founded in 1994 to provide commercial software solutions for Ada. GNAT Pro includes the GNU GCC based GNAT with a tool suite to provide an integrated development environment.
High-level languages continued to drive compiler research and development. Focus areas included optimization and automatic code generation. Trends in programming languages and development environments influenced compiler technology. More compilers came to be included in language distributions (PERL, Java Development Kit) and as a component of an IDE (VADS, Eclipse, Ada Pro). The interrelationship and interdependence of technologies grew. The advent of web services promoted growth of web languages and scripting languages. Scripts trace back to the early days of Command Line Interfaces (CLI), where the user could enter commands to be executed by the system. User Shell concepts developed with languages to write shell programs. Early Windows designs offered a simple batch programming capability. The conventional implementation of these languages used an interpreter. While not widely used, Bash and Batch compilers have been written. More recently, sophisticated interpreted languages became part of the developer's toolkit. Modern scripting languages include PHP, Python, Ruby and Lua. (Lua is widely used in game development.) All of these have interpreter and compiler support.
"When the field of compiling began in the late 50s, its focus was limited to the translation of high-level language programs into machine code ... The compiler field is increasingly intertwined with other disciplines including computer architecture, programming languages, formal methods, software engineering, and computer security." The "Compiler Research: The Next 50 Years" article noted the importance of object-oriented languages and Java. Security and parallel computing were cited among the future research targets.
Compiler construction
A compiler implements a formal transformation from a high-level source program to a low-level target program. Compiler design can define an end-to-end solution or tackle a defined subset that interfaces with other compilation tools e.g. preprocessors, assemblers, linkers. Design requirements include rigorously defined interfaces both internally between compiler components and externally between supporting toolsets.
In the early days, the approach taken to compiler design was directly affected by the complexity of the computer language to be processed, the experience of the person(s) designing it, and the resources available. Resource limitations led to the need to pass through the source code more than once.
A compiler for a relatively simple language written by one person might be a single, monolithic piece of software. However, as the source language grows in complexity the design may be split into a number of interdependent phases. Separate phases provide design improvements that focus development on the functions in the compilation process.
One-pass vis-à-vis multi-pass compilers
Classifying compilers by number of passes has its background in the hardware resource limitations of computers. Compiling involves performing much work and early computers did not have enough memory to contain one program that did all of this work. As a result, compilers were split up into smaller programs which each made a pass over the source (or some representation of it) performing some of the required analysis and translations.
The ability to compile in a single pass has classically been seen as a benefit because it simplifies the job of writing a compiler and one-pass compilers generally perform compilations faster than multi-pass compilers. Thus, partly driven by the resource limitations of early systems, many early languages were specifically designed so that they could be compiled in a single pass (e.g., Pascal).
In some cases, the design of a language feature may require a compiler to perform more than one pass over the source. For instance, consider a declaration appearing on line 20 of the source which affects the translation of a statement appearing on line 10. In this case, the first pass needs to gather information about declarations appearing after statements that they affect, with the actual translation happening during a subsequent pass.
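A minimal two-pass sketch of this idea in Python (the toy declaration syntax and the translation format are illustrative assumptions, not any particular compiler's design): a first pass records declarations wherever they appear, and a second pass translates statements using the completed table.

    # Pass 1: collect every declaration, including those that appear
    # after the statements that refer to them.
    def collect_declarations(lines):
        symbols = {}
        for line in lines:
            if line.startswith("var "):            # e.g. "var x: int"
                name, _, typ = line[4:].partition(":")
                symbols[name.strip()] = typ.strip()
        return symbols

    # Pass 2: translate statements with full knowledge of the symbol table.
    def translate(lines, symbols):
        output = []
        for line in lines:
            if not line.startswith("var "):
                target = line.split("=")[0].strip()
                output.append(f"STORE {target} ({symbols[target]})")
        return output

    source = [
        "y = x + 1",    # refers to names declared only on later lines
        "var x: int",
        "var y: int",
    ]
    print(translate(source, collect_declarations(source)))
    # ['STORE y (int)']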
The disadvantage of compiling in a single pass is that it is not possible to perform many of the sophisticated optimizations needed to generate high quality code. It can be difficult to count exactly how many passes an optimizing compiler makes. For instance, different phases of optimization may analyse one expression many times but only analyse another expression once.
Splitting a compiler up into small programs is a technique used by researchers interested in producing provably correct compilers. Proving the correctness of a set of small programs often requires less effort than proving the correctness of a larger, single, equivalent program.
Three-stage compiler structure
Regardless of the exact number of phases in the compiler design, the phases can be assigned to one of three stages. The stages include a front end, a middle end, and a back end.
The front end scans the input and verifies syntax and semantics according to a specific source language. For statically typed languages it performs type checking by collecting type information. If the input program is syntactically incorrect or has a type error, it generates error and/or warning messages, usually identifying the location in the source code where the problem was detected; in some cases the actual error may be (much) earlier in the program. Aspects of the front end include lexical analysis, syntax analysis, and semantic analysis. The front end transforms the input program into an intermediate representation (IR) for further processing by the middle end. This IR is usually a lower-level representation of the program with respect to the source code.
The middle end performs optimizations on the IR that are independent of the CPU architecture being targeted. This source code/machine code independence is intended to enable generic optimizations to be shared between versions of the compiler supporting different languages and target processors. Examples of middle end optimizations are removal of useless (dead-code elimination) or unreachable code (reachability analysis), discovery and propagation of constant values (constant propagation), relocation of computation to a less frequently executed place (e.g., out of a loop), or specialization of computation based on the context, eventually producing the "optimized" IR that is used by the back end.
The back end takes the optimized IR from the middle end. It may perform more analysis, transformations and optimizations that are specific for the target CPU architecture. The back end generates the target-dependent assembly code, performing register allocation in the process. The back end performs instruction scheduling, which re-orders instructions to keep parallel execution units busy by filling delay slots. Although most optimization problems are NP-hard, heuristic techniques for solving them are well-developed and implemented in production-quality compilers. Typically the output of a back end is machine code specialized for a particular processor and operating system.
This front/middle/back-end approach makes it possible to combine front ends for different languages with back ends for different CPUs while sharing the optimizations of the middle end. Practical examples of this approach are the GNU Compiler Collection, Clang (LLVM-based C/C++ compiler), and the Amsterdam Compiler Kit, which have multiple front-ends, shared optimizations and multiple back-ends.
Front end
The front end analyzes the source code to build an internal representation of the program, called the intermediate representation (IR). It also manages the symbol table, a data structure mapping each symbol in the source code to associated information such as location, type and scope.
While the frontend can be a single monolithic function or program, as in a scannerless parser, it was traditionally implemented and analyzed as several phases, which may execute sequentially or concurrently. This method is favored due to its modularity and separation of concerns. Most commonly, the frontend is broken into three phases: lexical analysis (also known as lexing or scanning), syntax analysis (also known as parsing), and semantic analysis. Lexing and parsing comprise the syntactic analysis (word syntax and phrase syntax, respectively), and in simple cases, these modules (the lexer and parser) can be automatically generated from a grammar for the language, though in more complex cases these require manual modification. The lexical grammar and phrase grammar are usually context-free grammars, which simplifies analysis significantly, with context-sensitivity handled at the semantic analysis phase. The semantic analysis phase is generally more complex and written by hand, but can be partially or fully automated using attribute grammars. These phases themselves can be further broken down: lexing as scanning and evaluating, and parsing as building a concrete syntax tree (CST, parse tree) and then transforming it into an abstract syntax tree (AST, syntax tree). In some cases additional phases are used, notably line reconstruction and preprocessing, but these are rare.
The main phases of the front end include the following:
Line reconstruction converts the input character sequence to a canonical form ready for the parser. Languages which strop their keywords or allow arbitrary spaces within identifiers require this phase. The top-down, recursive-descent, table-driven parsers used in the 1960s typically read the source one character at a time and did not require a separate tokenizing phase. Atlas Autocode and Imp (and some implementations of ALGOL and Coral 66) are examples of stropped languages whose compilers would have a line reconstruction phase.
Preprocessing supports macro substitution and conditional compilation. Typically the preprocessing phase occurs before syntactic or semantic analysis; e.g. in the case of C, the preprocessor manipulates lexical tokens rather than syntactic forms. However, some languages such as Scheme support macro substitutions based on syntactic forms.
Lexical analysis (also known as lexing or tokenization) breaks the source code text into a sequence of small pieces called lexical tokens. This phase can be divided into two stages: the scanning, which segments the input text into syntactic units called lexemes and assigns them a category; and the evaluating, which converts lexemes into a processed value. A token is a pair consisting of a token name and an optional token value. Common token categories may include identifiers, keywords, separators, operators, literals and comments, although the set of token categories varies in different programming languages. The lexeme syntax is typically a regular language, so a finite-state automaton constructed from a regular expression can be used to recognize it. The software doing lexical analysis is called a lexical analyzer. This may not be a separate step—it can be combined with the parsing step in scannerless parsing, in which case parsing is done at the character level, not the token level.
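A minimal sketch of such a lexer in Python, driving the standard library's regular-expression engine (the token set is hypothetical, chosen only for illustration):

    import re

    # Each token category is named by a regular expression; the combined
    # pattern acts as the finite-state recognizer described above.
    TOKEN_SPEC = [
        ("NUMBER", r"\d+"),           # integer literals
        ("IDENT",  r"[A-Za-z_]\w*"),  # identifiers and keywords
        ("OP",     r"[+\-*/=()]"),    # one-character operators/separators
        ("SKIP",   r"\s+"),           # whitespace, discarded
    ]
    MASTER = re.compile("|".join(f"(?P<{name}>{pattern})"
                                 for name, pattern in TOKEN_SPEC))

    def tokenize(source):
        """Yield (token name, lexeme) pairs for the input string."""
        # Note: characters matching no pattern are silently skipped
        # in this sketch; a real lexer would report them as errors.
        for match in MASTER.finditer(source):
            if match.lastgroup != "SKIP":
                yield (match.lastgroup, match.group())

    print(list(tokenize("x = 42 + y")))
    # [('IDENT', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y')]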
Syntax analysis (also known as parsing) involves parsing the token sequence to identify the syntactic structure of the program. This phase typically builds a parse tree, which replaces the linear sequence of tokens with a tree structure built according to the rules of a formal grammar which define the language's syntax. The parse tree is often analyzed, augmented, and transformed by later phases in the compiler.
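Continuing in the same vein, a hand-written recursive-descent parser for a toy expression grammar (the grammar and the tuple-based tree representation are illustrative assumptions) can build such a tree from a stream of (name, lexeme) tokens:

    # Grammar sketch:  expr   -> term (('+'|'-') term)*
    #                  term   -> factor (('*'|'/') factor)*
    #                  factor -> NUMBER | '(' expr ')'
    def parse(token_list):
        tokens = list(token_list) + [("EOF", "")]
        pos = 0

        def peek():
            return tokens[pos]

        def eat():
            nonlocal pos
            _, text = tokens[pos]
            pos += 1
            return text

        def factor():
            kind, _ = peek()
            if kind == "NUMBER":
                return ("num", int(eat()))
            eat()                       # consume '('
            node = expr()
            eat()                       # consume ')'
            return node

        def term():
            node = factor()
            while peek()[1] in ("*", "/"):
                node = (eat(), node, factor())
            return node

        def expr():
            node = term()
            while peek()[1] in ("+", "-"):
                node = (eat(), node, term())
            return node

        return expr()

    tokens = [("NUMBER", "1"), ("OP", "+"), ("NUMBER", "2"),
              ("OP", "*"), ("NUMBER", "3")]
    print(parse(tokens))
    # ('+', ('num', 1), ('*', ('num', 2), ('num', 3)))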
Semantic analysis adds semantic information to the parse tree and builds the symbol table. This phase performs semantic checks such as type checking (checking for type errors), or object binding (associating variable and function references with their definitions), or definite assignment (requiring all local variables to be initialized before use), rejecting incorrect programs or issuing warnings. Semantic analysis usually requires a complete parse tree, meaning that this phase logically follows the parsing phase, and logically precedes the code generation phase, though it is often possible to fold multiple phases into one pass over the code in a compiler implementation.
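A minimal semantic check in the same style (the tuple tree shape follows the parser sketch above; the rule enforced here, that every variable must be declared before use, is just one example of such a check):

    def check_names(node, declared):
        """Reject references to undeclared variables in a tuple syntax tree."""
        kind = node[0]
        if kind == "num":
            return
        if kind == "var":
            if node[1] not in declared:
                raise NameError(f"undeclared variable {node[1]!r}")
            return
        _, left, right = node           # operator nodes: (op, left, right)
        check_names(left, declared)
        check_names(right, declared)

    check_names(("+", ("var", "x"), ("num", 1)), declared={"x"})  # passes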
Middle end
The middle end, also known as optimizer, performs optimizations on the intermediate representation in order to improve the performance and the quality of the produced machine code. The middle end contains those optimizations that are independent of the CPU architecture being targeted.
The main phases of the middle end include the following:
Analysis: This is the gathering of program information from the intermediate representation derived from the input; data-flow analysis is used to build use-define chains, together with dependence analysis, alias analysis, pointer analysis, escape analysis, etc. Accurate analysis is the basis for any compiler optimization. The control-flow graph of every compiled function and the call graph of the program are usually also built during the analysis phase.
Optimization: the intermediate language representation is transformed into functionally equivalent but faster (or smaller) forms. Popular optimizations are inline expansion, dead-code elimination, constant propagation, loop transformation and even automatic parallelization.
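As a hedged sketch of one such transformation, constant propagation's simpler cousin, constant folding, can be written over the tuple trees used in the earlier sketches (the representation is an illustrative assumption): any subtree whose operands are all constants is replaced by its computed value.

    import operator

    OPS = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "/": operator.truediv}

    def fold_constants(node):
        """Replace constant subtrees of a tuple syntax tree by their values."""
        if node[0] in ("num", "var"):
            return node
        op, left, right = node
        left, right = fold_constants(left), fold_constants(right)
        if left[0] == "num" and right[0] == "num":
            return ("num", OPS[op](left[1], right[1]))
        return (op, left, right)

    print(fold_constants(("+", ("num", 2), ("*", ("num", 3), ("num", 4)))))
    # ('num', 14)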
Compiler analysis is the prerequisite for any compiler optimization, and they tightly work together. For example, dependence analysis is crucial for loop transformation.
The scope of compiler analyses and optimizations varies greatly; they may operate within a basic block, on whole procedures, or even on the whole program. There is a trade-off between the granularity of the optimizations and the cost of compilation. For example, peephole optimizations are fast to perform during compilation but only affect a small local fragment of the code, and can be performed independently of the context in which the code fragment appears. In contrast, interprocedural optimization requires more compilation time and memory space, but enables optimizations that are only possible by considering the behavior of multiple functions simultaneously.
Interprocedural analysis and optimizations are common in modern commercial compilers from HP, IBM, SGI, Intel, Microsoft, and Sun Microsystems. The free software GCC was criticized for a long time for lacking powerful interprocedural optimizations, but it is changing in this respect. Another open source compiler with full analysis and optimization infrastructure is Open64, which is used by many organizations for research and commercial purposes.
Due to the extra time and space needed for compiler analysis and optimizations, some compilers skip them by default. Users have to use compilation options to explicitly tell the compiler which optimizations should be enabled.
Back end
The back end is responsible for the CPU architecture specific optimizations and for code generation.
The main phases of the back end include the following:
Machine dependent optimizations: optimizations that depend on the details of the CPU architecture that the compiler targets. A prominent example is peephole optimization, which rewrites short sequences of assembler instructions into more efficient instructions.
Code generation: the transformed intermediate language is translated into the output language, usually the native machine language of the system. This involves resource and storage decisions, such as deciding which variables to fit into registers and memory and the selection and scheduling of appropriate machine instructions along with their associated addressing modes (see also Sethi–Ullman algorithm). Debug data may also need to be generated to facilitate debugging.
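A compact back-end sketch covering both phases (the stack-machine instruction set is hypothetical): code generation walks the tuple tree from the earlier sketches, and a single peephole rule then removes a useless addition of zero.

    def emit(node, out):
        """Translate a tuple syntax tree into toy stack-machine instructions."""
        if node[0] == "num":
            out.append(("PUSH", node[1]))
        elif node[0] == "var":
            out.append(("LOAD", node[1]))
        else:
            op, left, right = node
            emit(left, out)
            emit(right, out)
            out.append(("BINOP", op))
        return out

    def peephole(code):
        """Rewrite the pattern PUSH 0; BINOP + (adding zero) into nothing."""
        result = []
        for instr in code:
            if instr == ("BINOP", "+") and result and result[-1] == ("PUSH", 0):
                result.pop()            # drop the useless PUSH 0 and the add
                continue
            result.append(instr)
        return result

    code = emit(("+", ("var", "x"), ("num", 0)), [])
    print(peephole(code))   # [('LOAD', 'x')]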
Compiler correctness
Compiler correctness is the branch of software engineering that deals with trying to show that a compiler behaves according to its language specification. Techniques include developing the compiler using formal methods and using rigorous testing (often called compiler validation) on an existing compiler.
Compiled vis-à-vis interpreted languages
Higher-level programming languages usually appear with a type of translation in mind: either designed as compiled language or interpreted language. However, in practice there is rarely anything about a language that requires it to be exclusively compiled or exclusively interpreted, although it is possible to design languages that rely on re-interpretation at run time. The categorization usually reflects the most popular or widespread implementations of a language – for instance, BASIC is sometimes called an interpreted language, and C a compiled one, despite the existence of BASIC compilers and C interpreters.
Interpretation does not replace compilation completely. It only hides it from the user and makes it gradual. Even though an interpreter can itself be interpreted, a set of directly executed machine instructions is needed somewhere at the bottom of the execution stack (see machine language).
Furthermore, for optimization, compilers can contain interpreter functionality, and interpreters may include ahead-of-time compilation techniques. For example, if an expression can be evaluated during compilation and its result inserted into the output program, this prevents it from having to be recalculated each time the program runs, which can greatly speed up the final program. Modern trends toward just-in-time compilation and bytecode interpretation at times blur the traditional categorizations of compilers and interpreters even further.
Some language specifications spell out that implementations must include a compilation facility; for example, Common Lisp. However, there is nothing inherent in the definition of Common Lisp that stops it from being interpreted. Other languages have features that are very easy to implement in an interpreter, but make writing a compiler much harder; for example, APL, SNOBOL4, and many scripting languages allow programs to construct arbitrary source code at runtime with regular string operations, and then execute that code by passing it to a special evaluation function. To implement these features in a compiled language, programs must usually be shipped with a runtime library that includes a version of the compiler itself.
Types
One classification of compilers is by the platform on which their generated code executes. This is known as the target platform.
A native or hosted compiler is one whose output is intended to directly run on the same type of computer and operating system that the compiler itself runs on. The output of a cross compiler is designed to run on a different platform. Cross compilers are often used when developing software for embedded systems that are not intended to support a software development environment.
The output of a compiler that produces code for a virtual machine (VM) may or may not be executed on the same platform as the compiler that produced it. For this reason, such compilers are not usually classified as native or cross compilers.
The lower level language that is the target of a compiler may itself be a high-level programming language. C, viewed by some as a sort of portable assembly language, is frequently the target language of such compilers. For example, Cfront, the original compiler for C++, used C as its target language. The C code generated by such a compiler is usually not intended to be readable and maintained by humans, so indent style and creating pretty C intermediate code are ignored. Some of the features of C that make it a good target language include the #line directive, which can be generated by the compiler to support debugging of the original source, and the wide platform support available with C compilers.
While a common compiler type outputs machine code, there are many other types:
Source-to-source compilers are a type of compiler that takes a high-level language as its input and outputs a high-level language. For example, an automatic parallelizing compiler will frequently take in a high-level language program as an input and then transform the code and annotate it with parallel code annotations (e.g. OpenMP) or language constructs (e.g. Fortran's DOALL statements). Other terms for a source-to-source compiler are transcompiler or transpiler.
Bytecode compilers compile to assembly language of a theoretical machine, like some Prolog implementations
This Prolog machine is also known as the Warren Abstract Machine (or WAM).
Bytecode compilers for Java and Python are also examples of this category.
Just-in-time compilers (JIT compiler) defer compilation until runtime. JIT compilers exist for many modern languages including Python, JavaScript, Smalltalk, Java, Microsoft .NET's Common Intermediate Language (CIL) and others. A JIT compiler generally runs inside an interpreter. When the interpreter detects that a code path is "hot", meaning it is executed frequently, the JIT compiler will be invoked and compile the "hot" code for increased performance.
For some languages, such as Java, applications are first compiled using a bytecode compiler and delivered in a machine-independent intermediate representation. A bytecode interpreter executes the bytecode, but the JIT compiler will translate the bytecode to machine code when increased performance is necessary.
Hardware compilers (also known as synthesis tools) are compilers whose input is a hardware description language and whose output is a description, in the form of a netlist or otherwise, of a hardware configuration.
The output of these compilers targets computer hardware at a very low level, for example a field-programmable gate array (FPGA) or structured application-specific integrated circuit (ASIC). Such compilers are said to be hardware compilers, because the source code they compile effectively controls the final configuration of the hardware and how it operates. The output of the compilation is only an interconnection of transistors or lookup tables.
An example of hardware compiler is XST, the Xilinx Synthesis Tool used for configuring FPGAs. Similar tools are available from Altera, Synplicity, Synopsys and other hardware vendors.
A program that translates from a low-level language to a higher level one is a decompiler.
A program that translates into an object code format that is not supported on the compilation machine is called a cross compiler and is commonly used to prepare code for execution on embedded software applications.
A program that rewrites object code back into the same type of object code while applying optimisations and transformations is a binary recompiler.
Assemblers, which translate human readable assembly language to the machine code instructions executed by hardware, are not considered compilers. (The inverse program that translates machine code to assembly language is called a disassembler.)
| Technology | Programming | null |
5759 | https://en.wikipedia.org/wiki/Complex%20analysis | Complex analysis | Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, and applied mathematics, as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, use of complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering.
As a differentiable function of a complex variable is equal to the sum function given by its Taylor series (that is, it is analytic), complex analysis is particularly concerned with analytic functions of a complex variable, that is, holomorphic functions.
The concept can be extended to functions of several complex variables.
History
Complex analysis is one of the classical branches in mathematics, with roots in the 18th century and just prior. Important mathematicians associated with complex numbers include Euler, Gauss, Riemann, Cauchy, Gösta Mittag-Leffler, Weierstrass, and many more in the 20th century. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is also used throughout analytic number theory. In modern times, it has become very popular through a new boost from complex dynamics and the pictures of fractals produced by iterating holomorphic functions. Another important application of complex analysis is in string theory which examines conformal invariants in quantum field theory.
Complex functions
A complex function is a function from complex numbers to complex numbers. In other words, it is a function that has a (not necessarily proper) subset of the complex numbers as a domain and the complex numbers as a codomain. Complex functions are generally assumed to have a domain that contains a nonempty open subset of the complex plane.
For any complex function, the values $z$ from the domain and their images $f(z)$ in the range may be separated into real and imaginary parts:
$$z = x + iy, \qquad f(z) = f(x + iy) = u(x, y) + i\,v(x, y),$$
where $x, y, u(x, y), v(x, y)$ are all real-valued.
In other words, a complex function $f$ may be decomposed into
$$u(x, y) = \operatorname{Re} f(x + iy) \qquad \text{and} \qquad v(x, y) = \operatorname{Im} f(x + iy),$$
i.e., into two real-valued functions ($u$, $v$) of two real variables ($x$, $y$).
Similarly, any complex-valued function $f$ on an arbitrary set $X$ (is isomorphic to, and therefore, in that sense, it) can be considered as an ordered pair of two real-valued functions, $(\operatorname{Re} f, \operatorname{Im} f)$, or, alternatively, as a vector-valued function from $X$ into $\mathbb{R}^2$.
Some properties of complex-valued functions (such as continuity) are nothing more than the corresponding properties of vector-valued functions of two real variables. Other concepts of complex analysis, such as differentiability, are direct generalizations of the similar concepts for real functions, but may have very different properties. In particular, every differentiable complex function is analytic (see next section), and two differentiable functions that are equal in a neighborhood of a point are equal on the intersection of their domains (if the domains are connected). The latter property is the basis of the principle of analytic continuation, which allows every real analytic function to be extended in a unique way to a complex analytic function whose domain is the whole complex plane with a finite number of curve arcs removed. Many basic and special complex functions are defined in this way, including the complex exponential function, complex logarithm functions, and trigonometric functions.
Holomorphic functions
Complex functions that are differentiable at every point of an open subset $\Omega$ of the complex plane are said to be holomorphic on $\Omega$. In the context of complex analysis, the derivative of $f$ at $z_0$ is defined to be
$$f'(z_0) = \lim_{z \to z_0} \frac{f(z) - f(z_0)}{z - z_0}.$$
Superficially, this definition is formally analogous to that of the derivative of a real function. However, complex derivatives and differentiable functions behave in significantly different ways compared to their real counterparts. In particular, for this limit to exist, the value of the difference quotient must approach the same complex number, regardless of the manner in which we approach $z_0$ in the complex plane. Consequently, complex differentiability has much stronger implications than real differentiability. For instance, holomorphic functions are infinitely differentiable, whereas the existence of the nth derivative need not imply the existence of the (n + 1)th derivative for real functions. Furthermore, all holomorphic functions satisfy the stronger condition of analyticity, meaning that the function is, at every point in its domain, locally given by a convergent power series. In essence, this means that functions holomorphic on $\Omega$ can be approximated arbitrarily well by polynomials in some neighborhood of every point in $\Omega$. This stands in sharp contrast to differentiable real functions; there are infinitely differentiable real functions that are nowhere analytic (see non-analytic smooth function).
Most elementary functions, including the exponential function, the trigonometric functions, and all polynomial functions, extended appropriately to complex arguments as functions of a complex variable, are holomorphic over the entire complex plane, making them entire functions, while rational functions $p/q$, where p and q are polynomials, are holomorphic on domains that exclude points where q is zero. Such functions that are holomorphic everywhere except a set of isolated points are known as meromorphic functions. On the other hand, functions such as the complex conjugate $z \mapsto \bar{z}$ and the modulus $z \mapsto |z|$ are not holomorphic anywhere on the complex plane, as can be shown by their failure to satisfy the Cauchy–Riemann conditions (see below).
An important property of holomorphic functions is the relationship between the partial derivatives of their real and imaginary components, known as the Cauchy–Riemann conditions. If $f$, defined by $f(z) = f(x + iy) = u(x, y) + i\,v(x, y)$, where $u$ and $v$ are real-valued, is holomorphic on a region $\Omega$, then for all $z_0 \in \Omega$,
$$\frac{\partial f}{\partial \bar{z}}(z_0) = 0.$$
In terms of the real and imaginary parts of the function, u and v, this is equivalent to the pair of equations $u_x = v_y$ and $u_y = -v_x$, where the subscripts indicate partial differentiation. However, the Cauchy–Riemann conditions do not characterize holomorphic functions, without additional continuity conditions (see Looman–Menchoff theorem).
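As a worked check of these conditions, a standard textbook example (included here only for illustration) is $f(z) = z^2$:
$$f(x + iy) = (x + iy)^2 = (x^2 - y^2) + i\,(2xy),$$
so $u = x^2 - y^2$ and $v = 2xy$, giving
$$u_x = 2x = v_y, \qquad u_y = -2y = -v_x.$$
The Cauchy–Riemann conditions therefore hold at every point, consistent with $z^2$ being an entire function.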
Holomorphic functions exhibit some remarkable features. For instance, Picard's theorem asserts that the range of an entire function can take only three possible forms: $\mathbb{C}$, $\mathbb{C} \setminus \{z_0\}$, or $\{z_0\}$ for some $z_0 \in \mathbb{C}$. In other words, if two distinct complex numbers $z$ and $w$ are not in the range of an entire function $f$, then $f$ is a constant function. Moreover, a holomorphic function on a connected open set is determined by its restriction to any nonempty open subset.
Conformal map
Major results
One of the central tools in complex analysis is the line integral. The line integral around a closed path of a function that is holomorphic everywhere inside the area bounded by the closed path is always zero, as is stated by the Cauchy integral theorem. The values of such a holomorphic function inside a disk can be computed by a path integral on the disk's boundary (as shown in Cauchy's integral formula). Path integrals in the complex plane are often used to determine complicated real integrals, and here the theory of residues among others is applicable (see methods of contour integration). A "pole" (or isolated singularity) of a function is a point where the function's value becomes unbounded, or "blows up". If a function has such a pole, then one can compute the function's residue there, which can be used to compute path integrals involving the function; this is the content of the powerful residue theorem. The remarkable behavior of holomorphic functions near essential singularities is described by Picard's theorem. Functions that have only poles but no essential singularities are called meromorphic. Laurent series are the complex-valued equivalent to Taylor series, but can be used to study the behavior of functions near singularities through infinite sums of more well understood functions, such as polynomials.
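A standard worked example of this technique, included here only for illustration, evaluates a real integral by closing the contour with a large semicircle in the upper half-plane:
$$\int_{-\infty}^{\infty} \frac{dx}{1 + x^2} = 2\pi i \operatorname{Res}_{z = i} \frac{1}{1 + z^2} = 2\pi i \cdot \frac{1}{2i} = \pi,$$
since the only singularity of $1/(1 + z^2)$ enclosed by the contour is the simple pole at $z = i$, with residue $1/(2i)$, and the contribution of the semicircular arc vanishes as its radius grows.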
A bounded function that is holomorphic in the entire complex plane must be constant; this is Liouville's theorem. It can be used to provide a natural and short proof for the fundamental theorem of algebra which states that the field of complex numbers is algebraically closed.
If a function is holomorphic throughout a connected domain then its values are fully determined by its values on any smaller subdomain. The function on the larger domain is said to be analytically continued from its values on the smaller domain. This allows the extension of the definition of functions, such as the Riemann zeta function, which are initially defined in terms of infinite sums that converge only on limited domains to almost the entire complex plane. Sometimes, as in the case of the natural logarithm, it is impossible to analytically continue a holomorphic function to a non-simply connected domain in the complex plane but it is possible to extend it to a holomorphic function on a closely related surface known as a Riemann surface.
All this refers to complex analysis in one variable. There is also a very rich theory of complex analysis in more than one complex dimension in which the analytic properties such as power series expansion carry over whereas most of the geometric properties of holomorphic functions in one complex dimension (such as conformality) do not carry over. The Riemann mapping theorem about the conformal relationship of certain domains in the complex plane, which may be the most important result in the one-dimensional theory, fails dramatically in higher dimensions.
A major application of certain complex spaces is in quantum mechanics as wave functions.
| Mathematics | Calculus and analysis | null |
5762 | https://en.wikipedia.org/wiki/Civil%20engineering | Civil engineering | Civil engineering is a professional engineering discipline that deals with the design, construction, and maintenance of the physical and naturally built environment, including public works such as roads, bridges, canals, dams, airports, sewage systems, pipelines, structural components of buildings, and railways.
Civil engineering is traditionally broken into a number of sub-disciplines. It is considered the second-oldest engineering discipline after military engineering, and the term serves to distinguish non-military engineering from military engineering. Civil engineering can take place in the public sector from municipal public works departments through to federal government agencies, and in the private sector from locally based firms to Fortune Global 500 companies.
History
Civil engineering as a discipline
Civil engineering is the application of physical and scientific principles for solving the problems of society, and its history is intricately linked to advances in the understanding of physics and mathematics throughout history. Because civil engineering is a broad profession, including several specialized sub-disciplines, its history is linked to knowledge of structures, materials science, geography, geology, soils, hydrology, environmental science, mechanics, project management, and other fields.
Throughout ancient and medieval history most architectural design and construction was carried out by artisans, such as stonemasons and carpenters, rising to the role of master builder. Knowledge was retained in guilds and seldom supplanted by advances. Structures, roads, and infrastructure that existed were repetitive, and increases in scale were incremental.
One of the earliest examples of a scientific approach to physical and mathematical problems applicable to civil engineering is the work of Archimedes in the 3rd century BC, including Archimedes' principle, which underpins our understanding of buoyancy, and practical solutions such as Archimedes' screw. Brahmagupta, an Indian mathematician, used arithmetic in the 7th century AD, based on Hindu-Arabic numerals, for excavation (volume) computations.
Civil engineering profession
Engineering has been an aspect of life since the beginnings of human existence. The earliest practice of civil engineering may have commenced between 4000 and 2000 BC in ancient Egypt, the Indus Valley civilization, and Mesopotamia (ancient Iraq) when humans started to abandon a nomadic existence, creating a need for the construction of shelter. During this time, transportation became increasingly important leading to the development of the wheel and sailing.
Until modern times there was no clear distinction between civil engineering and architecture, and the term engineer and architect were mainly geographical variations referring to the same occupation, and often used interchangeably. The constructions of pyramids in Egypt (–2500 BC) constitute some of the first instances of large structure constructions in history. Other ancient historic civil engineering constructions include the Qanat water management system in modern-day Iran (the oldest is older than 3000 years and longer than ), the Parthenon by Iktinos in Ancient Greece (447–438 BC), the Appian Way by Roman engineers (), the Great Wall of China by General Meng T'ien under orders from Ch'in Emperor Shih Huang Ti () and the stupas constructed in ancient Sri Lanka like the Jetavanaramaya and the extensive irrigation works in Anuradhapura. The Romans developed civil structures throughout their empire, including especially aqueducts, insulae, harbors, bridges, dams and roads.
In the 18th century, the term civil engineering was coined to incorporate all things civilian as opposed to military engineering. In 1747, the first institution for the teaching of civil engineering, the École Nationale des Ponts et Chaussées, was established in France; and more examples followed in other European countries, like Spain. The first self-proclaimed civil engineer was John Smeaton, who constructed the Eddystone Lighthouse. In 1771 Smeaton and some of his colleagues formed the Smeatonian Society of Civil Engineers, a group of leaders of the profession who met informally over dinner. Though there was evidence of some technical meetings, it was little more than a social society.
In 1818 the Institution of Civil Engineers was founded in London, and in 1820 the eminent engineer Thomas Telford became its first president. The institution received a Royal charter in 1828, formally recognising civil engineering as a profession. Its charter defined civil engineering as:
Civil engineering education
The first private college to teach civil engineering in the United States was Norwich University, founded in 1819 by Captain Alden Partridge. The first degree in civil engineering in the United States was awarded by Rensselaer Polytechnic Institute in 1835. The first such degree to be awarded to a woman was granted by Cornell University to Nora Stanton Blatch in 1905.
In the UK during the early 19th century, the division between civil engineering and military engineering (served by the Royal Military Academy, Woolwich), coupled with the demands of the Industrial Revolution, spawned new engineering education initiatives: the Class of Civil Engineering and Mining was founded at King's College London in 1838, mainly as a response to the growth of the railway system and the need for more qualified engineers, the private College for Civil Engineers in Putney was established in 1839, and the UK's first Chair of Engineering was established at the University of Glasgow in 1840.
Education
Civil engineers typically possess an academic degree in civil engineering. The length of study is three to five years, and the completed degree is designated as a bachelor of technology, or a bachelor of engineering. The curriculum generally includes classes in physics, mathematics, project management, design and specific topics in civil engineering. After taking basic courses in most sub-disciplines of civil engineering, they move on to specialize in one or more sub-disciplines at advanced levels. While an undergraduate degree (BEng/BSc) normally provides successful students with industry-accredited qualifications, some academic institutions offer post-graduate degrees (MEng/MSc), which allow students to further specialize in their particular area of interest.
Practicing engineers
In most countries, a bachelor's degree in engineering represents the first step towards professional certification, and a professional body certifies the degree program. After completing a certified degree program, the engineer must satisfy a range of requirements including work experience and exam requirements before being certified. Once certified, the engineer is designated as a professional engineer (in the United States, Canada and South Africa), a chartered engineer (in most Commonwealth countries), a chartered professional engineer (in Australia and New Zealand), or a European engineer (in most countries of the European Union). There are international agreements between relevant professional bodies to allow engineers to practice across national borders.
The benefits of certification vary depending upon location. For example, in the United States and Canada, "only a licensed professional engineer may prepare, sign and seal, and submit engineering plans and drawings to a public authority for approval, or seal engineering work for public and private clients." This requirement is enforced under provincial law such as the Engineers Act in Quebec. No such legislation has been enacted in other countries including the United Kingdom. In Australia, state licensing of engineers is limited to the state of Queensland. Almost all certifying bodies maintain a code of ethics which all members must abide by.
Engineers must obey contract law in their contractual relationships with other parties. In cases where an engineer's work fails, they may be subject to the law of tort of negligence, and in extreme cases, criminal charges. An engineer's work must also comply with numerous other rules and regulations such as building codes and environmental law.
Sub-disciplines
There are a number of sub-disciplines within the broad field of civil engineering. General civil engineers work closely with surveyors and specialized civil engineers to design grading, drainage, pavement, water supply, sewer service, dams, electric and communications supply. General civil engineering is also referred to as site engineering, a branch of civil engineering that primarily focuses on converting a tract of land from one usage to another. Site engineers spend time visiting project sites, meeting with stakeholders, and preparing construction plans. Civil engineers apply the principles of geotechnical engineering, structural engineering, environmental engineering, transportation engineering and construction engineering to residential, commercial, industrial and public works projects of all sizes and levels of construction.
Coastal engineering
Coastal engineering is concerned with managing coastal areas. In some jurisdictions, the terms sea defense and coastal protection mean defense against flooding and erosion, respectively. Coastal defense is the more traditional term, but coastal management has become popular as well.
Construction engineering
Construction engineering involves planning and execution, transportation of materials, and site development based on hydraulic, environmental, structural, and geotechnical engineering. As construction firms tend to have higher business risk than other types of civil engineering firms, construction engineers often engage in more business-like transactions, such as drafting and reviewing contracts, evaluating logistical operations, and monitoring supply prices.
Earthquake engineering
Earthquake engineering involves designing structures to withstand hazardous earthquake exposures. Earthquake engineering is a sub-discipline of structural engineering. The main objectives of earthquake engineering are to understand the interaction of structures with the shaking ground; foresee the consequences of possible earthquakes; and design, construct and maintain structures that perform adequately during an earthquake, in compliance with building codes.
Environmental engineering
Environmental engineering is the contemporary term for sanitary engineering, though sanitary engineering traditionally had not included much of the hazardous waste management and environmental remediation work covered by environmental engineering. Public health engineering and environmental health engineering are other terms being used.
Environmental engineering deals with treatment of chemical, biological, or thermal wastes, purification of water and air, and remediation of contaminated sites after waste disposal or accidental contamination. Among the topics covered by environmental engineering are pollutant transport, water purification, waste water treatment, air pollution, solid waste treatment, recycling, and hazardous waste management. Environmental engineers administer pollution reduction, green engineering, and industrial ecology. Environmental engineers also compile information on environmental consequences of proposed actions.
Forensic engineering
Forensic engineering is the investigation of materials, products, structures or components that fail or do not operate or function as intended, causing personal injury or damage to property. The consequences of failure are dealt with by the law of product liability. The field also deals with retracing processes and procedures leading to accidents in the operation of vehicles or machinery. The subject is applied most commonly in civil law cases, although it may be of use in criminal law cases. Generally, the purpose of a forensic engineering investigation is to locate the cause or causes of a failure with a view to improving the performance or life of a component, or to assist a court in determining the facts of an accident. It can also involve the investigation of intellectual property claims, especially patents.
Geotechnical engineering
Geotechnical engineering studies rock and soil supporting civil engineering systems. Knowledge from the field of soil science, materials science, mechanics, and hydraulics is applied to safely and economically design foundations, retaining walls, and other structures. Environmental efforts to protect groundwater and safely maintain landfills have spawned a new area of research called geo-environmental engineering.
Identification of soil properties presents challenges to geotechnical engineers. Boundary conditions are often well defined in other branches of civil engineering, but unlike steel or concrete, the material properties and behavior of soil are difficult to predict due to its variability and limitation on investigation. Furthermore, soil exhibits nonlinear (stress-dependent) strength, stiffness, and dilatancy (volume change associated with application of shear stress), making studying soil mechanics all the more difficult. Geotechnical engineers frequently work with professional geologists, Geological Engineering professionals and soil scientists.
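One standard first approximation of this stress-dependent behaviour, named here only as an illustration, is the Mohr-Coulomb model, in which shear strength grows with the effective normal stress. A minimal Python sketch with assumed soil parameters:

# Illustrative sketch (assumed values): Mohr-Coulomb shear strength,
# tau = c + sigma_n * tan(phi), showing strength rising with normal stress.
import math

def shear_strength_kpa(cohesion_kpa, normal_stress_kpa, friction_angle_deg):
    """Return the Mohr-Coulomb shear strength in kPa."""
    return cohesion_kpa + normal_stress_kpa * math.tan(math.radians(friction_angle_deg))

for sigma in (50.0, 100.0, 200.0):   # increasing effective normal stress, kPa
    print(sigma, round(shear_strength_kpa(5.0, sigma, 30.0), 1))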
Materials science and engineering
Materials science is closely related to civil engineering. It studies fundamental characteristics of materials, and deals with ceramics such as concrete and asphalt concrete mixes, strong metals such as aluminum and steel, and polymers including polymethylmethacrylate (PMMA) and carbon fibers.
Materials engineering involves protection and prevention (paints and finishes). Alloying combines two types of metals to produce another metal with desired properties. It incorporates elements of applied physics and chemistry. With recent media attention on nanoscience and nanotechnology, materials engineering has been at the forefront of academic research. It is also an important part of forensic engineering and failure analysis.
Site development and planning
Site development, also known as site planning, is focused on the planning and development potential of a site as well as addressing possible impacts from permitting issues and environmental challenges.
Structural engineering
Structural engineering is concerned with the structural design and structural analysis of buildings, bridges, towers, flyovers (overpasses), tunnels, offshore structures like oil and gas fields in the sea, aerostructures and other structures. This involves identifying the loads which act upon a structure and the forces and stresses which arise within that structure due to those loads, and then designing the structure to successfully support and resist those loads. The loads can include the self-weight of the structure, other dead loads, live loads, moving (wheel) loads, wind loads, earthquake loads, loads from temperature change, etc. The structural engineer must design structures to be safe for their users and to successfully fulfill the function they are designed for (to be serviceable). Due to the nature of some loading conditions, sub-disciplines within structural engineering have emerged, including wind engineering and earthquake engineering.
Design considerations will include strength, stiffness, and stability of the structure when subjected to loads which may be static, such as furniture or self-weight, or dynamic, such as wind, seismic, crowd or vehicle loads, or transitory, such as temporary construction loads or impact. Other considerations include cost, constructibility, safety, aesthetics and sustainability.
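As a simple illustration of such a strength check, with all numbers assumed purely for the example, a short Python sketch estimates the peak bending stress in a simply supported beam under a uniform load so it can be compared with the material's design strength:

# Illustrative sketch (assumed values): peak bending stress in a simply
# supported beam under uniform load, using M = w*L**2/8 and sigma = M/S.
def max_bending_stress(w_kn_per_m, span_m, section_modulus_cm3):
    """Return the peak bending stress in MPa."""
    m_max_knm = w_kn_per_m * span_m ** 2 / 8.0   # maximum bending moment, kN*m
    m_max_nmm = m_max_knm * 1e6                  # convert to N*mm
    s_mm3 = section_modulus_cm3 * 1e3            # cm^3 -> mm^3
    return m_max_nmm / s_mm3                     # N/mm^2 == MPa

stress = max_bending_stress(w_kn_per_m=12.0, span_m=6.0, section_modulus_cm3=557.0)
print(f"peak bending stress ~ {stress:.1f} MPa")   # compare with the design strength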
Surveying
Surveying is the process by which a surveyor measures certain dimensions that occur on or near the surface of the Earth. Surveying equipment such as levels and theodolites is used for accurate measurement of angular deviation and of horizontal, vertical and slope distances. With computerization, electronic distance measurement (EDM), total stations, GPS surveying and laser scanning have to a large extent supplanted traditional instruments. Data collected by survey measurement is converted into a graphical representation of the Earth's surface in the form of a map. This information is then used by civil engineers, contractors and realtors to design from, build on, and trade, respectively. Elements of a structure must be sized and positioned in relation to each other and to site boundaries and adjacent structures.
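The conversion from field measurements to map coordinates can be sketched with a small Python example; the station coordinates, bearing and distance below are assumed values:

# Illustrative sketch: convert a bearing and horizontal distance observed
# from a known station into plane (easting, northing) coordinates.
import math

def traverse_point(easting, northing, bearing_deg, distance_m):
    """Bearing is measured clockwise from grid north, as on a total station."""
    b = math.radians(bearing_deg)
    return (easting + distance_m * math.sin(b),
            northing + distance_m * math.cos(b))

# a point 125.40 m away on a bearing of 63.5 degrees from station (1000, 2000)
print(traverse_point(1000.0, 2000.0, 63.5, 125.40))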
Although surveying is a distinct profession with separate qualifications and licensing arrangements, civil engineers are trained in the basics of surveying and mapping, as well as geographic information systems. Surveyors also lay out the routes of railways, tramway tracks, highways, roads, pipelines and streets as well as position other infrastructure, such as harbors, before construction.
Land surveying
In the United States, Canada, the United Kingdom and most Commonwealth countries land surveying is considered to be a separate and distinct profession. Land surveyors are not considered to be engineers, and have their own professional associations and licensing requirements. The services of a licensed land surveyor are generally required for boundary surveys (to establish the boundaries of a parcel using its legal description) and subdivision plans (a plot or map based on a survey of a parcel of land, with boundary lines drawn inside the larger parcel to indicate the creation of new boundary lines and roads), both of which are generally referred to as Cadastral surveying. They collect data on important geological features below and on the land.
Construction surveying
Construction surveying is generally performed by specialized technicians. Unlike the plans prepared by land surveyors, the plans that result from construction surveying do not have legal status. Construction surveyors perform the following tasks:
Surveying existing conditions of the future work site, including topography, existing buildings and infrastructure, and underground infrastructure when possible;
"lay-out" or "setting-out": placing reference points and markers that will guide the construction of new structures such as roads or buildings;
Verifying the location of structures during construction;
As-Built surveying: a survey conducted at the end of the construction project to verify that the work authorized was completed to the specifications set on plans.
Transportation engineering
Transportation engineering is concerned with moving people and goods efficiently, safely, and in a manner conducive to a vibrant community. This involves specifying, designing, constructing, and maintaining transportation infrastructure which includes streets, canals, highways, rail systems, airports, ports, and mass transit. It includes areas such as transportation design, transportation planning, traffic engineering, some aspects of urban engineering, queueing theory, pavement engineering, Intelligent Transportation System (ITS), and infrastructure management.
Municipal or urban engineering
Municipal engineering is concerned with municipal infrastructure. This involves specifying, designing, constructing, and maintaining streets, sidewalks, water supply networks, sewers, street lighting, municipal solid waste management and disposal, storage depots for various bulk materials used for maintenance and public works (salt, sand, etc.), public parks and cycling infrastructure. In the case of underground utility networks, it may also include the civil portion (conduits and access chambers) of the local distribution networks of electrical and telecommunications services. It can also include the optimization of waste collection and bus service networks. Some of these disciplines overlap with other civil engineering specialties, however municipal engineering focuses on the coordination of these infrastructure networks and services, as they are often built simultaneously, and managed by the same municipal authority. Municipal engineers may also design the site civil works for large buildings, industrial plants or campuses (i.e. access roads, parking lots, potable water supply, treatment or pretreatment of waste water, site drainage, etc.)
Water resources engineering
Water resources engineering is concerned with the collection and management of water (as a natural resource). As a discipline, it therefore combines elements of hydrology, environmental science, meteorology, conservation, and resource management. This area of civil engineering relates to the prediction and management of both the quality and the quantity of water in both underground (aquifers) and above ground (lakes, rivers, and streams) resources. Water resource engineers analyze and model very small to very large areas of the earth to predict the amount and content of water as it flows into, through, or out of a facility. However, the actual design of the facility may be left to other engineers.
Hydraulic engineering concerns the flow and conveyance of fluids, principally water. This area of civil engineering is intimately related to the design of pipelines, water supply network, drainage facilities (including bridges, dams, channels, culverts, levees, storm sewers), and canals. Hydraulic engineers design these facilities using the concepts of fluid pressure, fluid statics, fluid dynamics, and hydraulics, among others.
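A typical calculation from this sub-discipline is Manning's equation for open-channel flow; the channel dimensions and roughness in the following Python sketch are assumed values used only for illustration:

# Illustrative sketch (assumed values, SI units): Manning's equation
# Q = (1/n) * A * R**(2/3) * S**(1/2) for a rectangular open channel.
def manning_discharge(width_m, depth_m, slope, n):
    """Return the discharge Q in cubic metres per second."""
    area = width_m * depth_m                    # flow area A
    wetted_perimeter = width_m + 2.0 * depth_m  # bed plus both side walls
    hydraulic_radius = area / wetted_perimeter  # R = A / P
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# e.g. a 3 m wide concrete channel (n ~ 0.013) flowing 1.2 m deep on a 0.5% slope
print(f"Q ~ {manning_discharge(3.0, 1.2, 0.005, 0.013):.1f} m^3/s")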
Civil engineering systems
Civil engineering systems is a discipline that promotes using systems thinking to manage complexity and change in civil engineering within its broader public context. It posits that the proper development of civil engineering infrastructure requires a holistic, coherent understanding of the relationships between all of the crucial factors that contribute to successful projects while at the same time emphasizing the importance of attention to technical detail. Its purpose is to help integrate the entire civil engineering project life cycle from conception, through planning, designing, making, operating to decommissioning.
| Technology | Technology: General | null |
5778 | https://en.wikipedia.org/wiki/Cave | Cave | A cave or cavern is a natural void under the Earth's surface. Caves often form by the weathering of rock and often extend deep underground. Exogene caves are smaller openings that extend a relatively short distance underground (such as rock shelters). Caves which extend further underground than the opening is wide are called endogene caves.
Speleology is the science of exploration and study of all aspects of caves and the cave environment. Visiting or exploring caves for recreation may be called caving, potholing, or spelunking.
Formation types
The formation and development of caves is known as speleogenesis; it can occur over the course of millions of years. Caves can range widely in size, and are formed by various geological processes. These may involve a combination of chemical processes, erosion by water, tectonic forces, microorganisms, pressure, and atmospheric influences. Isotopic dating techniques can be applied to cave sediments, to determine the timescale of the geological events which formed and shaped present-day caves.
It is estimated that a cave cannot be more than vertically beneath the surface due to the pressure of overlying rocks. This does not, however, impose a maximum depth for a cave which is measured from its highest entrance to its lowest point, as the amount of rock above the lowest point is dependent on the topography of the landscape above it. For karst caves the maximum depth is determined on the basis of the lower limit of karst forming processes, coinciding with the base of the soluble carbonate rocks. Most caves are formed in limestone by dissolution.
Caves can be classified in various other ways as well, including a contrast between active and relict: active caves have water flowing through them; relict caves do not, though water may be retained in them. Types of active caves include inflow caves ("into which a stream sinks"), outflow caves ("from which a stream emerges"), and through caves ("traversed by a stream").
Solutional
Solutional caves or karst caves are the most frequently occurring caves. Such caves form in rock that is soluble; most occur in limestone, but they can also form in other rocks including chalk, dolomite, marble, salt, and gypsum. Except for salt caves, solutional caves result when rock is dissolved by natural acid in groundwater that seeps through bedding planes, faults, joints, and comparable features. Over time cracks enlarge to become caves and cave systems.
The largest and most abundant solutional caves are located in limestone. Limestone dissolves under the action of rainwater and groundwater charged with H2CO3 (carbonic acid) and naturally occurring organic acids. The dissolution process produces a distinctive landform known as karst, characterized by sinkholes and underground drainage. Limestone caves are often adorned with calcium carbonate formations produced through slow precipitation. These include flowstones, stalactites, stalagmites, helictites, soda straws and columns. These secondary mineral deposits in caves are called speleothems.
The portions of a solutional cave that are below the water table or the local level of the groundwater will be flooded.
Lechuguilla Cave in New Mexico and nearby Carlsbad Cavern are now believed to be examples of another type of solutional cave. They were formed by H2S (hydrogen sulfide) gas rising from below, where reservoirs of oil give off sulfurous fumes. This gas mixes with groundwater and forms H2SO4 (sulfuric acid). The acid then dissolves the limestone from below, rather than from above, by acidic water percolating from the surface.
Primary
Caves formed at the same time as the surrounding rock are called primary caves.
Lava tubes are formed through volcanic activity and are the most common primary caves. As lava flows downhill, its surface cools and solidifies. Hot liquid lava continues to flow under that crust, and if most of it flows out, a hollow tube remains. Such caves can be found in the Canary Islands, Jeju-do, the basaltic plains of Eastern Idaho, and in other places. Kazumura Cave near Hilo, Hawaii is a remarkably long and deep lava tube; it is .
Lava caves include but are not limited to lava tubes. Other caves formed through volcanic activity include rifts, lava molds, open vertical conduits, inflationary, blisters, among others.
Sea or littoral
Sea caves are found along coasts around the world. A special case is littoral caves, which are formed by wave action in zones of weakness in sea cliffs. Often these weaknesses are faults, but they may also be dykes or bedding-plane contacts. Some wave-cut caves are now above sea level because of later uplift. Elsewhere, in places such as Thailand's Phang Nga Bay, solutional caves have been flooded by the sea and are now subject to littoral erosion. Sea caves are generally around in length, but may exceed .
Corrasional or erosional
Corrasional or erosional caves are those that form entirely by erosion by flowing streams carrying rocks and other sediments. These can form in any type of rock, including hard rocks such as granite. Generally there must be some zone of weakness to guide the water, such as a fault or joint. A subtype of the erosional cave is the wind or aeolian cave, carved by wind-born sediments. Many caves formed initially by solutional processes often undergo a subsequent phase of erosional or vadose enlargement where active streams or rivers pass through them.
Glacier
Glacier caves are formed by melting ice and flowing water within and under glaciers. The cavities are influenced by the very slow flow of the ice, which tends to collapse the caves again. Glacier caves are sometimes misidentified as "ice caves", though this latter term is properly reserved for bedrock caves that contain year-round ice formations.
Fracture
Fracture caves are formed when layers of more soluble minerals, such as gypsum, dissolve out from between layers of less soluble rock. These rocks fracture and collapse in blocks of stone.
Talus
Talus caves are formed by the openings among large boulders that have fallen down into a random heap, often at the bases of cliffs. These unstable deposits are called talus or scree, and may be subject to frequent rockfalls and landslides.
Anchialine
Anchialine caves are caves, usually coastal, containing a mixture of freshwater and saline water (usually sea water). They occur in many parts of the world, and often contain highly specialized and endemic fauna.
Physical patterns
Branchwork caves resemble surface dendritic stream patterns; they are made up of passages that join downstream as tributaries. Branchwork caves are the most common of cave patterns and are formed near sinkholes where groundwater recharge occurs. Each passage or branch is fed by a separate recharge source and converges into other higher order branches downstream.
Angular network caves form from intersecting fissures of carbonate rock that have had fractures widened by chemical erosion. These fractures form high, narrow, straight passages that persist in widespread closed loops.
Anastomotic caves largely resemble surface braided streams with their passages separating and then meeting further down drainage. They usually form along one bed or structure, and only rarely cross into upper or lower beds.
Spongework caves are formed when solution cavities are joined by mixing of chemically diverse water. The cavities form a pattern that is three-dimensional and random, resembling a sponge.
Ramiform caves form as irregular large rooms, galleries, and passages. These randomized three-dimensional rooms form from a rising water table that erodes the carbonate rock with hydrogen-sulfide enriched water.
Pit caves (vertical caves, potholes, or simply "pits") consist of a vertical shaft rather than a horizontal cave passage. They may or may not be associated with one of the above structural patterns.
Geographic distribution
Caves are found throughout the world, although the distribution of documented cave systems is heavily skewed towards those countries where caving has been popular for many years (such as France, Italy, Australia, the UK, the United States, etc.). As a result, explored caves are found widely in Europe, Asia, North America and Oceania, but are sparse in South America, Africa, and Antarctica.
This is a rough generalization, as large expanses of North America and Asia contain no documented caves, whereas areas such as the Madagascar dry deciduous forests and parts of Brazil contain many documented caves. As the world's expanses of soluble bedrock are researched by cavers, the distribution of documented caves is likely to shift. For example, China, despite containing around half the world's exposed limestone—more than —has relatively few documented caves.
Records and superlatives
The cave system with the greatest total length of surveyed passage is Mammoth Cave in Kentucky, US, at .
The longest surveyed underwater cave, and second longest overall, is Sistema Ox Bel Ha in Yucatán, Mexico at .
The deepest known cave—measured from its highest entrance to its lowest point—is Veryovkina Cave in Abkhazia, Georgia, with a depth of . This was the first cave to be explored to a depth of more than . (The first cave to be descended below was Gouffre Berger in France.) The Sarma and Illyuzia-Mezhonnogo-Snezhnaya caves in Georgia are the current second- and third-deepest caves. The deepest outside Georgia is Lamprechtsofen Vogelschacht Weg Schacht in Austria, which is deep.
The deepest vertical shaft in a cave is in Vrtoglavica Cave in Slovenia. The second deepest is Ghar-e-Ghala at in the Parau massif near Kermanshah in Iran.
The deepest reached by a remotely operated underwater vehicle in an underwater cave is , in the Hranice Abyss in the Czech Republic.
The Miao Room is the world's largest known room by volume, with a measured volume of . The largest known room by surface is Sarawak Chamber, in the Gunung Mulu National Park (Miri, Sarawak, Borneo, Malaysia), a sloping, boulder strewn chamber with an area of . The largest room in a show cave is the Salle de la Verna in the French Pyrenees.
The largest passage ever discovered is in the Son Doong Cave in Phong Nha-Kẻ Bàng National Park in Quảng Bình Province, Vietnam. It is in length, high and wide over most of its length, but over high and wide for part of its length.
Five longest surveyed
Mammoth Cave, Kentucky, US
Sistema Ox Bel Ha, Mexico
Sistema Sac Actun/Sistema Dos Ojos, Mexico
Jewel Cave, South Dakota, US
Shuanghedong Cave Network, China
Ecology
Cave-inhabiting animals are often categorized as troglobites (cave-limited species), troglophiles (species that can live their entire lives in caves, but also occur in other environments), trogloxenes (species that use caves, but cannot complete their life cycle fully in caves) and accidentals (animals not in one of the previous categories). Some authors use separate terminology for aquatic forms (for example, stygobites, stygophiles, and stygoxenes).
Of these animals, the troglobites are perhaps the most unusual organisms. Troglobitic species often show a number of characteristics, termed troglomorphic, associated with their adaptation to subterranean life. These characteristics may include a loss of pigment (often resulting in a pale or white coloration), a loss of eyes (or at least of optical functionality), an elongation of appendages, and an enhancement of other senses (such as the ability to sense vibrations in water). Aquatic troglobites (or stygobites), such as the endangered Alabama cave shrimp, live in bodies of water found in caves and get nutrients from detritus washed into their caves and from the feces of bats and other cave inhabitants. Other aquatic troglobites include cave fish, and cave salamanders such as the olm and the Texas blind salamander.
Cave insects such as Oligaphorura (formerly Archaphorura) schoetti are troglophiles, reaching in length. They have extensive distribution and have been studied fairly widely. Most specimens are female, but a male specimen was collected from St Cuthberts Swallet in 1969.
Bats, such as the gray bat and Mexican free-tailed bat, are trogloxenes and are often found in caves; they forage outside of the caves. Some species of cave crickets are classified as trogloxenes, because they roost in caves by day and forage above ground at night.
Because of the fragility of cave ecosystems, and the fact that cave regions tend to be isolated from one another, caves harbor a number of endangered species, such as the Tooth cave spider, liphistius trapdoor spider, and the gray bat.
Caves are visited by many surface-living animals, including humans. These are usually relatively short-lived incursions, due to the lack of light and sustenance.
Cave entrances often have typical florae. For instance, in the eastern temperate United States, cave entrances are most frequently (and often densely) populated by the bulblet fern, Cystopteris bulbifera.
Archaeological and cultural importance
People have made use of caves throughout history. The earliest human fossils found in caves come from a series of caves near Krugersdorp and Mokopane in South Africa. The cave sites of Sterkfontein, Swartkrans, Kromdraai B, Drimolen, Malapa, Cooper's D, Gladysvale, Gondolin and Makapansgat have yielded a range of early human species dating back to between three and one million years ago, including Australopithecus africanus, Australopithecus sediba and Paranthropus robustus. However, it is not generally thought that these early humans were living in the caves, but that they were brought into the caves by carnivores that had killed them.
The first early hominid ever found in Africa, the Taung Child in 1924, was also thought for many years to come from a cave, where it had been deposited after being predated on by an eagle. However, this is now debated (Hopley et al., 2013; Am. J. Phys. Anthrop.). Caves do form in the dolomite of the Ghaap Plateau, including the Early, Middle and Later Stone Age site of Wonderwerk Cave; however, the caves that form along the escarpment's edge, like that hypothesised for the Taung Child, are formed within a secondary limestone deposit called tufa. There is extensive evidence for other early human species inhabiting caves from at least one million years ago in different parts of the world, including Homo erectus in China at Zhoukoudian, Homo rhodesiensis in South Africa at the Cave of Hearths (Makapansgat), Homo neanderthalensis and Homo heidelbergensis in Europe at Archaeological Site of Atapuerca, Homo floresiensis in Indonesia, and the Denisovans in southern Siberia.
In southern Africa, early modern humans regularly used sea caves as shelter starting about 180,000 years ago when they learned to exploit the sea for the first time. The oldest known site is PP13B at Pinnacle Point. This may have allowed rapid expansion of humans out of Africa and colonization of areas of the world such as Australia by 60–50,000 years ago. Throughout southern Africa, Australia, and Europe, early modern humans used caves and rock shelters as sites for rock art, such as those at Giant's Castle. Caves such as the yaodong in China were used for shelter; other caves were used for burials (such as rock-cut tombs), or as religious sites (such as Buddhist caves). Among the known sacred caves are China's Cave of a Thousand Buddhas and the sacred caves of Crete.
Paleolithic cave paintings have been found throughout the world dating from 64,800 years old for non-figurative art and 43,900 years old for figurative art.
Caves and acoustics
The importance of sound in caves predates a modern understanding of acoustics. Archaeologists have uncovered relationships between paintings of dots and lines, in specific areas of resonance, within the caves of Spain and France, as well as instruments depicting paleolithic motifs, indicators of musical events and rituals. Clusters of paintings were often found in areas with notable acoustics, sometimes even replicating the sounds of the animals depicted on the walls. The human voice was also theorized to be used as an echolocation device to navigate darker areas of the caves where torches were less useful. Dots of red ochre are often found in spaces with the highest resonance, where the production of paintings was too difficult.
Caves continue to be used by modern-day explorers of acoustics. Today Cumberland Caverns provides one of the best examples of modern musical uses of caves. Not only are the caves utilized for reverberation, but for the dampening qualities of their abnormal faces as well. The irregularities in the walls of the Cumberland Caverns diffuse sounds bouncing off the walls and give the space an almost recording studio-like quality. During the 20th century musicians began to explore the possibility of using caves as locations for clubs and concert halls, including the likes of Dinah Shore, Roy Acuff, and Benny Goodman. Unlike today, these early performances were typically held in the mouths of the caves, as the lack of technology made the depths of the interior inaccessible with musical equipment. In Luray Caverns, Virginia, a functioning organ has been developed that generates sound by mallets striking stalactites, each with a different pitch.
| Physical sciences | Caves | null |
5781 | https://en.wikipedia.org/wiki/Chinese%20numerals | Chinese numerals | Chinese numerals are words and characters used to denote numbers in written Chinese.
Today, speakers of Chinese languages use three written numeral systems: the system of Arabic numerals used worldwide, and two indigenous systems. The more familiar indigenous system is based on Chinese characters that correspond to numerals in the spoken language. These may be shared with other languages of the Chinese cultural sphere such as Korean, Japanese, and Vietnamese. Most people and institutions in China primarily use the Arabic or mixed Arabic-Chinese systems for convenience, with traditional Chinese numerals used in finance, mainly for writing amounts on cheques and banknotes, on some ceremonial occasions, on some packaging, and in advertising.
The other indigenous system consists of the Suzhou numerals, or huama, a positional system, the only surviving form of the rod numerals. These were once used by Chinese mathematicians, and later by merchants in Chinese markets, such as those in Hong Kong until the 1990s, but were gradually supplanted by Arabic numerals.
Basic counting in Chinese
The Chinese character numeral system consists of the Chinese characters used by the Chinese written language to write spoken numerals. Similar to spelling-out numbers in English (e.g., "one thousand nine hundred forty-five"), it is not an independent system per se. Since it reflects spoken language, it does not use the positional system as in Arabic numerals, in the same way that spelling out numbers in English does not.
Ordinary numerals
There are characters representing the numbers zero through nine, and other characters representing larger numbers such as tens, hundreds, thousands, ten thousands and hundred millions. There are two sets of characters for Chinese numerals: one for everyday writing, known as (), and one for use in commercial, accounting or financial contexts, known as ( or 'capital numbers'). The latter were developed by Wu Zetian () and were further refined by the Hongwu Emperor (). They arose because the characters used for writing numerals are geometrically simple, so simply using those numerals cannot prevent forgeries in the same way spelling numbers out in English would. A forger could easily change the everyday characters (30) to (5000) just by adding a few strokes. That would not be possible when writing using the financial characters (30) and (5000). They are also referred to as "banker's numerals" or "anti-fraud numerals". For the same reason, rod numerals were never used in commercial records.
Regional usage
Powers of 10
Large numbers
For numbers larger than 10,000, similarly to the long and short scales in the West, there have been four systems in ancient and modern usage. The original one, with unique names for all powers of ten up to the 14th, is ascribed to the Yellow Emperor in the 6th century book by Zhen Luan. In modern Chinese, only the second system is used, in which the same ancient names are used, but each represents a myriad (10,000) times the previous:
In practice, this situation does not lead to ambiguity, with the exception of zhào, which means 10^12 according to the system in common usage throughout the Chinese communities as well as in Japan and Korea, but has also been used for 10^6 in recent years (especially in mainland China for megabyte). To avoid problems arising from the ambiguity, the PRC government never uses this character in official documents, but uses alternative expressions instead. Partly due to this, combinations of the smaller units are often used instead of the larger units of the traditional system as well. The ROC government in Taiwan uses this character to mean 10^12 in official documents.
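The grouping itself can be sketched in a few lines of Python; the pinyin unit names below follow the common system described above, and the sample number is arbitrary:

# Minimal sketch: split an integer into myriad (10^4) groups, mirroring the
# system in which each named unit -- wan (10^4), yi (10^8), zhao (10^12) --
# is 10,000 times the previous one.
MYRIAD_UNITS = ["", "wan", "yi", "zhao"]

def myriad_groups(n):
    groups, unit = [], 0
    while n > 0:
        n, group = divmod(n, 10_000)
        if group:
            groups.append((str(group) + " " + MYRIAD_UNITS[unit]).strip())
        unit += 1
    return ", ".join(reversed(groups)) or "0"

print(myriad_groups(120_500_030))   # -> "1 yi, 2050 wan, 30"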
Large numbers from Buddhism
Numerals beyond zǎi come from Buddhist texts in Sanskrit, but are mostly found in ancient texts. Some of the following words are still being used today, but may have transferred meanings.
Small numbers
The following are characters used to denote small order of magnitude in Chinese historically. With the introduction of SI units, some of them have been incorporated as SI prefixes, while the rest have fallen into disuse.
Small numbers from Buddhism
SI prefixes
In the People's Republic of China, the early translation for the SI prefixes in 1981 was different from those used today. The larger (, , , , ) and smaller Chinese numerals (, , , , ) were defined as translation for the SI prefixes as mega, giga, tera, peta, exa, micro, nano, pico, femto, atto, resulting in the creation of yet more values for each numeral.
The Republic of China (Taiwan) defined as the translation for mega and as the translation for tera. This translation is widely used in official documents, academic communities, informational industries, etc. However, the civil broadcasting industries sometimes use to represent "megahertz".
Today, the governments of both China and Taiwan use phonetic transliterations for the SI prefixes. However, the governments have each chosen different Chinese characters for certain prefixes. The following table lists the two different standards together with the early translation.
Reading and transcribing numbers
Whole numbers
Multiple-digit numbers are constructed using a multiplicative principle; first the digit itself (from 1 to 9), then the place (such as 10 or 100); then the next digit.
In Mandarin, the multiplier (liǎng) is often used rather than for all numbers 200 and greater with the "2" numeral (although as noted earlier this varies from dialect to dialect and person to person). Use of both or are acceptable for the number 200. When writing in the Cantonese dialect, is used to represent the "2" numeral for all numbers. In the southern Min dialect of Chaozhou (Teochew), (no6) is used to represent the "2" numeral in all numbers from 200 onwards. Thus:
For the numbers 11 through 19, the leading 'one' () is usually omitted. In some dialects, like Shanghainese, when there are only two significant digits in the number, the leading 'one' and the trailing zeroes are omitted. Sometimes, the one before "ten" in the middle of a number, such as 213, is omitted. Thus:
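A simplified Python sketch of this multiplicative principle, using pinyin digit and place names and deliberately ignoring the refinements just described (the liǎng alternation, internal-zero rules, and the dropped leading "one"), illustrates the construction for numbers below 10,000:

# Simplified sketch: digit, then place, then the next digit.
DIGITS = ["ling", "yi", "er", "san", "si", "wu", "liu", "qi", "ba", "jiu"]
PLACES = ["", "shi", "bai", "qian"]   # 10^0, 10^1, 10^2, 10^3

def read_number(n):
    assert 0 < n < 10_000
    parts = []
    for power in range(3, -1, -1):
        digit = (n // 10 ** power) % 10
        if digit:
            parts.append(DIGITS[digit] + ((" " + PLACES[power]) if power else ""))
    return " ".join(parts)

print(read_number(213))    # -> "er bai yi shi san"
print(read_number(5280))   # -> "wu qian er bai ba shi"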
| Mathematics | Basics | null |
5783 | https://en.wikipedia.org/wiki/Computer%20program | Computer program | A computer program is a sequence or set of instructions in a programming language for a computer to execute. It is one component of software, which also includes documentation and other intangible components.
A computer program in its human-readable form is called source code. Source code needs another computer program to execute because computers can only execute their native machine instructions. Therefore, source code may be translated to machine instructions using a compiler written for the language. (Assembly language programs are translated using an assembler.) The resulting file is called an executable. Alternatively, source code may execute within an interpreter written for the language.
If the executable is requested for execution, then the operating system loads it into memory and starts a process. The central processing unit will soon switch to this process so it can fetch, decode, and then execute each machine instruction.
If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each statement. Running the source code is slower than running an executable. Moreover, the interpreter must be installed on the computer.
Example computer program
The "Hello, World!" program is used to illustrate a language's basic syntax. The syntax of the language BASIC (1964) was intentionally limited to make the language easy to learn. For example, variables are not declared before being used. Also, variables are automatically initialized to zero. Here is an example computer program, in Basic, to average a list of numbers:
10 INPUT "How many numbers to average?", A
20 FOR I = 1 TO A
30 INPUT "Enter number:", B
40 LET C = C + B
50 NEXT I
60 LET D = C/A
70 PRINT "The average is", D
80 END
Once the mechanics of basic computer programming are learned, more sophisticated and powerful languages are available to build large computer systems.
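For comparison, a minimal sketch of the same averaging program in a more recent language such as Python might read:

# Illustrative sketch: the BASIC averaging program above, expressed in Python.
count = int(input("How many numbers to average? "))
total = 0.0
for _ in range(count):
    total += float(input("Enter number: "))
print("The average is", total / count)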
History
Improvements in software development are the result of improvements in computer hardware. At each stage in hardware's history, the task of computer programming changed dramatically.
Analytical Engine
In 1837, Jacquard's loom inspired Charles Babbage to attempt to build the Analytical Engine.
The names of the components of the calculating device were borrowed from the textile industry. In the textile industry, yarn was brought from the store to be milled. The device had a store which consisted of memory to hold 1,000 numbers of 50 decimal digits each. Numbers from the store were transferred to the mill for processing. The engine was programmed using two sets of perforated cards. One set directed the operation and the other set inputted the variables. However, the thousands of cogged wheels and gears never fully worked together.
Ada Lovelace worked for Charles Babbage to create a description of the Analytical Engine (1843). The description contained Note G which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine. This note is recognized by some historians as the world's first computer program.
Universal Turing machine
In 1936, Alan Turing introduced the Universal Turing machine, a theoretical device that can model every computation.
It is a finite-state machine that has an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm. The machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state. All present-day computers are Turing complete.
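The behaviour of such a machine can be sketched in a few lines of Python; the transition table below is an assumed example (a machine that flips the bits of its input and then halts), not one taken from Turing's paper:

# Minimal sketch of a Turing machine simulator.  The example transition
# table flips every bit of the input and halts on the first blank cell.
def run_turing_machine(tape, transitions, state="start", halt="halt"):
    cells = dict(enumerate(tape))   # sparse tape; unwritten cells are blank "_"
    head = 0
    while state != halt:
        symbol = cells.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine("10110", flip_bits))   # -> "01001_"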
ENIAC
The Electronic Numerical Integrator And Computer (ENIAC) was built between July 1943 and Fall 1945. It was a Turing complete, general-purpose computer that used 17,468 vacuum tubes to create the circuits. At its core, it was a series of Pascalines wired together. Its 40 units weighed 30 tons, occupied , and consumed $650 per hour (in 1940s currency) in electricity when idle. It had 20 base-10 accumulators. Programming the ENIAC took up to two months. Three function tables were on wheels and needed to be rolled to fixed function panels. Function tables were connected to function panels by plugging heavy black cables into plugboards. Each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a program took a week. It ran from 1947 until 1955 at Aberdeen Proving Ground, calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns.
Stored-program computers
Instead of plugging in cords and turning switches, a stored-program computer loads its instructions into memory just like it loads its data into memory. As a result, the computer could be programmed quickly and perform calculations at very fast speeds. Presper Eckert and John Mauchly built the ENIAC. The two engineers introduced the stored-program concept in a three-page memo dated February 1944. Later, in September 1944, John von Neumann began working on the ENIAC project. On June 30, 1945, von Neumann published the First Draft of a Report on the EDVAC, which equated the structures of the computer with the structures of the human brain. The design became known as the von Neumann architecture. The architecture was deployed simultaneously in the construction of the EDVAC and EDSAC computers in 1949.
The IBM System/360 (1964) was a family of computers, each having the same instruction set architecture. The Model 20 was the smallest and least expensive. Customers could upgrade and retain the same application software. The Model 195 was the most premium. Each System/360 model featured multiprogramming—having multiple processes in memory at once. When one process was waiting for input/output, another could compute.
IBM planned for each model to be programmed using PL/1. A committee was formed that included COBOL, Fortran and ALGOL programmers. The purpose was to develop a language that was comprehensive, easy to use, extendible, and would replace Cobol and Fortran. The result was a large and complex language that took a long time to compile.
Computers manufactured until the 1970s had front-panel switches for manual programming. The computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed. This process was then repeated. Computer programs also were automatically inputted via paper tape, punched cards or magnetic-tape. After the medium was loaded, the starting address was set via switches, and the execute button was pressed.
Very Large Scale Integration
A major milestone in software development was the invention of the Very Large Scale Integration (VLSI) circuit (1964). Following World War II, tube-based technology was replaced with point-contact transistors (1947) and bipolar junction transistors (late 1950s) mounted on a circuit board. During the 1960s, the aerospace industry replaced the circuit board with an integrated circuit chip.
Robert Noyce, co-founder of Fairchild Semiconductor (1957) and Intel (1968), achieved a technological improvement to refine the production of field-effect transistors (1963). The goal was to alter the electrical resistivity and conductivity of a semiconductor junction. First, naturally occurring silicate minerals are converted into polysilicon rods using the Siemens process. The Czochralski process then converts the rods into a monocrystalline silicon boule crystal. The crystal is then thinly sliced to form a wafer substrate. The planar process of photolithography then integrates unipolar transistors, capacitors, diodes, and resistors onto the wafer to build a matrix of metal–oxide–semiconductor (MOS) transistors. The MOS transistor is the primary component in integrated circuit chips.
Originally, integrated circuit chips had their function set during manufacturing. During the 1960s, controlling the electrical flow migrated to programming a matrix of read-only memory (ROM). The matrix resembled a two-dimensional array of fuses. The process to embed instructions onto the matrix was to burn out the unneeded connections. There were so many connections, firmware programmers wrote a computer program on another chip to oversee the burning. The technology became known as Programmable ROM. In 1971, Intel installed the computer program onto the chip and named it the Intel 4004 microprocessor.
The terms microprocessor and central processing unit (CPU) are now used interchangeably. However, CPUs predate microprocessors. For example, the IBM System/360 (1964) had a CPU made from circuit boards containing discrete components on ceramic substrates.
Sac State 8008
The Intel 4004 (1971) was a 4-bit microprocessor designed to run the Busicom calculator. Five months after its release, Intel released the Intel 8008, an 8-bit microprocessor. Bill Pentz led a team at Sacramento State to build the first microcomputer using the Intel 8008: the Sac State 8008 (1972). Its purpose was to store patient medical records. The computer supported a disk operating system to run a Memorex 3-megabyte hard disk drive. It had a color display and keyboard that was packaged in a single console. The disk operating system was programmed using IBM's Basic Assembly Language (BAL). The medical records application was programmed using a BASIC interpreter. However, the computer was an evolutionary dead-end because it was extremely expensive. Also, it was built at a public university lab for a specific purpose. Nonetheless, the project contributed to the development of the Intel 8080 (1974) instruction set.
x86 series
In 1978, the modern software development environment began when Intel upgraded the Intel 8080 to the Intel 8086. Intel simplified the Intel 8086 to manufacture the cheaper Intel 8088. IBM embraced the Intel 8088 when they entered the personal computer market (1981). As consumer demand for personal computers increased, so did Intel's microprocessor development. The succession of development is known as the x86 series. The x86 assembly language is a family of backward-compatible machine instructions. Machine instructions created in earlier microprocessors were retained throughout microprocessor upgrades. This enabled consumers to purchase new computers without having to purchase new application software. The major categories of instructions are:
Memory instructions to set and access numbers and strings in random-access memory.
Integer arithmetic logic unit (ALU) instructions to perform the primary arithmetic operations on integers.
Floating point ALU instructions to perform the primary arithmetic operations on real numbers.
Call stack instructions to push and pop words needed to allocate memory and interface with functions.
Single instruction, multiple data (SIMD) instructions to increase speed when multiple processors are available to perform the same algorithm on an array of data.
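As a rough illustration, the short C function below exercises most of these categories when compiled for x86; the mapping in the comments is approximate and depends on the compiler and optimization level.
#include <stddef.h>

/* Sums an array and returns the average.
   Approximate x86 instruction categories used (compiler-dependent):
   - memory instructions: loading values[i] from random-access memory
   - integer ALU instructions: incrementing i and comparing it with count
   - floating point ALU instructions: adding to sum and the final division
   - call stack instructions: pushing and popping the stack frame on call and return
   - SIMD instructions: a vectorizing compiler may sum several elements at once */
double average(const double *values, size_t count)
{
    double sum = 0.0;
    for (size_t i = 0; i < count; i++)
        sum += values[i];
    return count ? sum / (double)count : 0.0;
}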
Changing programming environment
VLSI circuits enabled the programming environment to advance from a computer terminal (until the 1990s) to a graphical user interface (GUI) computer. Computer terminals limited programmers to a single shell running in a command-line environment. During the 1970s, full-screen source code editing became possible through a text-based user interface. Regardless of the technology available, the goal is to program in a programming language.
Programming paradigms and languages
Programming language features exist to provide building blocks to be combined to express programming ideals. Ideally, a programming language should:
express ideas directly in the code.
express independent ideas independently.
express relationships among ideas directly in the code.
combine ideas freely.
combine ideas only where combinations make sense.
express simple ideas simply.
The style in which a programming language provides these building blocks may be categorized into programming paradigms. For example, different paradigms may differentiate:
procedural languages, functional languages, and logical languages.
different levels of data abstraction.
different levels of class hierarchy.
different levels of input datatypes, as in container types and generic programming.
Each of these programming styles has contributed to the synthesis of different programming languages.
A programming language is a set of keywords, symbols, identifiers, and rules by which programmers can communicate instructions to the computer. They follow a set of rules called a syntax.
Keywords are reserved words to form declarations and statements.
Symbols are characters to form operations, assignments, control flow, and delimiters.
Identifiers are words created by programmers to form constants, variable names, structure names, and function names.
Syntax Rules are defined in the Backus–Naur form.
Programming languages get their basis from formal languages. The purpose of defining a solution in terms of its formal language is to generate an algorithm to solve the underlying problem. An algorithm is a sequence of simple instructions that solves a problem.
Generations of programming language
The evolution of programming languages began when the EDSAC (1949) used the first stored computer program in its von Neumann architecture. Programming the EDSAC was in the first generation of programming language.
The first generation of programming language is machine language. Machine language requires the programmer to enter instructions using instruction numbers called machine code. For example, the ADD operation on the PDP-11 has instruction number 24576.
The second generation of programming language is assembly language. Assembly language allows the programmer to use mnemonic instructions instead of remembering instruction numbers. An assembler translates each assembly language mnemonic into its machine language number. For example, on the PDP-11, the operation 24576 can be referenced as ADD in the source code. The four basic arithmetic operations have assembly instructions like ADD, SUB, MUL, and DIV. Computers also have instructions like DW (Define Word) to reserve memory cells. Then the MOV instruction can copy integers between registers and memory.
The basic structure of an assembly language statement is a label, operation, operand, and comment.
Labels allow the programmer to work with variable names. The assembler will later translate labels into physical memory addresses.
Operations allow the programmer to work with mnemonics. The assembler will later translate mnemonics into instruction numbers.
Operands tell the assembler which data the operation will process.
Comments allow the programmer to articulate a narrative because the instructions alone are vague.
The key characteristic of an assembly language program is it forms a one-to-one mapping to its corresponding machine language target.
The third generation of programming language uses compilers and interpreters to execute computer programs. The distinguishing feature of a third-generation language is its independence from particular hardware. Early languages include Fortran (1958), COBOL (1959), ALGOL (1960), and BASIC (1964). In 1973, the C programming language emerged as a high-level language that produced efficient machine language instructions. Whereas third-generation languages historically generated many machine instructions for each statement, C has statements that may generate a single machine instruction. Moreover, an optimizing compiler might overrule the programmer and produce fewer machine instructions than statements. Today, an entire paradigm of languages fills the imperative, third-generation spectrum.
The fourth generation of programming language emphasizes what output results are desired, rather than how programming statements should be constructed. Declarative languages attempt to limit side effects and allow programmers to write code with relatively few errors. One popular fourth generation language is called Structured Query Language (SQL). Database developers no longer need to process each database record one at a time. Also, a simple statement can generate output records without having to understand how they are retrieved.
Imperative languages
Imperative languages specify a sequential algorithm using declarations, expressions, and statements:
A declaration introduces a variable name to the computer program and assigns it to a datatype – for example: var x: integer;
An expression yields a value – for example: 2 + 2 yields 4
A statement might assign an expression to a variable or use the value of a variable to alter the program's control flow – for example: x := 2 + 2; if x = 4 then do_something();
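For comparison, here are the same three elements written in C (with a trivial do_something() supplied so the sketch compiles):
#include <stdio.h>

void do_something(void) { printf("x is 4\n"); }

int main(void)
{
    int x;            /* declaration: introduces x and binds it to a datatype */
    x = 2 + 2;        /* statement: assigns the expression 2 + 2 to x */
    if (x == 4)       /* statement: uses the value of x to alter control flow */
        do_something();
    return 0;
}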
Fortran
FORTRAN (1958) was unveiled as "The IBM Mathematical FORmula TRANslating system". It was designed for scientific calculations, without string handling facilities. Along with declarations, expressions, and statements, it supported:
arrays.
subroutines.
"do" loops.
It succeeded because:
programming and debugging costs were below computer running costs.
it was supported by IBM.
applications at the time were scientific.
Non-IBM vendors also wrote Fortran compilers, but with a syntax that would likely fail IBM's compiler. The American National Standards Institute (ANSI) developed the first Fortran standard in 1966. In 1978, Fortran 77 became the standard until 1991. Fortran 90 supports:
records.
pointers to arrays.
COBOL
COBOL (1959) stands for "COmmon Business Oriented Language". Fortran manipulated symbols. It was soon realized that symbols did not need to be numbers, so strings were introduced. The US Department of Defense influenced COBOL's development, with Grace Hopper being a major contributor. The statements were English-like and verbose. The goal was to design a language so managers could read the programs. However, the lack of structured statements hindered this goal.
COBOL's development was tightly controlled, so dialects requiring ANSI standardization did not emerge. As a consequence, the language was not changed for 15 years, until 1974. The 1990s version did make consequential changes, like adding object-oriented programming.
Algol
ALGOL (1960) stands for "ALGOrithmic Language". It had a profound influence on programming language design. Emerging from a committee of European and American programming language experts, it used standard mathematical notation and had a readable, structured design. Algol was first to define its syntax using the Backus–Naur form. This led to syntax-directed compilers. It added features like:
block structure, where variables were local to their block.
arrays with variable bounds.
"for" loops.
functions.
recursion.
Algol's direct descendants include Pascal, Modula-2, Ada, Delphi and Oberon on one branch. On another branch the descendants include C, C++ and Java.
Basic
BASIC (1964) stands for "Beginner's All-Purpose Symbolic Instruction Code". It was developed at Dartmouth College for all of their students to learn. If a student did not go on to a more powerful language, the student would still remember Basic. A Basic interpreter was installed in the microcomputers manufactured in the late 1970s. As the microcomputer industry grew, so did the language.
Basic pioneered the interactive session. It offered operating system commands within its environment:
The 'new' command created an empty slate.
Statements evaluated immediately.
Statements could be programmed by preceding them with line numbers.
The 'list' command displayed the program.
The 'run' command executed the program.
However, the Basic syntax was too simple for large programs. Recent dialects added structure and object-oriented extensions. Microsoft's Visual Basic is still widely used and produces a graphical user interface.
C
C programming language (1973) got its name because the language BCPL was replaced with B, and AT&T Bell Labs called the next version "C". Its purpose was to write the UNIX operating system. C is a relatively small language, making it easy to write compilers. Its growth mirrored the hardware growth in the 1980s. Its growth also was because it has the facilities of assembly language, but uses a high-level syntax. It added advanced features like:
inline assembler.
arithmetic on pointers.
pointers to functions.
bit operations.
freely combining complex operators.
C allows the programmer to control which region of memory data is stored in. Global variables and static variables require the fewest clock cycles to store. The stack is automatically used for the standard variable declarations. Heap memory is returned to a pointer variable from the malloc() function.
The global and static data region is located just above the program region. (The program region is technically called the text region. It is where machine instructions are stored.)
The global and static data region is technically two regions. One region is called the initialized data segment, where variables declared with default values are stored. The other region is called the block started by symbol (BSS) segment, where variables declared without default values are stored.
Variables stored in the global and static data region have their addresses set at compile-time. They retain their values throughout the life of the process.
The global and static region stores the global variables that are declared on top of (outside) the main() function. Global variables are visible to main() and every other function in the source code.
On the other hand, variable declarations inside of main(), other functions, or within { } block delimiters are local variables. Local variables also include formal parameter variables. Parameter variables are enclosed within the parenthesis of a function definition. Parameters provide an interface to the function.
Local variables declared using the static prefix are also stored in the global and static data region. Unlike global variables, static variables are only visible within the function or block. Static variables always retain their value. An example usage would be the function int increment_counter(){static int counter = 0; counter++; return counter;}
The stack region is a contiguous block of memory located near the top memory address. Variables placed in the stack are populated from top to bottom. A stack pointer is a special-purpose register that keeps track of the last memory address populated. Variables are placed into the stack via the assembly language PUSH instruction. Therefore, the addresses of these variables are set during runtime. The method for stack variables to lose their scope is via the POP instruction.
Local variables declared without the static prefix, including formal parameter variables, are called automatic variables and are stored in the stack. They are visible inside the function or block and lose their scope upon exiting the function or block.
The heap region is located below the stack. It is populated from the bottom to the top. The operating system manages the heap using a heap pointer and a list of allocated memory blocks. Like the stack, the addresses of heap variables are set during runtime. An out of memory error occurs when the heap pointer and the stack pointer meet.
C provides the malloc() library function to allocate heap memory. Populating the heap with data is an additional copy function. Variables stored in the heap are economically passed to functions using pointers. Without pointers, the entire block of data would have to be passed to the function via the stack.
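The following sketch gathers the four kinds of storage described above into one C program; the variable names are illustrative only.
#include <stdio.h>
#include <stdlib.h>

int global_total = 100;   /* global and static data region (initialized data segment) */
static int call_count;    /* global and static data region (BSS segment) */

int main(void)
{
    int local_value = 5;                     /* stack region: automatic variable */
    int *heap_value = malloc(sizeof(int));   /* heap region: address returned by malloc() */
    if (heap_value == NULL)
        return 1;
    *heap_value = 7;                         /* populating the heap is an additional copy */
    call_count++;
    printf("%d %d %d %d\n", global_total, call_count, local_value, *heap_value);
    free(heap_value);                        /* return the heap memory when finished */
    return 0;
}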
C++
In the 1970s, software engineers needed language support to break large projects down into modules. One obvious feature was to decompose large projects physically into separate files. A less obvious feature was to decompose large projects logically into abstract data types. At the time, languages supported concrete (scalar) datatypes like integer numbers, floating-point numbers, and strings of characters. Abstract datatypes are structures of concrete datatypes, with a new name assigned. For example, a list of integers could be called integer_list.
In object-oriented jargon, abstract datatypes are called classes. However, a class is only a definition; no memory is allocated. When memory is allocated to a class and bound to an identifier, it is called an object.
Object-oriented imperative languages developed by combining the need for classes and the need for safe functional programming. A function, in an object-oriented language, is assigned to a class. An assigned function is then referred to as a method, member function, or operation. Object-oriented programming is executing operations on objects.
Object-oriented languages support a syntax to model subset/superset relationships. In set theory, an element of a subset inherits all the attributes contained in the superset. For example, a student is a person. Therefore, the set of students is a subset of the set of persons. As a result, students inherit all the attributes common to all persons. Additionally, students have unique attributes that other people do not have. Object-oriented languages model subset/superset relationships using inheritance. Object-oriented programming became the dominant language paradigm by the late 1990s.
C++ (1985) was originally called "C with Classes". It was designed to expand C's capabilities by adding the object-oriented facilities of the language Simula.
An object-oriented module is composed of two files. The definitions file is called the header file. Here is a C++ header file for the GRADE class in a simple school application:
// grade.h
// -------
// Used to allow multiple source files to include
// this header file without duplication errors.
// ----------------------------------------------
#ifndef GRADE_H
#define GRADE_H
class GRADE {
public:
// This is the constructor operation.
// ----------------------------------
GRADE ( const char letter );
// This is a class variable.
// -------------------------
char letter;
// This is a member operation.
// ---------------------------
int grade_numeric( const char letter );
// This is a class variable.
// -------------------------
int numeric;
};
#endif
A constructor operation is a function with the same name as the class name. It is executed when the calling operation executes the new statement.
A module's other file is the source file. Here is a C++ source file for the GRADE class in a simple school application:
// grade.cpp
// ---------
#include "grade.h"
GRADE::GRADE( const char letter )
{
// Reference the object using the keyword 'this'.
// ----------------------------------------------
this->letter = letter;
// This is Temporal Cohesion
// -------------------------
this->numeric = grade_numeric( letter );
}
int GRADE::grade_numeric( const char letter )
{
if ( ( letter == 'A' || letter == 'a' ) )
return 4;
else
if ( ( letter == 'B' || letter == 'b' ) )
return 3;
else
if ( ( letter == 'C' || letter == 'c' ) )
return 2;
else
if ( ( letter == 'D' || letter == 'd' ) )
return 1;
else
if ( ( letter == 'F' || letter == 'f' ) )
return 0;
else
return -1;
}
Here is a C++ header file for the PERSON class in a simple school application:
// person.h
// --------
#ifndef PERSON_H
#define PERSON_H
class PERSON {
public:
PERSON ( const char *name );
const char *name;
};
#endif
Here is a C++ source file for the PERSON class in a simple school application:
// person.cpp
// ----------
#include "person.h"
PERSON::PERSON ( const char *name )
{
this->name = name;
}
Here is a C++ header file for the STUDENT class in a simple school application:
// student.h
// ---------
#ifndef STUDENT_H
#define STUDENT_H
#include "person.h"
#include "grade.h"
// A STUDENT is a subset of PERSON.
// --------------------------------
class STUDENT : public PERSON{
public:
STUDENT ( const char *name );
GRADE *grade;
};
#endif
Here is a C++ source file for the STUDENT class in a simple school application:
// student.cpp
// -----------
#include "student.h"
#include "person.h"
STUDENT::STUDENT ( const char *name ):
// Execute the constructor of the PERSON superclass.
// -------------------------------------------------
PERSON( name )
{
// Nothing else to do.
// -------------------
}
Here is a driver program for demonstration:
// student_dvr.cpp
// ---------------
#include <iostream>
#include "student.h"
int main( void )
{
STUDENT *student = new STUDENT( "The Student" );
student->grade = new GRADE( 'a' );
std::cout
// Notice student inherits PERSON's name
<< student->name
<< ": Numeric grade = "
<< student->grade->numeric
<< "\n";
return 0;
}
Here is a makefile to compile everything:
# makefile
# --------
all: student_dvr
clean:
rm student_dvr *.o
student_dvr: student_dvr.cpp grade.o student.o person.o
c++ student_dvr.cpp grade.o student.o person.o -o student_dvr
grade.o: grade.cpp grade.h
c++ -c grade.cpp
student.o: student.cpp student.h
c++ -c student.cpp
person.o: person.cpp person.h
c++ -c person.cpp
Declarative languages
Imperative languages have one major criticism: assigning an expression to a non-local variable may produce an unintended side effect. Declarative languages generally omit the assignment statement and the control flow. They describe what computation should be performed and not how to compute it. Two broad categories of declarative languages are functional languages and logical languages.
The principle behind a functional language is to use lambda calculus as a guide for a well-defined semantics. In mathematics, a function is a rule that maps elements from a domain to a range of values. Consider the function:
times_10(x) = 10 * x
The expression 10 * x is mapped by the function times_10() to a range of values. One value happens to be 20. This occurs when x is 2. So, the application of the function is mathematically written as:
times_10(2) = 20
A functional language compiler will not store this value in a variable. Instead, it will push the value onto the computer's stack before setting the program counter back to the calling function. The calling function will then pop the value from the stack.
Imperative languages do support functions. Therefore, functional programming can be achieved in an imperative language, if the programmer uses discipline. However, a functional language will force this discipline onto the programmer through its syntax. Functional languages have a syntax tailored to emphasize the what.
A functional program is developed with a set of primitive functions followed by a single driver function. Consider the snippet:
function max( a, b ){/* code omitted */}
function min( a, b ){/* code omitted */}
function range( a, b, c ) {
return max( a, max( b, c ) ) - min( a, min( b, c ) );
}
The primitives are max() and min(). The driver function is range(). Executing:
put( range( 10, 4, 7) ); will output 6.
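A C rendering of the same snippet, with the omitted bodies filled in, behaves the same way:
#include <stdio.h>

double max(double a, double b) { return a > b ? a : b; }
double min(double a, double b) { return a < b ? a : b; }

/* range() is the driver function; max() and min() are the primitives. */
double range(double a, double b, double c)
{
    return max(a, max(b, c)) - min(a, min(b, c));
}

int main(void)
{
    printf("%g\n", range(10, 4, 7));  /* prints 6 */
    return 0;
}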
Functional languages are used in computer science research to explore new language features. Moreover, their lack of side-effects have made them popular in parallel programming and concurrent programming. However, application developers prefer the object-oriented features of imperative languages.
Lisp
Lisp (1958) stands for "LISt Processor". It is tailored to process lists. A full structure of the data is formed by building lists of lists. In memory, a tree data structure is built. Internally, the tree structure lends nicely for recursive functions. The syntax to build a tree is to enclose the space-separated elements within parenthesis. The following is a list of three elements. The first two elements are themselves lists of two elements:
((A B) (HELLO WORLD) 94)
Lisp has functions to extract and reconstruct elements. The function head() returns the first element of the list. The function tail() returns a list containing everything but the first element. The function cons() returns a list built by prepending an element onto another list. Therefore, the following expression will return the list x:
cons(head(x), tail(x))
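The list-of-lists storage can be modelled in C with a pair (cons) cell; this is only a sketch of the idea, not Lisp's actual implementation, and the type names are illustrative.
#include <stdlib.h>

/* A cons cell holds a first element (head) and the rest of the list (tail). */
typedef struct cell {
    void        *head;  /* an atom or a nested list */
    struct cell *tail;  /* remaining cells, or NULL at the end of the list */
} CELL;

/* cons() prepends an element to an existing list. */
CELL *cons(void *head, CELL *tail)
{
    CELL *cell = malloc(sizeof(CELL));
    if (cell) {
        cell->head = head;
        cell->tail = tail;
    }
    return cell;
}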
One drawback of Lisp is when many functions are nested, the parentheses may look confusing. Modern Lisp environments help ensure parenthesis match. As an aside, Lisp does support the imperative language operations of the assignment statement and goto loops. Also, Lisp is not concerned with the datatype of the elements at compile time. Instead, it assigns (and may reassign) the datatypes at runtime. Assigning the datatype at runtime is called dynamic binding. Whereas dynamic binding increases the language's flexibility, programming errors may linger until late in the software development process.
Writing large, reliable, and readable Lisp programs requires forethought. If properly planned, the program may be much shorter than an equivalent imperative language program. Lisp is widely used in artificial intelligence. However, its usage has been accepted only because it has imperative language operations, making unintended side-effects possible.
ML
ML (1973) stands for "Meta Language". ML checks to make sure only data of the same type are compared with one another. For example, this function has one input parameter (an integer) and returns an integer:
fun times_10 ( n : int ) : int = 10 * n;
ML is not parenthesis-eccentric like Lisp. The following is an application of times_10():
times_10 2
It returns "20 : int". (Both the results and the datatype are returned.)
Like Lisp, ML is tailored to process lists. Unlike Lisp, each element is the same datatype. Moreover, ML assigns the datatype of an element at compile-time. Assigning the datatype at compile-time is called static binding. Static binding increases reliability because the compiler checks the context of variables before they are used.
Prolog
Prolog (1972) stands for "PROgramming in LOGic". It is a logic programming language, based on formal logic. The language was developed by Alain Colmerauer and Philippe Roussel in Marseille, France. It is an implementation of Selective Linear Definite clause resolution, pioneered by Robert Kowalski and others at the University of Edinburgh.
The building blocks of a Prolog program are facts and rules. Here is a simple example:
cat(tom). % tom is a cat
mouse(jerry). % jerry is a mouse
animal(X) :- cat(X). % each cat is an animal
animal(X) :- mouse(X). % each mouse is an animal
big(X) :- cat(X). % each cat is big
small(X) :- mouse(X). % each mouse is small
eat(X,Y) :- mouse(X), cheese(Y). % each mouse eats each cheese
eat(X,Y) :- big(X), small(Y). % each big animal eats each small animal
After all the facts and rules are entered, then a question can be asked:
Will Tom eat Jerry?
?- eat(tom,jerry).
true
The following example shows how Prolog will convert a letter grade to its numeric value:
numeric_grade('A', 4).
numeric_grade('B', 3).
numeric_grade('C', 2).
numeric_grade('D', 1).
numeric_grade('F', 0).
numeric_grade(X, -1) :- X \= 'A', X \= 'B', X \= 'C', X \= 'D', X \= 'F'.
grade('The Student', 'A').
?- grade('The Student', X), numeric_grade(X, Y).
X = 'A',
Y = 4
Here is a comprehensive example:
1) All dragons billow fire, or equivalently, a thing billows fire if the thing is a dragon:
billows_fire(X) :-
is_a_dragon(X).
2) A creature billows fire if one of its parents billows fire:
billows_fire(X) :-
is_a_creature(X),
is_a_parent_of(Y,X),
billows_fire(Y).
3) A thing X is a parent of a thing Y if X is the mother of Y or X is the father of Y:
is_a_parent_of(X, Y):- is_the_mother_of(X, Y).
is_a_parent_of(X, Y):- is_the_father_of(X, Y).
4) A thing is a creature if the thing is a dragon:
is_a_creature(X) :-
is_a_dragon(X).
5) Norberta is a dragon, and Puff is a creature. Norberta is the mother of Puff.
is_a_dragon(norberta).
is_a_creature(puff).
is_the_mother_of(norberta, puff).
Rule (2) is a recursive (inductive) definition. It can be understood declaratively, without the need to understand how it is executed.
Rule (3) shows how functions are represented by using relations. Here, the mother and father functions ensure that every individual has only one mother and only one father.
Prolog is an untyped language. Nonetheless, inheritance can be represented by using predicates. Rule (4) asserts that a creature is a superclass of a dragon.
Questions are answered using backward reasoning. Given the question:
?- billows_fire(X).
Prolog generates two answers :
X = norberta
X = puff
Practical applications for Prolog are knowledge representation and problem solving in artificial intelligence.
Object-oriented programming
Object-oriented programming is a programming method to execute operations (functions) on objects. The basic idea is to group the characteristics of a phenomenon into an object container and give the container a name. The operations on the phenomenon are also grouped into the container. Object-oriented programming developed by combining the need for containers and the need for safe functional programming. This programming method need not be confined to an object-oriented language. In an object-oriented language, an object container is called a class. In a non-object-oriented language, a data structure (which is also known as a record) may become an object container. To turn a data structure into an object container, operations need to be written specifically for the structure. The resulting structure is called an abstract datatype. However, inheritance will be missing. Nonetheless, this shortcoming can be overcome.
Here is a C programming language header file for the GRADE abstract datatype in a simple school application:
/* grade.h */
/* ------- */
/* Used to allow multiple source files to include */
/* this header file without duplication errors. */
/* ---------------------------------------------- */
#ifndef GRADE_H
#define GRADE_H
typedef struct
{
char letter;
} GRADE;
/* Constructor */
/* ----------- */
GRADE *grade_new( char letter );
int grade_numeric( char letter );
#endif
The grade_new() function performs the same algorithm as the C++ constructor operation.
Here is a C programming language source file for the GRADE abstract datatype in a simple school application:
/* grade.c */
/* ------- */
#include "grade.h"
GRADE *grade_new( char letter )
{
GRADE *grade;
/* Allocate heap memory */
/* -------------------- */
if ( ! ( grade = calloc( 1, sizeof ( GRADE ) ) ) )
{
fprintf(stderr,
"ERROR in %s/%s/%d: calloc() returned empty.\n",
__FILE__,
__func__,
__LINE__ );
exit( 1 );
}
grade->letter = letter;
return grade;
}
int grade_numeric( char letter )
{
if ( ( letter == 'A' || letter == 'a' ) )
return 4;
else
if ( ( letter == 'B' || letter == 'b' ) )
return 3;
else
if ( ( letter == 'C' || letter == 'c' ) )
return 2;
else
if ( ( letter == 'D' || letter == 'd' ) )
return 1;
else
if ( ( letter == 'F' || letter == 'f' ) )
return 0;
else
return -1;
}
In the constructor, the function calloc() is used instead of malloc() because each memory cell will be set to zero.
Here is a C programming language header file for the PERSON abstract datatype in a simple school application:
/* person.h */
/* -------- */
#ifndef PERSON_H
#define PERSON_H
typedef struct
{
char *name;
} PERSON;
/* Constructor */
/* ----------- */
PERSON *person_new( char *name );
#endif
Here is a C programming language source file for the PERSON abstract datatype in a simple school application:
/* person.c */
/* -------- */
#include "person.h"
PERSON *person_new( char *name )
{
PERSON *person;
if ( ! ( person = calloc( 1, sizeof ( PERSON ) ) ) )
{
fprintf(stderr,
"ERROR in %s/%s/%d: calloc() returned empty.\n",
__FILE__,
__func__,
__LINE__ );
exit( 1 );
}
person->name = name;
return person;
}
Here is a C programming language header file for the STUDENT abstract datatype in a simple school application:
/* student.h */
/* --------- */
#ifndef STUDENT_H
#define STUDENT_H
#include "person.h"
#include "grade.h"
typedef struct
{
/* A STUDENT is a subset of PERSON. */
/* -------------------------------- */
PERSON *person;
GRADE *grade;
} STUDENT;
/* Constructor */
/* ----------- */
STUDENT *student_new( char *name );
#endif
Here is a C programming language source file for the STUDENT abstract datatype in a simple school application:
/* student.c */
/* --------- */
#include "student.h"
#include "person.h"
STUDENT *student_new( char *name )
{
STUDENT *student;
if ( ! ( student = calloc( 1, sizeof ( STUDENT ) ) ) )
{
fprintf(stderr,
"ERROR in %s/%s/%d: calloc() returned empty.\n",
__FILE__,
__func__,
__LINE__ );
exit( 1 );
}
/* Execute the constructor of the PERSON superclass. */
/* ------------------------------------------------- */
student->person = person_new( name );
return student;
}
Here is a driver program for demonstration:
/* student_dvr.c */
/* ------------- */
#include <stdio.h>
#include "student.h"
int main( void )
{
STUDENT *student = student_new( "The Student" );
student->grade = grade_new( 'a' );
printf( "%s: Numeric grade = %d\n",
/* Whereas a subset exists, inheritance does not. */
student->person->name,
/* Functional programming is executing functions just-in-time (JIT) */
grade_numeric( student->grade->letter ) );
return 0;
}
Here is a makefile to compile everything:
# makefile
# --------
all: student_dvr
clean:
rm student_dvr *.o
student_dvr: student_dvr.c grade.o student.o person.o
gcc student_dvr.c grade.o student.o person.o -o student_dvr
grade.o: grade.c grade.h
gcc -c grade.c
student.o: student.c student.h
gcc -c student.c
person.o: person.c person.h
gcc -c person.c
The formal strategy to build object-oriented objects is to:
Identify the objects. Most likely these will be nouns.
Identify each object's attributes. What helps to describe the object?
Identify each object's actions. Most likely these will be verbs.
Identify the relationships from object to object. Most likely these will be verbs.
For example:
A person is a human identified by a name.
A grade is an achievement identified by a letter.
A student is a person who earns a grade.
Syntax and semantics
The syntax of a computer program is a list of production rules which form its grammar. A programming language's grammar correctly places its declarations, expressions, and statements. Complementing the syntax of a language are its semantics. The semantics describe the meanings attached to various syntactic constructs. A syntactic construct may need a semantic description because a production rule may have an invalid interpretation. Also, different languages might have the same syntax; however, their behaviors may be different.
The syntax of a language is formally described by listing the production rules. Whereas the syntax of a natural language is extremely complicated, a subset of the English language can have this production rule listing:
a sentence is made up of a noun-phrase followed by a verb-phrase;
a noun-phrase is made up of an article followed by an adjective followed by a noun;
a verb-phrase is made up of a verb followed by a noun-phrase;
an article is 'the';
an adjective is 'big' or
an adjective is 'small';
a noun is 'cat' or
a noun is 'mouse';
a verb is 'eats';
The words in bold-face are known as non-terminals. The words in 'single quotes' are known as terminals.
From this production rule listing, complete sentences may be formed using a series of replacements. The process is to replace non-terminals with either a valid non-terminal or a valid terminal. The replacement process repeats until only terminals remain. One valid sentence is:
sentence
noun-phrase verb-phrase
article adjective noun verb-phrase
the adjective noun verb-phrase
the big noun verb-phrase
the big cat verb-phrase
the big cat verb noun-phrase
the big cat eats noun-phrase
the big cat eats article adjective noun
the big cat eats the adjective noun
the big cat eats the small noun
the big cat eats the small mouse
However, another combination results in an invalid sentence:
the small mouse eats the big cat
Therefore, a semantic is necessary to correctly describe the meaning of an eat activity.
One production rule listing method is called the Backus–Naur form (BNF). BNF describes the syntax of a language and itself has a syntax. This recursive definition is an example of a meta-language. The syntax of BNF includes:
::= which translates to is made up of a[n] when a non-terminal is to its right. It translates to is when a terminal is to its right.
| which translates to or.
< and > which surround non-terminals.
Using BNF, a subset of the English language can have this production rule listing:
<sentence> ::= <noun-phrase><verb-phrase>
<noun-phrase> ::= <article><adjective><noun>
<verb-phrase> ::= <verb><noun-phrase>
<article> ::= the
<adjective> ::= big | small
<noun> ::= cat | mouse
<verb> ::= eats
Using BNF, a signed-integer has the production rule listing:
<signed-integer> ::= <sign><integer>
<sign> ::= + | -
<integer> ::= <digit> | <digit><integer>
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Notice the recursive production rule:
<integer> ::= <digit> | <digit><integer>
This allows for an infinite number of possibilities. Therefore, a semantic is necessary to describe a limitation of the number of digits.
Notice the leading zero possibility in the production rules:
<integer> ::= <digit> | <digit><integer>
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Therefore, a semantic is necessary to describe that leading zeros need to be ignored.
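A recursive-descent recognizer follows the production rules almost literally. Here is a sketch in C for the signed-integer grammar above; the function names are illustrative, the leading-zero semantic is handled by simply rejecting such strings, and no digit-count limit is enforced.
#include <ctype.h>
#include <stdio.h>

/* <digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 */
static int is_digit(const char *s) { return isdigit((unsigned char)*s); }

/* <integer> ::= <digit> | <digit><integer> */
static const char *parse_integer(const char *s)
{
    if (!is_digit(s))
        return NULL;                 /* a <digit> is required */
    s++;
    if (is_digit(s))
        return parse_integer(s);     /* the <digit><integer> alternative */
    return s;                        /* the <digit> alternative */
}

/* <signed-integer> ::= <sign><integer> */
static int parse_signed_integer(const char *s)
{
    if (*s != '+' && *s != '-')      /* <sign> ::= + | - */
        return 0;
    s++;
    if (s[0] == '0' && is_digit(s + 1))  /* semantic rule: reject leading zeros */
        return 0;
    s = parse_integer(s);
    return s != NULL && *s == '\0';
}

int main(void)
{
    printf("%d %d %d\n", parse_signed_integer("+42"),
                         parse_signed_integer("-007"),
                         parse_signed_integer("12"));  /* prints 1 0 0 */
    return 0;
}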
Two formal methods are available to describe semantics. They are denotational semantics and axiomatic semantics.
Software engineering and computer programming
Software engineering is a variety of techniques to produce quality computer programs. Computer programming is the process of writing or editing source code. In a formal environment, a systems analyst will gather information from managers about all the organization's processes to automate. This professional then prepares a detailed plan for the new or modified system. The plan is analogous to an architect's blueprint.
Performance objectives
The systems analyst has the objective to deliver the right information to the right person at the right time. The critical factors to achieve this objective are:
The quality of the output. Is the output useful for decision-making?
The accuracy of the output. Does it reflect the true situation?
The format of the output. Is the output easily understood?
The speed of the output. Time-sensitive information is important when communicating with the customer in real time.
Cost objectives
Achieving performance objectives should be balanced with all of the costs, including:
Development costs.
Uniqueness costs. A reusable system may be expensive. However, it might be preferred over a limited-use system.
Hardware costs.
Operating costs.
Applying a systems development process will mitigate the axiom: the later in the process an error is detected, the more expensive it is to correct.
Waterfall model
The waterfall model is an implementation of a systems development process. As the waterfall label implies, the basic phases overlap each other:
The investigation phase is to understand the underlying problem.
The analysis phase is to understand the possible solutions.
The design phase is to plan the best solution.
The implementation phase is to program the best solution.
The maintenance phase lasts throughout the life of the system. Changes to the system after it is deployed may be necessary. Faults may exist, including specification faults, design faults, or coding faults. Improvements may be necessary. Adaption may be necessary to react to a changing environment.
Computer programmer
A computer programmer is a specialist responsible for writing or modifying the source code to implement the detailed plan. A programming team is likely to be needed because most systems are too large to be completed by a single programmer. However, adding programmers to a project may not shorten the completion time. Instead, it may lower the quality of the system. To be effective, program modules need to be defined and distributed to team members. Also, team members must interact with one another in a meaningful and effective way.
Computer programmers may be programming in the small: programming within a single module. Chances are a module will execute modules located in other source code files. Therefore, computer programmers may be programming in the large: programming modules so they will effectively couple with each other. Programming-in-the-large includes contributing to the application programming interface (API).
Program modules
Modular programming is a technique to refine imperative language programs. Refined programs may reduce the software size, separate responsibilities, and thereby mitigate software aging. A program module is a sequence of statements that are bounded within a block and together identified by a name. Modules have a function, context, and logic:
The function of a module is what it does.
The context of a module are the elements being performed upon.
The logic of a module is how it performs the function.
The module's name should be derived first by its function, then by its context. Its logic should not be part of the name. For example, function compute_square_root( x ) or function compute_square_root_integer( i : integer ) are appropriate module names. However, function compute_square_root_by_division( x ) is not.
The degree of interaction within a module is its level of cohesion. Cohesion is a judgment of the relationship between a module's name and its function. The degree of interaction between modules is the level of coupling. Coupling is a judgment of the relationship between a module's context and the elements being performed upon.
Cohesion
The levels of cohesion from worst to best are:
Coincidental Cohesion: A module has coincidental cohesion if it performs multiple functions, and the functions are completely unrelated. For example, function read_sales_record_print_next_line_convert_to_float(). Coincidental cohesion occurs in practice if management enforces silly rules. For example, "Every module will have between 35 and 50 executable statements."
Logical Cohesion: A module has logical cohesion if it has available a series of functions, but only one of them is executed. For example, function perform_arithmetic( perform_addition, a, b ).
Temporal Cohesion: A module has temporal cohesion if it performs functions related to time. One example, function initialize_variables_and_open_files(). Another example, stage_one(), stage_two(), ...
Procedural Cohesion: A module has procedural cohesion if it performs multiple loosely related functions. For example, function read_part_number_update_employee_record().
Communicational Cohesion: A module has communicational cohesion if it performs multiple closely related functions. For example, function read_part_number_update_sales_record().
Informational Cohesion: A module has informational cohesion if it performs multiple functions, but each function has its own entry and exit points. Moreover, the functions share the same data structure. Object-oriented classes work at this level.
Functional Cohesion: a module has functional cohesion if it achieves a single goal working only on local variables. Moreover, it may be reusable in other contexts.
Coupling
The levels of coupling from worst to best are:
Content Coupling: A module has content coupling if it modifies a local variable of another function. COBOL used to do this with the alter verb.
Common Coupling: A module has common coupling if it modifies a global variable.
Control Coupling: A module has control coupling if another module can modify its control flow. For example, perform_arithmetic( perform_addition, a, b ). Instead, control should be on the makeup of the returned object.
Stamp Coupling: A module has stamp coupling if an element of a data structure passed as a parameter is modified. Object-oriented classes work at this level.
Data Coupling: A module has data coupling if all of its input parameters are needed and none of them are modified. Moreover, the result of the function is returned as a single object.
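As a small illustration of the preferred end of both scales, the C function below (an illustrative name, not taken from the examples above) has functional cohesion and data coupling: it achieves a single goal using only local variables, needs every parameter, modifies none of them, and returns its result as a single value.
/* Functional cohesion: achieves the single goal of computing a mean.
   Data coupling: both input parameters are needed, neither is modified,
   and the result is returned as a single value. */
double compute_mean(const double *values, int count)
{
    double sum = 0.0;
    for (int i = 0; i < count; i++)
        sum += values[i];
    return count > 0 ? sum / count : 0.0;
}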
Data flow analysis
Data flow analysis is a design method used to achieve modules of functional cohesion and data coupling. The input to the method is a data-flow diagram. A data-flow diagram is a set of ovals representing modules. Each module's name is displayed inside its oval. Modules may be at the executable level or the function level.
The diagram also has arrows connecting modules to each other. Arrows pointing into modules represent a set of inputs. Each module should have only one arrow pointing out from it to represent its single output object. (Optionally, an additional exception arrow points out.) A daisy chain of ovals will convey an entire algorithm. The input modules should start the diagram. The input modules should connect to the transform modules. The transform modules should connect to the output modules.
Functional categories
Computer programs may be categorized along functional lines. The main functional categories are application software and system software. System software includes the operating system, which couples computer hardware with application software. The purpose of the operating system is to provide an environment where application software executes in a convenient and efficient manner. Both application software and system software execute utility programs. At the hardware level, a microcode program controls the circuits throughout the central processing unit.
Application software
Application software is the key to unlocking the potential of the computer system. Enterprise application software bundles accounting, personnel, customer, and vendor applications. Examples include enterprise resource planning, customer relationship management, and supply chain management software.
Enterprise applications may be developed in-house as a one-of-a-kind proprietary software. Alternatively, they may be purchased as off-the-shelf software. Purchased software may be modified to provide custom software. If the application is customized, then either the company's resources are used or the resources are outsourced. Outsourced software development may be from the original software vendor or a third-party developer.
The potential advantages of in-house software are features and reports may be developed exactly to specification. Management may also be involved in the development process and offer a level of control. Management may decide to counteract a competitor's new initiative or implement a customer or vendor requirement. A merger or acquisition may necessitate enterprise software changes. The potential disadvantages of in-house software are time and resource costs may be extensive. Furthermore, risks concerning features and performance may be looming.
The potential advantages of off-the-shelf software are upfront costs are identifiable, the basic needs should be fulfilled, and its performance and reliability have a track record. The potential disadvantages of off-the-shelf software are it may have unnecessary features that confuse end users, it may lack features the enterprise needs, and the data flow may not match the enterprise's work processes.
Application service provider
One approach to economically obtaining a customized enterprise application is through an application service provider. Specialty companies provide hardware, custom software, and end-user support. They may speed the development of new applications because they possess skilled information system staff. The biggest advantage is it frees in-house resources from staffing and managing complex computer projects. Many application service providers target small, fast-growing companies with limited information system resources. On the other hand, larger companies with major systems will likely have their technical infrastructure in place. One risk is having to trust an external organization with sensitive information. Another risk is having to trust the provider's infrastructure reliability.
Operating system
An operating system is the low-level software that supports a computer's basic functions, such as scheduling processes and controlling peripherals.
In the 1950s, the programmer, who was also the operator, would write a program and run it. After the program finished executing, the output may have been printed, or it may have been punched onto paper tape or cards for later processing. More often than not the program did not work. The programmer then looked at the console lights and fiddled with the console switches. If less fortunate, a memory printout was made for further study. In the 1960s, programmers reduced the amount of wasted time by automating the operator's job. A program called an operating system was kept in the computer at all times.
The term operating system may refer to two levels of software. The operating system may refer to the kernel program that manages the processes, memory, and devices. More broadly, the operating system may refer to the entire package of the central software. The package includes a kernel program, command-line interpreter, graphical user interface, utility programs, and editor.
Kernel Program
The kernel's main purpose is to manage the limited resources of a computer:
The kernel program should perform process scheduling, which is also known as a context switch. The kernel creates a process control block when a computer program is selected for execution. However, an executing program gets exclusive access to the central processing unit only for a time slice. To provide each user with the appearance of continuous access, the kernel quickly preempts each process control block to execute another one. The goal for system developers is to minimize dispatch latency.
The kernel program should perform memory management.
When the kernel initially loads an executable into memory, it divides the address space logically into regions. The kernel maintains a master-region table and many per-process-region (pregion) tables—one for each running process. These tables constitute the virtual address space. The master-region table is used to determine where its contents are located in physical memory. The pregion tables allow each process to have its own program (text) pregion, data pregion, and stack pregion.
The program pregion stores machine instructions. Since machine instructions do not change, the program pregion may be shared by many processes of the same executable.
To save time and memory, the kernel may load only blocks of execution instructions from the disk drive, not the entire execution file.
The kernel is responsible for translating virtual addresses into physical addresses. The kernel may request data from the memory controller and, instead, receive a page fault. If so, the kernel accesses the memory management unit to populate the physical data region and translate the address.
The kernel allocates memory from the heap upon request by a process. When the process is finished with the memory, the process may request for it to be freed. If the process exits without requesting all allocated memory to be freed, then the kernel performs garbage collection to free the memory.
The kernel also ensures that a process only accesses its own memory, and not that of the kernel or other processes.
The kernel program should perform file system management. The kernel has instructions to create, retrieve, update, and delete files.
The kernel program should perform device management. The kernel provides programs to standardize and simplify the interface to the mouse, keyboard, disk drives, printers, and other devices. Moreover, the kernel should arbitrate access to a device if two processes request it at the same time.
The kernel program should perform network management. The kernel transmits and receives packets on behalf of processes. One key service is to find an efficient route to the target system.
The kernel program should provide system level functions for programmers to use.
Programmers access files through a relatively simple interface that in turn executes a relatively complicated low-level I/O interface. The low-level interface includes file creation, file descriptors, file seeking, physical reading, and physical writing; a short example follows this list.
Programmers create processes through a relatively simple interface that in turn executes a relatively complicated low-level interface.
Programmers perform date/time arithmetic through a relatively simple interface that in turn executes a relatively complicated low-level time interface.
The kernel program should provide a communication channel between executing processes. For a large software system, it may be desirable to engineer the system into smaller processes. Processes may communicate with one another by sending and receiving signals.
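To make the file interface concrete, here is a minimal sketch in C using the POSIX system functions open(), read(), write(), and close(), assuming a Unix-like kernel; the filename is illustrative only.
#include <fcntl.h>      /* open() */
#include <unistd.h>     /* read(), write(), close() */
#include <stdio.h>      /* perror() */

int main(void)
{
    char buffer[128];
    /* The kernel returns a file descriptor that identifies the open file. */
    int fd = open("example.txt", O_RDONLY);
    if (fd == -1) {
        perror("open");
        return 1;
    }
    /* The kernel performs the physical read on the program's behalf. */
    ssize_t bytes = read(fd, buffer, sizeof buffer);
    if (bytes > 0)
        write(STDOUT_FILENO, buffer, (size_t)bytes);
    close(fd);
    return 0;
}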
Originally, operating systems were programmed in assembly; however, modern operating systems are typically written in higher-level languages like C, Objective-C, and Swift.
Utility program
A utility program is designed to aid system administration and software execution. Operating systems execute hardware utility programs to check the status of disk drives, memory, speakers, and printers. A utility program may optimize the placement of a file on a crowded disk. System utility programs monitor hardware and network performance. When a metric is outside an acceptable range, a trigger alert is generated.
Utility programs include compression programs so data files are stored on less disk space. Compressed programs also save time when data files are transmitted over the network. Utility programs can sort and merge data sets. Utility programs detect computer viruses.
Microcode program
A microcode program is the bottom-level interpreter that controls the data path of software-driven computers.
(Advances in hardware have migrated these operations to hardware execution circuits.) Microcode instructions allow the programmer to more easily implement the digital logic level—the computer's real hardware. The digital logic level is the boundary between computer science and computer engineering.
A logic gate is a tiny switch, built from one or more transistors, that can return one of two signals: on or off.
Having one transistor forms the NOT gate.
Connecting two transistors in series forms the NAND gate.
Connecting two transistors in parallel forms the NOR gate.
Connecting a NOT gate to a NAND gate forms the AND gate.
Connecting a NOT gate to a NOR gate forms the OR gate.
These five gates form the building blocks of binary algebra—the digital logic functions of the computer.
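These compositions can be mimicked in C with bitwise operations on single-bit values; this is only a software model of the digital logic, not the hardware itself.
#include <stdio.h>

/* Each function takes single-bit values (0 or 1) and returns 0 or 1. */
static int not_gate (int a)        { return a ^ 1; }
static int nand_gate(int a, int b) { return not_gate(a & b); }
static int nor_gate (int a, int b) { return not_gate(a | b); }
static int and_gate (int a, int b) { return not_gate(nand_gate(a, b)); } /* NOT fed by NAND */
static int or_gate  (int a, int b) { return not_gate(nor_gate(a, b)); }  /* NOT fed by NOR */

int main(void)
{
    printf("AND(1,1)=%d OR(1,0)=%d NAND(1,1)=%d NOR(0,0)=%d NOT(0)=%d\n",
           and_gate(1, 1), or_gate(1, 0), nand_gate(1, 1), nor_gate(0, 0), not_gate(0));
    return 0;  /* prints AND(1,1)=1 OR(1,0)=1 NAND(1,1)=0 NOR(0,0)=1 NOT(0)=1 */
}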
Microcode instructions are mnemonics programmers may use to execute digital logic functions instead of forming them in binary algebra. They are stored in a central processing unit's (CPU) control store.
These hardware-level instructions move data throughout the data path.
The micro-instruction cycle begins when the microsequencer uses its microprogram counter to fetch the next machine instruction from random-access memory. The next step is to decode the machine instruction by selecting the proper output line to the hardware module.
The final step is to execute the instruction using the hardware module's set of gates.
Instructions to perform arithmetic are passed through an arithmetic logic unit (ALU). The ALU has circuits to perform elementary operations to add, shift, and compare integers. By combining and looping the elementary operations through the ALU, the CPU performs its complex arithmetic.
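For example, multiplication can be built by looping the ALU's elementary add, shift, and compare operations; a minimal sketch in C:
/* Multiply two unsigned integers using only add, shift, and compare,
   the elementary operations the ALU provides. */
unsigned int multiply(unsigned int a, unsigned int b)
{
    unsigned int product = 0;
    while (b != 0) {                /* compare */
        if (b & 1u)                 /* test the low bit */
            product = product + a;  /* add */
        a = a << 1;                 /* shift left */
        b = b >> 1;                 /* shift right */
    }
    return product;
}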
Microcode instructions move data between the CPU and the memory controller. Memory controller microcode instructions manipulate two registers. The memory address register is used to access each memory cell's address. The memory data register is used to set and read each cell's contents.
Microcode instructions move data between the CPU and the many computer buses. The disk controller bus writes to and reads from hard disk drives. Data is also moved between the CPU and other functional units via the peripheral component interconnect express bus.
| Technology | Computer software | null |
5793 | https://en.wikipedia.org/wiki/Cumulative%20distribution%20function | Cumulative distribution function | In probability theory and statistics, the cumulative distribution function (CDF) of a real-valued random variable X, or just distribution function of X, evaluated at x, is the probability that X will take a value less than or equal to x.
Every probability distribution supported on the real numbers, discrete or "mixed" as well as continuous, is uniquely identified by a right-continuous monotone increasing function (a càdlàg function) F satisfying F(x) → 0 as x → −∞ and F(x) → 1 as x → ∞.
In the case of a scalar continuous distribution, it gives the area under the probability density function from negative infinity to x. Cumulative distribution functions are also used to specify the distribution of multivariate random variables.
Definition
The cumulative distribution function of a real-valued random variable X is the function given by
F_X(x) = P(X ≤ x),
where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x.
The probability that X lies in the semi-closed interval (a, b], where a < b, is therefore
P(a < X ≤ b) = F_X(b) − F_X(a).
In the definition above, the "less than or equal to" sign, "≤", is a convention, not a universally used one (e.g. Hungarian literature uses "<"), but the distinction is important for discrete distributions. The proper use of tables of the binomial and Poisson distributions depends upon this convention. Moreover, important formulas like Paul Lévy's inversion formula for the characteristic function also rely on the "less than or equal" formulation.
If treating several random variables X, Y, ... , the corresponding letters are used as subscripts while, if treating only one, the subscript is usually omitted. It is conventional to use a capital F for a cumulative distribution function, in contrast to the lower-case f used for probability density functions and probability mass functions. This applies when discussing general distributions: some specific distributions have their own conventional notation, for example the normal distribution uses Φ and φ instead of F and f, respectively.
The probability density function of a continuous random variable can be determined from the cumulative distribution function by differentiating using the Fundamental Theorem of Calculus; i.e. given F(x),
f(x) = dF(x)/dx,
as long as the derivative exists.
The CDF of a continuous random variable X can be expressed as the integral of its probability density function f_X as follows:
F_X(x) = ∫_{−∞}^{x} f_X(t) dt.
In the case of a random variable X which has distribution having a discrete component at a value b,
P(X = b) = F_X(b) − lim_{x→b−} F_X(x).
If F_X is continuous at b, this equals zero and there is no discrete component at b.
Properties
Every cumulative distribution function F_X is non-decreasing and right-continuous, which makes it a càdlàg function. Furthermore, lim_{x→−∞} F_X(x) = 0 and lim_{x→∞} F_X(x) = 1.
Every function with these three properties is a CDF, i.e., for every such function, a random variable can be defined such that the function is the cumulative distribution function of that random variable.
If X is a purely discrete random variable, then it attains values x_1, x_2, ... with probability p_i = P(X = x_i), and the CDF of X will be discontinuous at the points x_i:
F_X(x) = P(X ≤ x) = Σ_{x_i ≤ x} P(X = x_i) = Σ_{x_i ≤ x} p_i.
If the CDF F_X of a real-valued random variable X is continuous, then X is a continuous random variable; if furthermore F_X is absolutely continuous, then there exists a Lebesgue-integrable function f_X such that
F_X(b) − F_X(a) = P(a < X ≤ b) = ∫_{a}^{b} f_X(x) dx
for all real numbers a and b. The function f_X is equal to the derivative of F_X almost everywhere, and it is called the probability density function of the distribution of X.
If X has finite L1-norm, that is, the expectation of |X| is finite, then the expectation is given by the Riemann–Stieltjes integral
E[X] = ∫_{−∞}^{∞} t dF_X(t)
and for any x ≥ 0,
x (1 − F_X(x)) ≤ ∫_{x}^{∞} t dF_X(t)
as well as
x F_X(−x) ≤ ∫_{−∞}^{−x} (−t) dF_X(t),
as shown in the diagram (consider the areas of the two red rectangles and their extensions to the right or left up to the graph of F_X). In particular, we have
lim_{x→−∞} x F_X(x) = 0 and lim_{x→∞} x (1 − F_X(x)) = 0.
In addition, the (finite) expected value of the real-valued random variable can be defined on the graph of its cumulative distribution function as illustrated by the drawing in the definition of expected value for arbitrary real-valued random variables.
Examples
As an example, suppose X is uniformly distributed on the unit interval [0, 1].
Then the CDF of X is given by
F_X(x) = 0 if x < 0, F_X(x) = x if 0 ≤ x ≤ 1, and F_X(x) = 1 if x > 1.
Suppose instead that X takes only the discrete values 0 and 1, with equal probability.
Then the CDF of X is given by
F_X(x) = 0 if x < 0, F_X(x) = 1/2 if 0 ≤ x < 1, and F_X(x) = 1 if x ≥ 1.
Suppose X is exponentially distributed. Then the CDF of X is given by
F_X(x) = 1 − e^{−λx} for x ≥ 0, and F_X(x) = 0 for x < 0.
Here λ > 0 is the parameter of the distribution, often called the rate parameter.
Suppose X is normally distributed. Then the CDF of X is given by
F(x; μ, σ) = (1 / (σ√(2π))) ∫_{−∞}^{x} exp(−(t − μ)² / (2σ²)) dt.
Here the parameter μ is the mean or expectation of the distribution, and σ is its standard deviation.
A table of the CDF of the standard normal distribution is often used in statistical applications, where it is named the standard normal table, the unit normal table, or the Z table.
Suppose X is binomially distributed. Then the CDF of X is given by
F(k; n, p) = P(X ≤ k) = Σ_{i=0}^{⌊k⌋} C(n, i) p^i (1 − p)^{n−i}.
Here p is the probability of success, the function denotes the discrete probability distribution of the number of successes in a sequence of n independent experiments, and ⌊k⌋ is the "floor" under k, i.e. the greatest integer less than or equal to k.
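The CDFs of the example distributions above can be evaluated numerically. A minimal sketch follows, assuming NumPy and SciPy are available; the distribution parameters and evaluation points are purely illustrative.

```python
# Sketch: evaluating the example CDFs with SciPy's distribution objects.
from scipy import stats

x = 0.3
print(stats.uniform(loc=0, scale=1).cdf(x))   # uniform on [0, 1]: F(x) = x
print(stats.expon(scale=1 / 2.0).cdf(x))      # exponential with rate lambda = 2: 1 - exp(-2x)
print(stats.norm(loc=0, scale=1).cdf(x))      # standard normal: Phi(x)
print(stats.binom(n=10, p=0.5).cdf(3))        # binomial: P(X <= 3) for n = 10, p = 0.5
```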
Derived functions
Complementary cumulative distribution function (tail distribution)
Sometimes, it is useful to study the opposite question and ask how often the random variable is above a particular level. This is called the complementary cumulative distribution function (ccdf) or simply the tail distribution or exceedance, and is defined as
F̄_X(x) = P(X > x) = 1 − F_X(x).
This has applications in statistical hypothesis testing, for example, because the one-sided p-value is the probability of observing a test statistic at least as extreme as the one observed. Thus, provided that the test statistic, T, has a continuous distribution, the one-sided p-value is simply given by the ccdf: for an observed value t of the test statistic,
p = P(T ≥ t) = 1 − F_T(t).
In survival analysis, F̄_X(x) is called the survival function and denoted S(x), while the term reliability function is common in engineering.
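As a concrete illustration of the p-value remark above, the ccdf (survival function) of the test statistic's distribution gives the one-sided p-value directly. A short sketch, assuming SciPy and a standard normal test statistic; the observed value is illustrative.

```python
# Sketch: one-sided p-value via the survival function (ccdf).
from scipy import stats

t_obs = 1.96
p_value = stats.norm.sf(t_obs)   # sf(t) = 1 - cdf(t) = P(T > t)
print(p_value)                   # approximately 0.025
```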
Properties
For a non-negative continuous random variable X having an expectation, Markov's inequality states that
F̄_X(x) ≤ E[X] / x.
As x → ∞, F̄_X(x) → 0, and in fact F̄_X(x) = o(1/x) provided that E[X] is finite. Proof: Assuming X has a density function f_X, for any c > 0,
E[X] = ∫_0^∞ x f_X(x) dx ≥ ∫_0^c x f_X(x) dx + c ∫_c^∞ f_X(x) dx.
Then, on recognizing F̄_X(c) = ∫_c^∞ f_X(x) dx and rearranging terms,
0 ≤ c F̄_X(c) ≤ E[X] − ∫_0^c x f_X(x) dx → 0 as c → ∞,
as claimed.
For a random variable X having an expectation,
E[X] = ∫_0^∞ F̄_X(x) dx − ∫_{−∞}^0 F_X(x) dx,
and for a non-negative random variable the second term is 0. If the random variable can only take non-negative integer values, this is equivalent to
E[X] = Σ_{n=0}^∞ F̄_X(n).
Folded cumulative distribution
While the plot of a cumulative distribution F often has an S-like shape, an alternative illustration is the folded cumulative distribution or mountain plot, which folds the top half of the graph over, that is
F_fold(x) = F(x) 1[F(x) ≤ 0.5] + (1 − F(x)) 1[F(x) > 0.5],
where 1[·] denotes the indicator function and the second summand is the survivor function, thus using two scales, one for the upslope and another for the downslope. This form of illustration emphasises the median, dispersion (specifically, the mean absolute deviation from the median) and skewness of the distribution or of the empirical results.
Inverse distribution function (quantile function)
If the CDF F is strictly increasing and continuous then F^{-1}(p), for p ∈ [0, 1], is the unique real number x such that F(x) = p. This defines the inverse distribution function or quantile function.
Some distributions do not have a unique inverse (for example if f_X(x) = 0 for all a < x < b, causing F_X to be constant on that interval). In this case, one may use the generalized inverse distribution function, which is defined as
F^{-1}(p) = inf { x ∈ ℝ : F(x) ≥ p }.
Example 1: The median is F^{-1}(0.5).
Example 2: Put τ = F^{-1}(0.95). Then we call τ the 95th percentile.
Some useful properties of the inverse cdf (which are also preserved in the definition of the generalized inverse distribution function) are:
F^{-1} is nondecreasing,
F^{-1}(p) ≤ x if and only if p ≤ F(x),
If Y has a U[0, 1] distribution then F^{-1}(Y) is distributed as F. This is used in random number generation using the inverse transform sampling method.
If {X_a} is a collection of independent F-distributed random variables defined on the same sample space, then there exist random variables Y_a such that Y_a is distributed as U[0, 1] and F^{-1}(Y_a) = X_a with probability 1 for all a.
The inverse of the cdf can be used to translate results obtained for the uniform distribution to other distributions.
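A minimal sketch of inverse transform sampling follows: uniform draws are pushed through the quantile function to obtain samples from a target distribution. Shown here for the exponential distribution, whose quantile function is F^{-1}(u) = −ln(1 − u)/λ; NumPy is assumed and the rate λ is illustrative.

```python
# Sketch of inverse transform sampling: F^{-1}(U) has CDF F when U is uniform on (0, 1).
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
u = rng.uniform(size=100_000)
samples = -np.log(1.0 - u) / lam   # apply the exponential quantile function to uniform draws
print(samples.mean())              # close to 1 / lam = 0.5
```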
Empirical distribution function
The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function.
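As a sketch of the definition, the empirical distribution function at a point x is simply the fraction of sample points that are less than or equal to x. The example below assumes NumPy; the sample is illustrative.

```python
# Sketch: the empirical distribution function of a sample, evaluated at a point x.
import numpy as np

def ecdf(sample, x):
    sample = np.asarray(sample)
    return np.mean(sample <= x)   # fraction of observations <= x

rng = np.random.default_rng(1)
data = rng.normal(size=1000)
print(ecdf(data, 0.0))            # close to 0.5 for a standard normal sample
```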
Multivariate case
Definition for two random variables
When dealing simultaneously with more than one random variable the joint cumulative distribution function can also be defined. For example, for a pair of random variables (X, Y), the joint CDF is given by
F_{X,Y}(x, y) = P(X ≤ x, Y ≤ y),
where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x and that Y takes on a value less than or equal to y.
Example of joint cumulative distribution function:
For two continuous variables X and Y:
For two discrete random variables, it is beneficial to generate a table of probabilities and address the cumulative probability for each potential range of X and Y, and here is the example:
given the joint probability mass function in tabular form, determine the joint cumulative distribution function.
Solution: using the given table of probabilities for each potential range of X and Y, the joint cumulative distribution function may be constructed in tabular form:
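The original probability table is not reproduced here, but the construction it describes can be sketched: the joint CDF entry at (x_i, y_j) is the running sum of the joint pmf over all cells with smaller or equal indices. The pmf values below are purely illustrative and NumPy is assumed.

```python
# Sketch: building a joint CDF table from a joint probability mass function.
import numpy as np

pmf = np.array([[0.10, 0.20],     # rows index values of X, columns values of Y
                [0.30, 0.40]])    # entries sum to 1
# F(x_i, y_j) = sum of pmf over all cells with row <= i and column <= j
cdf = pmf.cumsum(axis=0).cumsum(axis=1)
print(cdf)                        # [[0.1, 0.3], [0.4, 1.0]]
```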
Definition for more than two random variables
For random variables X_1, …, X_N, the joint CDF is given by
F_{X_1,…,X_N}(x_1, …, x_N) = P(X_1 ≤ x_1, …, X_N ≤ x_N).
Interpreting the N random variables as a random vector X = (X_1, …, X_N)^T yields a shorter notation:
F_X(x) = P(X_1 ≤ x_1, …, X_N ≤ x_N).
Properties
Every multivariate CDF is:
Monotonically non-decreasing for each of its variables,
Right-continuous in each of its variables,
Bounded between 0 and 1,
With limits lim_{x_1,…,x_n → +∞} F(x_1, …, x_n) = 1 and lim_{x_i → −∞} F(x_1, …, x_n) = 0 for every i.
Not every function satisfying the above four properties is a multivariate CDF, unlike in the single dimension case. For example, let F(x, y) = 0 for x < 0 or x + y < 1 or y < 0, and let F(x, y) = 1 otherwise. It is easy to see that the above conditions are met, and yet F is not a CDF since, if it were, it would assign negative probability to some rectangle, as explained below.
The probability that a point belongs to a hyperrectangle is analogous to the 1-dimensional case: for example, in two dimensions,
P(a_1 < X_1 ≤ b_1, a_2 < X_2 ≤ b_2) = F(b_1, b_2) − F(a_1, b_2) − F(b_1, a_2) + F(a_1, a_2).
Complex case
Complex random variable
The generalization of the cumulative distribution function from real to complex random variables is not obvious because expressions of the form P(Z ≤ z) make no sense for complex z. However, expressions of the form P(Re(Z) ≤ x, Im(Z) ≤ y) make sense. Therefore, we define the cumulative distribution of a complex random variable via the joint distribution of its real and imaginary parts:
F_Z(z) = P(Re(Z) ≤ Re(z), Im(Z) ≤ Im(z)).
Complex random vector
Generalization of yields
as definition for the CDF of a complex random vector .
Use in statistical analysis
The concept of the cumulative distribution function makes an explicit appearance in statistical analysis in two (similar) ways. Cumulative frequency analysis is the analysis of the frequency of occurrence of values of a phenomenon less than a reference value. The empirical distribution function is a formal direct estimate of the cumulative distribution function for which simple statistical properties can be derived and which can form the basis of various statistical hypothesis tests. Such tests can assess whether there is evidence against a sample of data having arisen from a given distribution, or evidence against two samples of data having arisen from the same (unknown) population distribution.
Kolmogorov–Smirnov and Kuiper's tests
The Kolmogorov–Smirnov test is based on cumulative distribution functions and can be used to test to see whether two empirical distributions are different or whether an empirical distribution is different from an ideal distribution. The closely related Kuiper's test is useful if the domain of the distribution is cyclic as in day of the week. For instance Kuiper's test might be used to see if the number of tornadoes varies during the year or if sales of a product vary by day of the week or day of the month.
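A minimal sketch of both test situations mentioned above follows, assuming SciPy and NumPy; the samples are synthetic and purely illustrative.

```python
# Sketch: Kolmogorov-Smirnov tests on cumulative distribution functions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = rng.normal(loc=0.3, size=500)

print(stats.kstest(x, "norm"))   # one-sample test against the standard normal CDF
print(stats.ks_2samp(x, y))      # two-sample test between the two empirical CDFs
```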
| Mathematics | Probability | null |
5816 | https://en.wikipedia.org/wiki/Cenozoic | Cenozoic | The Cenozoic is Earth's current geological era, representing the last 66 million years of Earth's history. It is characterized by the dominance of insects, mammals, birds and angiosperms (flowering plants). It is the latest of three geological eras of the Phanerozoic Eon, preceded by the Mesozoic and Paleozoic. The Cenozoic started with the Cretaceous–Paleogene extinction event, when many species, including the non-avian dinosaurs, became extinct in an event attributed by most experts to the impact of a large asteroid or other celestial body, the Chicxulub impactor.
The Cenozoic is also known as the Age of Mammals because the terrestrial animals that dominated both hemispheres were mammals: the eutherians (placentals) in the Northern Hemisphere and the metatherians (marsupials, now mainly restricted to Australia and to some extent South America) in the Southern Hemisphere. The extinction of many groups allowed mammals and birds to greatly diversify so that large mammals and birds dominated life on Earth. The continents also moved into their current positions during this era.
The climate during the early Cenozoic was warmer than today, particularly during the Paleocene–Eocene Thermal Maximum. However, the Eocene to Oligocene transition and the Quaternary glaciation dried and cooled Earth.
Nomenclature
Cenozoic derives from the Greek words kainós ('new') and zōḗ ('life'). The name was proposed in 1840 by the British geologist John Phillips (1800–1874), who originally spelled it Kainozoic. The era is also known as the Cænozoic, Caenozoic, or Cainozoic.
In name, the Cenozoic is comparable to the preceding Mesozoic ('middle life') and Paleozoic ('old life') Eras, as well as to the Proterozoic ('earlier life') Eon.
Divisions
The Cenozoic is divided into three periods: the Paleogene, Neogene, and Quaternary; and seven epochs: the Paleocene, Eocene, Oligocene, Miocene, Pliocene, Pleistocene, and Holocene. The Quaternary Period was officially recognised by the International Commission on Stratigraphy in June 2009. In 2004, the Tertiary Period was officially replaced by the Paleogene and Neogene Periods. The common use of epochs during the Cenozoic helps palaeontologists better organise and group the many significant events that occurred during this comparatively short interval of time. Knowledge of this era is more detailed than any other era because of the relatively young, well-preserved rocks associated with it.
Paleogene
The Paleogene spans from the extinction of non-avian dinosaurs, 66 million years ago, to the dawn of the Neogene, 23.03 million years ago. It features three epochs: the Paleocene, Eocene and Oligocene.
The Paleocene Epoch lasted from 66 million to 56 million years ago. Modern placental mammals originated during this time. The devastation of the K–Pg extinction event included the extinction of large herbivores, which permitted the spread of dense but usually species-poor forests. The Early Paleocene saw the recovery of Earth. The continents began to take their modern shape, but all the continents and the subcontinent of India were separated from each other. Afro-Eurasia was separated by the Tethys Sea, and the Americas were separated by the strait of Panama, as the isthmus had not yet formed. This epoch featured a general warming trend, with jungles eventually reaching the poles. The oceans were dominated by sharks as the large reptiles that had once predominated were extinct. Archaic mammals filled the world such as creodonts (extinct carnivores, unrelated to existing Carnivora).
The Eocene Epoch ranged from 56 million years to 33.9 million years ago. In the Early-Eocene, species living in dense forest were unable to evolve into larger forms, as in the Paleocene. Among them were early primates, whales and horses along with many other early forms of mammals. At the top of the food chains were huge birds, such as Paracrax. Carbon dioxide levels were approximately 1,400 ppm. The temperature was 30 degrees Celsius with little temperature gradient from pole to pole. In the Mid-Eocene, the Antarctic Circumpolar Current between Australia and Antarctica formed. This disrupted ocean currents worldwide and as a result caused a global cooling effect, shrinking the jungles. This allowed mammals to grow to mammoth proportions, such as whales which, by that time, had become almost fully aquatic. Mammals like Andrewsarchus were at the top of the food-chain. The Late Eocene saw the rebirth of seasons, which caused the expansion of savanna-like areas, along with the evolution of grasses. The end of the Eocene was marked by the Eocene–Oligocene extinction event, the European face of which is known as the Grande Coupure.
The Oligocene Epoch spans from 33.9 million to 23.03 million years ago. The Oligocene featured the expansion of grasslands which had led to many new species to evolve, including the first elephants, cats, dogs, marsupials and many other species still prevalent today. Many other species of plants evolved in this period too. A cooling period featuring seasonal rains was still in effect. Mammals still continued to grow larger and larger.
Neogene
The Neogene spans from 23.03 million to 2.58 million years ago. It features two epochs: the Miocene, and the Pliocene.
The Miocene Epoch spans from 23.03 to 5.333 million years ago and is a period in which grasses spread further, dominating a large portion of the world, at the expense of forests. Kelp forests evolved, encouraging the evolution of new species, such as sea otters. During this time, Perissodactyla thrived, and evolved into many different varieties. Apes evolved into 30 species. The Tethys Sea finally closed with the creation of the Arabian Peninsula, leaving only remnants as the Black, Red, Mediterranean and Caspian Seas. This increased aridity. Many new plants evolved: 95% of modern seed plants families were present by the end of the Miocene.
The Pliocene Epoch lasted from 5.333 to 2.58 million years ago. The Pliocene featured dramatic climatic changes, which ultimately led to modern species of flora and fauna. The Mediterranean Sea dried up for several million years (because the ice ages reduced sea levels, disconnecting the Atlantic from the Mediterranean, and evaporation rates exceeded inflow from rivers). Australopithecus evolved in Africa, beginning the human branch. The Isthmus of Panama formed, and animals migrated between North and South America during the great American interchange, wreaking havoc on local ecologies. Climatic changes brought: savannas that are still continuing to spread across the world; Indian monsoons; deserts in central Asia; and the beginnings of the Sahara desert. The world map has not changed much since, save for changes brought about by the glaciations of the Quaternary, such as the Great Lakes, Hudson Bay, and the Baltic Sea.
Quaternary
The Quaternary spans from 2.58 million years ago to present day, and is the shortest geological period in the Phanerozoic Eon. It features modern animals, and dramatic changes in the climate. It is divided into two epochs: the Pleistocene and the Holocene.
The Pleistocene lasted from 2.58 million to 11,700 years ago. This epoch was marked by ice ages as a result of the cooling trend that started in the Mid-Eocene. There were at least four separate glaciation periods marked by the advance of ice caps as far south as 40° N in mountainous areas. Meanwhile, Africa experienced a trend of desiccation which resulted in the creation of the Sahara, Namib, and Kalahari deserts. Many animals evolved including mammoths, giant ground sloths, dire wolves, sabre-toothed cats, and Homo sapiens. 100,000 years ago marked the end of one of the worst droughts in Africa, and led to the expansion of primitive humans. As the Pleistocene drew to a close, a major extinction wiped out much of the world's megafauna, including some of the hominid species, such as Neanderthals. All the continents were affected, but Africa to a lesser extent. It still retains many large animals, such as hippos.
The Holocene began 11,700 years ago and lasts to the present day. All recorded history and the whole of human history lie within the boundaries of the Holocene Epoch. Human activity is blamed for a mass extinction that began roughly 10,000 years ago, though the species becoming extinct have only been recorded since the Industrial Revolution. This is sometimes referred to as the "Sixth Extinction". It is often cited that over 322 recorded species have become extinct due to human activity since the Industrial Revolution, but the rate may be as high as 500 vertebrate species alone, the majority of which have occurred after 1900.
Tectonics
Geologically, the Cenozoic is the era when the continents moved into their current positions. Australia-New Guinea, having split from Pangea during the early Cretaceous, drifted north and, eventually, collided with Southeast Asia; Antarctica moved into its current position over the South Pole; the Atlantic Ocean widened and, later in the era (2.8 million years ago), South America became attached to North America with the isthmus of Panama.
India collided with Asia creating the Himalayas; Arabia collided with Eurasia, closing the Tethys Ocean and creating the Zagros Mountains, around .
The break-up of Gondwana in Late Cretaceous and Cenozoic times led to a shift in the river courses of various large African rivers including the Congo, Niger, Nile, Orange, Limpopo and Zambezi.
Climate
In the Cretaceous, the climate was hot and humid with lush forests at the poles, there was no permanent ice and sea levels were around 300 metres higher than today. This continued for the first 10 million years of the Paleocene, culminating in the Paleocene–Eocene Thermal Maximum about . Around , Earth entered a period of long-term cooling. This was mainly due to the collision of India with Eurasia, which caused the rise of the Himalayas: the upraised rocks eroded and reacted with CO2 in the air, causing a long-term reduction in the proportion of this greenhouse gas in the atmosphere. Around , permanent ice began to build up on Antarctica. The cooling trend continued in the Miocene, with relatively short warmer periods. When South America became attached to North America creating the Isthmus of Panama around , the Arctic region cooled due to the strengthening of the Humboldt and Gulf Stream currents, eventually leading to the glaciations of the Quaternary ice age, the current interglacial of which is the Holocene Epoch.
Recent analysis of the geomagnetic reversal frequency, oxygen isotope record, and tectonic plate subduction rate, which are indicators of the changes in the heat flux at the core-mantle boundary, climate and plate tectonic activity, shows that all these changes indicate similar rhythms on a timescale of millions of years in the Cenozoic Era, occurring with a common fundamental periodicity of ~13 Myr during most of the time. The levels of carbonate ions in the ocean fell over the course of the Cenozoic.
Life
Early in the Cenozoic, following the K-Pg event, the planet was dominated by relatively small fauna, including small mammals, birds, reptiles, and amphibians. From a geological perspective, it did not take long for mammals to greatly diversify in the absence of the dinosaurs that had dominated during the Mesozoic. Birds also diversified rapidly; some flightless birds grew larger than humans. These species are sometimes referred to as "terror birds", and were formidable predators. Mammals came to occupy almost every available niche (both marine and terrestrial), and some also grew very large, attaining sizes not seen in most of today's terrestrial mammals. The ranges of many Cenozoic bird clades were governed by latitude and temperature and have contracted over the course of this era as the world cooled.
During the Cenozoic, mammals proliferated from a few small, simple, generalised forms into a diverse collection of terrestrial, marine, and flying animals, giving this period its other name, the Age of Mammals. The Cenozoic is just as much the age of savannas, the age of co-dependent flowering plants and insects, and the age of birds. Grasses also played a very important role in this era, shaping the evolution of the birds and mammals that fed on them. One group that diversified significantly in the Cenozoic as well were the snakes. Evolving in the Cenozoic, the variety of snakes increased tremendously, resulting in many colubrids, following the evolution of their current primary prey source, the rodents.
In the earlier part of the Cenozoic, the world was dominated by the gastornithid birds, terrestrial crocodylians like Pristichampsus, large sharks such as Otodus, and a handful of primitive large mammal groups like uintatheres, mesonychians, and pantodonts. But as the forests began to recede and the climate began to cool, other mammals took over.
The Cenozoic is full of mammals both strange and familiar, including chalicotheres, creodonts, whales, primates, entelodonts, sabre-toothed cats, mastodons and mammoths, three-toed horses, giant rhinoceros like Paraceratherium, the rhinoceros-like brontotheres, various bizarre groups of mammals from South America, such as the vaguely elephant-like pyrotheres and the dog-like marsupial relatives called borhyaenids and the monotremes and marsupials of Australia. Mammal evolution in the Cenozoic was predominantly shaped by climatic and geological processes.
Cenozoic calcareous nannoplankton experienced rapid rates of speciation and reduced species longevity, while suffering prolonged declines in diversity during the Eocene and Neogene. Diatoms, in contrast, experienced major diversification over the Eocene, especially at high latitudes, as the world's oceans cooled. Diatom diversification was particularly concentrated at the Eocene-Oligocene boundary. A second major pulse of diatom diversification occurred over the course of the Middle and Late Miocene.
| Physical sciences | Geological periods | null |
5826 | https://en.wikipedia.org/wiki/Complex%20number | Complex number | In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted i, called the imaginary unit and satisfying the equation i² = −1; every complex number can be expressed in the form a + bi, where a and b are real numbers. Because no real number satisfies the above equation, i was called an imaginary number by René Descartes. For the complex number a + bi, a is called the real part, and b is called the imaginary part. The set of complex numbers is denoted by either of the symbols ℂ or C. Despite the historical nomenclature, "imaginary" complex numbers have a mathematical existence as firm as that of the real numbers, and they are fundamental tools in the scientific description of the natural world.
Complex numbers allow solutions to all polynomial equations, even those that have no solutions in real numbers. More precisely, the fundamental theorem of algebra asserts that every non-constant polynomial equation with real or complex coefficients has a solution which is a complex number. For example, the equation
(x + 1)² = −9
has no real solution, because the square of a real number cannot be negative, but has the two nonreal complex solutions −1 + 3i and −1 − 3i.
Addition, subtraction and multiplication of complex numbers can be naturally defined by using the rule i² = −1 along with the associative, commutative, and distributive laws. Every nonzero complex number has a multiplicative inverse. This makes the complex numbers a field with the real numbers as a subfield. Because of these properties, a + bi = a + ib, and which form is written depends upon convention and style considerations.
The complex numbers also form a real vector space of dimension two, with as a standard basis. This standard basis makes the complex numbers a Cartesian plane, called the complex plane. This allows a geometric interpretation of the complex numbers and their operations, and conversely some geometric objects and operations can be expressed in terms of complex numbers. For example, the real numbers form the real line, which is pictured as the horizontal axis of the complex plane, while real multiples of are the vertical axis. A complex number can also be defined by its geometric polar coordinates: the radius is called the absolute value of the complex number, while the angle from the positive real axis is called the argument of the complex number. The complex numbers of absolute value one form the unit circle. Adding a fixed complex number to all complex numbers defines a translation in the complex plane, and multiplying by a fixed complex number is a similarity centered at the origin (dilating by the absolute value, and rotating by the argument). The operation of complex conjugation is the reflection symmetry with respect to the real axis.
The complex numbers form a rich structure that is simultaneously an algebraically closed field, a commutative algebra over the reals, and a Euclidean vector space of dimension two.
Definition and basic operations
A complex number is an expression of the form a + bi, where a and b are real numbers, and i is an abstract symbol, the so-called imaginary unit, whose meaning will be explained further below. For example, 2 + 3i is a complex number.
For a complex number a + bi, the real number a is called its real part, and the real number b (not the complex number bi) is its imaginary part. The real part of a complex number z is denoted Re(z) or ℜ(z); the imaginary part is Im(z) or ℑ(z): for example, Re(2 + 3i) = 2 and Im(2 + 3i) = 3.
A complex number a + bi can be identified with the ordered pair of real numbers (a, b), which may be interpreted as coordinates of a point in a Euclidean plane with standard coordinates, which is then called the complex plane or Argand diagram. The horizontal axis is generally used to display the real part, with increasing values to the right, and the imaginary part marks the vertical axis, with increasing values upwards.
A real number a can be regarded as a complex number a + 0i, whose imaginary part is 0. A purely imaginary number bi is a complex number 0 + bi, whose real part is zero. It is common to write a for a + 0i, bi for 0 + bi, and a − bi for a + (−b)i; for example, 3 − 4i denotes 3 + (−4)i.
The set of all complex numbers is denoted by (blackboard bold) or (upright bold).
In some disciplines such as electromagnetism and electrical engineering, is used instead of , as frequently represents electric current, and complex numbers are written as or .
Addition and subtraction
Two complex numbers and are added by separately adding their real and imaginary parts. That is to say:
Similarly, subtraction can be performed as
The addition can be geometrically visualized as follows: the sum of two complex numbers and , interpreted as points in the complex plane, is the point obtained by building a parallelogram from the three vertices , and the points of the arrows labeled and (provided that they are not on a line). Equivalently, calling these points , , respectively and the fourth point of the parallelogram the triangles and are congruent.
Multiplication
The product of two complex numbers is computed as follows:
(a + bi) · (c + di) = (ac − bd) + (ad + bc)i.
For example, (2 + 3i)(1 − i) = 2 − 2i + 3i − 3i² = 5 + i.
In particular, this includes as a special case the fundamental formula
i² = −1.
This formula distinguishes the complex number i from any real number, since the square of any (negative or positive) real number is always a non-negative real number.
With this definition of multiplication and addition, familiar rules for the arithmetic of rational or real numbers continue to hold for complex numbers. More precisely, the distributive property, the commutative properties (of addition and multiplication) hold. Therefore, the complex numbers form an algebraic structure known as a field, the same way as the rational or real numbers do.
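These arithmetic rules are exactly what a language-level complex type implements. A minimal sketch using Python's built-in complex numbers (in Python the imaginary unit is written 1j); the operands are illustrative.

```python
# Sketch: complex arithmetic with Python's built-in complex type.
z = 2 + 3j
w = 1 - 1j

print(z + w)    # (3+2j)   real and imaginary parts added separately
print(z - w)    # (1+4j)
print(z * w)    # (5+1j)   matches (2*1 - 3*(-1)) + (2*(-1) + 3*1)i
print(1j * 1j)  # (-1+0j)  the defining identity i^2 = -1
```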
Complex conjugate, absolute value, argument and division
The complex conjugate of the complex number z = a + bi is defined as
z̄ = a − bi.
It is also denoted by some authors by z*. Geometrically, z̄ is the "reflection" of z about the real axis. Conjugating twice gives the original complex number back. A complex number is real if and only if it equals its own conjugate. The unary operation of taking the complex conjugate of a complex number cannot be expressed by applying only the basic operations addition, subtraction, multiplication and division.
For any complex number z = a + bi, the product
z · z̄ = a² + b²
is a non-negative real number. This makes it possible to define the absolute value (or modulus or magnitude) of z to be the square root
|z| = √(a² + b²).
By Pythagoras' theorem, |z| is the distance from the origin to the point representing the complex number z in the complex plane. In particular, the circle of radius one around the origin consists precisely of the numbers z such that |z| = 1. If z = x is a real number, then |z| = |x|: its absolute value as a complex number and as a real number are equal.
Using the conjugate, the reciprocal of a nonzero complex number z = a + bi can be computed to be
1/z = z̄ / |z|² = (a − bi) / (a² + b²).
More generally, the division of an arbitrary complex number w by a non-zero complex number z equals
w/z = w z̄ / |z|².
This process is sometimes called "rationalization" of the denominator (although the denominator in the final expression may be an irrational real number), because it resembles the method to remove roots from simple expressions in a denominator.
The argument of z (sometimes called the "phase") is the angle of the radius Oz with the positive real axis, and is written as arg z, expressed in radians in this article. The angle is defined only up to adding integer multiples of 2π, since a rotation by 2π (or 360°) around the origin leaves all points in the complex plane unchanged. One possible choice to uniquely specify the argument is to require it to be within the interval (−π, π], which is referred to as the principal value.
The argument can be computed from the rectangular form by means of the arctan (inverse tangent) function.
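The conjugate, modulus, argument and conjugate-based division described above are directly available in Python's standard library. A minimal sketch, with an illustrative value of z:

```python
# Sketch: conjugate, absolute value, argument, and division with the cmath module.
import cmath

z = 3 + 4j
print(z.conjugate())     # (3-4j)    reflection about the real axis
print(abs(z))            # 5.0       sqrt(3^2 + 4^2)
print(cmath.phase(z))    # ~0.9273   the argument in radians, principal value in (-pi, pi]
print((1 + 1j) / z)      # division, effectively multiplication by the conjugate over |z|^2
```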
Polar form
For any complex number z, with absolute value r = |z| and argument φ, the equation
z = r (cos φ + i sin φ)
holds. This identity is referred to as the polar form of z. It is sometimes abbreviated as z = r cis φ.
In electronics, one represents a phasor with amplitude and phase in angle notation:
If two complex numbers are given in polar form, i.e., z₁ = r₁(cos φ₁ + i sin φ₁) and z₂ = r₂(cos φ₂ + i sin φ₂), the product and division can be computed as
z₁ z₂ = r₁ r₂ (cos(φ₁ + φ₂) + i sin(φ₁ + φ₂)) and z₁ / z₂ = (r₁ / r₂) (cos(φ₁ − φ₂) + i sin(φ₁ − φ₂)).
(These are a consequence of the trigonometric identities for the sine and cosine function.)
In other words, the absolute values are multiplied and the arguments are added to yield the polar form of the product. The picture at the right illustrates the multiplication of
(2 + i)(3 + i) = 5 + 5i.
Because the real and imaginary part of 5 + 5i are equal, the argument of that number is 45 degrees, or π/4 (in radian). On the other hand, it is also the sum of the angles at the origin of the red and blue triangles, which are arctan(1/3) and arctan(1/2), respectively. Thus, the formula
π/4 = arctan(1/2) + arctan(1/3)
holds. As the arctan function can be approximated highly efficiently, formulas like this – known as Machin-like formulas – are used for high-precision approximations of π, such as Machin's own
π/4 = 4 arctan(1/5) − arctan(1/239).
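The conversion between rectangular and polar form, and the arctan identity just derived, can be checked numerically. A minimal sketch using the standard cmath and math modules; the sample value 1 + i is illustrative.

```python
# Sketch: polar form via cmath, and the identity pi/4 = arctan(1/2) + arctan(1/3).
import cmath
import math

r, phi = cmath.polar(1 + 1j)             # r = sqrt(2), phi = pi/4
print(r, phi)
print(cmath.rect(r, phi))                 # back to rectangular form, ~ (1+1j)

print(math.atan(1 / 2) + math.atan(1 / 3))  # equals pi/4
print(math.pi / 4)
```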
Powers and roots
The n-th power of a complex number can be computed using de Moivre's formula, which is obtained by repeatedly applying the above formula for the product:
zⁿ = (r (cos φ + i sin φ))ⁿ = rⁿ (cos nφ + i sin nφ).
For example, the first few powers of the imaginary unit i are i, −1, −i, 1, i, −1, …, repeating with period four.
The n-th roots of a complex number z = r (cos φ + i sin φ) are given by
z_k = r^{1/n} ( cos((φ + 2kπ)/n) + i sin((φ + 2kπ)/n) )
for 0 ≤ k ≤ n − 1. (Here r^{1/n} is the usual (positive) n-th root of the positive real number r.) Because sine and cosine are periodic, other integer values of k do not give other values. For any z ≠ 0, there are, in particular, n distinct complex n-th roots. For example, there are 4 fourth roots of 1, namely 1, i, −1, and −i.
In general there is no natural way of distinguishing one particular complex n-th root of a complex number. (This is in contrast to the roots of a positive real number x, which has a unique positive real n-th root, which is therefore commonly referred to as the n-th root of x.) One refers to this situation by saying that the n-th root is an n-valued function of z.
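The root formula above translates directly into code. A minimal sketch using cmath; the choice of z = 1 and n = 4 is illustrative.

```python
# Sketch: the n distinct n-th roots of a complex number via its polar form.
import cmath

def nth_roots(z, n):
    r, phi = cmath.polar(z)
    return [cmath.rect(r ** (1 / n), (phi + 2 * cmath.pi * k) / n) for k in range(n)]

for root in nth_roots(1, 4):
    print(root)    # approximately 1, i, -1, -i: the four fourth roots of 1
```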
Fundamental theorem of algebra
The fundamental theorem of algebra, of Carl Friedrich Gauss and Jean le Rond d'Alembert, states that for any complex numbers (called coefficients) , the equation
has at least one complex solution z, provided that at least one of the higher coefficients is nonzero. This property does not hold for the field of rational numbers (the polynomial does not have a rational root, because is not a rational number) nor the real numbers (the polynomial does not have a real root, because the square of is positive for any real number ).
Because of this fact, is called an algebraically closed field. It is a cornerstone of various applications of complex numbers, as is detailed further below.
There are various proofs of this theorem, by either analytic methods such as Liouville's theorem, or topological ones such as the winding number, or a proof combining Galois theory and the fact that any real polynomial of odd degree has at least one real root.
History
The solution in radicals (without trigonometric functions) of a general cubic equation, when all three of its roots are real numbers, contains the square roots of negative numbers, a situation that cannot be rectified by factoring aided by the rational root test, if the cubic is irreducible; this is the so-called casus irreducibilis ("irreducible case"). This conundrum led Italian mathematician Gerolamo Cardano to conceive of complex numbers in around 1545 in his Ars Magna, though his understanding was rudimentary; moreover, he later described complex numbers as being "as subtle as they are useless". Cardano did use imaginary numbers, but described using them as "mental torture." This was prior to the use of the graphical complex plane. Cardano and other Italian mathematicians, notably Scipione del Ferro, in the 1500s created an algorithm for solving cubic equations which generally had one real solution and two solutions containing an imaginary number. Because they ignored the answers with the imaginary numbers, Cardano found them useless.
Work on the problem of general polynomials ultimately led to the fundamental theorem of algebra, which shows that with complex numbers, a solution exists to every polynomial equation of degree one or higher. Complex numbers thus form an algebraically closed field, where any polynomial equation has a root.
Many mathematicians contributed to the development of complex numbers. The rules for addition, subtraction, multiplication, and root extraction of complex numbers were developed by the Italian mathematician Rafael Bombelli. A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions.
The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Hero of Alexandria in the 1st century AD, where in his Stereometrica he considered, apparently in error, the volume of an impossible frustum of a pyramid to arrive at the term in his calculations, which today would simplify to . Negative quantities were not conceived of in Hellenistic mathematics and Hero merely replaced it by its positive value.
The impetus to study complex numbers as a topic in itself first arose in the 16th century when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (Niccolò Fontana Tartaglia and Gerolamo Cardano). It was soon realized (but proved much later) that these formulas, even if one were interested only in real solutions, sometimes required the manipulation of square roots of negative numbers. In fact, it was proved later that the use of complex numbers is unavoidable when all three roots are real and distinct. However, the general formula can still be used in this case, with some care to deal with the ambiguity resulting from the existence of three cubic roots for nonzero complex numbers. Rafael Bombelli was the first to address explicitly these seemingly paradoxical solutions of cubic equations and developed the rules for complex arithmetic, trying to resolve these issues.
The term "imaginary" for these quantities was coined by René Descartes in 1637, who was at pains to stress their unreal nature:
A further source of confusion was that the equation √−1 · √−1 = −1 seemed to be capriciously inconsistent with the algebraic identity √a · √b = √(ab), which is valid for non-negative real numbers a and b, and which was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity in the case when both a and b are negative, and the related identity 1/√a = √(1/a), even bedeviled Leonhard Euler. This difficulty eventually led to the convention of using the special symbol i in place of √−1 to guard against this mistake. Even so, Euler considered it natural to introduce students to complex numbers much earlier than we do today. In his elementary algebra text book, Elements of Algebra, he introduces these numbers almost at once and then uses them in a natural way throughout.
In the 18th century complex numbers gained wider use, as it was noticed that formal manipulation of complex expressions could be used to simplify calculations involving trigonometric functions. For instance, in 1730 Abraham de Moivre noted that the identities relating trigonometric functions of an integer multiple of an angle to powers of trigonometric functions of that angle could be re-expressed by the following de Moivre's formula:
In 1748, Euler went further and obtained Euler's formula of complex analysis:
by formally manipulating complex power series and observed that this formula could be used to reduce any trigonometric identity to much simpler exponential identities.
The idea of a complex number as a point in the complex plane (above) was first described by Danish–Norwegian mathematician Caspar Wessel in 1799, although it had been anticipated as early as 1685 in Wallis's A Treatise of Algebra.
Wessel's memoir appeared in the Proceedings of the Copenhagen Academy but went largely unnoticed. In 1806 Jean-Robert Argand independently issued a pamphlet on complex numbers and provided a rigorous proof of the fundamental theorem of algebra. Carl Friedrich Gauss had earlier published an essentially topological proof of the theorem in 1797 but expressed his doubts at the time about "the true metaphysics of the square root of −1". It was not until 1831 that he overcame these doubts and published his treatise on complex numbers as points in the plane, largely establishing modern notation and terminology:
If one formerly contemplated this subject from a false point of view and therefore found a mysterious darkness, this is in large part attributable to clumsy terminology. Had one not called +1, −1, positive, negative, or imaginary (or even impossible) units, but instead, say, direct, inverse, or lateral units, then there could scarcely have been talk of such darkness.
In the beginning of the 19th century, other mathematicians discovered independently the geometrical representation of the complex numbers: Buée, Mourey, Warren, Français and his brother, Bellavitis.
The English mathematician G.H. Hardy remarked that Gauss was the first mathematician to use complex numbers in "a really confident and scientific way" although mathematicians such as Norwegian Niels Henrik Abel and Carl Gustav Jacob Jacobi were necessarily using them routinely before Gauss published his 1831 treatise.
Augustin-Louis Cauchy and Bernhard Riemann together brought the fundamental ideas of complex analysis to a high state of completion, commencing around 1825 in Cauchy's case.
The common terms used in the theory are chiefly due to the founders. Argand called the direction factor, and the modulus; Cauchy (1821) called the reduced form (l'expression réduite) and apparently introduced the term argument; Gauss used for , introduced the term complex number for , and called the norm. The expression direction coefficient, often used for , is due to Hankel (1867), and absolute value, for modulus, is due to Weierstrass.
Later classical writers on the general theory include Richard Dedekind, Otto Hölder, Felix Klein, Henri Poincaré, Hermann Schwarz, Karl Weierstrass and many others. Important work (including a systematization) in complex multivariate calculus was begun at the beginning of the 20th century. Important results were achieved by Wilhelm Wirtinger in 1927.
Abstract algebraic aspects
While the above low-level definitions, including the addition and multiplication, accurately describe the complex numbers, there are other, equivalent approaches that reveal the abstract algebraic structure of the complex numbers more immediately.
Construction as a quotient field
One approach to ℂ is via polynomials, i.e., expressions of the form
p(X) = a_n Xⁿ + ⋯ + a₁ X + a₀,
where the coefficients a₀, …, a_n are real numbers. The set of all such polynomials is denoted by ℝ[X]. Since sums and products of polynomials are again polynomials, this set forms a commutative ring, called the polynomial ring (over the reals). To every such polynomial p, one may assign the complex number p(i), i.e., the value obtained by setting X = i. This defines a function
ℝ[X] → ℂ.
This function is surjective since every complex number can be obtained in such a way: the evaluation of a linear polynomial a + bX at i is a + bi. However, the evaluation of the polynomial X² + 1 at i is 0, since i² + 1 = 0. This polynomial is irreducible, i.e., cannot be written as a product of two linear polynomials. Basic facts of abstract algebra then imply that the kernel of the above map is an ideal generated by this polynomial, and that the quotient by this ideal is a field, and that there is an isomorphism
ℝ[X] / (X² + 1) ≅ ℂ
between the quotient ring and ℂ. Some authors take this as the definition of ℂ.
Accepting that is algebraically closed, because it is an algebraic extension of in this approach, is therefore the algebraic closure of
Matrix representation of complex numbers
Complex numbers can also be represented by 2 × 2 matrices that have the form
( a  −b )
( b   a ).
Here the entries a and b are real numbers. As the sum and product of two such matrices is again of this form, these matrices form a subring of the ring of 2 × 2 matrices.
A simple computation shows that the map
is a ring isomorphism from the field of complex numbers to the ring of these matrices, proving that these matrices form a field. This isomorphism associates the square of the absolute value of a complex number with the determinant of the corresponding matrix, and the conjugate of a complex number with the transpose of the matrix.
The geometric description of the multiplication of complex numbers can also be expressed in terms of rotation matrices by using this correspondence between complex numbers and such matrices. The action of the matrix on a vector corresponds to the multiplication of by . In particular, if the determinant is , there is a real number such that the matrix has the form
In this case, the action of the matrix on vectors and the multiplication by the complex number are both the rotation of the angle .
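This correspondence is easy to verify numerically: the matrix of a product equals the product of the matrices, and the determinant equals the squared absolute value. A minimal sketch assuming NumPy; the sample values are illustrative.

```python
# Sketch: representing a + bi by the 2x2 real matrix [[a, -b], [b, a]].
import numpy as np

def to_matrix(z):
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 2 + 3j, 1 - 1j
print(to_matrix(z) @ to_matrix(w))   # equals to_matrix(z * w), i.e. the matrix of 5 + 1j
print(np.linalg.det(to_matrix(z)))   # 13.0 = |z|^2
```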
Complex analysis
The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis (see prime number theorem for an example).
Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color-coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane.
Convergence
The notions of convergent series and continuous functions in (real) analysis have natural analogs in complex analysis. A sequence of complex numbers is said to converge if and only if its real and imaginary parts do. This is equivalent to the (ε, δ)-definition of limits, where the absolute value of real numbers is replaced by the one of complex numbers. From a more abstract point of view, , endowed with the metric
is a complete metric space, which notably includes the triangle inequality
for any two complex numbers and .
Complex exponential
Like in real analysis, this notion of convergence is used to construct a number of elementary functions: the exponential function exp z, also written e^z, is defined as the infinite series, which can be shown to converge for any z:
exp z = 1 + z + z²/2! + z³/3! + ⋯ = Σ_{n≥0} zⁿ/n!.
For example, exp 1 is Euler's number e ≈ 2.718. Euler's formula states:
exp(iφ) = cos φ + i sin φ
for any real number φ. This formula is a quick consequence of general basic facts about convergent power series and the definitions of the involved functions as power series. As a special case, this includes Euler's identity
exp(iπ) = −1.
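Euler's formula and Euler's identity can be checked directly in floating point. A minimal sketch using the standard cmath and math modules; the angle is illustrative.

```python
# Sketch: the complex exponential, Euler's formula, and Euler's identity.
import cmath
import math

phi = 0.7
print(cmath.exp(1j * phi))                 # e^{i*phi}
print(math.cos(phi) + 1j * math.sin(phi))  # same value, by Euler's formula

print(cmath.exp(1j * math.pi) + 1)         # ~0, Euler's identity up to rounding error
```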
Complex logarithm
For any positive real number t, there is a unique real number x such that exp(x) = t. This leads to the definition of the natural logarithm as the inverse ln = exp⁻¹ of the exponential function. The situation is different for complex numbers, since
exp(z + 2πi) = exp(z) exp(2πi) = exp(z)
by the functional equation and Euler's identity.
For example, exp(iπ) = −1 = exp(3iπ), so both iπ and 3iπ are possible values for the complex logarithm of −1.
In general, given any non-zero complex number w, any number z solving the equation
exp(z) = w
is called a complex logarithm of w, denoted log w. It can be shown that these numbers satisfy
z = log w = ln|w| + i arg w,
where arg is the argument defined above, and ln the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of 2π, log is also multivalued. The principal value of log is often taken by restricting the imaginary part to the interval (−π, π]. This leads to the complex logarithm being a bijective function taking values in the strip (that is denoted in the above illustration)
If is not a non-positive real number (a positive or a non-real number), the resulting principal value of the complex logarithm is obtained with . It is an analytic function outside the negative real numbers, but it cannot be prolongated to a function that is continuous at any negative real number , where the principal value is .
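The principal value and the multivaluedness of the complex logarithm can be seen in a short sketch using cmath; the choice of w = −1 is illustrative.

```python
# Sketch: principal value of the complex logarithm, and another equally valid value.
import cmath

w = -1 + 0j
principal = cmath.log(w)              # i*pi: ln|w| + i*Arg(w), with Arg in (-pi, pi]
print(principal)

another = principal + 2j * cmath.pi   # adding 2*pi*i gives another logarithm of w
print(cmath.exp(another))             # ~ -1, so it also satisfies exp(z) = w
```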
Complex exponentiation is defined as
z^ω = exp(ω log z),
and is multi-valued, except when ω is an integer. For ω = 1/n, for some natural number n, this recovers the non-uniqueness of n-th roots mentioned above. If z > 0 is real (and ω an arbitrary complex number), one has a preferred choice of ln z, the real logarithm, which can be used to define a preferred exponential function.
Complex numbers, unlike real numbers, do not in general satisfy the unmodified power and logarithm identities, particularly when naïvely treated as single-valued functions; see failure of power and logarithm identities. For example, they do not satisfy
Both sides of the equation are multivalued by the definition of complex exponentiation given here, and the values on the left are a subset of those on the right.
Complex sine and cosine
The series defining the real trigonometric functions sine and cosine, as well as the hyperbolic functions sinh and cosh, also carry over to complex arguments without change. For the other trigonometric and hyperbolic functions, such as tangent, things are slightly more complicated, as the defining series do not converge for all complex values. Therefore, one must define them either in terms of sine, cosine and exponential, or, equivalently, by using the method of analytic continuation.
Holomorphic functions
A function → is called holomorphic or complex differentiable at a point if the limit
exists (in which case it is denoted by ). This mimics the definition for real differentiable functions, except that all quantities are complex numbers. Loosely speaking, the freedom of approaching in different directions imposes a much stronger condition than being (real) differentiable. For example, the function
is differentiable as a function , but is not complex differentiable.
A real differentiable function is complex differentiable if and only if it satisfies the Cauchy–Riemann equations, which are sometimes abbreviated as
Complex analysis shows some features not apparent in real analysis. For example, the identity theorem asserts that two holomorphic functions and agree if they agree on an arbitrarily small open subset of . Meromorphic functions, functions that can locally be written as with a holomorphic function , still share some of the features of holomorphic functions. Other functions have essential singularities, such as at .
Applications
Complex numbers have applications in many scientific areas, including signal processing, control theory, electromagnetism, fluid dynamics, quantum mechanics, cartography, and vibration analysis. Some of these applications are described below.
Complex conjugation is also employed in inversive geometry, a branch of geometry studying reflections more general than ones about a line. In the network analysis of electrical circuits, the complex conjugate is used in finding the equivalent impedance when the maximum power transfer theorem is looked for.
Geometry
Shapes
Three non-collinear points in the plane determine the shape of the triangle . Locating the points in the complex plane, this shape of a triangle may be expressed by complex arithmetic as
The shape of a triangle will remain the same, when the complex plane is transformed by translation or dilation (by an affine transformation), corresponding to the intuitive notion of shape, and describing similarity. Thus each triangle is in a similarity class of triangles with the same shape.
Fractal geometry
The Mandelbrot set is a popular example of a fractal formed on the complex plane. It is defined by plotting every location c where iterating the sequence z ↦ z² + c, starting from z = 0, does not diverge when iterated infinitely. Similarly, Julia sets have the same rules, except where c remains constant.
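A minimal sketch of the membership test follows: iterate z ↦ z² + c and check whether |z| stays bounded for a fixed number of iterations (the iteration cap and the bailout radius 2 are the usual practical choices, not part of the definition itself).

```python
# Sketch: testing membership of c in the Mandelbrot set by bounded iteration.
def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| > 2 the orbit is known to escape to infinity
            return False
    return True

print(in_mandelbrot(-1 + 0j))   # True: -1 is in the set
print(in_mandelbrot(1 + 0j))    # False: 1 escapes quickly
```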
Triangles
Every triangle has a unique Steiner inellipse – an ellipse inside the triangle and tangent to the midpoints of the three sides of the triangle. The foci of a triangle's Steiner inellipse can be found as follows, according to Marden's theorem: Denote the triangle's vertices in the complex plane as a, b, and c. Write the cubic equation (x − a)(x − b)(x − c) = 0, take its derivative, and equate the (quadratic) derivative to zero. Marden's theorem says that the solutions of this equation are the complex numbers denoting the locations of the two foci of the Steiner inellipse.
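The recipe above is short enough to compute directly. A minimal sketch assuming NumPy; the vertices are arbitrary illustrative points.

```python
# Sketch of Marden's theorem: the foci of the Steiner inellipse are the roots of
# the derivative of the cubic whose roots are the triangle's vertices.
import numpy as np

a, b, c = 0 + 0j, 4 + 0j, 1 + 3j
p = np.poly([a, b, c])      # coefficients of (x - a)(x - b)(x - c)
dp = np.polyder(p)          # coefficients of its (quadratic) derivative
foci = np.roots(dp)
print(foci)                 # the two foci of the Steiner inellipse
```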
Algebraic number theory
As mentioned above, any nonconstant polynomial equation (in complex coefficients) has a solution in . A fortiori, the same is true if the equation has rational coefficients. The roots of such equations are called algebraic numbers – they are a principal object of study in algebraic number theory. Compared to , the algebraic closure of , which also contains all algebraic numbers, has the advantage of being easily understandable in geometric terms. In this way, algebraic methods can be used to study geometric questions and vice versa. With algebraic methods, more specifically applying the machinery of field theory to the number field containing roots of unity, it can be shown that it is not possible to construct a regular nonagon using only compass and straightedge – a purely geometric problem.
Another example is the Gaussian integers; that is, numbers of the form , where and are integers, which can be used to classify sums of squares.
Analytic number theory
Analytic number theory studies numbers, often integers or rationals, by taking advantage of the fact that they can be regarded as complex numbers, in which analytic methods can be used. This is done by encoding number-theoretic information in complex-valued functions. For example, the Riemann zeta function is related to the distribution of prime numbers.
Improper integrals
In applied fields, complex numbers are often used to compute certain real-valued improper integrals, by means of complex-valued functions. Several methods exist to do this; see methods of contour integration.
Dynamic equations
In differential equations, it is common to first find all complex roots r of the characteristic equation of a linear differential equation or equation system and then attempt to solve the system in terms of base functions of the form f(t) = e^{rt}. Likewise, in difference equations, the complex roots r of the characteristic equation of the difference equation system are used, to attempt to solve the system in terms of base functions of the form f(t) = r^t.
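A minimal sketch of the first step, assuming NumPy; the differential equation y'' + 2y' + 5y = 0 is an illustrative example whose characteristic polynomial is r² + 2r + 5.

```python
# Sketch: complex roots of a characteristic equation.
import numpy as np

roots = np.roots([1, 2, 5])
print(roots)   # [-1.+2.j, -1.-2.j]; base functions e^{(-1 +/- 2i) t},
               # equivalently e^{-t} cos(2t) and e^{-t} sin(2t)
```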
Linear algebra
Since is algebraically closed, any non-empty complex square matrix has at least one (complex) eigenvalue. By comparison, real matrices do not always have real eigenvalues, for example rotation matrices (for rotations of the plane for angles other than 0° or 180°) leave no direction fixed, and therefore do not have any real eigenvalue. The existence of (complex) eigenvalues, and the ensuing existence of eigendecomposition is a useful tool for computing matrix powers and matrix exponentials.
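The rotation-matrix remark can be checked numerically: a plane rotation by an angle θ other than 0° or 180° has the complex eigenvalues cos θ ± i sin θ and no real ones. A minimal sketch assuming NumPy; the angle is illustrative.

```python
# Sketch: complex eigenvalues of a real rotation matrix.
import numpy as np

theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.linalg.eigvals(R))   # 0.5 +/- 0.866j, i.e. cos(theta) +/- i*sin(theta)
```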
Complex numbers often generalize concepts originally conceived in the real numbers. For example, the conjugate transpose generalizes the transpose, hermitian matrices generalize symmetric matrices, and unitary matrices generalize orthogonal matrices.
In applied mathematics
Control theory
In control theory, systems are often transformed from the time domain to the complex frequency domain using the Laplace transform. The system's zeros and poles are then analyzed in the complex plane. The root locus, Nyquist plot, and Nichols plot techniques all make use of the complex plane.
In the root locus method, it is important whether zeros and poles are in the left or right half planes, that is, have real part greater than or less than zero. If a linear, time-invariant (LTI) system has poles that are
in the right half plane, it will be unstable,
all in the left half plane, it will be stable,
on the imaginary axis, it will have marginal stability.
If a system has zeros in the right half plane, it is a nonminimum phase system.
Signal analysis
Complex numbers are used in signal analysis and other fields for a convenient description for periodically varying signals. For given real functions representing actual physical quantities, often in terms of sines and cosines, corresponding complex functions are considered of which the real parts are the original quantities. For a sine wave of a given frequency, the absolute value of the corresponding is the amplitude and the argument is the phase.
If Fourier analysis is employed to write a given real-valued signal as a sum of periodic functions, these periodic functions are often written as complex-valued functions of the form
x(t) = Re{X(t)}
and
X(t) = A e^{iωt},
where ω represents the angular frequency and the complex number A encodes the phase and amplitude as explained above.
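The amplitude and phase encoded in the complex coefficient A can be recovered from a discrete Fourier transform. A minimal sketch assuming NumPy; the signal parameters (frequency, amplitude, phase) are illustrative.

```python
# Sketch: recovering amplitude and phase of a sinusoid from its complex FFT coefficient.
import numpy as np

n, freq, amp, phase = 1000, 5, 2.0, 0.4
t = np.arange(n) / n
signal = amp * np.cos(2 * np.pi * freq * t + phase)

coeff = np.fft.rfft(signal)[freq]   # complex coefficient at the signal frequency
print(2 * np.abs(coeff) / n)        # ~2.0, the amplitude
print(np.angle(coeff))              # ~0.4, the phase
```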
This use is also extended into digital signal processing and digital image processing, which use digital versions of Fourier analysis (and wavelet analysis) to transmit, compress, restore, and otherwise process digital audio signals, still images, and video signals.
Another example, relevant to the two side bands of amplitude modulation of AM radio, is:
In physics
Electromagnetism and electrical engineering
In electrical engineering, the Fourier transform is used to analyze varying voltages and currents. The treatment of resistors, capacitors, and inductors can then be unified by introducing imaginary, frequency-dependent resistances for the latter two and combining all three in a single complex number called the impedance. This approach is called phasor calculus.
In electrical engineering, the imaginary unit is denoted by , to avoid confusion with , which is generally in use to denote electric current, or, more particularly, , which is generally in use to denote instantaneous electric current.
Because the voltage in an AC circuit is oscillating, it can be represented as
To obtain the measurable quantity, the real part is taken:
The complex-valued signal is called the analytic representation of the real-valued, measurable signal .
Fluid dynamics
In fluid dynamics, complex functions are used to describe potential flow in two dimensions.
Quantum mechanics
The complex number field is intrinsic to the mathematical formulations of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics – the Schrödinger equation and Heisenberg's matrix mechanics – make use of complex numbers.
Relativity
In special and general relativity, some formulas for the metric on spacetime become simpler if one takes the time component of the spacetime continuum to be imaginary. (This approach is no longer standard in classical relativity, but is used in an essential way in quantum field theory.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity.
Characterizations, generalizations and related notions
Algebraic characterization
The field has the following three properties:
First, it has characteristic 0. This means that 1 + 1 + ⋯ + 1 ≠ 0 for any number of summands (all of which equal one).
Second, its transcendence degree over , the prime field of is the cardinality of the continuum.
Third, it is algebraically closed (see above).
It can be shown that any field having these properties is isomorphic (as a field) to $\mathbb{C}$. For example, the algebraic closure $\overline{\mathbb{Q}_p}$ of the field $\mathbb{Q}_p$ of $p$-adic numbers also satisfies these three properties, so these two fields are isomorphic (as fields, but not as topological fields). Also, $\mathbb{C}$ is isomorphic to the field of complex Puiseux series. However, specifying an isomorphism requires the axiom of choice. Another consequence of this algebraic characterization is that $\mathbb{C}$ contains many proper subfields that are isomorphic to $\mathbb{C}$.
Characterization as a topological field
The preceding characterization of $\mathbb{C}$ describes only the algebraic aspects of $\mathbb{C}$. That is to say, the properties of nearness and continuity, which matter in areas such as analysis and topology, are not dealt with. The following description of $\mathbb{C}$ as a topological field (that is, a field that is equipped with a topology, which allows the notion of convergence) does take into account the topological properties. $\mathbb{C}$ contains a subset $P$ (namely the set of positive real numbers) of nonzero elements satisfying the following three conditions:
$P$ is closed under addition, multiplication and taking inverses.
If $x$ and $y$ are distinct elements of $P$, then either $x - y$ or $y - x$ is in $P$.
If $S$ is any nonempty subset of $P$, then $S + P = x + P$ for some $x$ in $\mathbb{C}$.
Moreover, $\mathbb{C}$ has a nontrivial involutive automorphism $x \mapsto x^*$ (namely the complex conjugation), such that $x x^*$ is in $P$ for any nonzero $x$ in $\mathbb{C}$.
Any field $F$ with these properties can be endowed with a topology by taking the sets $B(x, p) = \{\, y \mid p - (y - x)(y - x)^* \in P \,\}$ as a base, where $x$ ranges over the field and $p$ ranges over $P$. With this topology $F$ is isomorphic as a topological field to $\mathbb{C}$.
The only connected locally compact topological fields are $\mathbb{R}$ and $\mathbb{C}$. This gives another characterization of $\mathbb{C}$ as a topological field, because $\mathbb{C}$ can be distinguished from $\mathbb{R}$ because the nonzero complex numbers are connected, while the nonzero real numbers are not.
Other number systems
The process of extending the field $\mathbb{R}$ of reals to $\mathbb{C}$ is an instance of the Cayley–Dickson construction. Applying this construction iteratively to $\mathbb{C}$ then yields the quaternions, the octonions, the sedenions, and the trigintaduonions. This construction turns out to diminish the structural properties of the involved number systems.
Unlike the reals, $\mathbb{C}$ is not an ordered field, that is to say, it is not possible to define a relation $z_1 < z_2$ that is compatible with the addition and multiplication. In fact, in any ordered field, the square of any element is necessarily positive, so $i^2 = -1$ precludes the existence of an ordering on $\mathbb{C}$. Passing from $\mathbb{C}$ to the quaternions loses commutativity, while the octonions (additionally to not being commutative) fail to be associative. The reals, complex numbers, quaternions and octonions are all normed division algebras over $\mathbb{R}$. By Hurwitz's theorem they are the only ones; the sedenions, the next step in the Cayley–Dickson construction, fail to have this structure.
The Cayley–Dickson construction is closely related to the regular representation of $\mathbb{C}$, thought of as an $\mathbb{R}$-algebra (an $\mathbb{R}$-vector space with a multiplication), with respect to the basis $(1, i)$. This means the following: the $\mathbb{R}$-linear map
$\mathbb{C} \to \mathbb{C}, \quad z \mapsto wz$
for some fixed complex number $w$ can be represented by a $2 \times 2$ matrix (once a basis has been chosen). With respect to the basis $(1, i)$, this matrix is
$\begin{pmatrix} \operatorname{Re}(w) & -\operatorname{Im}(w) \\ \operatorname{Im}(w) & \operatorname{Re}(w) \end{pmatrix},$
that is, the one mentioned in the section on matrix representation of complex numbers above. While this is a linear representation of $\mathbb{C}$ in the $2 \times 2$ real matrices, it is not the only one. Any matrix
$J = \begin{pmatrix} p & q \\ r & -p \end{pmatrix}, \qquad p^2 + qr + 1 = 0,$
has the property that its square is the negative of the identity matrix: $J^2 = -I$. Then
$\{\, z = a I + b J : a, b \in \mathbb{R} \,\}$
is also isomorphic to the field $\mathbb{C}$ and gives an alternative complex structure on $\mathbb{R}^2$. This is generalized by the notion of a linear complex structure.
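As a quick concrete check of the matrix picture just described (a minimal sketch in Python with NumPy; the sample numbers are arbitrary), sending $a + bi$ to the matrix with rows $(a, -b)$ and $(b, a)$ turns complex multiplication into ordinary matrix multiplication:

import numpy as np

def as_matrix(z):
    # represent the complex number z = a + bi by the 2 x 2 real matrix [[a, -b], [b, a]]
    a, b = z.real, z.imag
    return np.array([[a, -b], [b, a]])

w, z = 1 + 2j, 3 - 1j
# multiplying the matrices agrees with taking the matrix of the complex product
assert np.allclose(as_matrix(w) @ as_matrix(z), as_matrix(w * z))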
Hypercomplex numbers also generalize $\mathbb{R}$, $\mathbb{C}$, $\mathbb{H}$, and $\mathbb{O}$. For example, this notion contains the split-complex numbers, which are elements of the ring $\mathbb{R}[x]/(x^2 - 1)$ (as opposed to $\mathbb{R}[x]/(x^2 + 1)$ for complex numbers). In this ring, the equation $a^2 = 1$ has four solutions.
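A minimal sketch (Python; the pair representation and helper name are chosen only for illustration) of split-complex arithmetic, checking the four solutions of $a^2 = 1$ directly:

def sc_mul(x, y):
    # multiply split-complex numbers written as pairs (a, b), meaning a + b*j with j*j = +1
    a1, b1 = x
    a2, b2 = y
    return (a1 * a2 + b1 * b2, a1 * b2 + a2 * b1)

# the four square roots of 1 in the split-complex numbers: 1, -1, j, -j
for candidate in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
    assert sc_mul(candidate, candidate) == (1, 0)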
The field $\mathbb{R}$ is the completion of $\mathbb{Q}$, the field of rational numbers, with respect to the usual absolute value metric. Other choices of metrics on $\mathbb{Q}$ lead to the fields $\mathbb{Q}_p$ of $p$-adic numbers (for any prime number $p$), which are thereby analogous to $\mathbb{R}$. There are no other nontrivial ways of completing $\mathbb{Q}$ than $\mathbb{R}$ and $\mathbb{Q}_p$, by Ostrowski's theorem. The algebraic closures $\overline{\mathbb{Q}_p}$ of $\mathbb{Q}_p$ still carry a norm, but (unlike $\mathbb{C}$) are not complete with respect to it. The completion $\mathbb{C}_p$ of $\overline{\mathbb{Q}_p}$ turns out to be algebraically closed. By analogy, the field $\mathbb{C}_p$ is called the field of $p$-adic complex numbers.
The fields $\mathbb{R}$ and $\mathbb{Q}_p$ and their finite field extensions, including $\mathbb{C}$, are called local fields.
| Mathematics | Counting and numbers | null |
5863 | https://en.wikipedia.org/wiki/Copenhagen%20interpretation | Copenhagen interpretation | The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics, stemming from the work of Niels Bohr, Werner Heisenberg, Max Born, and others. While "Copenhagen" refers to the Danish city, the use as an "interpretation" was apparently coined by Heisenberg during the 1950s to refer to ideas developed in the 1925–1927 period, glossing over his disagreements with Bohr. Consequently, there is no definitive historical statement of what the interpretation entails.
Features common across versions of the Copenhagen interpretation include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that objects have certain pairs of complementary properties that cannot all be observed or measured simultaneously. Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object except according to the results of its measurement (that is, the Copenhagen interpretation rejects counterfactual definiteness). Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of physicists' personal beliefs and other arbitrary mental factors.
Over the years, there have been many objections to aspects of Copenhagen-type interpretations, including the discontinuous and stochastic nature of the "observation" or "measurement" process, the difficulty of defining what might count as a measuring device, and the seeming reliance upon classical physics in describing such devices. Still, including all the variations, the interpretation remains one of the most commonly taught.
Background
Starting in 1900, investigations into atomic and subatomic phenomena forced a revision to the basic concepts of classical physics. However, it was not until a quarter-century had elapsed that the revision reached the status of a coherent theory. During the intervening period, now known as the time of the "old quantum theory", physicists worked with approximations and heuristic corrections to classical physics. Notable results from this period include Max Planck's calculation of the blackbody radiation spectrum, Albert Einstein's explanation of the photoelectric effect, Einstein and Peter Debye's work on the specific heat of solids, Niels Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, Bohr's model of the hydrogen atom and Arnold Sommerfeld's extension of the Bohr model to include relativistic effects. From 1922 through 1925, this method of heuristic corrections encountered increasing difficulties; for example, the Bohr–Sommerfeld model could not be extended from hydrogen to the next simplest case, the helium atom.
The transition from the old quantum theory to full-fledged quantum physics began in 1925, when Werner Heisenberg presented a treatment of electron behavior based on discussing only "observable" quantities, meaning to Heisenberg the frequencies of light that atoms absorbed and emitted. Max Born then realized that in Heisenberg's theory, the classical variables of position and momentum would instead be represented by matrices, mathematical objects that can be multiplied together like numbers with the crucial difference that the order of multiplication matters. Erwin Schrödinger presented an equation that treated the electron as a wave, and Born discovered that the way to successfully interpret the wave function that appeared in the Schrödinger equation was as a tool for calculating probabilities.
Quantum mechanics cannot easily be reconciled with everyday language and observation, and has often seemed counter-intuitive to physicists, including its inventors. The ideas grouped together as the Copenhagen interpretation suggest a way to think about how the mathematics of quantum theory relates to physical reality.
Origin and use of the term
The 'Copenhagen' part of the term refers to the city of Copenhagen in Denmark. During the mid-1920s, Heisenberg had been an assistant to Bohr at his institute in Copenhagen. Together they helped originate quantum mechanical theory. At the 1927 Solvay Conference, a dual talk by Max Born and Heisenberg declared "we consider quantum mechanics to be a closed theory, whose fundamental physical and mathematical assumptions are no longer susceptible of any modification." In 1929, Heisenberg gave a series of invited lectures at the University of Chicago explaining the new field of quantum mechanics. The lectures then served as the basis for his textbook, The Physical Principles of the Quantum Theory, published in 1930. In the book's preface, Heisenberg wrote:
On the whole, the book contains nothing that is not to be found in previous publications, particularly in the investigations of Bohr. The purpose of the book seems to me to be fulfilled if it contributes somewhat to the diffusion of that 'Kopenhagener Geist der Quantentheorie' [Copenhagen spirit of quantum theory] if I may so express myself, which has directed the entire development of modern atomic physics.
The term 'Copenhagen interpretation' suggests something more than just a spirit, such as some definite set of rules for interpreting the mathematical formalism of quantum mechanics, presumably dating back to the 1920s. However, no such text exists, and the writings of Bohr and Heisenberg contradict each other on several important issues. It appears that the particular term, with its more definite sense, was coined by Heisenberg around 1955, while criticizing alternative "interpretations" (e.g., David Bohm's) that had been developed. Lectures with the titles 'The Copenhagen Interpretation of Quantum Theory' and 'Criticisms and Counterproposals to the Copenhagen Interpretation', that Heisenberg delivered in 1955, are reprinted in the collection Physics and Philosophy. Before the book was released for sale, Heisenberg privately expressed regret for having used the term, due to its suggestion of the existence of other interpretations, that he considered to be "nonsense". In a 1960 review of Heisenberg's book, Bohr's close collaborator Léon Rosenfeld called the term an "ambiguous expression" and suggested it be discarded. However, this did not come to pass, and the term entered widespread use. Bohr's ideas in particular are distinct despite the use of his Copenhagen home in the name of the interpretation.
Principles
There is no uniquely definitive statement of the Copenhagen interpretation. The term encompasses the views developed by a number of scientists and philosophers during the second quarter of the 20th century. This lack of a single, authoritative source that establishes the Copenhagen interpretation is one difficulty with discussing it; another complication is that the philosophical background familiar to Einstein, Bohr, Heisenberg, and contemporaries is much less so to physicists and even philosophers of physics in more recent times. Bohr and Heisenberg never totally agreed on how to understand the mathematical formalism of quantum mechanics, and Bohr distanced himself from what he considered Heisenberg's more subjective interpretation. Bohr offered an interpretation that is independent of a subjective observer, or measurement, or collapse; instead, an "irreversible" or effectively irreversible process causes the decay of quantum coherence which imparts the classical behavior of "observation" or "measurement".
Different commentators and researchers have associated various ideas with the term. Asher Peres remarked that very different, sometimes opposite, views are presented as "the Copenhagen interpretation" by different authors. N. David Mermin coined the phrase "Shut up and calculate!" to summarize Copenhagen-type views, a saying often misattributed to Richard Feynman and which Mermin later found insufficiently nuanced. Mermin described the Copenhagen interpretation as coming in different "versions", "varieties", or "flavors".
Some basic principles generally accepted as part of the interpretation include the following:
Quantum mechanics is intrinsically indeterministic.
The correspondence principle: in the appropriate limit, quantum theory comes to resemble classical physics and reproduces the classical predictions.
The Born rule: the wave function of a system yields probabilities for the outcomes of measurements upon that system.
Complementarity: certain properties cannot be jointly defined for the same system at the same time. In order to talk about a specific property of a system, that system must be considered within the context of a specific laboratory arrangement. Observable quantities corresponding to mutually exclusive laboratory arrangements cannot be predicted together, but the consideration of multiple such mutually exclusive experiments is necessary to characterize a system.
Hans Primas and Roland Omnès give a more detailed breakdown that, in addition to the above, includes the following:
Quantum physics applies to individual objects. The probabilities computed by the Born rule do not require an ensemble or collection of "identically prepared" systems to understand.
The results provided by measuring devices are essentially classical, and should be described in ordinary language. This was particularly emphasized by Bohr, and was accepted by Heisenberg.
Per the above point, the device used to observe a system must be described in classical language, while the system under observation is treated in quantum terms. This is a particularly subtle issue for which Bohr and Heisenberg came to differing conclusions. According to Heisenberg, the boundary between classical and quantum can be shifted in either direction at the observer's discretion. That is, the observer has the freedom to move what would become known as the "Heisenberg cut" without changing any physically meaningful predictions. On the other hand, Bohr argued both systems are quantum in principle, and the object-instrument distinction (the "cut") is dictated by the experimental arrangement. For Bohr, the "cut" was not a change in the dynamical laws that govern the systems in question, but a change in the language applied to them.
During an observation, the system must interact with a laboratory device. When that device makes a measurement, the wave function of the system collapses, irreversibly reducing to an eigenstate of the observable that is registered. The result of this process is a tangible record of the event, made by a potentiality becoming an actuality.
Statements about measurements that are not actually made do not have meaning. For example, there is no meaning to the statement that a photon traversed the upper path of a Mach–Zehnder interferometer unless the interferometer were actually built in such a way that the path taken by the photon is detected and registered.
Wave functions are objective, in that they do not depend upon personal opinions of individual physicists or other such arbitrary influences.
There are some fundamental agreements and disagreements between the views of Bohr and Heisenberg. For example, Heisenberg emphasized a sharp "cut" between the observer (or the instrument) and the system being observed, while Bohr offered an interpretation that is independent of a subjective observer or measurement or collapse, which relies on an "irreversible" or effectively irreversible process, which could take place within the quantum system.
Another issue of importance where Bohr and Heisenberg disagreed is wave–particle duality. Bohr maintained that the distinction between a wave view and a particle view was defined by a distinction between experimental setups, whereas Heisenberg held that it was defined by the possibility of viewing the mathematical formulas as referring to waves or particles. Bohr thought that a particular experimental setup would display either a wave picture or a particle picture, but not both. Heisenberg thought that every mathematical formulation was capable of both wave and particle interpretations.
Nature of the wave function
A wave function is a mathematical entity that provides a probability distribution for the outcomes of each possible measurement on a system. Knowledge of the wave function together with the rules for the system's evolution in time exhausts all that can be predicted about the system's behavior. Generally, Copenhagen-type interpretations deny that the wave function provides a directly apprehensible image of an ordinary material body or a discernible component of some such, or anything more than a theoretical concept.
Probabilities via the Born rule
The Born rule is essential to the Copenhagen interpretation. Formulated by Max Born in 1926, it gives the probability that a measurement of a quantum system will yield a given result. In its simplest form, it states that the probability density of finding a particle at a given point, when measured, is proportional to the square of the magnitude of the particle's wave function at that point.
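Written out in standard notation (supplied here as a brief illustration, since the article's formulas are not reproduced above): for a particle with normalized wave function $\psi$, the probability density of finding it at position $x$ is
$p(x) = |\psi(x)|^2,$
and, more generally, the probability of obtaining the outcome $a_i$ in a measurement is $P(a_i) = |\langle a_i \mid \psi \rangle|^2$.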
Collapse
The concept of wave function collapse postulates that the wave function of a system can change suddenly and discontinuously upon measurement. Prior to a measurement, a wave function involves the various probabilities for the different potential outcomes of that measurement. But when the apparatus registers one of those outcomes, no traces of the others linger. Since Bohr did not view the wave function as something physical, he never spoke of "collapse". Nevertheless, many physicists and philosophers associate collapse with the Copenhagen interpretation.
Heisenberg spoke of the wave function as representing available knowledge of a system, and did not use the term "collapse", but instead termed it "reduction" of the wave function to a new state representing the change in available knowledge which occurs once a particular phenomenon is registered by the apparatus.
Role of the observer
Because they assert that the existence of an observed value depends upon the intercession of the observer, Copenhagen-type interpretations are sometimes called "subjective". All of the original Copenhagen protagonists considered the process of observation as mechanical and independent of the individuality of the observer. Wolfgang Pauli, for example, insisted that measurement results could be obtained and recorded by "objective registering apparatus". As Heisenberg wrote,
In the 1970s and 1980s, the theory of decoherence helped to explain the appearance of quasi-classical realities emerging from quantum theory, but was insufficient to provide a technical explanation for the apparent wave function collapse.
Completion by hidden variables?
In metaphysical terms, the Copenhagen interpretation views quantum mechanics as providing knowledge of phenomena, but not as pointing to 'really existing objects', which it regards as residues of ordinary intuition. This makes it an epistemic theory. This may be contrasted with Einstein's view, that physics should look for 'really existing objects', making itself an ontic theory.
The metaphysical question is sometimes asked: "Could quantum mechanics be extended by adding so-called "hidden variables" to the mathematical formalism, to convert it from an epistemic to an ontic theory?" The Copenhagen interpretation answers this with a strong 'No'. It is sometimes alleged, for example by J.S. Bell, that Einstein opposed the Copenhagen interpretation because he believed that the answer to that question of "hidden variables" was "yes". By contrast, Max Jammer writes "Einstein never proposed a hidden variable theory." Einstein explored the possibility of a hidden variable theory, and wrote a paper describing his exploration, but withdrew it from publication because he felt it was faulty.
Acceptance among physicists
During the 1930s and 1940s, views about quantum mechanics attributed to Bohr and emphasizing complementarity became commonplace among physicists. Textbooks of the time generally maintained the principle that the numerical value of a physical quantity is not meaningful or does not exist until it is measured. Prominent physicists associated with Copenhagen-type interpretations have included Lev Landau, Wolfgang Pauli, Rudolf Peierls, Asher Peres, Léon Rosenfeld, and Ray Streater.
Throughout much of the 20th century, the Copenhagen tradition had overwhelming acceptance among physicists. According to a very informal poll (some people voted for multiple interpretations) conducted at a quantum mechanics conference in 1997, the Copenhagen interpretation remained the most widely accepted label that physicists applied to their own views. A similar result was found in a poll conducted in 2011.
Consequences
The nature of the Copenhagen interpretation is exposed by considering a number of experiments and paradoxes.
Schrödinger's cat
This thought experiment highlights the implications that accepting uncertainty at the microscopic level has on macroscopic objects. A cat is put in a sealed box, with its life or death made dependent on the state of a subatomic particle. Thus a description of the cat during the course of the experiment—having been entangled with the state of a subatomic particle—becomes a "blur" of "living and dead cat." But this cannot be accurate because it implies the cat is actually both dead and alive until the box is opened to check on it. But the cat, if it survives, will only remember being alive. Schrödinger resists "so naively accepting as valid a 'blurred model' for representing reality." How can the cat be both alive and dead?
In Copenhagen-type views, the wave function reflects our knowledge of the system. The wave function $\tfrac{1}{\sqrt{2}}\left(|\text{dead}\rangle + |\text{alive}\rangle\right)$ means that, once the cat is observed, there is a 50% chance it will be dead, and a 50% chance it will be alive. (Some versions of the Copenhagen interpretation reject the idea that a wave function can be assigned to a physical system that meets the everyday definition of "cat"; in this view, the correct quantum-mechanical description of the cat-and-particle system must include a superselection rule.)
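For this equal superposition, the Born rule reproduces those figures directly: $\left|\tfrac{1}{\sqrt{2}}\right|^2 = \tfrac{1}{2}$ for each of the two outcomes.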
Wigner's friend
"Wigner's friend" is a thought experiment intended to make that of Schrödinger's cat more striking by involving two conscious beings, traditionally known as Wigner and his friend. (In more recent literature, they may also be known as Alice and Bob, per the convention of describing protocols in information theory.) Wigner puts his friend in with the cat. The external observer believes the system is in the state $\tfrac{1}{\sqrt{2}}\left(|\text{dead}\rangle + |\text{alive}\rangle\right)$. However, his friend is convinced that the cat is alive, i.e. for him, the cat is in the state $|\text{alive}\rangle$. How can Wigner and his friend see different wave functions?
In a Heisenbergian view, the answer depends on the positioning of Heisenberg cut, which can be placed arbitrarily (at least according to Heisenberg, though not to Bohr). If Wigner's friend is positioned on the same side of the cut as the external observer, his measurements collapse the wave function for both observers. If he is positioned on the cat's side, his interaction with the cat is not considered a measurement. Different Copenhagen-type interpretations take different positions as to whether observers can be placed on the quantum side of the cut.
Double-slit experiment
In the basic version of this experiment, a light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles (not waves); the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). Such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through.
According to Bohr's complementarity principle, light is neither a wave nor a stream of particles. A particular experiment can demonstrate particle behavior (passing through a definite slit) or wave behavior (interference), but not both at the same time.
The same experiment has been performed for light, electrons, atoms, and molecules. The extremely small de Broglie wavelength of objects with larger mass makes experiments increasingly difficult, but in general quantum mechanics considers all matter as possessing both particle and wave behaviors.
Einstein–Podolsky–Rosen paradox
This thought experiment involves a pair of particles prepared in what later authors would refer to as an entangled state. In a 1935 paper, Einstein, Boris Podolsky, and Nathan Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is forbidden by the theory of relativity. They invoked a principle, later known as the "Einstein–Podolsky–Rosen (EPR) criterion of reality", positing that, "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity". From this, they inferred that the second particle must have a definite value of position and of momentum prior to either being measured.
Bohr's response to the EPR paper was published in the Physical Review later that same year. He argued that EPR had reasoned fallaciously. Because measurements of position and of momentum are complementary, making the choice to measure one excludes the possibility of measuring the other. Consequently, a fact deduced regarding one arrangement of laboratory apparatus could not be combined with a fact deduced by means of the other, and so, the inference of predetermined position and momentum values for the second particle was not valid. Bohr concluded that EPR's "arguments do not justify their conclusion that the quantum description turns out to be essentially incomplete."
Criticism
Incompleteness and indeterminism
Einstein was an early and persistent supporter of objective reality. Bohr and Heisenberg advanced the position that no physical property could be understood without an act of measurement, while Einstein refused to accept this. Abraham Pais recalled a walk with Einstein when the two discussed quantum mechanics: "Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it." While Einstein did not doubt that quantum mechanics was a correct physical theory in that it gave correct predictions, he maintained that it could not be a complete theory. The most famous product of his efforts to argue the incompleteness of quantum theory is the Einstein–Podolsky–Rosen thought experiment, which was intended to show that physical properties like position and momentum have values even if not measured. The argument of EPR was not generally persuasive to other physicists.
Carl Friedrich von Weizsäcker, while participating in a colloquium at Cambridge, denied that the Copenhagen interpretation asserted "What cannot be observed does not exist". Instead, he suggested that the Copenhagen interpretation follows the principle "What is observed certainly exists; about what is not observed we are still free to make suitable assumptions. We use that freedom to avoid paradoxes."
Einstein was likewise dissatisfied with the indeterminism of quantum theory. Regarding the possibility of randomness in nature, Einstein said that he was "convinced that He [God] does not throw dice." Bohr, in response, reputedly said that "it cannot be for us to tell God, how he is to run the world".
The Heisenberg cut
Much criticism of Copenhagen-type interpretations has focused on the need for a classical domain where observers or measuring devices can reside, and the imprecision of how the boundary between quantum and classical might be defined. This boundary came to be termed the Heisenberg cut (while John Bell derisively called it the "shifty split"). As typically portrayed, Copenhagen-type interpretations involve two different kinds of time evolution for wave functions, the deterministic flow according to the Schrödinger equation and the probabilistic jump during measurement, without a clear criterion for when each kind applies. Why should these two different processes exist, when physicists and laboratory equipment are made of the same matter as the rest of the universe? And if there is somehow a split, where should it be placed? Steven Weinberg writes that the traditional presentation gives "no way to locate the boundary between the realms in which [...] quantum mechanics does or does not apply."
The problem of thinking in terms of classical measurements of a quantum system becomes particularly acute in the field of quantum cosmology, where the quantum system is the universe. How does an observer stand outside the universe in order to measure it, and who was there to observe the universe in its earliest stages? Advocates of Copenhagen-type interpretations have disputed the seriousness of these objections. Rudolf Peierls noted that "the observer does not have to be contemporaneous with the event"; for example, we study the early universe through the cosmic microwave background, and we can apply quantum mechanics to that just as well as to any electromagnetic field. Likewise, Asher Peres argued that physicists are, conceptually, outside those degrees of freedom that cosmology studies, and applying quantum mechanics to the radius of the universe while neglecting the physicists in it is no different from quantizing the electric current in a superconductor while neglecting the atomic-level details.
Alternatives
A large number of alternative interpretations have appeared, sharing some aspects of the Copenhagen interpretation while providing alternatives to other aspects.
The ensemble interpretation is similar; it offers an interpretation of the wave function, but not for single particles. The consistent histories interpretation advertises itself as "Copenhagen done right". More recently, interpretations inspired by quantum information theory like QBism and relational quantum mechanics have appeared. Experts on quantum foundational issues continue to favor the Copenhagen interpretation over other alternatives. Physicists who have suggested that the Copenhagen tradition needs to be built upon or extended include Rudolf Haag and Anton Zeilinger.
Under realism and determinism, if the wave function is regarded as ontologically real, and collapse is entirely rejected, a many-worlds interpretation results. If wave function collapse is regarded as ontologically real as well, an objective collapse theory is obtained. Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. The transactional interpretation is also explicitly nonlocal.
Some physicists espoused views in the "Copenhagen spirit" and then went on to advocate other interpretations. For example, David Bohm and Alfred Landé both wrote textbooks that put forth ideas in the Bohr–Heisenberg tradition, and later promoted nonlocal hidden variables and an ensemble interpretation respectively. John Archibald Wheeler began his career as an "apostle of Niels Bohr"; he then supervised the PhD thesis of Hugh Everett that proposed the many-worlds interpretation. After supporting Everett's work for several years, he began to distance himself from the many-worlds interpretation in the 1970s. Late in life, he wrote that while the Copenhagen interpretation might fairly be called "the fog from the north", it "remains the best interpretation of the quantum that we have".
Other physicists, while influenced by the Copenhagen tradition, have expressed frustration at how it took the mathematical formalism of quantum theory as given, rather than trying to understand how it might arise from something more fundamental. (E. T. Jaynes described the mathematical formalism of quantum physics as "a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature—all scrambled up together by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble".) This dissatisfaction has motivated new interpretative variants as well as technical work in quantum foundations.
| Physical sciences | Quantum mechanics | Physics |
5869 | https://en.wikipedia.org/wiki/Category%20theory | Category theory | Category theory is a general theory of mathematical structures and their relations. It was introduced by Samuel Eilenberg and Saunders Mac Lane in the middle of the 20th century in their foundational work on algebraic topology. Category theory is used in almost all areas of mathematics. In particular, many constructions of new mathematical objects from previous ones that appear similarly in several contexts are conveniently expressed and unified in terms of categories. Examples include quotient spaces, direct products, completion, and duality.
Many areas of computer science also rely on category theory, such as functional programming and semantics.
A category is formed by two sorts of objects: the objects of the category, and the morphisms, which relate two objects called the source and the target of the morphism. Metaphorically, a morphism is an arrow that maps its source to its target. Morphisms can be composed if the target of the first morphism equals the source of the second one. Morphism composition has similar properties as function composition (associativity and existence of an identity morphism for each object). Morphisms are often some sort of functions, but this is not always the case. For example, a monoid may be viewed as a category with a single object, whose morphisms are the elements of the monoid.
The second fundamental concept of category theory is the concept of a functor, which plays the role of a morphism between two categories and : it maps objects of to objects of and morphisms of to morphisms of in such a way that sources are mapped to sources, and targets are mapped to targets (or, in the case of a contravariant functor, sources are mapped to targets and vice-versa). A third fundamental concept is a natural transformation that may be viewed as a morphism of functors.
Categories, objects, and morphisms
Categories
A category consists of the following three mathematical entities:
A class $\operatorname{ob}(C)$, whose elements are called objects;
A class $\operatorname{hom}(C)$, whose elements are called morphisms or maps or arrows. Each morphism $f$ has a source object $a$ and a target object $b$. The expression $f : a \to b$ would be verbally stated as "$f$ is a morphism from $a$ to $b$". The expression $\operatorname{hom}(a, b)$ – alternatively expressed as $\operatorname{hom}_C(a, b)$, $\operatorname{mor}(a, b)$, or $C(a, b)$ – denotes the hom-class of all morphisms from $a$ to $b$.
A binary operation $\circ$, called composition of morphisms, such that for any three objects $a$, $b$, and $c$, we have $\circ : \operatorname{hom}(b, c) \times \operatorname{hom}(a, b) \to \operatorname{hom}(a, c)$. The composition of $f : a \to b$ and $g : b \to c$ is written as $g \circ f$ or $gf$, governed by two axioms:
Associativity: If $f : a \to b$, $g : b \to c$, and $h : c \to d$, then $h \circ (g \circ f) = (h \circ g) \circ f$.
Identity: For every object $x$, there exists a morphism $1_x : x \to x$ (also denoted as $\operatorname{id}_x$) called the identity morphism for $x$, such that for every morphism $f : a \to b$, we have $1_b \circ f = f = f \circ 1_a$. From the axioms, it can be proved that there is exactly one identity morphism for every object.
Examples
The category Set
As the class of objects $\operatorname{ob}(C)$, we choose the class of all sets.
As the class of morphisms $\operatorname{hom}(C)$, we choose the class of all functions. Therefore, for two objects $A$ and $B$, i.e. sets, we have $\operatorname{hom}(A, B)$ to be the class of all functions $f$ such that $f : A \to B$.
The composition of morphisms is simply the usual function composition, i.e. for two morphisms $f : A \to B$ and $g : B \to C$, we have $g \circ f : A \to C$, $(g \circ f)(x) = g(f(x))$, which is obviously associative. Furthermore, for every object $A$ we have the identity morphism $\operatorname{id}_A$ to be the identity map $\operatorname{id}_A : A \to A$, $\operatorname{id}_A(x) = x$, on $A$.
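As a small sketch in Python (with arbitrary illustrative functions), ordinary function composition and the identity map behave exactly as the associativity and identity axioms require:

def compose(g, f):
    # composition of morphisms in Set: (g o f)(x) = g(f(x))
    return lambda x: g(f(x))

identity = lambda x: x
f = lambda n: n + 1
g = lambda n: 2 * n
h = lambda n: n - 3
x = 10

# associativity: h o (g o f) == (h o g) o f
assert compose(h, compose(g, f))(x) == compose(compose(h, g), f)(x)
# identity laws: id o f == f == f o id
assert compose(identity, f)(x) == f(x) == compose(f, identity)(x)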
Morphisms
Relations among morphisms (such as $fg = h$) are often depicted using commutative diagrams, with "points" (corners) representing objects and "arrows" representing morphisms.
Morphisms can have any of the following properties. A morphism $f : a \to b$ is:
a monomorphism (or monic) if $f \circ g_1 = f \circ g_2$ implies $g_1 = g_2$ for all morphisms $g_1, g_2 : x \to a$.
an epimorphism (or epic) if $g_1 \circ f = g_2 \circ f$ implies $g_1 = g_2$ for all morphisms $g_1, g_2 : b \to x$.
a bimorphism if f is both epic and monic.
an isomorphism if there exists a morphism $g : b \to a$ such that $f \circ g = 1_b$ and $g \circ f = 1_a$.
an endomorphism if $a = b$. end(a) denotes the class of endomorphisms of a.
an automorphism if f is both an endomorphism and an isomorphism. aut(a) denotes the class of automorphisms of a.
a retraction if a right inverse of f exists, i.e. if there exists a morphism $g : b \to a$ with $f \circ g = 1_b$.
a section if a left inverse of f exists, i.e. if there exists a morphism $g : b \to a$ with $g \circ f = 1_a$.
Every retraction is an epimorphism, and every section is a monomorphism. Furthermore, the following three statements are equivalent:
f is a monomorphism and a retraction;
f is an epimorphism and a section;
f is an isomorphism.
Functors
Functors are structure-preserving maps between categories. They can be thought of as morphisms in the category of all (small) categories.
A (covariant) functor F from a category C to a category D, written $F : C \to D$, consists of:
for each object x in C, an object F(x) in D; and
for each morphism $f : x \to y$ in C, a morphism $F(f) : F(x) \to F(y)$ in D,
such that the following two properties hold:
For every object x in C, $F(\operatorname{id}_x) = \operatorname{id}_{F(x)}$;
For all morphisms $f : x \to y$ and $g : y \to z$, $F(g \circ f) = F(g) \circ F(f)$ (a small concrete check of these two laws follows after this list).
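As an informal check (Python; treating "apply a function to every element of a list" as a functor is a standard illustration, not one drawn from this article), both functor laws can be verified on sample data:

def fmap(f, xs):
    # the list functor acting on a morphism f: apply f elementwise
    return [f(x) for x in xs]

identity = lambda x: x
f = lambda n: n + 1
g = lambda n: 2 * n
xs = [1, 2, 3]

# preserves identities: F(id) = id
assert fmap(identity, xs) == xs
# preserves composition: F(g o f) = F(g) o F(f)
assert fmap(lambda x: g(f(x)), xs) == fmap(g, fmap(f, xs))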
A contravariant functor is like a covariant functor, except that it "turns morphisms around" ("reverses all the arrows"). More specifically, every morphism $f : x \to y$ in C must be assigned to a morphism $F(f) : F(y) \to F(x)$ in D. In other words, a contravariant functor acts as a covariant functor from the opposite category $C^{\mathrm{op}}$ to D.
Natural transformations
A natural transformation is a relation between two functors. Functors often describe "natural constructions" and natural transformations then describe "natural homomorphisms" between two such constructions. Sometimes two quite different constructions yield "the same" result; this is expressed by a natural isomorphism between the two functors.
If F and G are (covariant) functors between the categories C and D, then a natural transformation η from F to G associates to every object X in C a morphism $\eta_X : F(X) \to G(X)$ in D such that for every morphism $f : X \to Y$ in C, we have $\eta_Y \circ F(f) = G(f) \circ \eta_X$; this means that the corresponding square diagram is commutative.
The two functors F and G are called naturally isomorphic if there exists a natural transformation from F to G such that ηX is an isomorphism for every object X in C.
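A concrete, informal sketch (Python, reusing the list-functor illustration above; list reversal as a natural transformation is a standard example not taken from this article): reversing a list commutes with mapping a function over it, which is exactly the naturality condition when F and G are both the list functor.

def fmap(f, xs):
    # the list functor acting on a morphism f
    return [f(x) for x in xs]

def eta(xs):
    # a candidate natural transformation from the list functor to itself: reversal
    return list(reversed(xs))

f = lambda n: n * n
xs = [1, 2, 3, 4]

# naturality: eta composed with F(f) equals G(f) composed with eta
assert eta(fmap(f, xs)) == fmap(f, eta(xs))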
Other concepts
Universal constructions, limits, and colimits
Using the language of category theory, many areas of mathematical study can be categorized. Categories include sets, groups and topologies.
Each category is distinguished by properties that all its objects have in common, such as the empty set or the product of two topologies, yet in the definition of a category, objects are considered atomic, i.e., we do not know whether an object A is a set, a topology, or any other abstract concept. Hence, the challenge is to define special objects without referring to the internal structure of those objects. To define the empty set without referring to elements, or the product topology without referring to open sets, one can characterize these objects in terms of their relations to other objects, as given by the morphisms of the respective categories. Thus, the task is to find universal properties that uniquely determine the objects of interest.
Numerous important constructions can be described in a purely categorical way if the category limit can be developed and dualized to yield the notion of a colimit.
Equivalent categories
It is a natural question to ask: under which conditions can two categories be considered essentially the same, in the sense that theorems about one category can readily be transformed into theorems about the other category? The major tool one employs to describe such a situation is called equivalence of categories, which is given by appropriate functors between two categories. Categorical equivalence has found numerous applications in mathematics.
Further concepts and results
The definitions of categories and functors provide only the very basics of categorical algebra; additional important topics are listed below. Although there are strong interrelations between all of these topics, the given order can be considered as a guideline for further reading.
The functor category $D^C$ has as objects the functors from C to D and as morphisms the natural transformations of such functors. The Yoneda lemma is one of the most famous basic results of category theory; it describes representable functors in functor categories.
Duality: Every statement, theorem, or definition in category theory has a dual which is essentially obtained by "reversing all the arrows". If one statement is true in a category C then its dual is true in the dual category $C^{\mathrm{op}}$. This duality, which is transparent at the level of category theory, is often obscured in applications and can lead to surprising relationships.
Adjoint functors: A functor can be left (or right) adjoint to another functor that maps in the opposite direction. Such a pair of adjoint functors typically arises from a construction defined by a universal property; this can be seen as a more abstract and powerful view on universal properties.
Higher-dimensional categories
Many of the above concepts, especially equivalence of categories, adjoint functor pairs, and functor categories, can be situated into the context of higher-dimensional categories. Briefly, if we consider a morphism between two objects as a "process taking us from one object to another", then higher-dimensional categories allow us to profitably generalize this by considering "higher-dimensional processes".
For example, a (strict) 2-category is a category together with "morphisms between morphisms", i.e., processes which allow us to transform one morphism into another. We can then "compose" these "bimorphisms" both horizontally and vertically, and we require a 2-dimensional "exchange law" to hold, relating the two composition laws. In this context, the standard example is Cat, the 2-category of all (small) categories, and in this example, bimorphisms of morphisms are simply natural transformations of morphisms in the usual sense. Another basic example is to consider a 2-category with a single object; these are essentially monoidal categories. Bicategories are a weaker notion of 2-dimensional categories in which the composition of morphisms is not strictly associative, but only associative "up to" an isomorphism.
This process can be extended for all natural numbers n, and these are called n-categories. There is even a notion of ω-category corresponding to the ordinal number ω.
Higher-dimensional categories are part of the broader mathematical field of higher-dimensional algebra, a concept introduced by Ronald Brown. For a conversational introduction to these ideas, see John Baez, 'A Tale of n-categories' (1996).
Historical notes
Whilst specific examples of functors and natural transformations had been given by Samuel Eilenberg and Saunders Mac Lane in a 1942 paper on group theory, these concepts were introduced in a more general sense, together with the additional notion of categories, in a 1945 paper by the same authors (who discussed applications of category theory to the field of algebraic topology). Their work was an important part of the transition from intuitive and geometric homology to homological algebra, Eilenberg and Mac Lane later writing that their goal was to understand natural transformations, which first required the definition of functors, then categories.
Stanislaw Ulam, and some writing on his behalf, have claimed that related ideas were current in the late 1930s in Poland. Eilenberg was Polish, and studied mathematics in Poland in the 1930s. Category theory is also, in some sense, a continuation of the work of Emmy Noether (one of Mac Lane's teachers) in formalizing abstract processes; Noether realized that understanding a type of mathematical structure requires understanding the processes that preserve that structure (homomorphisms). Eilenberg and Mac Lane introduced categories for understanding and formalizing the processes (functors) that relate topological structures to algebraic structures (topological invariants) that characterize them.
Category theory was originally introduced for the need of homological algebra, and widely extended for the need of modern algebraic geometry (scheme theory). Category theory may be viewed as an extension of universal algebra, as the latter studies algebraic structures, and the former applies to any kind of mathematical structure and studies also the relationships between structures of different nature. For this reason, it is used throughout mathematics. Applications to mathematical logic and semantics (categorical abstract machine) came later.
Certain categories called topoi (singular topos) can even serve as an alternative to axiomatic set theory as a foundation of mathematics. A topos can also be considered as a specific type of category with two additional topos axioms. These foundational applications of category theory have been worked out in fair detail as a basis for, and justification of, constructive mathematics. Topos theory is a form of abstract sheaf theory, with geometric origins, and leads to ideas such as pointless topology.
Categorical logic is now a well-defined field based on type theory for intuitionistic logics, with applications in functional programming and domain theory, where a cartesian closed category is taken as a non-syntactic description of a lambda calculus. At the very least, category theoretic language clarifies what exactly these related areas have in common (in some abstract sense).
Category theory has been applied in other fields as well, see applied category theory. For example, John Baez has shown a link between Feynman diagrams in physics and monoidal categories. Another application of category theory, more specifically topos theory, has been made in mathematical music theory, see for example the book The Topos of Music, Geometric Logic of Concepts, Theory, and Performance by Guerino Mazzola.
More recent efforts to introduce undergraduates to categories as a foundation for mathematics include those of William Lawvere and Rosebrugh (2003) and Lawvere and Stephen Schanuel (1997) and Mirroslav Yotov (2012).
| Mathematics | Other | null |
5872 | https://en.wikipedia.org/wiki/Bradycardia | Bradycardia | Bradycardia, also called bradyarrhythmia, is a resting heart rate under 60 beats per minute (BPM). While bradycardia can result from various pathologic processes, it is commonly a physiologic response to cardiovascular conditioning or due to asymptomatic type 1 atrioventricular block.
Resting heart rates of less than 50 BPM are often normal during sleep in young and healthy adults and athletes. In large population studies of adults without underlying heart disease, resting heart rates of 45–50 BPM appear to be the lower limits of normal, dependent on age and sex. Bradycardia is most likely to be discovered in the elderly, as age and underlying cardiac disease progression contribute to its development.
Bradycardia may be associated with symptoms of fatigue, dyspnea, dizziness, confusion, and frank syncope due to reduced forward blood flow to the brain, lungs, and skeletal muscle. The types of symptoms often depend on the etiology of the slow heart rate, classified by the anatomic location of a dysfunction within the cardiac conduction system. Generally, these classifications involve the broad categories of sinus node dysfunction (SND), atrioventricular block, and other conduction tissue diseases. However, bradycardia can also result without dysfunction of the native conduction system, arising secondary to medications including beta blockers, calcium channel blockers, antiarrhythmics, and other cholinergic drugs. Excess vagus nerve activity or carotid sinus hypersensitivity are neurological causes of transient symptomatic bradycardia. Hypothyroidism and metabolic derangements are other common extrinsic causes of bradycardia.
The management of bradycardia is generally reserved for patients with symptoms, regardless of minimum heart rate during sleep or the presence of concomitant heart rhythm abnormalities (See: Sinus pause), which are common with this condition. Untreated SND has been shown to increase the future risk of heart failure and syncope, sometimes warranting definitive treatment with an implanted pacemaker. In atrioventricular causes of bradycardia, permanent pacemaker implantation is often required when no reversible causes of disease are found. In both SND and atrioventricular blocks, there is little role for medical therapy unless a patient is hemodynamically unstable, which may require the use of medications such as atropine and isoproterenol and interventions such as transcutaneous pacing until such time that an appropriate workup can be undertaken and long-term treatment selected. While asymptomatic bradycardias rarely require treatment, consultation with a physician is recommended, especially in the elderly.
The term "relative bradycardia" can refer to a heart rate lower than expected in a particular disease state, often a febrile illness. Chronotropic incompetence (CI) refers to an inadequate rise in heart rate during periods of increased demand, often due to exercise, and is an important sign of SND and an indication for pacemaker implantation.
The word "bradycardia" is from the Greek βραδύς bradys "slow", and καρδία kardia "heart".
Normal cardiac conduction
The heart is a specialized muscle containing repeating units of cardiomyocytes, or heart muscle cells. Like most cells, cardiomyocytes maintain a highly regulated negative voltage at rest and are capable of propagating action potentials, much like neurons. While at rest, the negative cellular voltage of a cardiomyocyte can be raised above a certain threshold (so-called depolarization) by an incoming action potential, causing the myocyte to contract. When these contractions occur in a coordinated fashion, the atria and ventricles of the heart will pump, delivering blood to the rest of the body.
Normally, the origination of the action potential causing cardiomyocyte contraction originates from the sinoatrial node (SA node). This collection of specialized conduction tissue is located in the right atrium, near the entrance of the superior vena cava. The SA node contains pacemaker cells that demonstrate "automaticity" and can generate impulses that travel through the heart and create a steady heartbeat.
At the beginning of the cardiac cycle, the SA node generates an electrical action potential that spreads across the right and left atria, causing the atrial contraction of the cardiac cycle. This electrical impulse carries on to the atrioventricular node (AV node), another specialized grouping of cells located in the base of the right atrium, which is the only anatomically normal electrical connection between the atria and ventricles. Impulses coursing through the AV node are slowed before carrying on to the ventricles, allowing for appropriate filling of the ventricles before contraction. The SA and AV nodes are both closely regulated by the autonomic nervous system's fibres, allowing for adjustment of cardiac output by the central nervous system in times of increased metabolic demand.
Following slowed conduction through the atrioventricular node, the action potential produced initially at the SA node now flows through the His-Purkinje system. The bundle of His originates in the AV node and rapidly splits into a left and right branch, each destined for a different ventricle. Finally, these bundle branches terminate in the small Purkinje fibers that innervate myocardial tissue. The His-Purkinje system conducts action potentials much faster than can be propagated between myocardial cells, allowing the entire ventricular myocardium to contract in less time, improving pump function.
Classification
Most pathological causes of bradycardia result from damage to this normal cardiac conduction system at various levels: the sinoatrial node, the atrioventricular node, or damage to conduction tissue between or after these nodes.
Sinus node
Bradycardia caused by the alterations of sinus node activity is divided into three types.
Sinus bradycardia
Sinus bradycardia is a sinus rhythm of less than 50 BPM. Cardiac action potentials are generated from the SA node and propagated through an otherwise normal conduction system, but they occur at a slow rate. It is a common condition found in both healthy individuals and those considered well-conditioned athletes. Studies have found that 50–85% of conditioned athletes have benign sinus bradycardia, as compared to 23% of the general population studied. The heart muscle of athletes has a higher stroke volume, requiring fewer contractions to circulate the same volume of blood. Asymptomatic sinus bradycardia decreases in prevalence with age.
Sinus arrhythmia
Sinus arrhythmias are heart rhythm abnormalities characterized by variations in the cardiac cycle length over 120 milliseconds (longest cycle - shortest cycle). These are the most common type of arrhythmia in the general population and usually have no significant consequences. They typically occur in the young, athletes or after administration of medications such as morphine. The types of sinus arrhythmia are separated into the respiratory and non-respiratory categories.
Respiratory sinus arrhythmia
Respiratory sinus arrhythmia refers to the physiologically normal variation in heart rate due to breathing. During inspiration, vagus nerve activity decreases, reducing parasympathetic innervation of the sinoatrial node and causing an increase in heart rate. During expiration, heart rates fall due to the converse occurring.
Non-respiratory sinus arrhythmia
Non-respiratory causes of sinus arrhythmia include sinus pause, sinus arrest, and sinoatrial exit block. Sinus pause and arrest involve slowing or arresting of automatic impulse generation from the sinus node. This can lead to asystole or cardiac arrest if ventricular escape rhythms do not create backup sources of cardiac action potentials.
Sinoatrial exit block is a similar non-respiratory phenomenon of temporarily lost sinoatrial impulses. However, in contrast to a sinus pause, the action potential is still generated at the SA node but is either unable to leave or delayed from leaving the node, preventing or delaying atrial depolarization and subsequent ventricular systole. Therefore, the length of the pause in heartbeats is usually a multiple of the P-P interval, as seen on electrocardiography. Like a sinus pause, a sinoatrial exit block can be symptomatic, especially with prolonged pause length.
Sinus node dysfunction
A syndrome of intrinsic disease of the sinus node, referred to as sick sinus syndrome or sinus node dysfunction, covers conditions that include symptomatic sinus bradycardia or persistent chronotropic incompetence, sinoatrial block, sinus arrest, and tachycardia-bradycardia syndrome. These conditions can be caused by damage to the native sinus node itself and are frequently accompanied by damaged AV node conduction and reduced backup pacemaker activity. The condition can also be caused by dysfunction of the autonomic nervous system that regulates the node and is commonly exacerbated by medications.
Atrioventricular node
Bradycardia can also result from the inhibition of the flow of action potentials through the atrioventricular (AV) node. While this can be normal in young patients due to excessive vagus nerve tone, symptomatic bradycardia due to AV node dysfunction in older people is commonly due to structural heart disease, myocardial ischemia, or age-related fibrosis.
Atrioventricular block
Atrioventricular blocks are divided into three categories, ranked by severity. AV block is diagnosed via surface ECG, which is usually sufficient to locate the causal lesion of the block without the need for an invasive electrophysiology study.
In 1st degree AV block, electrical impulses originating in the SA node (or another ectopic focus above the ventricles) are conducted with significant delay through the AV node. This condition is diagnosed via ECG, with PR intervals in excess of 200 milliseconds. The PR interval represents the time between the start of atrial depolarization and the start of ventricular depolarization, reflecting the flow of electrical impulses between the SA and AV nodes. Despite the term "block", no impulses are actually lost; conduction is merely delayed. The causal lesion can lie anywhere between the AV node and the His-Purkinje system but is most commonly found in the AV node itself. Generally, isolated PR prolongation in 1st degree AV block is not associated with increased mortality or hospitalization.
2nd degree AV block is characterized by intermittently lost conduction of impulses between the SA node and the ventricles. 2nd degree block is classified into two types. Mobitz type 1 block, otherwise known by the eponym Wenckebach, classically demonstrates grouped patterns of heartbeats on ECG. Throughout the group, the PR interval gradually lengthens until a dropped conduction occurs, resulting in no QRS complex on the surface ECG following the last P wave. After a delay, the grouping repeats, with the PR interval shortening again to baseline. Type 1 2nd degree AV block due to disease in the AV node (as opposed to in the His-Purkinje system) rarely needs intervention with pacemaker implantation.
2nd degree, Mobitz type 2 AV block is another phenomenon of intermittently dropped QRS complexes after characteristic groupings of beats seen on surface ECG. The PR and RR intervals are consistent in this condition, followed by a sudden AV block and dropped QRS complex. Because type 2 blocks are typically due to lesions below the AV node, the ability for ventricular escape rhythms to maintain cardiac output is compromised. Permanent pacemaker implantation is often required.
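The following Python sketch is a deliberately simplified illustration of the ECG patterns described above (prolonged PR without dropped beats, progressive PR lengthening before a dropped beat, and a constant PR with sudden drops); it is not a clinical algorithm, and the beat data and the 20 ms PR-variation tolerance are assumptions made for the example.

```python
# Simplified illustration of the AV-block patterns described in the text; not a clinical tool.
# Each beat is (PR interval in ms, whether the impulse was conducted to the ventricles).
# The example data and the 20 ms PR-variation tolerance are assumptions for illustration.

def classify_av_block(beats: list[tuple[float, bool]]) -> str:
    conducted_pr = [pr for pr, conducted in beats if conducted]
    any_dropped = any(not conducted for _, conducted in beats)

    if not any_dropped:
        # 1st degree: every impulse conducts, but with a PR interval above 200 ms.
        return "1st degree AV block" if all(pr > 200 for pr in conducted_pr) else "no AV block"

    pr_variation = max(conducted_pr) - min(conducted_pr)
    progressively_lengthening = all(b >= a for a, b in zip(conducted_pr, conducted_pr[1:]))

    if progressively_lengthening and pr_variation > 20:
        return "2nd degree AV block, Mobitz type 1 (Wenckebach)"
    if pr_variation <= 20:
        return "2nd degree AV block, Mobitz type 2"
    return "indeterminate on this simplified rule"

wenckebach_example = [(180, True), (220, True), (280, True), (0, False)]  # last P wave not conducted
print(classify_av_block(wenckebach_example))  # 2nd degree AV block, Mobitz type 1 (Wenckebach)
```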
Junctional rhythms
An AV-junctional rhythm, or atrioventricular nodal bradycardia, is usually caused by the absence of the electrical impulse from the sinus node. This usually appears on an electrocardiogram with a normal QRS complex accompanied by an inverted P wave either before, during, or after the QRS complex.
An AV-junctional escape beat is a delayed heartbeat originating from an ectopic focus somewhere in the AV junction. It occurs when the rate of depolarization of the SA node falls below the rate of the AV node. This dysrhythmia may also occur when the electrical impulses from the SA node fail to reach the AV node because of SA or AV block. This is a protective mechanism for the heart to compensate for an SA node that is no longer handling the pacemaking activity and is one of a series of backup sites that can take over pacemaker function when the SA node fails to do so. This would present with a longer PR interval. An AV-junctional escape complex is a normal response that may result from excessive vagal tone on the SA node. Pathological causes include sinus bradycardia, sinus arrest, sinus exit block, or AV block.
Ventricular
Idioventricular rhythm, also known as atrioventricular bradycardia or ventricular escape rhythm, is a heart rate of less than 50 BPM. This is a safety mechanism that takes over when electrical impulses or stimuli from the atrium are lacking. Impulses originating within or below the bundle of His in the AV node will produce a wide QRS complex with heart rates between 20 and 40 BPM. Those above the bundle of His, also known as junctional, will typically range between 40 and 60 BPM with a narrow QRS complex. In a third-degree heart block, about 61% take place at the bundle branch-Purkinje system, 21% at the AV node, and 15% at the bundle of His. AV block may be ruled out with an ECG indicating "a 1:1 relationship between P waves and QRS complexes." Ventricular bradycardia occurs with sinus bradycardia, sinus arrest, and AV block. Treatment often consists of the administration of atropine and cardiac pacing.
Infantile
For infants, bradycardia is defined as a heart rate less than 100 BPM (normal is around 120–160 BPM). Premature babies are more likely than full-term babies to have apnea and bradycardia spells; their cause is not clearly understood. The spells may be related to centers inside the brain that regulate breathing which may not be fully developed. Touching the baby gently or rocking the incubator slightly will almost always get the baby to start breathing again, which increases the heart rate. The neonatal intensive-care unit standard practice is to electronically monitor the heart and lungs.
Causes
Bradycardia arrhythmia may have many causes, both cardiac and non-cardiac.
Non-cardiac causes are usually secondary and can involve recreational drug use or abuse; metabolic or endocrine issues, especially hypothyroidism; an electrolyte imbalance; neurological factors; autonomic reflexes; situational factors such as prolonged bed rest; and autoimmunity. Although tachycardia is the more common finding in fatty acid oxidation disorders, acute bradycardia can occasionally occur at rest.
Cardiac causes include acute or chronic ischemic heart disease, vascular heart disease, valvular heart disease, or degenerative primary electrical disease. Ultimately, the causes act by three mechanisms: depressed automaticity of the heart, conduction block, or escape pacemakers and rhythms.
In general, two types of problems result in bradycardias: disorders of the SA node and disorders of the AV node.
With SA node dysfunction (sometimes called sick sinus syndrome), there may be disordered automaticity or impaired conduction of the impulse from the SA node into the surrounding atrial tissue (an "exit block"). Second-degree sinoatrial blocks can be detected only by use of a 12-lead ECG. It is difficult and sometimes impossible to assign a mechanism to any particular bradycardia, but the underlying mechanism is not clinically relevant to treatment, which is the same in both cases of sick sinus syndrome: a permanent pacemaker.
AV conduction disturbances (AV block; primary AV block, secondary type I AV block, secondary type II AV block, tertiary AV block) may result from impaired conduction in the AV node or anywhere below it, such as in the bundle of His. AV blocks have greater clinical relevance than SA blocks.
A variety of medications can induce or exacerbate bradycardia. These include beta blockers like propranolol, calcium channel blockers like verapamil and diltiazem, cardiac glycosides like digoxin, alpha-2 agonists like clonidine, and lithium, among others. Beta blockers may slow the heart rate to a dangerous level if prescribed with calcium channel blockers.
Chronic cocaine use has been associated with bradycardia. Desensitization of β-adrenergic receptors has been suggested as a possible cause of this. In contrast to cocaine however, methamphetamine has not been associated with bradyarrhythmias.
Bradycardia is also part of the mammalian diving reflex.
COVID-19 has been found to be a cause of bradycardia.
Diagnosis
A diagnosis of bradycardia in adults is based on a heart rate of less than 60 BPM, although some studies use a heart rate of less than 50 BPM. This is usually determined either by palpation or ECG. If symptoms occur, measuring electrolytes may help determine the underlying cause.
Many healthy young adults, and particularly well-trained athletes, have sinus bradycardia that is without symptoms. This can include heart rates of less than 50 or 60 BPM, or even less than 40 BPM. Such individuals, without symptoms, do not require treatment.
Temporal correlation of symptoms with bradycardia is necessary for diagnosis of symptomatic bradycardia. This can sometimes be difficult. Challenge with oral theophylline can be used as a diagnostic agent in people with bradycardia caused by sinus node dysfunction (SND) to help correlate symptoms. Theophylline increases resting heart rate and improves subjective symptoms in most people with bradycardia due to SND.
Management
The treatment of bradycardia depends on whether the person is stable or unstable.
Chronic or stable
Emergency treatment is not needed if the person is asymptomatic or minimally symptomatic.
Treatment of chronic symptomatic bradycardia first requires that symptoms be correlated with the bradycardia. Once symptoms have been clearly linked to bradycardia, permanent cardiac pacing can be provided to increase the heart rate, which typically improves symptoms.
In people who are unwilling to undergo pacemaker implantation or are not candidates for cardiac pacing, chronic oral theophylline, an adenosine receptor antagonist, can be considered for treatment of symptomatic bradycardia. Other positive chronotropes have also been used to treat bradycardia, including the vasodilator and antihypertensive agent hydralazine, the alpha-1 blocker prazosin, anticholinergics, and sympathomimetic agents like beta-1 agonists. However, side effects, like orthostatic hypotension with hydralazine, prazosin, and anticholinergics and myocardial toxicity with sympathomimetics, as well as limited data for this indication, hinder their routine and long-term use.
If hypothyroidism is present and is the cause of symptomatic bradycardia, symptoms respond well to replacement therapy with thyroid hormone.
Discontinuation of medications that induce or exacerbate bradycardia, such as beta blockers, calcium channel blockers, sodium channel blockers, and potassium channel blockers, can improve symptoms. If discontinuation of these medications is not possible due to clinical need, cardiac pacing can be considered with continuation of the medications. Beta blockers with intrinsic sympathomimetic activity (i.e., partial agonist activity), like pindolol, have less risk of bradycardia and may be useful as replacements of pure beta blockers, like propranolol, atenolol, and metoprolol.
Acute or unstable
If a person is unstable, the initial recommended treatment is intravenous atropine. Doses of less than 0.5 mg should not be used, as they may further decrease the rate. If this is ineffective, an intravenous inotrope infusion (dopamine, epinephrine) or transcutaneous pacing should be used. Transvenous pacing may be required if the cause of the bradycardia is not rapidly reversible. Methylxanthines like theophylline and aminophylline are also used in the treatment of acute bradycardia due to sinus node dysfunction (SND).
In children, giving oxygen, supporting their breathing, and chest compressions are recommended.
Epidemiology
In clinical practice, elderly people over age 65 and young athletes of both sexes may have sinus bradycardia. The US Centers for Disease Control and Prevention reported in 2011 that 15.2% of adult males and 6.9% of adult females had clinically defined bradycardia (a resting pulse rate below 60 BPM).
Society and culture
Records
Daniel Green holds the world record for the slowest heartbeat in a healthy human, with a heart rate measured in 2014 of 26 BPM.
Martin Brady holds the Guinness world record for the slowest heart rate, with a certified rate over a minute duration of 27 BPM.
During his career, professional cyclist Miguel Indurain had a resting heart rate of 28 BPM.
| Biology and health sciences | Symptoms and signs | Health |
5876 | https://en.wikipedia.org/wiki/Coronary%20artery%20disease | Coronary artery disease | Coronary artery disease (CAD), also called coronary heart disease (CHD), or ischemic heart disease (IHD), is a type of heart disease involving the reduction of blood flow to the cardiac muscle due to a build-up of atheromatous plaque in the arteries of the heart. It is the most common of the cardiovascular diseases. CAD can cause stable angina, unstable angina, myocardial ischemia, and myocardial infarction.
A common symptom is angina, which is chest pain or discomfort which may travel into the shoulder, arm, back, neck, or jaw. Occasionally it may feel like heartburn. In stable angina, symptoms occur with exercise or emotional stress, last less than a few minutes, and improve with rest. Shortness of breath may also occur and sometimes no symptoms are present. In many cases, the first sign is a heart attack. Other complications include heart failure or an abnormal heartbeat.
Risk factors include high blood pressure, smoking, diabetes, lack of exercise, obesity, high blood cholesterol, poor diet, depression, and excessive alcohol consumption. A number of tests may help with diagnosis including: electrocardiogram, cardiac stress testing, coronary computed tomographic angiography, biomarkers (high-sensitivity cardiac troponins) and coronary angiogram, among others.
Ways to reduce CAD risk include eating a healthy diet, regularly exercising, maintaining a healthy weight, and not smoking. Medications for diabetes, high cholesterol, or high blood pressure are sometimes used. There is limited evidence for screening people who are at low risk and do not have symptoms. Treatment involves the same measures as prevention. Additional medications such as antiplatelets (including aspirin), beta blockers, or nitroglycerin may be recommended. Procedures such as percutaneous coronary intervention (PCI) or coronary artery bypass surgery (CABG) may be used in severe disease. In those with stable CAD it is unclear if PCI or CABG in addition to the other treatments improves life expectancy or decreases heart attack risk.
In 2015, CAD affected 110 million people and resulted in 8.9 million deaths. It makes up 15.6% of all deaths, making it the most common cause of death globally. The risk of death from CAD for a given age decreased between 1980 and 2010, especially in developed countries. The number of cases of CAD for a given age also decreased between 1990 and 2010. In the United States in 2010, about 20% of those over 65 had CAD, while it was present in 7% of those 45 to 64, and 1.3% of those 18 to 45; rates were higher among males than females of a given age.
Signs and symptoms
The most common symptom is chest pain or discomfort that occurs regularly with activity, after eating, or at other predictable times; this phenomenon is termed stable angina and is associated with narrowing of the arteries of the heart. Angina also includes chest tightness, heaviness, pressure, numbness, fullness, or squeezing. Angina that changes in intensity, character or frequency is termed unstable. Unstable angina may precede myocardial infarction. In adults who go to the emergency department with an unclear cause of pain, about 30% have pain due to coronary artery disease. Angina, shortness of breath, sweating, nausea or vomiting, and lightheadedness are signs of a heart attack or myocardial infarction, and immediate emergency medical services are crucial.
With advanced disease, the narrowing of coronary arteries reduces the supply of oxygen-rich blood flowing to the heart, which becomes more pronounced during strenuous activities during which the heart beats faster and has an increased oxygen demand. For some, this causes severe symptoms, while others experience no symptoms at all.
Symptoms in females
Symptoms in females can differ from those in males, and the most common symptom reported by females of all races is shortness of breath. Other symptoms more commonly reported by females than males are extreme fatigue, sleep disturbances, indigestion, and anxiety. However, some females experience irregular heartbeat, dizziness, sweating, and nausea. Burning, pain, or pressure in the chest or upper abdomen that can travel to the arm or jaw can also be experienced in females, but females less commonly report it than males. Generally, females experience symptoms 10 years later than males. Females are less likely to recognize symptoms and seek treatment.
Risk factors
Coronary artery disease is characterized by heart problems that result from atherosclerosis. Atherosclerosis is a type of arteriosclerosis which is the "chronic inflammation of the arteries which causes them to harden and accumulate cholesterol plaques (atheromatous plaques) on the artery walls". CAD has several well-determined risk factors that contribute to atherosclerosis. These risk factors for CAD include "smoking, diabetes, high blood pressure (hypertension), abnormal (high) amounts of cholesterol and other fat in the blood (dyslipidemia), type 2 diabetes and being overweight or obese (having excess body fat)" due to lack of exercise and a poor diet. Other risk factors include depression, family history, psychological stress, and excessive alcohol consumption. About half of cases are linked to genetics. Smoking and obesity are associated with about 36% and 20% of cases, respectively. Smoking just one cigarette per day about doubles the risk of CAD. Lack of exercise has been linked to 7–12% of cases. Exposure to the herbicide Agent Orange may increase risk. Rheumatologic diseases such as rheumatoid arthritis, systemic lupus erythematosus, psoriasis, and psoriatic arthritis are independent risk factors as well.
Job stress appears to play a minor role accounting for about 3% of cases. In one study, females who were free of stress from work life saw an increase in the diameter of their blood vessels, leading to decreased progression of atherosclerosis. In contrast, females who had high levels of work-related stress experienced a decrease in the diameter of their blood vessels and significantly increased disease progression. Having a type A behavior pattern, a group of personality characteristics including time urgency, competitiveness, hostility, and impatience, is linked to an increased risk of coronary disease.
Blood fats
The consumption of different types of fats including trans fat (trans unsaturated), and saturated fat, in a diet "influences the level of cholesterol that is present in the bloodstream". Unsaturated fats originate from plant sources (such as oils). There are two types of unsaturated fats, cis and trans isomers. Cis unsaturated fats are bent in molecular structure and trans are linear in structure. Saturated fats originate from animal sources (such as animal fats) and are also molecularly linear in structure. The linear configurations of unsaturated trans and saturated fats allow them to easily accumulate and stack at the arterial walls when consumed in high amounts (and other positive measures towards physical health are not met).
Fats and cholesterol are insoluble in blood and thus are amalgamated with proteins to form lipoproteins for transport. Low density lipoproteins (LDL) transport cholesterol from the liver to the rest of the body and therefore raise blood cholesterol levels. The consumption of "saturated fats increases LDL levels within the body, thus raising blood cholesterol levels".
High density lipoproteins (HDL) are considered 'good' lipoproteins as they search for excess cholesterol in the body and transport it back to the liver for disposal. Trans fats also "increase LDL levels whilst decreasing HDL levels within the body, significantly raising blood cholesterol levels".
High levels of cholesterol in the bloodstream lead to atherosclerosis. With increased levels of LDL in the bloodstream, "LDL particles will form deposits and accumulate within the arterial walls, which will lead to the development of plaques, restricting blood flow". The resultant reduction in the heart's blood supply due to atherosclerosis in coronary arteries "causes shortness of breath, angina pectoris (chest pains that are usually relieved by rest), and potentially fatal heart attacks (myocardial infarctions)".
Genetics
The heritability of coronary artery disease has been estimated between 40% and 60%. Genome-wide association studies have identified over 160 genetic susceptibility loci for coronary artery disease.
Transcriptome
Several RNA transcripts associated with CAD (FoxP1, ICOSLG, IKZF4/Eos, SMYD3, TRIM28, and TCF3/E2A) are likely markers of regulatory T cells (Tregs), consistent with known reductions in Tregs in CAD.
The RNA changes are mostly related to ciliary and endocytic transcripts, which in the circulating immune system would be related to the immune synapse. One of the most differentially expressed genes, fibromodulin (FMOD), which is increased 2.8-fold in CAD, is found mainly in connective tissue and is a modulator of the TGF-beta signaling pathway. However, not all of the RNA changes may be related to the immune synapse. For example, Nebulette, the most down-regulated transcript (2.4-fold), is found in cardiac muscle; it is a 'cytolinker' that connects actin and desmin to facilitate cytoskeletal function and vesicular movement. The endocytic pathway is further modulated by changes in tubulin, a key microtubule protein, and fidgetin, a tubulin-severing enzyme that is a marker for cardiovascular risk identified by genome-wide association study. Protein recycling would be modulated by changes in the proteasomal regulator SIAH3, and the ubiquitin ligase MARCHF10. On the ciliary aspect of the immune synapse, several of the modulated transcripts are related to ciliary length and function. Stereocilin is a partner to mesothelin, a related super-helical protein, whose transcript is also modulated in CAD. DCDC2, a double-cortin protein, is a modulator of ciliary length. In the signaling pathways of the immune synapse, there were numerous transcripts that related directly to T cell function and the control of differentiation. Butyrophilin is a co-regulator for T cell activation. Fibromodulin is a modulator of the TGF-beta signaling pathway, a primary determinant of Tre differentiation. Further impact on the TGF-beta pathway is reflected in concurrent changes in the BMP receptor 1B RNA (BMPR1B), because the bone morphogenic proteins are members of the TGF-beta superfamily, and likewise impact Treg differentiation. Several of the transcripts (TMEM98, NRCAM, SFRP5, SHISA2) are elements of the Wnt signaling pathway, which is a major determinant of Treg differentiation.
Other
Endometriosis in females under the age of 40.
Depression and hostility appear to be risks.
The number of categories of adverse childhood experiences (psychological, physical, or sexual abuse; violence against mother; or living with household members who used substances or were mentally ill, suicidal, or incarcerated) showed a graded correlation with the presence of adult diseases including coronary artery (ischemic heart) disease.
Hemostatic factors: High levels of fibrinogen and coagulation factor VII are associated with an increased risk of CAD.
Low hemoglobin.
In the Asian population, the b fibrinogen gene G-455A polymorphism was associated with the risk of CAD.
Patient-specific vessel ageing or remodelling determines endothelial cell behaviour and thus disease growth and progression. Such 'hemodynamic markers' are thus patient-specific risk surrogates.
HIV is a known risk factor for developing atherosclerosis and coronary artery disease.
Pathophysiology
Limitation of blood flow to the heart causes ischemia (cell starvation secondary to a lack of oxygen) of the heart's muscle cells. The heart's muscle cells may die from lack of oxygen and this is called a myocardial infarction (commonly referred to as a heart attack). It leads to damage, death, and eventual scarring of the heart muscle without regrowth of heart muscle cells. Chronic high-grade narrowing of the coronary arteries can induce transient ischemia which leads to the induction of a ventricular arrhythmia, which may terminate into a dangerous heart rhythm known as ventricular fibrillation, which often leads to death.
Typically, coronary artery disease occurs when part of the smooth, elastic lining inside a coronary artery (the arteries that supply blood to the heart muscle) develops atherosclerosis. With atherosclerosis, the artery's lining becomes hardened, stiffened, and accumulates deposits of calcium, fatty lipids, and abnormal inflammatory cells – to form a plaque. Calcium phosphate (hydroxyapatite) deposits in the muscular layer of the blood vessels appear to play a significant role in stiffening the arteries and inducing the early phase of coronary arteriosclerosis. This can be seen in a so-called metastatic mechanism of calciphylaxis as it occurs in chronic kidney disease and hemodialysis. Although these people have kidney dysfunction, almost fifty percent of them die due to coronary artery disease. Plaques can be thought of as large "pimples" that protrude into the channel of an artery, causing partial obstruction to blood flow. People with coronary artery disease might have just one or two plaques or might have dozens distributed throughout their coronary arteries. A more severe form is chronic total occlusion (CTO) when a coronary artery is completely obstructed for more than 3 months.
Microvascular angina is a type of angina pectoris in which chest pain and chest discomfort occur without signs of blockages in the larger coronary arteries of their hearts when an angiogram (coronary angiogram) is being performed.
The exact cause of microvascular angina is unknown. Explanations include microvascular dysfunction or epicardial atherosclerosis. For reasons that are not well understood, females are more likely than males to have it; however, hormones and other risk factors unique to females may play a role.
Diagnosis
The diagnosis of CAD depends largely on the nature of the symptoms and imaging. The first investigation when CAD is suspected is an electrocardiogram (ECG/EKG), both for stable angina and acute coronary syndrome. An X-ray of the chest, blood tests and resting echocardiography may be performed.
For stable symptomatic patients, several non-invasive tests can diagnose CAD depending on pre-assessment of the risk profile. Noninvasive imaging options include computed tomography angiography (CTA) (anatomical imaging, the best test in patients with a low-risk profile to "rule out" the disease), positron emission tomography (PET), single-photon emission computed tomography (SPECT)/nuclear stress test/myocardial scintigraphy, and stress echocardiography (the latter three can be summarized as functional noninvasive methods and are typically better suited to "rule in" disease). Exercise ECG or stress testing is inferior to non-invasive imaging methods due to the risk of false negative and false positive results. The use of non-invasive imaging is not recommended in individuals who exhibit no symptoms and are otherwise at low risk for developing coronary disease. Invasive testing with coronary angiography (ICA) can be used when non-invasive testing is inconclusive or shows a high event risk.
The diagnosis of microvascular angina (previously known as cardiac syndrome X), the rare form of coronary artery disease that is more common in females, as mentioned above, is a diagnosis of exclusion. Therefore, usually, the same tests are used as in any person suspected of having coronary artery disease:
Intravascular ultrasound
Magnetic resonance imaging (MRI)
Stable angina
Stable angina is the most common manifestation of ischemic heart disease, and is associated with reduced quality of life and increased mortality. It is caused by epicardial coronary stenosis which results in reduced blood flow and oxygen supply to the myocardium.
Stable angina is short-term chest pain during physical exertion caused by an imbalance between myocardial oxygen supply and metabolic oxygen demand. Various forms of cardiac stress tests may be used to induce both symptoms and detect changes by way of electrocardiography (using an ECG), echocardiography (using ultrasound of the heart) or scintigraphy (using uptake of radionuclide by the heart muscle). If part of the heart seems to receive an insufficient blood supply, coronary angiography may be used to identify stenosis of the coronary arteries and suitability for angioplasty or bypass surgery.
In minor to moderate cases, nitroglycerine may be used to alleviate acute symptoms of stable angina or may be used immediately before exertion to prevent the onset of angina. Sublingual nitroglycerine is most commonly used to provide rapid relief for acute angina attacks and as a complement to anti-anginal treatments in patients with refractory and recurrent angina. When nitroglycerine enters the bloodstream, it forms free radical nitric oxide, or NO, which activates guanylate cyclase and in turn stimulates the release of cyclic GMP. This molecular signaling stimulates smooth muscle relaxation, ultimately resulting in vasodilation and consequently improved blood flow to regions of the heart affected by atherosclerotic plaque.
Stable coronary artery disease (SCAD) is also often called stable ischemic heart disease (SIHD). A 2015 monograph explains that "Regardless of the nomenclature, stable angina is the chief manifestation of SIHD or SCAD." There are U.S. and European clinical practice guidelines for SIHD/SCAD. In patients with non-severe asymptomatic aortic valve stenosis and no overt coronary artery disease, the increased troponin T (above 14 pg/mL) was found associated with an increased 5-year event rate of ischemic cardiac events (myocardial infarction, percutaneous coronary intervention, or coronary artery bypass surgery).
Acute coronary syndrome
Diagnosis of acute coronary syndrome generally takes place in the emergency department, where ECGs may be performed sequentially to identify "evolving changes" (indicating ongoing damage to the heart muscle). Diagnosis is clear-cut if ECGs show elevation of the "ST segment", which in the context of severe typical chest pain is strongly indicative of an acute myocardial infarction (MI); this is termed a STEMI (ST-elevation MI) and is treated as an emergency with either urgent coronary angiography and percutaneous coronary intervention (angioplasty with or without stent insertion) or with thrombolysis ("clot buster" medication), whichever is available. In the absence of ST-segment elevation, heart damage is detected by cardiac markers (blood tests that identify heart muscle damage). If there is evidence of damage (infarction), the chest pain is attributed to a "non-ST elevation MI" (NSTEMI). If there is no evidence of damage, the term "unstable angina" is used. This process usually necessitates hospital admission and close observation on a coronary care unit for possible complications (such as cardiac arrhythmias – irregularities in the heart rate). Depending on the risk assessment, stress testing or angiography may be used to identify and treat coronary artery disease in patients who have had an NSTEMI or unstable angina.
Risk assessment
There are various risk assessment systems for determining the risk of coronary artery disease, with differing emphasis on the variables above. A notable example is the Framingham Risk Score, derived from the Framingham Heart Study. It is mainly based on age, gender, diabetes, total cholesterol, HDL cholesterol, tobacco smoking, and systolic blood pressure. When predicting risk in younger adults (18–39 years old), the Framingham Risk Score remains below 10–12% for all deciles of baseline-predicted risk.
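As a structural illustration only, the sketch below shows the general shape of such a score: a weighted combination of the listed risk factors mapped to a probability. The coefficients are made-up placeholders, not the published Framingham equation, and the output has no clinical meaning.

```python
# Structural sketch of a Framingham-style calculation: weighted risk factors mapped to a
# probability with a logistic function. All coefficients below are made-up placeholders
# for illustration; they are NOT the published Framingham equation.
import math

def illustrative_10_year_risk(age: int, male: bool, total_chol_mg_dl: float, hdl_mg_dl: float,
                              systolic_bp_mmhg: float, smoker: bool, diabetic: bool) -> float:
    score = (
        -10.0                           # placeholder intercept
        + 0.07 * age
        + 0.5 * (1 if male else 0)
        + 0.01 * total_chol_mg_dl
        - 0.02 * hdl_mg_dl
        + 0.015 * systolic_bp_mmhg
        + 0.6 * (1 if smoker else 0)
        + 0.7 * (1 if diabetic else 0)
    )
    return 1.0 / (1.0 + math.exp(-score))  # logistic transform to a 0-1 "risk"

print(f"{illustrative_10_year_risk(55, True, 220, 45, 140, True, False):.1%}")
```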
Polygenic score is another way of risk assessment. In one study the relative risk of incident coronary events was 91% higher among participants at high genetic risk than among those at low genetic risk.
Prevention
Up to 90% of cardiovascular disease may be preventable if established risk factors are avoided. Prevention involves adequate physical exercise, decreasing obesity, treating high blood pressure, eating a healthy diet, decreasing cholesterol levels, and stopping smoking. Medications and exercise are roughly equally effective. High levels of physical activity reduce the risk of coronary artery disease by about 25%. Life's Essential 8 are the key measures for improving and maintaining cardiovascular health, as defined by the American Heart Association. AHA added sleep as a factor influencing heart health in 2022.
Most guidelines recommend combining these preventive strategies. A 2015 Cochrane Review found some evidence that counseling and education to bring about behavioral change might help in high-risk groups. However, there was insufficient evidence to show an effect on mortality or actual cardiovascular events.
In diabetes mellitus, there is little evidence that very tight blood sugar control improves cardiac risk although improved sugar control appears to decrease other problems such as kidney failure and blindness.
A 2024 study published in The Lancet Diabetes & Endocrinology found that the oral glucose tolerance test (OGTT) is more effective than hemoglobin A1c (HbA1c) for detecting dysglycemia in patients with coronary artery disease. The study highlighted that 2-hour post-load glucose levels of at least 9 mmol/L were strong predictors of cardiovascular outcomes, while HbA1c levels of at least 5.9% were also significant but not independently associated when combined with OGTT results.
Diet
A diet high in fruits and vegetables decreases the risk of cardiovascular disease and death. Vegetarians have a lower risk of heart disease, possibly due to their greater consumption of fruits and vegetables. Evidence also suggests that the Mediterranean diet and a high fiber diet lower the risk.
The consumption of trans fat (commonly found in hydrogenated products such as margarine) has been shown to cause a precursor to atherosclerosis and increase the risk of coronary artery disease.
Evidence does not support a beneficial role for omega-3 fatty acid supplementation in preventing cardiovascular disease (including myocardial infarction and sudden cardiac death). There is tentative evidence that intake of menaquinone (Vitamin K2), but not phylloquinone (Vitamin K1), may reduce the risk of CAD mortality.
Secondary prevention
Secondary prevention is preventing further sequelae of already established disease. Effective lifestyle changes include:
Weight control
Smoking cessation
Avoiding the consumption of trans fats (in partially hydrogenated oils)
Decreasing psychosocial stress
Exercise
Aerobic exercise, like walking, jogging, or swimming, can reduce the risk of mortality from coronary artery disease. Aerobic exercise can help decrease blood pressure and the amount of blood cholesterol (LDL) over time. It also increases HDL cholesterol.
Although exercise is beneficial, it is unclear whether doctors should spend time counseling patients to exercise. The U.S. Preventive Services Task Force found "insufficient evidence" to recommend that doctors counsel patients on exercise but "it did not review the evidence for the effectiveness of physical activity to reduce chronic disease, morbidity, and mortality", only the effectiveness of counseling itself. The American Heart Association, based on a non-systematic review, recommends that doctors counsel patients on exercise.
Psychological symptoms are common in people with CHD, and while many psychological treatments may be offered following cardiac events, there is no evidence that they change mortality, the risk of revascularization procedures, or the rate of non-fatal myocardial infarction.
Antibiotics for secondary prevention of coronary heart disease
Early studies suggested that antibiotics might help patients with coronary disease reduce the risk of heart attacks and strokes. However, a 2021 Cochrane meta-analysis found that antibiotics given for secondary prevention of coronary heart disease are harmful, being associated with increased mortality and occurrence of stroke. The use of antibiotics is therefore not currently supported for the secondary prevention of coronary heart disease.
Neuropsychological Assessment
A systematic review found a link between CHD and brain dysfunction in females. Because research suggests that cardiovascular diseases such as CHD can act as a precursor to dementia, including Alzheimer's disease, neuropsychological assessment has been recommended for individuals with CHD.
Treatment
There are a number of treatment options for coronary artery disease:
Lifestyle changes
Medical treatment – commonly prescribed drugs (e.g., cholesterol lowering medications, beta-blockers, nitroglycerin, calcium channel blockers, etc.);
Coronary interventions as angioplasty and coronary stent;
Coronary artery bypass grafting (CABG)
Medications
Statins, which reduce cholesterol, reduce the risk of coronary artery disease
Nitroglycerin
Calcium channel blockers and/or beta-blockers
Antiplatelet drugs such as aspirin
It is recommended that blood pressure typically be reduced to less than 140/90 mmHg. The diastolic blood pressure however should not be lower than 60 mmHg. Beta-blockers are recommended first line for this use.
Aspirin
In those with no previous history of heart disease, aspirin decreases the risk of a myocardial infarction but does not change the overall risk of death. Aspirin therapy to prevent heart disease is thus recommended only in adults who are at increased risk for cardiovascular events, which may include postmenopausal females, males above 40, and younger people with risk factors for coronary heart disease, including high blood pressure, a family history of heart disease, or diabetes. The benefits outweigh the harms most favorably in people at high risk for a cardiovascular event, where high risk is defined as at least a 3% chance over a five-year period, but others with lower risk may still find the potential benefits worth the associated risks.
Anti-platelet therapy
Clopidogrel plus aspirin (dual anti-platelet therapy) reduces cardiovascular events more than aspirin alone in those with a STEMI. In others at high risk but not having an acute event, the evidence is weak. Specifically, its use does not change the risk of death in this group. In those who have had a stent, more than 12 months of clopidogrel plus aspirin does not affect the risk of death.
Surgery
Revascularization for acute coronary syndrome has a mortality benefit. Percutaneous revascularization for stable ischaemic heart disease does not appear to have benefits over medical therapy alone. In those with disease in more than one artery, coronary artery bypass grafts appear better than percutaneous coronary interventions. Newer "anaortic" or no-touch off-pump coronary artery revascularization techniques have shown reduced postoperative stroke rates comparable to percutaneous coronary intervention. Hybrid coronary revascularization has also been shown to be a safe and feasible procedure that may offer some advantages over conventional CABG though it is more expensive.
Epidemiology
As of 2010, CAD was the leading cause of death globally resulting in over 7 million deaths. This increased from 5.2 million deaths from CAD worldwide in 1990. It may affect individuals at any age but becomes dramatically more common at progressively older ages, with approximately a tripling with each decade of life. Males are affected more often than females.
The World Health Organization reported that: "The world's biggest killer is ischemic heart disease, responsible for 13% of the world's total deaths. Since 2000, the largest increase in deaths has been for this disease, rising by 2.7 million to 9.1 million deaths in 2021."
It is estimated that 60% of the world's cardiovascular disease burden will occur in the South Asian subcontinent despite only accounting for 20% of the world's population. This may be secondary to a combination of genetic predisposition and environmental factors. Organizations such as the Indian Heart Association are working with the World Heart Federation to raise awareness about this issue.
Coronary artery disease is the leading cause of death for both males and females and accounts for approximately 600,000 deaths in the United States every year. According to present trends in the United States, half of healthy 40-year-old males will develop CAD in the future, and one in three healthy 40-year-old females. It is the most common reason for death of males and females over 20 years of age in the United States.
A recent meta-analysis of data from 2,111,882 patients found that the incidence of coronary artery disease in breast cancer survivors was 4.29 (95% CI 3.09–5.94) per 1000 person-years.
Society and culture
Names
Other terms sometimes used for this condition are "hardening of the arteries" and "narrowing of the arteries". In Latin it is known as morbus ischaemicus cordis (MIC).
Support groups
The Infarct Combat Project (ICP) is an international nonprofit organization founded in 1998 which tries to decrease ischemic heart diseases through education and research.
Industry influence on research
In 2016, research into the archives of the Sugar Association, the trade association for the sugar industry in the US, revealed that it had sponsored an influential literature review published in 1965 in the New England Journal of Medicine that downplayed early findings about the role of a diet heavy in sugar in the development of CAD and emphasized the role of fat; that review influenced decades of research funding and guidance on healthy eating.
Research
Research efforts are focused on new angiogenic treatment modalities and various (adult) stem-cell therapies. A susceptibility region on chromosome 17 has been identified in families with multiple cases of myocardial infarction. Other genome-wide studies have identified a firm risk variant on chromosome 9 (9p21.3). However, these and other loci are found in intergenic segments and require further research to understand how they affect the phenotype.
A more controversial link is that between Chlamydophila pneumoniae infection and atherosclerosis. While this intracellular organism has been demonstrated in atherosclerotic plaques, evidence is inconclusive as to whether it can be considered a causative factor. Treatment with antibiotics in patients with proven atherosclerosis has not demonstrated a decreased risk of heart attacks or other coronary vascular diseases.
Myeloperoxidase has been proposed as a biomarker.
Plant-based nutrition has been suggested as a way to reverse coronary artery disease, but strong evidence is still lacking for claims of potential benefits.
Several immunosuppressive drugs targeting the chronic inflammation in coronary artery disease have been tested.
| Biology and health sciences | Cardiovascular disease | Health |
5879 | https://en.wikipedia.org/wiki/Caesium | Caesium | Caesium (IUPAC spelling; also spelled cesium in American English) is a chemical element; it has symbol Cs and atomic number 55. It is a soft, silvery-golden alkali metal with a melting point of , which makes it one of only five elemental metals that are liquid at or near room temperature. Caesium has physical and chemical properties similar to those of rubidium and potassium. It is pyrophoric and reacts with water even at . It is the least electronegative stable element, with a value of 0.79 on the Pauling scale. It has only one stable isotope, caesium-133. Caesium is mined mostly from pollucite. Caesium-137, a fission product, is extracted from waste produced by nuclear reactors. It has the largest atomic radius of all elements whose radii have been measured or calculated, at about 260 picometres.
The German chemist Robert Bunsen and physicist Gustav Kirchhoff discovered caesium in 1860 by the newly developed method of flame spectroscopy. The first small-scale applications for caesium were as a "getter" in vacuum tubes and in photoelectric cells. Caesium is widely used in highly accurate atomic clocks. In 1967, the International System of Units began using a specific hyperfine transition of neutral caesium-133 atoms to define the basic unit of time, the second.
Since the 1990s, the largest application of the element has been as caesium formate for drilling fluids, but it has a range of applications in the production of electricity, in electronics, and in chemistry. The radioactive isotope caesium-137 has a half-life of about 30 years and is used in medical applications, industrial gauges, and hydrology. Nonradioactive caesium compounds are only mildly toxic, but the pure metal's tendency to react explosively with water means that caesium is considered a hazardous material, and the radioisotopes present a significant health and environmental hazard.
Spelling
Caesium is the spelling recommended by the International Union of Pure and Applied Chemistry (IUPAC). The American Chemical Society (ACS) has used the spelling cesium since 1921, following Webster's New International Dictionary. The element was named after the Latin word caesius, meaning "bluish grey". In medieval and early modern writings caesius was spelled with the ligature æ as cæsius; hence, an alternative but now old-fashioned orthography is cæsium.
Characteristics
Physical properties
Of all elements that are solid at room temperature, caesium is the softest: it has a hardness of 0.2 Mohs. It is a very ductile, pale metal, which darkens in the presence of trace amounts of oxygen. When in the presence of mineral oil (where it is best kept during transport), it loses its metallic lustre and takes on a duller, grey appearance. It has a melting point of , making it one of the few elemental metals that are liquid near room temperature. The others are rubidium (), francium (estimated at ), mercury (), and gallium (); bromine is also liquid at room temperature (melting at ), but it is a halogen and not a metal. Mercury is the only stable elemental metal with a known melting point lower than caesium. In addition, the metal has a rather low boiling point, , the lowest of all stable metals other than mercury. Copernicium and flerovium have been predicted to have lower boiling points than mercury and caesium, but they are extremely radioactive and it is not certain if they are metals.
Caesium forms alloys with the other alkali metals, gold, and mercury (amalgams). At temperatures below , it does not alloy with cobalt, iron, molybdenum, nickel, platinum, tantalum, or tungsten. It forms well-defined intermetallic compounds with antimony, gallium, indium, and thorium, which are photosensitive. It mixes with all the other alkali metals (except lithium); the alloy with a molar distribution of 41% caesium, 47% potassium, and 12% sodium has the lowest melting point of any known metal alloy, at . A few amalgams have been studied: is black with a purple metallic lustre, while CsHg is golden-coloured, also with a metallic lustre.
The golden colour of caesium comes from the decreasing frequency of light required to excite electrons of the alkali metals as the group is descended. For lithium through rubidium this frequency is in the ultraviolet, but for caesium it enters the blue–violet end of the spectrum; in other words, the plasmonic frequency of the alkali metals becomes lower from lithium to caesium. Thus caesium transmits and partially absorbs violet light preferentially while other colours (having lower frequency) are reflected; hence it appears yellowish. Its compounds burn with a blue or violet colour.
Allotropes
Caesium exists in the form of different allotropes, one of them a dimer called dicaesium.
Chemical properties
Caesium metal is highly reactive and pyrophoric. It ignites spontaneously in air, and reacts explosively with water even at low temperatures, more so than the other alkali metals. It reacts with ice at temperatures as low as . Because of this high reactivity, caesium metal is classified as a hazardous material. It is stored and shipped in dry, saturated hydrocarbons such as mineral oil. It can be handled only under inert gas, such as argon. However, a caesium-water explosion is often less powerful than a sodium-water explosion with a similar amount of sodium. This is because caesium explodes instantly upon contact with water, leaving little time for hydrogen to accumulate. Caesium can be stored in vacuum-sealed borosilicate glass ampoules. In quantities of more than about , caesium is shipped in hermetically sealed, stainless steel containers.
The chemistry of caesium is similar to that of other alkali metals, in particular rubidium, the element above caesium in the periodic table. As expected for an alkali metal, the only common oxidation state is +1. It differs from this value in caesides, which contain the Cs− anion and thus have caesium in the −1 oxidation state. Under conditions of extreme pressure (greater than 30 GPa), theoretical studies indicate that the inner 5p electrons could form chemical bonds, where caesium would behave as the seventh 5p element, suggesting that higher caesium fluorides with caesium in oxidation states from +2 to +6 could exist under such conditions. Some slight differences arise from the fact that it has a higher atomic mass and is more electropositive than other (nonradioactive) alkali metals. Caesium is the most electropositive chemical element. The caesium ion is also larger and less "hard" than those of the lighter alkali metals.
Compounds
Most caesium compounds contain the element as the cation , which binds ionically to a wide variety of anions. One noteworthy exception is the caeside anion (), and others are the several suboxides (see section on oxides below). More recently, caesium is predicted to behave as a p-block element and capable of forming higher fluorides with higher oxidation states (i.e., CsFn with n > 1) under high pressure. This prediction needs to be validated by further experiments.
Salts of Cs+ are usually colourless unless the anion itself is coloured. Many of the simple salts are hygroscopic, but less so than the corresponding salts of lighter alkali metals. The phosphate, acetate, carbonate, halides, oxide, nitrate, and sulfate salts are water-soluble. Its double salts are often less soluble, and the low solubility of caesium aluminium sulfate is exploited in refining Cs from ores. The double salts with antimony (such as ), bismuth, cadmium, copper, iron, and lead are also poorly soluble.
Caesium hydroxide (CsOH) is hygroscopic and strongly basic. It rapidly etches the surface of semiconductors such as silicon. CsOH was previously regarded by chemists as the "strongest base", reflecting the relatively weak attraction between the large Cs+ ion and OH−; it is indeed the strongest Arrhenius base. However, a number of compounds such as n-butyllithium, sodium amide, sodium hydride, and caesium hydride, which cannot be dissolved in water because they react violently with it and are instead used only in certain anhydrous polar aprotic solvents, are far more basic on the basis of the Brønsted–Lowry acid–base theory.
A stoichiometric mixture of caesium and gold will react to form yellow caesium auride (Cs+Au−) upon heating. The auride anion here behaves as a pseudohalogen. The compound reacts violently with water, yielding caesium hydroxide, metallic gold, and hydrogen gas; in liquid ammonia it can be reacted with a caesium-specific ion exchange resin to produce tetramethylammonium auride. The analogous platinum compound, red caesium platinide (), contains the platinide ion that behaves as a .
Complexes
Like all metal cations, Cs+ forms complexes with Lewis bases in solution. Because of its large size, Cs+ usually adopts coordination numbers greater than 6, the number typical for the smaller alkali metal cations. This difference is apparent in the 8-coordination of CsCl. This high coordination number and softness (tendency to form covalent bonds) are properties exploited in separating Cs+ from other cations in the remediation of nuclear wastes, where 137Cs+ must be separated from large amounts of nonradioactive K+.
Halides
Caesium fluoride (CsF) is a hygroscopic white solid that is widely used in organofluorine chemistry as a source of fluoride anions. Caesium fluoride has the halite structure, which means that the Cs+ and F− pack in a cubic closest packed array as do Na+ and Cl− in sodium chloride. Notably, caesium and fluorine have the lowest and highest electronegativities, respectively, among all the known elements.
Caesium chloride (CsCl) crystallizes in the simple cubic crystal system. Also called the "caesium chloride structure", this structural motif is composed of a primitive cubic lattice with a two-atom basis, each with an eightfold coordination; the chloride atoms lie upon the lattice points at the edges of the cube, while the caesium atoms lie in the holes in the centre of the cubes. This structure is shared with CsBr and CsI, and many other compounds that do not contain Cs. In contrast, most other alkali halides have the sodium chloride (NaCl) structure. The CsCl structure is preferred because Cs+ has an ionic radius of 174 pm and Cl− one of 181 pm.
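A small worked example, assuming a simple hard-sphere picture, shows how these radii relate to the structure: in the CsCl arrangement the cation sits at the body centre of the cube, so the nearest-neighbour cation-anion distance is (√3/2)·a, and a radius ratio r+/r− above about 0.73 favours eightfold coordination. The Python sketch below uses only the two radii quoted above.

```python
# Sketch: relate the quoted ionic radii to the CsCl structure under a hard-sphere model.
# In the CsCl structure the nearest cation-anion distance equals half the body diagonal,
# i.e. (sqrt(3)/2) * a, so the lattice parameter is a = 2 * (r_cation + r_anion) / sqrt(3).
import math

R_CS = 174.0  # pm, ionic radius of Cs+ quoted in the text
R_CL = 181.0  # pm, ionic radius of Cl- quoted in the text

radius_ratio = R_CS / R_CL
lattice_parameter_pm = 2 * (R_CS + R_CL) / math.sqrt(3)

print(f"radius ratio r+/r- = {radius_ratio:.2f}  (above ~0.73, favouring 8-fold coordination)")
print(f"estimated lattice parameter a = {lattice_parameter_pm:.0f} pm")
```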
Oxides
More so than the other alkali metals, caesium forms numerous binary compounds with oxygen. When caesium burns in air, the superoxide is the main product. The "normal" caesium oxide () forms yellow-orange hexagonal crystals, and is the only oxide of the anti- type. It vaporizes at , and decomposes to caesium metal and the peroxide at temperatures above . In addition to the superoxide and the ozonide , several brightly coloured suboxides have also been studied. These include , , , (dark-green), CsO, , as well as . The latter may be heated in a vacuum to generate . Binary compounds with sulfur, selenium, and tellurium also exist.
Isotopes
Caesium has 41 known isotopes, ranging in mass number (i.e. number of nucleons in the nucleus) from 112 to 152. Several of these are synthesized from lighter elements by the slow neutron capture process (S-process) inside old stars and by the R-process in supernova explosions. The only stable caesium isotope is 133Cs, with 78 neutrons. Although it has a large nuclear spin (+), nuclear magnetic resonance studies can use this isotope at a resonating frequency of 11.7 MHz.
The radioactive 135Cs has a very long half-life of about 2.3 million years, the longest of all radioactive isotopes of caesium. 137Cs and 134Cs have half-lives of 30 and two years, respectively. 137Cs decomposes to a short-lived 137mBa by beta decay, and then to nonradioactive barium, while 134Cs transforms into 134Ba directly. The isotopes with mass numbers of 129, 131, 132 and 136, have half-lives between a day and two weeks, while most of the other isotopes have half-lives from a few seconds to fractions of a second. At least 21 metastable nuclear isomers exist. Other than 134mCs (with a half-life of just under 3 hours), all are very unstable and decay with half-lives of a few minutes or less.
The isotope 135Cs is one of the long-lived fission products of uranium produced in nuclear reactors. However, this fission product yield is reduced in most reactors because the predecessor, 135Xe, is a potent neutron poison and frequently transmutes to stable 136Xe before it can decay to 135Cs.
The beta decay from 137Cs to 137mBa results in gamma radiation as the 137mBa relaxes to ground state 137Ba, with the emitted photons having an energy of 0.6617 MeV. 137Cs and 90Sr are the principal medium-lived products of nuclear fission, and the prime sources of radioactivity from spent nuclear fuel after several years of cooling, lasting several hundred years. Those two isotopes are the largest source of residual radioactivity in the area of the Chernobyl disaster. Because of the low capture rate, disposing of 137Cs through neutron capture is not feasible and the only current solution is to allow it to decay over time.
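The statement that this radioactivity lasts several hundred years follows directly from exponential decay with the roughly 30-year half-life quoted earlier. A minimal Python sketch, using only that half-life:

```python
# Sketch: fraction of caesium-137 remaining after t years, using the ~30-year half-life
# quoted in the text. Standard exponential decay: N(t) = N0 * 0.5 ** (t / half_life).

CS137_HALF_LIFE_YEARS = 30.0  # approximate value given in the article

def remaining_fraction(years: float, half_life_years: float = CS137_HALF_LIFE_YEARS) -> float:
    return 0.5 ** (years / half_life_years)

for t in (30, 100, 300):
    print(f"after {t:>3} years: {remaining_fraction(t):.2%} of the original 137Cs remains")
```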
Almost all caesium produced from nuclear fission comes from the beta decay of originally more neutron-rich fission products, passing through various isotopes of iodine and xenon. Because iodine and xenon are volatile and can diffuse through nuclear fuel or air, radioactive caesium is often created far from the original site of fission. With nuclear weapons testing in the 1950s through the 1980s, 137Cs was released into the atmosphere and returned to the surface of the earth as a component of radioactive fallout. It is a ready marker of the movement of soil and sediment from those times.
Occurrence
Caesium is a relatively rare element, estimated to average 3 parts per million in the Earth's crust. It is the 45th most abundant element and 36th among the metals. Caesium is 30 times less abundant than rubidium, with which it is closely associated, chemically.
Due to its large ionic radius, caesium is one of the "incompatible elements". During magma crystallization, caesium is concentrated in the liquid phase and crystallizes last. Therefore, the largest deposits of caesium are zoned pegmatite ore bodies formed by this enrichment process. Because caesium does not substitute for potassium as readily as rubidium does, the alkali evaporite minerals sylvite (KCl) and carnallite (KMgCl3·6H2O) may contain only 0.002% caesium. Consequently, caesium is found in few minerals. Percentage amounts of caesium may be found in beryl (Be3Al2(SiO3)6) and avogadrite ((K,Cs)BF4), up to 15 wt% Cs2O in the closely related mineral pezzottaite, up to 8.4 wt% Cs2O in the rare mineral londonite, and less in the more widespread rhodizite. The only economically important ore for caesium is pollucite (Cs(AlSi2O6)), which is found in a few places around the world in zoned pegmatites, associated with the more commercially important lithium minerals lepidolite and petalite. Within the pegmatites, the large grain size and the strong separation of the minerals result in high-grade ore for mining.
The world's most significant and richest known source of caesium is the Tanco Mine at Bernic Lake in Manitoba, Canada, estimated to contain 350,000 metric tons of pollucite ore, representing more than two-thirds of the world's reserve base. Although the stoichiometric content of caesium in pollucite is 42.6%, pure pollucite samples from this deposit contain only about 34% caesium, while the average content is 24 wt%. Commercial pollucite contains more than 19% caesium. The Bikita pegmatite deposit in Zimbabwe is mined for its petalite, but it also contains a significant amount of pollucite. Another notable source of pollucite is in the Karibib Desert, Namibia. At the present rate of world mine production of 5 to 10 metric tons per year, reserves will last for thousands of years.
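As a rough check of the 42.6% figure, one can compute the caesium mass fraction of the idealized anhydrous pollucite formula CsAlSi2O6 (natural pollucite also contains sodium and water, which lowers the real-world content), using approximate atomic masses:
    M(CsAlSi2O6) ≈ 132.9 + 27.0 + 2(28.1) + 6(16.0) ≈ 312 g/mol,  w(Cs) ≈ 132.9/312 ≈ 0.426.
Mined ore therefore grades well below this stoichiometric ceiling, as the 34% and 24 wt% values above show.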
Production
Mining and refining pollucite ore is a selective process and is conducted on a smaller scale than for most other metals. The ore is crushed, hand-sorted, but not usually concentrated, and then ground. Caesium is then extracted from pollucite primarily by three methods: acid digestion, alkaline decomposition, and direct reduction.
In acid digestion, the silicate pollucite rock is dissolved with strong acids, such as hydrochloric (HCl), sulfuric (H2SO4), hydrobromic (HBr), or hydrofluoric (HF) acids. With hydrochloric acid, a mixture of soluble chlorides is produced, and the insoluble chloride double salts of caesium are precipitated as caesium antimony chloride, caesium iodine chloride, or caesium hexachlorocerate. After separation, the pure precipitated double salt is decomposed, and pure CsCl is recovered by evaporating the water.
The sulfuric acid method yields the insoluble double salt directly as caesium alum (CsAl(SO4)2·12H2O). The aluminium sulfate component is converted to insoluble aluminium oxide by roasting the alum with carbon, and the resulting product is leached with water to yield a caesium sulfate (Cs2SO4) solution.
Roasting pollucite with calcium carbonate and calcium chloride yields insoluble calcium silicates and soluble caesium chloride. Leaching with water or dilute ammonia yields a dilute caesium chloride (CsCl) solution. This solution can be evaporated to produce caesium chloride or transformed into caesium alum or caesium carbonate. Though not commercially feasible, the ore can also be directly reduced with potassium, sodium, or calcium in vacuum to produce caesium metal.
Most of the mined caesium (as salts) is directly converted into caesium formate (HCOO−Cs+) for applications such as oil drilling. To supply the developing market, Cabot Corporation built a plant in 1997 at the Tanco mine near Bernic Lake in Manitoba to produce caesium formate solution. The primary smaller-scale commercial compounds of caesium are caesium chloride and caesium nitrate.
Alternatively, caesium metal may be obtained from the purified compounds derived from the ore. Caesium chloride and the other caesium halides can be reduced at high temperature with calcium or barium, and the caesium metal is then distilled from the result. In the same way, the aluminate, carbonate, or hydroxide may be reduced by magnesium.
The metal can also be isolated by electrolysis of fused caesium cyanide (CsCN). Exceptionally pure and gas-free caesium can be produced by thermal decomposition of caesium azide (CsN3), which can be prepared from aqueous caesium sulfate and barium azide. In vacuum applications, caesium dichromate (Cs2Cr2O7) can be reacted with zirconium to produce pure caesium metal without other gaseous products:
Cs2Cr2O7 + 2 Zr → 2 Cs + 2 ZrO2 + Cr2O3
As of 2009, 99.8% pure caesium (metal basis) was costly; the compounds are significantly cheaper.
History
In 1860, Robert Bunsen and Gustav Kirchhoff discovered caesium in the mineral water from Dürkheim, Germany. Because of the bright blue lines in its emission spectrum, they derived the name from the Latin word caesius, meaning sky blue. Caesium was the first element to be discovered with a spectroscope, which had been invented by Bunsen and Kirchhoff only a year previously.
To obtain a pure sample of caesium, 44,000 litres of mineral water had to be evaporated to yield a concentrated salt solution. The alkaline earth metals were precipitated either as sulfates or oxalates, leaving the alkali metals in the solution. After conversion to the nitrates and extraction with ethanol, a sodium-free mixture was obtained. From this mixture, the lithium was precipitated by ammonium carbonate. Potassium, rubidium, and caesium form insoluble salts with chloroplatinic acid, but these salts show a slight difference in solubility in hot water, and the less-soluble caesium and rubidium hexachloroplatinates were obtained by fractional crystallization. After reduction of the hexachloroplatinate with hydrogen, caesium and rubidium were separated by the difference in solubility of their carbonates in alcohol. The process yielded only small amounts of rubidium chloride and caesium chloride from the 44,000 litres of mineral water.
From the caesium chloride, the two scientists estimated the atomic weight of the new element at 123.35 (compared to the currently accepted value of 132.9). They tried to generate elemental caesium by electrolysis of molten caesium chloride, but instead of a metal, they obtained a blue homogeneous substance which "neither under the naked eye nor under the microscope showed the slightest trace of metallic substance"; as a result, they assigned it as a subchloride (Cs2Cl). In reality, the product was probably a colloidal mixture of the metal and caesium chloride. The electrolysis of an aqueous solution of the chloride with a mercury cathode produced a caesium amalgam, which readily decomposed under the aqueous conditions. The pure metal was eventually isolated by the Swedish chemist Carl Setterberg while working on his doctorate with Kekulé and Bunsen. In 1882, he produced caesium metal by electrolysing caesium cyanide, avoiding the problems with the chloride.
Historically, the most important use for caesium has been in research and development, primarily in chemical and electrical fields. Very few applications existed for caesium until the 1920s, when it came into use in radio vacuum tubes, where it had two functions: as a getter, it removed excess oxygen after manufacture, and as a coating on the heated cathode, it increased the electrical conductivity. Caesium was not recognized as a high-performance industrial metal until the 1950s. Applications for nonradioactive caesium included photoelectric cells, photomultiplier tubes, optical components of infrared spectrophotometers, catalysts for several organic reactions, crystals for scintillation counters, and magnetohydrodynamic power generators. Caesium is also used as a source of positive ions in secondary ion mass spectrometry (SIMS).
Since 1967, the International System of Units has based its primary unit of time, the second, on the properties of caesium. The SI defines the second as the duration of 9,192,631,770 cycles of the microwave radiation corresponding to the transition between two hyperfine energy levels of the ground state of caesium-133. The 13th General Conference on Weights and Measures of 1967 defined a second as: "the duration of 9,192,631,770 cycles of microwave light absorbed or emitted by the hyperfine transition of caesium-133 atoms in their ground state undisturbed by external fields".
Applications
Petroleum exploration
The largest present-day use of nonradioactive caesium is in caesium formate drilling fluids for the extractive oil industry. Aqueous solutions of caesium formate (HCOO−Cs+)—made by reacting caesium hydroxide with formic acid—were developed in the mid-1990s for use as oil well drilling and completion fluids. The function of a drilling fluid is to lubricate drill bits, to bring rock cuttings to the surface, and to maintain pressure on the formation during drilling of the well. Completion fluids assist the emplacement of control hardware after drilling but prior to production by maintaining the pressure.
The high density of the caesium formate brine (up to 2.3 g/cm3, or 19.2 pounds per gallon), coupled with the relatively benign nature of most caesium compounds, reduces the requirement for toxic high-density suspended solids in the drilling fluid; this is a significant technological, engineering and environmental advantage. Unlike the components of many other heavy liquids, caesium formate is relatively environment-friendly. Caesium formate brine can be blended with potassium and sodium formates to decrease the density of the fluids to that of water (1.0 g/cm3, or 8.3 pounds per gallon). Furthermore, it is biodegradable and may be recycled, which is important in view of its high cost (about $4,000 per barrel in 2001). Alkali formates are safe to handle and do not damage the producing formation or downhole metals as corrosive alternative high-density brines (such as zinc bromide solutions) sometimes do; they also require less cleanup and reduce disposal costs.
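As a quick unit check on those densities: 1 g/cm3 corresponds to about 8.34 pounds per US gallon (taking 1 US gallon as 3.785 L and 1 pound as 453.6 g), so
    2.3 g/cm3 × 8.34 (lb/gal)/(g/cm3) ≈ 19.2 lb/gal,
matching the figure quoted above, while plain water comes out at about 8.3 lb/gal.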
Atomic clocks
Caesium-based atomic clocks use the electromagnetic transitions in the hyperfine structure of caesium-133 atoms as a reference point. The first accurate caesium clock was built by Louis Essen in 1955 at the National Physical Laboratory in the UK. Caesium clocks have improved over the past half-century and are regarded as "the most accurate realization of a unit that mankind has yet achieved." These clocks measure frequency with an error of 2 to 3 parts in 10^14, which corresponds to an accuracy of about 2 nanoseconds per day, or one second in 1.4 million years. The latest versions are more accurate than 1 part in 10^15, about 1 second in 20 million years. The caesium standard is the primary standard for standards-compliant time and frequency measurements. Caesium clocks regulate the timing of cell phone networks and the Internet.
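Those figures are mutually consistent: a fractional frequency error of 2×10^-14 accumulates roughly 2×10^-14 × 86,400 s ≈ 1.7 ns of timing error per day, and a full second of error only after about 1/(2×10^-14) ≈ 5×10^13 s, i.e. around 1.6 million years; the slightly different values quoted above simply correspond to an assumed error between 2 and 3 parts in 10^14.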
Definition of the second
The second, symbol s, is the SI unit of time. The BIPM restated its definition at its 26th conference in 2018: "[The second] is defined by taking the fixed numerical value of the caesium frequency ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9,192,631,770 when expressed in the unit Hz, which is equal to s−1."
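Reading that definition as a formula: since ΔνCs = 9,192,631,770 Hz exactly, one second is the time taken for 9,192,631,770 periods of this hyperfine radiation, and a single period lasts
    1 / (9,192,631,770 Hz) ≈ 1.088×10^-10 s ≈ 108.8 picoseconds.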
Electric power and electronics
Caesium vapour thermionic generators are low-power devices that convert heat energy to electrical energy. In the two-electrode vacuum tube converter, caesium neutralizes the space charge near the cathode and enhances the current flow.
Caesium is also important for its photoemissive properties, converting light to electron flow. It is used in photoelectric cells because caesium-based cathodes, such as the intermetallic compound K2CsSb, have a low threshold voltage for emission of electrons. The range of photoemissive devices using caesium includes optical character recognition devices, photomultiplier tubes, and video camera tubes. Nevertheless, germanium, rubidium, selenium, silicon, tellurium, and several other elements can be substituted for caesium in photosensitive materials.
Caesium iodide (CsI), bromide (CsBr) and fluoride (CsF) crystals are employed as scintillators in scintillation counters widely used in mineral exploration and particle physics research to detect gamma and X-ray radiation. Being a heavy element, caesium provides good stopping power, improving detection efficiency. Caesium compounds may provide a faster response (CsF) and be less hygroscopic (CsI).
Caesium vapour is used in many common magnetometers.
The element is used as an internal standard in spectrophotometry. Like other alkali metals, caesium has a great affinity for oxygen and is used as a "getter" in vacuum tubes. Other uses of the metal include high-energy lasers, vapour glow lamps, and vapour rectifiers.
Centrifugation fluids
The high density of the caesium ion makes solutions of caesium chloride, caesium sulfate, and caesium trifluoroacetate useful in molecular biology for density gradient ultracentrifugation. This technology is used primarily in the isolation of viral particles, subcellular organelles and fractions, and nucleic acids from biological samples.
Chemical and medical use
Relatively few chemical applications use caesium. Doping with caesium compounds enhances the effectiveness of several metal-ion catalysts used in the synthesis of chemicals such as acrylic acid, anthraquinone, ethylene oxide, methanol, phthalic anhydride, styrene, methyl methacrylate monomers, and various olefins. It is also used in the catalytic conversion of sulfur dioxide into sulfur trioxide in the production of sulfuric acid.
Caesium fluoride enjoys a niche use in organic chemistry as a base and as an anhydrous source of fluoride ion. Caesium salts sometimes replace potassium or sodium salts in organic synthesis, such as cyclization, esterification, and polymerization. Caesium has also been used in thermoluminescent radiation dosimetry (TLD): When exposed to radiation, it acquires crystal defects that, when heated, revert with emission of light proportionate to the received dose. Thus, measuring the light pulse with a photomultiplier tube can allow the accumulated radiation dose to be quantified.
Nuclear and isotope applications
Caesium-137 is a radioisotope commonly used as a gamma-emitter in industrial applications. Its advantages include a half-life of roughly 30 years, its availability from the nuclear fuel cycle, and having 137Ba as a stable end product. The high water solubility is a disadvantage which makes it incompatible with large pool irradiators for food and medical supplies. It has been used in agriculture, cancer treatment, and the sterilization of food, sewage sludge, and surgical equipment. Radioactive isotopes of caesium in radiation devices were used in the medical field to treat certain types of cancer, but emergence of better alternatives and the use of water-soluble caesium chloride in the sources, which could create wide-ranging contamination, gradually put some of these caesium sources out of use. Caesium-137 has been employed in a variety of industrial measurement gauges, including moisture, density, levelling, and thickness gauges. It has also been used in well logging devices for measuring the electron density of the rock formations, which is analogous to the bulk density of the formations.
Caesium-137 has been used in hydrologic studies analogous to those with tritium. As a daughter product of fission bomb testing from the 1950s through the mid-1980s, caesium-137 was released into the atmosphere, where it was absorbed readily into solution. Known year-to-year variation within that period allows correlation with soil and sediment layers. Caesium-134, and to a lesser extent caesium-135, have also been used in hydrology to measure the caesium output by the nuclear power industry. While they are less prevalent than either caesium-133 or caesium-137, these bellwether isotopes are produced solely from anthropogenic sources.
Other uses
Caesium and mercury were used as propellants in early ion engines designed for spacecraft propulsion on very long interplanetary or extraplanetary missions. The fuel was ionized by contact with a charged tungsten electrode. However, corrosion of spacecraft components by caesium has pushed development toward inert gas propellants, such as xenon, which are easier to handle in ground-based tests and do less potential damage to the spacecraft. Xenon was used in the experimental spacecraft Deep Space 1, launched in 1998. Nevertheless, field-emission electric propulsion thrusters that accelerate liquid metal ions such as caesium have been built.
Caesium nitrate is used as an oxidizer and pyrotechnic colorant to burn silicon in infrared flares, such as the LUU-19 flare, because it emits much of its light in the near infrared spectrum. Caesium compounds may have been used as fuel additives to reduce the radar signature of exhaust plumes in the Lockheed A-12 CIA reconnaissance aircraft. Caesium and rubidium have been added as a carbonate to glass because they reduce electrical conductivity and improve stability and durability of fibre optics and night vision devices. Caesium fluoride or caesium aluminium fluoride are used in fluxes formulated for brazing aluminium alloys that contain magnesium.
Magnetohydrodynamic (MHD) power-generating systems were researched, but failed to gain widespread acceptance. Caesium metal has also been considered as the working fluid in high-temperature Rankine cycle turboelectric generators.
Caesium salts have been evaluated as antishock reagents following the administration of arsenical drugs. Because of their effect on heart rhythms, however, they are less likely to be used than potassium or rubidium salts. They have also been used to treat epilepsy.
Caesium-133 can be laser cooled and used to probe fundamental and technological problems in quantum physics. It has a particularly convenient Feshbach spectrum to enable studies of ultracold atoms requiring tunable interactions.
Health and safety hazards
Nonradioactive caesium compounds are only mildly toxic, and nonradioactive caesium is not a significant environmental hazard. Because biochemical processes can confuse caesium with potassium and substitute it in place of potassium, excess caesium can lead to hypokalemia, arrhythmia, and acute cardiac arrest, but such amounts would not ordinarily be encountered in natural sources.
The median lethal dose (LD50) for caesium chloride in mice is 2.3 g per kilogram, which is comparable to the LD50 values of potassium chloride and sodium chloride. The principal use of nonradioactive caesium is as caesium formate in petroleum drilling fluids because it is much less toxic than alternatives, though it is more costly.
Caesium is one of the most reactive elements and is highly explosive in the presence of water. The hydrogen gas produced by the reaction is heated by the thermal energy released at the same time, causing ignition and a violent explosion. This can occur with other alkali metals, but caesium is so potent that this explosive reaction can be triggered even by cold water.
It is highly pyrophoric: caesium ignites spontaneously in air, burning explosively to form caesium hydroxide and various oxides. Caesium hydroxide is a very strong base and will rapidly corrode glass.
The isotopes 134 and 137 are present in the biosphere in small amounts from human activities, differing by location. Radiocaesium does not accumulate in the body as readily as other fission products (such as radioiodine and radiostrontium). About 10% of absorbed radiocaesium washes out of the body relatively quickly in sweat and urine. The remaining 90% has a biological half-life between 50 and 150 days. Radiocaesium follows potassium and tends to accumulate in plant tissues, including fruits and vegetables. Plants vary widely in the absorption of caesium, sometimes displaying great resistance to it. It is also well-documented that mushrooms from contaminated forests accumulate radiocaesium (caesium-137) in the fungal sporocarps. Accumulation of caesium-137 in lakes has been a great concern after the Chernobyl disaster. Experiments with dogs showed that a single dose of 3.8 millicuries (140 MBq, 4.1 μg of caesium-137) per kilogram is lethal within three weeks; smaller amounts may cause infertility and cancer. The International Atomic Energy Agency and other sources have warned that radioactive materials, such as caesium-137, could be used in radiological dispersion devices, or "dirty bombs".
| Physical sciences | Chemical elements_2 | null |
5881 | https://en.wikipedia.org/wiki/Century | Century | A century is a period of 100 years or 10 decades. Centuries are numbered ordinally in English and many other languages. The word century comes from the Latin centum, meaning one hundred. Century is sometimes abbreviated as c.
A centennial or centenary is a hundredth anniversary, or a celebration of this, typically the remembrance of an event which took place a hundred years earlier.
Start and end of centuries
Although a century can mean any arbitrary period of 100 years, there are two viewpoints on the nature of standard centuries. One is based on strict construction, while the other is based on popular perception.
According to the strict construction, the 1st century AD, which began with AD 1, ended with AD 100, and the 2nd century with AD 200; in this model, the n-th century starts with the year 100(n − 1) + 1, i.e. the year that follows a multiple of 100 (except the 1st century, which began with AD 1, immediately after 1 BC), and ends with the next multiple of 100 (the year 100n). Thus the 20th century comprises the years 1901 to 2000, and the 21st century comprises the years 2001 to 2100 in strict usage.
In common perception and practice, centuries are structured by grouping years based on sharing the 'hundreds' digit(s). In this model, the n-th century starts with the year that ends in "00" and ends with the year ending in "99"; for example, in popular culture, the years 1900 to 1999 constitute the 20th century, and the years 2000 to 2099 constitute the 21st century. (This is similar to the grouping of "0-to-9 decades" which share the 'tens' digit.)
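The two conventions differ only at years ending in 00, as a small Python sketch (illustrative only, and valid only for positive AD years; the function names are not standard) makes explicit:

    def century_strict(year):
        # Strict construction: years 1-100 are the 1st century, 1901-2000 the 20th.
        return (year + 99) // 100

    def century_popular(year):
        # Popular usage groups years by their 'hundreds' digits: 1900-1999 form the 20th.
        return year // 100 + 1

    print(century_strict(2000), century_popular(2000))  # 20 21
    print(century_strict(1999), century_popular(1999))  # 20 20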
To facilitate calendrical calculations by computer, the astronomical year numbering and ISO 8601 systems both contain a year zero, with the astronomical year 0 corresponding to the year 1 BC, the astronomical year -1 corresponding to 2 BC, and so on.
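Under this scheme the mapping between BC years and astronomical years is a simple offset, which a one-line helper (again only an illustration) captures:

    def bc_to_astronomical(bc_year):
        # 1 BC -> 0, 2 BC -> -1, and so on (astronomical / ISO 8601 numbering).
        return 1 - bc_year

    print(bc_to_astronomical(1), bc_to_astronomical(2))  # 0 -1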
Alternative naming systems
Informally, years may be referred to in groups based on the hundreds part of the year. In this system, the years 1900–1999 are referred to as the nineteen hundreds (1900s). Aside from English usage, this system is used in Swedish, Danish, Norwegian, Icelandic, Finnish and Hungarian; the corresponding Swedish, Danish, Norwegian, Finnish and Hungarian terms refer unambiguously to the years 1900–1999. In Swedish, however, a century is in rarer cases referred to by its ordinal ("the n-th century") rather than by the hundreds form; used this way, the 17th century mainly denotes the years 1601–1700 rather than 1600–1699 (1600-talet), and according to Svenska Akademiens ordbok the corresponding ordinal term may refer to either the years 1501–1600 or 1500–1599.
Similar dating units in other calendar systems
While the century has been commonly used in the West, other cultures and calendars have utilized differently sized groups of years in a similar manner. The Hindu calendar, in particular, summarizes its years into groups of 60, while the Aztec calendar considers groups of 52.
| Physical sciences | Time | Basics and measurement |
5905 | https://en.wikipedia.org/wiki/Chalcogen | Chalcogen |
The chalcogens (ore-forming elements) are the chemical elements in group 16 of the periodic table. This group is also known as the oxygen family. Group 16 consists of the elements oxygen (O), sulfur (S), selenium (Se), tellurium (Te), and the radioactive elements polonium (Po) and livermorium (Lv). Often, oxygen is treated separately from the other chalcogens, sometimes even excluded from the scope of the term "chalcogen" altogether, due to its very different chemical behavior from sulfur, selenium, tellurium, and polonium. The word "chalcogen" is derived from a combination of the Greek word chalkos, principally meaning copper (the term was also used for bronze, brass, any metal in the poetic sense, ore and coin), and the Latinized Greek word genes, meaning born or produced.
Sulfur has been known since antiquity, and oxygen was recognized as an element in the 18th century. Selenium, tellurium and polonium were discovered in the 19th century, and livermorium in 2000. All of the chalcogens have six valence electrons, leaving them two electrons short of a full outer shell. Their most common oxidation states are −2, +2, +4, and +6. They have relatively low atomic radii, especially the lighter ones.
All of the naturally occurring chalcogens have some role in biological functions, either as a nutrient or a toxin. Selenium is an important nutrient (among others as a building block of selenocysteine) but is also commonly toxic. Tellurium often has unpleasant effects (although some organisms can use it), and polonium (especially the isotope polonium-210) is always harmful as a result of its radioactivity.
Sulfur has more than 20 allotropes, oxygen has nine, selenium has at least eight, polonium has two, and only one crystal structure of tellurium has so far been discovered. There are numerous organic chalcogen compounds. Not counting oxygen, organic sulfur compounds are generally the most common, followed by organic selenium compounds and organic tellurium compounds. This trend also occurs with chalcogen pnictides and compounds containing chalcogens and carbon group elements.
Oxygen is generally obtained by separation of air into nitrogen and oxygen. Sulfur is extracted from oil and natural gas. Selenium and tellurium are produced as byproducts of copper refining. Polonium is most available in naturally occurring actinide-containing materials. Livermorium has been synthesized in particle accelerators. The primary use of elemental oxygen is in steelmaking. Sulfur is mostly converted into sulfuric acid, which is heavily used in the chemical industry. Selenium's most common application is glassmaking. Tellurium compounds are mostly used in optical disks, electronic devices, and solar cells. Some of polonium's applications are due to its radioactivity.
Properties
Atomic and physical
Chalcogens show similar patterns in electron configuration, especially in the outermost shells, where they all have the same number of valence electrons, resulting in similar trends in chemical behavior:
All chalcogens have six valence electrons. All of the solid, stable chalcogens are soft and do not conduct heat well. Electronegativity decreases towards the chalcogens with higher atomic numbers. Density, melting and boiling points, and atomic and ionic radii tend to increase towards the chalcogens with higher atomic numbers.
Isotopes
Out of the six known chalcogens, one (oxygen) has an atomic number equal to a nuclear magic number, which means that its atomic nucleus tends to have increased stability towards radioactive decay. Oxygen has three stable isotopes, and 14 unstable ones. Sulfur has four stable isotopes, 20 radioactive ones, and one isomer. Selenium has six observationally stable or nearly stable isotopes, 26 radioactive isotopes, and 9 isomers. Tellurium has eight stable or nearly stable isotopes, 31 unstable ones, and 17 isomers. Polonium has 42 isotopes, none of which are stable, and an additional 28 isomers. In addition to the stable isotopes, some radioactive chalcogen isotopes occur in nature, either because they are decay products, such as 210Po, because they are primordial, such as 82Se, because of cosmic ray spallation, or via nuclear fission of uranium. Livermorium isotopes 288Lv through 293Lv have been discovered; the most stable livermorium isotope is 293Lv, which has a half-life of 0.061 seconds.
With the exception of livermorium, all chalcogens have at least one naturally occurring radioisotope: oxygen has trace 15O, sulfur has trace 35S, selenium has 82Se, tellurium has 128Te and 130Te, and polonium has 210Po.
Among the lighter chalcogens (oxygen and sulfur), the most neutron-poor isotopes undergo proton emission, the moderately neutron-poor isotopes undergo electron capture or β+ decay, the moderately neutron-rich isotopes undergo β− decay, and the most neutron rich isotopes undergo neutron emission. The middle chalcogens (selenium and tellurium) have similar decay tendencies as the lighter chalcogens, but no proton-emitting isotopes have been observed, and some of the most neutron-deficient isotopes of tellurium undergo alpha decay. Polonium isotopes tend to decay via alpha or beta decay. Isotopes with nonzero nuclear spins are more abundant in nature among the chalcogens selenium and tellurium than they are with sulfur.
Allotropes
Oxygen's most common allotrope is diatomic oxygen, or O2, a reactive paramagnetic molecule that is ubiquitous to aerobic organisms and has a blue color in its liquid state. Another allotrope is O3, or ozone, which is three oxygen atoms bonded together in a bent formation. There is also an allotrope called tetraoxygen, or O4, and six allotropes of solid oxygen including "red oxygen", which has the formula O8.
Sulfur has over 20 known allotropes, which is more than any other element except carbon. The most common allotropes are in the form of eight-atom rings, but other molecular allotropes that contain as few as two atoms or as many as 20 are known. Other notable sulfur allotropes include rhombic sulfur and monoclinic sulfur. Rhombic sulfur is the more stable of the two allotropes. Monoclinic sulfur takes the form of long needles and is formed when liquid sulfur is cooled to slightly below its melting point. The atoms in liquid sulfur are generally in the form of long chains, but above 190 °C, the chains begin to break down. If liquid sulfur above 190 °C is frozen very rapidly, the resulting sulfur is amorphous or "plastic" sulfur. Gaseous sulfur is a mixture of diatomic sulfur (S2) and 8-atom rings.
Selenium has at least eight distinct allotropes. The gray allotrope, commonly referred to as the "metallic" allotrope, despite not being a metal, is stable and has a hexagonal crystal structure. The gray allotrope of selenium is soft, with a Mohs hardness of 2, and brittle. Four other allotropes of selenium are metastable. These include two monoclinic red allotropes and two amorphous allotropes, one of which is red and one of which is black. The red allotrope converts to the black allotrope in the presence of heat. The gray allotrope of selenium is made from spirals on selenium atoms, while one of the red allotropes is made of stacks of selenium rings (Se8).
Tellurium is not known to have any allotropes, although its typical form is hexagonal. Polonium has two allotropes, which are known as α-polonium and β-polonium. α-polonium has a cubic crystal structure and converts to the rhombohedral β-polonium at 36 °C.
The chalcogens have varying crystal structures. Oxygen's crystal structure is monoclinic, sulfur's is orthorhombic, selenium and tellurium have the hexagonal crystal structure, while polonium has a cubic crystal structure.
Chemical
Oxygen, sulfur, and selenium are nonmetals, and tellurium is a metalloid, meaning that its chemical properties are between those of a metal and those of a nonmetal. It is not certain whether polonium is a metal or a metalloid. Some sources refer to polonium as a metalloid, although it has some metallic properties. Also, some allotropes of selenium display characteristics of a metalloid, even though selenium is usually considered a nonmetal. Even though oxygen is a chalcogen, its chemical properties are different from those of other chalcogens. One reason for this is that the heavier chalcogens have vacant d-orbitals. Oxygen's electronegativity is also much higher than those of the other chalcogens. This makes oxygen's electric polarizability several times lower than those of the other chalcogens.
For covalent bonding a chalcogen may accept two electrons according to the octet rule, leaving two lone pairs. When an atom forms two single bonds, they form an angle between 90° and 120°. In 1+ cations, a chalcogen forms three molecular orbitals arranged in a trigonal pyramidal fashion and keeps one lone pair. Double bonds are also common in chalcogen compounds, for example in chalcogenates (see below).
The oxidation number of the most common chalcogen compounds with positive metals is −2. However the tendency for chalcogens to form compounds in the −2 state decreases towards the heavier chalcogens. Other oxidation numbers, such as −1 in pyrite and peroxide, do occur. The highest formal oxidation number is +6. This oxidation number is found in sulfates, selenates, tellurates, polonates, and their corresponding acids, such as sulfuric acid.
Oxygen is the most electronegative element except for fluorine, and forms compounds with almost all of the chemical elements, including some of the noble gases. It commonly bonds with many metals and metalloids to form oxides, including iron oxide, titanium oxide, and silicon oxide. Oxygen's most common oxidation state is −2, and the oxidation state −1 is also relatively common. With hydrogen it forms water and hydrogen peroxide. Organic oxygen compounds are ubiquitous in organic chemistry.
Sulfur's oxidation states are −2, +2, +4, and +6. Sulfur-containing analogs of oxygen compounds often have the prefix thio-. Sulfur's chemistry is similar to oxygen's, in many ways. One difference is that sulfur-sulfur double bonds are far weaker than oxygen-oxygen double bonds, but sulfur-sulfur single bonds are stronger than oxygen-oxygen single bonds. Organic sulfur compounds such as thiols have a strong specific smell, and a few are utilized by some organisms.
Selenium's oxidation states are −2, +4, and +6. Selenium, like most chalcogens, bonds with oxygen. There are some organic selenium compounds, such as selenoproteins. Tellurium's oxidation states are −2, +2, +4, and +6. Tellurium forms the oxides tellurium monoxide, tellurium dioxide, and tellurium trioxide. Polonium's oxidation states are +2 and +4.
There are many acids containing chalcogens, including sulfuric acid, sulfurous acid, selenic acid, and telluric acid. All hydrogen chalcogenides are toxic except for water. Oxygen ions often come in the forms of oxide ions (O2−), peroxide ions (O22−), and hydroxide ions (OH−). Sulfur ions generally come in the form of sulfides (S2−), bisulfides (HS−), sulfites (SO32−), sulfates (SO42−), and thiosulfates (S2O32−). Selenium ions usually come in the form of selenides (Se2−), selenites (SeO32−) and selenates (SeO42−). Tellurium ions often come in the form of tellurates. Molecules containing a metal bonded to chalcogens are common as minerals. For example, pyrite (FeS2) is an iron ore, and the rare mineral calaverite is the ditelluride AuTe2.
Although all group 16 elements of the periodic table, including oxygen, can be defined as chalcogens, oxygen and oxides are usually distinguished from chalcogens and chalcogenides. The term chalcogenide is more commonly reserved for sulfides, selenides, and tellurides, rather than for oxides.
Except for polonium, the chalcogens are all fairly similar to each other chemically. They all form X2− ions when reacting with electropositive metals.
Sulfide minerals and analogous compounds produce gases upon reaction with oxygen.
Compounds
With halogens
Chalcogens also form compounds with halogens known as chalcohalides, or chalcogen halides. The majority of simple chalcogen halides are well-known and widely used as chemical reagents. However, more complicated chalcogen halides, such as sulfenyl, sulfonyl, and sulfuryl halides, are less well known to science. Out of the compounds consisting purely of chalcogens and halogens, there are a total of 13 chalcogen fluorides, nine chalcogen chlorides, eight chalcogen bromides, and six chalcogen iodides that are known. The heavier chalcogen halides often have significant molecular interactions. Sulfur fluorides with low valences are fairly unstable and little is known about their properties. However, sulfur fluorides with high valences, such as sulfur hexafluoride, are stable and well-known. Sulfur tetrafluoride is also a well-known sulfur fluoride. Certain selenium fluorides, such as selenium difluoride, have been produced in small amounts. The crystal structures of both selenium tetrafluoride and tellurium tetrafluoride are known. Chalcogen chlorides and bromides have also been explored. In particular, selenium dichloride and sulfur dichloride can react to form organic selenium compounds. Dichalcogen dihalides, such as Se2Cl2, are also known to exist. There are also mixed chalcogen-halogen compounds, including SeSX, with X being chlorine or bromine. Such compounds can form in mixtures of sulfur dichloride and selenium halides, and were structurally characterized relatively recently, as of 2008. In general, diselenium and disulfur chlorides and bromides are useful chemical reagents. Chalcogen halides with attached metal atoms are soluble in organic solutions. Unlike selenium chlorides and bromides, selenium iodides had not been isolated as of 2008, although it is likely that they occur in solution. Diselenium diiodide, however, does occur in equilibrium with selenium atoms and iodine molecules. Some tellurium halides with low valences form polymers in the solid state. These tellurium halides can be synthesized by the reduction of pure tellurium with superhydride and reacting the resulting product with tellurium tetrahalides. Ditellurium dihalides tend to get less stable as the halides become lower in atomic number and atomic mass. Tellurium also forms iodides with even fewer iodine atoms than the diiodides; these include TeI and Te2I, which have extended structures in the solid state. Halogens and chalcogens can also form halochalcogenate anions.
Organic
Alcohols, phenols and other similar compounds contain oxygen. However, in thiols, selenols and tellurols; sulfur, selenium, and tellurium replace oxygen. Thiols are better known than selenols or tellurols. Aside from alcohols, thiols are the most stable chalcogenols and tellurols are the least stable, being unstable in heat or light. Other organic chalcogen compounds include thioethers, selenoethers and telluroethers. Some of these, such as dimethyl sulfide, diethyl sulfide, and dipropyl sulfide are commercially available. Selenoethers are in the form of R2Se or RSeR. Telluroethers such as dimethyl telluride are typically prepared in the same way as thioethers and selenoethers. Organic chalcogen compounds, especially organic sulfur compounds, have the tendency to smell unpleasant. Dimethyl telluride also smells unpleasant, and selenophenol is renowned for its "metaphysical stench". There are also thioketones, selenoketones, and telluroketones. Out of these, thioketones are the most well-studied with 80% of chalcogenoketones papers being about them. Selenoketones make up 16% of such papers and telluroketones make up 4% of them. Thioketones have well-studied non-linear electric and photophysical properties. Selenoketones are less stable than thioketones and telluroketones are less stable than selenoketones. Telluroketones have the highest level of polarity of chalcogenoketones.
With metals
There is a very large number of metal chalcogenides. There are also ternary compounds containing alkali metals and transition metals. Highly metal-rich metal chalcogenides, such as Lu7Te and Lu8Te have domains of the metal's crystal lattice containing chalcogen atoms. While these compounds do exist, analogous chemicals that contain lanthanum, praseodymium, gadolinium, holmium, terbium, or ytterbium have not been discovered, as of 2008. The boron group metals aluminum, gallium, and indium also form bonds to chalcogens. The Ti3+ ion forms chalcogenide dimers such as TiTl5Se8. Metal chalcogenide dimers also occur as lower tellurides, such as Zr5Te6.
Elemental chalcogens react with certain lanthanide compounds to form lanthanide clusters rich in chalcogens. Uranium(IV) chalcogenol compounds also exist. There are also transition metal chalcogenols which have potential to serve as catalysts and stabilize nanoparticles.
With pnictogens
Compounds with chalcogen-phosphorus bonds have been explored for more than 200 years. These compounds include unsophisticated phosphorus chalcogenides as well as large molecules with biological roles and phosphorus-chalcogen compounds with metal clusters. These compounds have numerous applications, including organo-phosphate insecticides, strike-anywhere matches and quantum dots. A total of 130,000 compounds with at least one phosphorus-sulfur bond, 6000 compounds with at least one phosphorus-selenium bond, and 350 compounds with at least one phosphorus-tellurium bond have been discovered. The decrease in the number of chalcogen-phosphorus compounds further down the periodic table is due to diminishing bond strength. Such compounds tend to have at least one phosphorus atom in the center, surrounded by four chalcogens and side chains. However, some phosphorus-chalcogen compounds also contain hydrogen (such as secondary phosphine chalcogenides) or nitrogen (such as dichalcogenoimidodiphosphates). Phosphorus selenides are typically harder to handle than phosphorus sulfides, and compounds in the form PxTey have not been discovered. Chalcogens also bond with other pnictogens, such as arsenic, antimony, and bismuth. Heavier chalcogen pnictides tend to form ribbon-like polymers instead of individual molecules. Chemical formulas of these compounds include Bi2S3 and Sb2Se3. Ternary chalcogen pnictides are also known; examples include P4O6Se and P3SbS3. Salts containing chalcogens and pnictogens also exist, and are almost always in the form of [PnxE4x]3−, where Pn is a pnictogen and E is a chalcogen. Tertiary phosphines can react with chalcogens to form compounds in the form of R3PE, where E is a chalcogen. When E is sulfur, these compounds are relatively stable, but they are less so when E is selenium or tellurium. Similarly, secondary phosphines can react with chalcogens to form secondary phosphine chalcogenides. However, these compounds are in a state of equilibrium with chalcogenophosphinous acid. Secondary phosphine chalcogenides are weak acids. Binary compounds consisting of antimony or arsenic and a chalcogen also exist; they tend to be colorful and can be created by a reaction of the constituent elements at elevated temperatures.
Other
Chalcogens form single bonds and double bonds with other carbon group elements than carbon, such as silicon, germanium, and tin. Such compounds typically form from a reaction of carbon group halides and chalcogenol salts or chalcogenol bases. Cyclic compounds with chalcogens, carbon group elements, and boron atoms exist, and occur from the reaction of boron dichalcogenates and carbon group metal halides. Compounds in the form of M-E, where M is silicon, germanium, or tin, and E is sulfur, selenium or tellurium have been discovered. These form when carbon group hydrides react or when heavier versions of carbenes react. Sulfur and tellurium can bond with organic compounds containing both silicon and phosphorus.
All of the chalcogens form hydrides. In some cases this occurs with chalcogens bonding with two hydrogen atoms. However tellurium hydride and polonium hydride are both volatile and highly labile. Also, oxygen can bond to hydrogen in a 1:1 ratio as in hydrogen peroxide, but this compound is unstable.
Chalcogen compounds form a number of interchalcogens. For instance, sulfur forms the toxic sulfur dioxide and sulfur trioxide. Tellurium also forms oxides. There are some chalcogen sulfides as well. These include selenium sulfide, an ingredient in some shampoos.
Since 1990, a number of borides with chalcogens bonded to them have been detected. The chalcogens in these compounds are mostly sulfur, although some do contain selenium instead. One such chalcogen boride consists of two molecules of dimethyl sulfide attached to a boron-hydrogen molecule. Other important boron-chalcogen compounds include macropolyhedral systems. Such compounds tend to feature sulfur as the chalcogen. There are also chalcogen borides with two, three, or four chalcogens. Many of these contain sulfur but some, such as Na2B2Se7 contain selenium instead.
History
Early discoveries
Sulfur has been known since ancient times and is mentioned in the Bible fifteen times. It was known to the ancient Greeks and commonly mined by the ancient Romans. In the Middle Ages, it was a key part of alchemical experiments. In the 1700s and 1800s, scientists Joseph Louis Gay-Lussac and Louis-Jacques Thénard proved sulfur to be a chemical element.
Early attempts to separate oxygen from air were hampered by the fact that air was thought of as a single element up to the 17th and 18th centuries. Robert Hooke, Mikhail Lomonosov, Ole Borch, and Pierre Bayen all successfully created oxygen, but did not realize it at the time. Oxygen was discovered by Joseph Priestley in 1774 when he focused sunlight on a sample of mercuric oxide and collected the resulting gas. Carl Wilhelm Scheele had also created oxygen in 1771 by the same method, but Scheele did not publish his results until 1777.
Tellurium was first discovered in 1783 by Franz Joseph Müller von Reichenstein. He discovered tellurium in a sample of what is now known as calaverite. Müller assumed at first that the sample was pure antimony, but tests he ran on the sample did not agree with this. Müller then guessed that the sample was bismuth sulfide, but tests confirmed that the sample was not that. For some years, Müller pondered the problem. Eventually he realized that the sample was gold bonded with an unknown element. In 1796, Müller sent part of the sample to the German chemist Martin Klaproth, who purified the undiscovered element. Klaproth decided to call the element tellurium after the Latin word for earth.
Selenium was discovered in 1817 by Jöns Jacob Berzelius. Berzelius noticed a reddish-brown sediment at a sulfuric acid manufacturing plant. The sample was thought to contain arsenic. Berzelius initially thought that the sediment contained tellurium, but came to realize that it also contained a new element, which he named selenium after the Greek moon goddess Selene.
Periodic table placing
Three of the chalcogens (sulfur, selenium, and tellurium) were part of the discovery of periodicity, as they are among a series of triads of elements in the same group that were noted by Johann Wolfgang Döbereiner as having similar properties. Around 1865 John Newlands produced a series of papers where he listed the elements in order of increasing atomic weight and similar physical and chemical properties that recurred at intervals of eight; he likened such periodicity to the octaves of music. His version included a "group b" consisting of oxygen, sulfur, selenium, tellurium, and osmium.
After 1869, Dmitri Mendeleev proposed his periodic table placing oxygen at the top of "group VI" above sulfur, selenium, and tellurium. Chromium, molybdenum, tungsten, and uranium were sometimes included in this group, but they would be later rearranged as part of group VIB; uranium would later be moved to the actinide series. Oxygen, along with sulfur, selenium, tellurium, and later polonium would be grouped in group VIA, until the group's name was changed to group 16 in 1988.
Modern discoveries
In the late 19th century, Marie Curie and Pierre Curie discovered that a sample of pitchblende was emitting four times as much radioactivity as could be explained by the presence of uranium alone. The Curies gathered several tons of pitchblende and refined it for several months until they had a pure sample of polonium. The discovery officially took place in 1898. Prior to the invention of particle accelerators, the only way to produce polonium was to extract it over several months from uranium ore.
The first attempt at creating livermorium was from 1976 to 1977 at the LBNL, who bombarded curium-248 with calcium-48, but were not successful. After several failed attempts in 1977, 1998, and 1999 by research groups in Russia, Germany, and the US, livermorium was created successfully in 2000 at the Joint Institute for Nuclear Research by bombarding curium-248 atoms with calcium-48 atoms. The element was known as ununhexium until it was officially named livermorium in 2012.
Names and etymology
In the 19th century, Jöns Jacob Berzelius suggested calling the elements in group 16 "amphigens", as the elements in the group formed amphid salts (salts of oxyacids, formerly regarded as composed of two oxides, an acid and a basic oxide). The term received some use in the early 1800s but is now obsolete. The name chalcogen comes from the Greek words chalkos (literally "copper") and genes (born, gender, kindle). It was first used in 1932 by Wilhelm Biltz's group at Leibniz University Hannover, where it was proposed by Werner Fischer. The word "chalcogen" gained popularity in Germany during the 1930s because the term was analogous to "halogen". Although the literal meanings of the modern Greek words imply that chalcogen means "copper-former", this is misleading because the chalcogens have nothing to do with copper in particular. "Ore-former" has been suggested as a better translation, as the vast majority of metal ores are chalcogenides and the word in ancient Greek was associated with metals and metal-bearing rock in general; copper, and its alloy bronze, was one of the first metals to be used by humans.
Oxygen's name comes from the Greek words oxy genes, meaning "acid-forming". Sulfur's name comes from ancient Latin and Sanskrit words for the element. Selenium is named after the Greek goddess of the moon, Selene, to match the previously discovered element tellurium, whose name comes from the Latin word tellus, meaning earth. Polonium is named after Marie Curie's country of birth, Poland. Livermorium is named for the Lawrence Livermore National Laboratory.
Occurrence
The four lightest chalcogens (oxygen, sulfur, selenium, and tellurium) are all primordial elements on Earth. Sulfur and oxygen occur as constituents of copper ores, and selenium and tellurium occur in small traces in such ores. Polonium forms naturally from the decay of other elements, even though it is not primordial. Livermorium does not occur naturally at all.
Oxygen makes up about 21% of the atmosphere by volume, 89% of water by weight, 46% of the Earth's crust by weight, and 65% of the human body. Oxygen also occurs in many minerals, being found in all oxide minerals and hydroxide minerals, and in numerous other mineral groups. Stars of at least eight times the mass of the Sun also produce oxygen in their cores via nuclear fusion. Oxygen is the third-most abundant element in the universe, making up 1% of the universe by weight.
Sulfur makes up 0.035% of the Earth's crust by weight, making it the 17th most abundant element there and makes up 0.25% of the human body. It is a major component of soil. Sulfur makes up 870 parts per million of seawater and about 1 part per billion of the atmosphere. Sulfur can be found in elemental form or in the form of sulfide minerals, sulfate minerals, or sulfosalt minerals. Stars of at least 12 times the mass of the Sun produce sulfur in their cores via nuclear fusion. Sulfur is the tenth most abundant element in the universe, making up 500 parts per million of the universe by weight.
Selenium makes up 0.05 parts per million of the Earth's crust by weight. This makes it the 67th most abundant element in the Earth's crust. Selenium makes up on average 5 parts per million of the soils. Seawater contains around 200 parts per trillion of selenium. The atmosphere contains 1 nanogram of selenium per cubic meter. There are mineral groups known as selenates and selenites, but there are not many minerals in these groups. Selenium is not produced directly by nuclear fusion. Selenium makes up 30 parts per billion of the universe by weight.
There are only 5 parts per billion of tellurium in the Earth's crust and 15 parts per billion of tellurium in seawater. Tellurium is one of the eight or nine least abundant elements in the Earth's crust. There are a few dozen tellurate minerals and telluride minerals, and tellurium occurs in some minerals with gold, such as sylvanite and calaverite. Tellurium makes up 9 parts per billion of the universe by weight.
Polonium only occurs in trace amounts on Earth, via radioactive decay of uranium and thorium. It is present in uranium ores in concentrations of 100 micrograms per metric ton. Very minute amounts of polonium exist in the soil and thus in most food, and thus in the human body. The Earth's crust contains less than 1 part per billion of polonium, making it one of the ten rarest metals on Earth.
Livermorium is always produced artificially in particle accelerators. Even when it is produced, only a small number of atoms are synthesized at a time.
Chalcophile elements
Chalcophile elements are those that remain on or close to the surface because they combine readily with chalcogens other than oxygen, forming compounds which do not sink into the core. Chalcophile ("chalcogen-loving") elements in this context are those metals and heavier nonmetals that have a low affinity for oxygen and prefer to bond with the heavier chalcogen sulfur as sulfides. Because sulfide minerals are much denser than the silicate minerals formed by lithophile elements, chalcophile elements separated below the lithophiles at the time of the first crystallisation of the Earth's crust. This has led to their depletion in the Earth's crust relative to their solar abundances, though this depletion has not reached the levels found with siderophile elements.
Production
Approximately 100 million metric tons of oxygen are produced yearly. Oxygen is most commonly produced by fractional distillation, in which air is cooled to a liquid, then warmed, allowing all the components of air except for oxygen to turn to gases and escape. Fractionally distilling air several times can produce 99.5% pure oxygen. Another method with which oxygen is produced is to send a stream of dry, clean air through a bed of molecular sieves made of zeolite, which absorbs the nitrogen in the air, leaving 90 to 93% pure oxygen.
Sulfur can be mined in its elemental form, although this method is no longer as popular as it used to be. In 1865 a large deposit of elemental sulfur was discovered in the U.S. states of Louisiana and Texas, but it was difficult to extract at the time. In the 1890s, Herman Frasch came up with the solution of liquefying the sulfur with superheated steam and pumping the sulfur up to the surface. These days sulfur is instead more often extracted from oil, natural gas, and tar.
The world production of selenium is around 1500 metric tons per year, out of which roughly 10% is recycled. Japan is the largest producer, producing 800 metric tons of selenium per year. Other large producers include Belgium (300 metric tons per year), the United States (over 200 metric tons per year), Sweden (130 metric tons per year), and Russia (100 metric tons per year). Selenium can be extracted from the waste from the process of electrolytically refining copper. Another method of producing selenium is to farm selenium-gathering plants such as milk vetch. This method could produce three kilograms of selenium per acre, but is not commonly practiced.
Tellurium is mostly produced as a by-product of the processing of copper. Tellurium can also be refined by electrolytic reduction of sodium telluride. The world production of tellurium is between 150 and 200 metric tons per year. The United States is one of the largest producers of tellurium, producing around 50 metric tons per year. Peru, Japan, and Canada are also large producers of tellurium.
Until the creation of nuclear reactors, all polonium had to be extracted from uranium ore. In modern times, most isotopes of polonium are produced by bombarding bismuth with neutrons. Polonium can also be produced by high neutron fluxes in nuclear reactors. Approximately 100 grams of polonium are produced yearly. All the polonium produced for commercial purposes is made in the Ozersk nuclear reactor in Russia. From there, it is taken to Samara, Russia for purification, and from there to St. Petersburg for distribution. The United States is the largest consumer of polonium.
All livermorium is produced artificially in particle accelerators. The first successful production of livermorium was achieved by bombarding curium-248 atoms with calcium-48 atoms. As of 2011, roughly 25 atoms of livermorium had been synthesized.
Applications
Metabolism is the most important source and use of oxygen. Industrial uses, minor by comparison, include steelmaking (which takes 55% of all purified oxygen produced), the chemical industry (25% of all purified oxygen), medical use, water treatment (oxygen kills some types of bacteria), rocket fuel (in liquid form), and metal cutting.
Most sulfur produced is transformed into sulfur dioxide, which is further transformed into sulfuric acid, a very common industrial chemical. Sulfur is also a key ingredient of gunpowder and Greek fire, is used to change soil pH, and is mixed into rubber to vulcanize it. Sulfur is used in some types of concrete and fireworks. 60% of all sulfuric acid produced is used to generate phosphoric acid. Sulfur is used as a pesticide (specifically as an acaricide and fungicide) on "orchard, ornamental, vegetable, grain, and other crops."
Around 40% of all selenium produced goes to glassmaking. 30% of all selenium produced goes to metallurgy, including manganese production. 15% of all selenium produced goes to agriculture. Electronics such as photovoltaic materials claim 10% of all selenium produced. Pigments account for 5% of all selenium produced. Historically, machines such as photocopiers and light meters used one-third of all selenium produced, but this application is in steady decline.
Tellurium suboxide, a mixture of tellurium and tellurium dioxide, is used in the rewritable data layer of some CD-RW disks and DVD-RW disks. Bismuth telluride is also used in many microelectronic devices, such as photoreceptors. Tellurium is sometimes used as an alternative to sulfur in vulcanized rubber. Cadmium telluride is used as a high-efficiency material in solar panels.
Some of polonium's applications relate to the element's radioactivity. For instance, polonium is used as an alpha-particle generator for research. Polonium alloyed with beryllium provides an efficient neutron source. Polonium is also used in nuclear batteries. Most polonium is used in antistatic devices. Livermorium does not have any uses whatsoever due to its extreme rarity and short half-life.
Organochalcogen compounds are involved in the semiconductor process. These compounds also feature in ligand chemistry and biochemistry. One application of chalcogens themselves is to manipulate redox couples in supramolecular chemistry (chemistry involving non-covalent bond interactions). This in turn enables applications such as crystal packing, assembly of large molecules, and biological recognition of patterns. The secondary bonding interactions of the larger chalcogens, selenium and tellurium, can create organic solvent-holding acetylene nanotubes. Chalcogen interactions are useful for conformational analysis and stereoelectronic effects, among other things. Chalcogenides with through bonds also have applications. For instance, divalent sulfur can stabilize carbanions, cationic centers, and radicals. Chalcogens can confer upon ligands (such as DCTO) properties such as being able to transform Cu(II) to Cu(I). Studying chalcogen interactions gives access to radical cations, which are used in mainstream synthetic chemistry. Metallic redox centers of biological importance are tunable by interactions of ligands containing chalcogens, such as methionine and selenocysteine. Also, chalcogen through-bonds can provide insight into the process of electron transfer.
Biological role
Oxygen is needed by almost all organisms for the purpose of generating ATP. It is also a key component of most other biological compounds, such as water, amino acids and DNA. Human blood contains a large amount of oxygen. Human bones contain 28% oxygen. Human tissue contains 16% oxygen. A typical 70-kilogram human contains 43 kilograms of oxygen, mostly in the form of water.
All animals need significant amounts of sulfur. Some amino acids, such as cysteine and methionine, contain sulfur. Plant roots take up sulfate ions from the soil and reduce them to sulfide ions. Metalloproteins also use sulfur to attach to useful metal atoms in the body, and sulfur similarly attaches itself to poisonous metal atoms like cadmium to haul them to the safety of the liver. On average, humans consume 900 milligrams of sulfur each day. Sulfur compounds, such as those found in skunk spray, often have strong odors.
All animals and some plants need trace amounts of selenium, but only for some specialized enzymes. Humans consume on average between 6 and 200 micrograms of selenium per day. Mushrooms and Brazil nuts are especially noted for their high selenium content. Selenium in foods is most commonly found in the form of amino acids such as selenocysteine and selenomethionine. Selenium can protect against heavy metal poisoning.
Tellurium is not known to be needed for animal life, although a few fungi can incorporate it in compounds in place of selenium. Microorganisms also absorb tellurium and emit dimethyl telluride. Most tellurium in the blood stream is excreted slowly in urine, but some is converted to dimethyl telluride and released through the lungs. On average, humans ingest about 600 micrograms of tellurium daily. Plants can take up some tellurium from the soil. Onions and garlic have been found to contain as much as 300 parts per million of tellurium in dry weight.
Polonium has no biological role, and is highly toxic on account of being radioactive.
Toxicity
Oxygen is generally nontoxic, but oxygen toxicity has been reported when it is used in high concentrations. In both elemental gaseous form and as a component of water, it is vital to almost all life on Earth. Despite this, liquid oxygen is highly dangerous. Even gaseous oxygen is dangerous in excess. For instance, sports divers have occasionally drowned from convulsions caused by breathing pure oxygen at a depth of more than underwater. Oxygen is also toxic to some bacteria. Ozone, an allotrope of oxygen, is toxic to most life. It can cause lesions in the respiratory tract.
Sulfur is generally nontoxic and is even a vital nutrient for humans. However, in its elemental form it can cause redness in the eyes and skin, a burning sensation and a cough if inhaled, a burning sensation and diarrhoea and/or catharsis if ingested, and can irritate the mucous membranes. An excess of sulfur can be toxic for cows because microbes in the rumens of cows produce toxic hydrogen sulfide upon reaction with sulfur. Many sulfur compounds, such as hydrogen sulfide (H2S) and sulfur dioxide (SO2) are highly toxic.
Selenium is a trace nutrient required by humans on the order of tens or hundreds of micrograms per day. A dose of over 450 micrograms can be toxic, resulting in bad breath and body odor. Extended, low-level exposure, which can occur in some industries, results in weight loss, anemia, and dermatitis. In many cases of selenium poisoning, selenous acid is formed in the body. Hydrogen selenide (H2Se) is highly toxic.
Exposure to tellurium can produce unpleasant side effects. As little as 10 micrograms of tellurium per cubic meter of air can cause notoriously unpleasant breath, described as smelling like rotten garlic. Acute tellurium poisoning can cause vomiting, gut inflammation, internal bleeding, and respiratory failure. Extended, low-level exposure to tellurium causes tiredness and indigestion. Sodium tellurite (Na2TeO3) is lethal in amounts of around 2 grams.
Polonium is dangerous as an alpha particle emitter. If ingested, polonium-210 is a million times as toxic as hydrogen cyanide by weight; it has been used as a murder weapon in the past, most famously to kill Alexander Litvinenko. Polonium poisoning can cause nausea, vomiting, anorexia, and lymphopenia. It can also damage hair follicles and white blood cells. Polonium-210 is only dangerous if ingested or inhaled because its alpha particle emissions cannot penetrate human skin. Polonium-209 is also toxic, and can cause leukemia.
Amphid salts
Amphid salts was a name given by Jöns Jacob Berzelius in the 19th century for chemical salts derived from the 16th group of the periodic table, which included oxygen, sulfur, selenium, and tellurium. The term received some use in the early 1800s but is now obsolete. The current term in use for the 16th group is chalcogens.
| Physical sciences | Group 16 | Chemistry |
5906 | https://en.wikipedia.org/wiki/Carbon%20dioxide | Carbon dioxide | Carbon dioxide is a chemical compound with the chemical formula . It is made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is found in the gas state at room temperature and at normally-encountered concentrations it is odorless. As the source of carbon in the carbon cycle, atmospheric is the primary carbon source for life on Earth. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater.
It is a trace gas in Earth's atmosphere at 421 parts per million (ppm), or about 0.042% (as of May 2022), having risen from pre-industrial levels of 280 ppm, or about 0.028%. Burning fossil fuels is the main cause of these increased concentrations, which are the primary cause of climate change.
Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian was regulated by organisms and geological features. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. is released from organic materials when they decay or combust, such as in forest fires. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (), which causes ocean acidification as atmospheric levels increase.
Carbon dioxide is 53% more dense than dry air, but is long lived and thoroughly mixes in the atmosphere. About half of excess emissions to the atmosphere are absorbed by land and ocean carbon sinks. These sinks can become saturated and are volatile, as decay and wildfires result in the being released back into the atmosphere. is eventually sequestered (stored for the long term) in rocks and organic deposits like coal, petroleum and natural gas.
Nearly all produced by humans goes into the atmosphere. Less than 1% of produced annually is put to commercial use, mostly in the fertilizer industry and in the oil and gas industry for enhanced oil recovery. Other commercial applications include food and beverage production, metal fabrication, cooling, fire suppression and stimulating plant growth in greenhouses.
Chemical and physical properties
Structure, bonding and molecular vibrations
The symmetry of a carbon dioxide molecule is linear and centrosymmetric at its equilibrium geometry. The length of the carbon–oxygen bond in carbon dioxide is 116.3 pm, noticeably shorter than the roughly 140 pm length of a typical single C–O bond, and shorter than most other C–O multiply bonded functional groups such as carbonyls. Since it is centrosymmetric, the molecule has no electric dipole moment.
As a linear triatomic molecule, has four vibrational modes as shown in the diagram. In the symmetric and the antisymmetric stretching modes, the atoms move along the axis of the molecule. There are two bending modes, which are degenerate, meaning that they have the same frequency and same energy, because of the symmetry of the molecule. When a molecule touches a surface or touches another molecule, the two bending modes can differ in frequency because the interaction is different for the two modes. Some of the vibrational modes are observed in the infrared (IR) spectrum: the antisymmetric stretching mode at wavenumber 2349 cm−1 (wavelength 4.25 μm) and the degenerate pair of bending modes at 667 cm−1 (wavelength 15.0 μm). The symmetric stretching mode does not create an electric dipole so is not observed in IR spectroscopy, but it is detected in Raman spectroscopy at 1388 cm−1 (wavelength 7.20 μm), with a Fermi resonance doublet at 1285 cm−1.
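As a quick arithmetic check of these figures, the wavelength in micrometres is simply 10,000 divided by the wavenumber in cm−1; a minimal sketch, using only the values quoted above:

```python
# Convert the quoted vibrational wavenumbers of CO2 to wavelengths.
# wavelength (um) = 10,000 / wavenumber (cm^-1)
modes = [("antisymmetric stretch", 2349),
         ("bending (degenerate pair)", 667),
         ("symmetric stretch, Raman", 1388)]
for name, wavenumber in modes:
    print(f"{name}: {wavenumber} cm^-1 -> {10_000 / wavenumber:.2f} um")
# antisymmetric stretch: 2349 cm^-1 -> 4.26 um
# bending (degenerate pair): 667 cm^-1 -> 14.99 um
# symmetric stretch, Raman: 1388 cm^-1 -> 7.20 um
```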
In the gas phase, carbon dioxide molecules undergo significant vibrational motions and do not keep a fixed structure. However, in a Coulomb explosion imaging experiment, an instantaneous image of the molecular structure can be deduced. Such an experiment has been performed for carbon dioxide. The result of this experiment, and the conclusion of theoretical calculations based on an ab initio potential energy surface of the molecule, is that none of the molecules in the gas phase are ever exactly linear. This counter-intuitive result is trivially due to the fact that the nuclear motion volume element vanishes for linear geometries. This is so for all molecules except diatomic molecules.
In aqueous solution
Carbon dioxide is soluble in water, in which it reversibly forms carbonic acid (H2CO3), a weak acid, since its ionization in water is incomplete.
At 25 °C, the hydration equilibrium constant of carbonic acid is small.
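Written out, with the commonly cited room-temperature value (a literature figure supplied here for orientation, not stated in the surrounding text):

Kh = [H2CO3] / [CO2(aq)] ≈ 1.7 × 10⁻³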
Hence, the majority of the carbon dioxide is not converted into carbonic acid, but remains as dissolved CO2 molecules, not affecting the pH.
The relative concentrations of , , and the deprotonated forms (bicarbonate) and (carbonate) depend on the pH. As shown in a Bjerrum plot, in neutral or slightly alkaline water (pH > 6.5), the bicarbonate form predominates (>50%) becoming the most prevalent (>95%) at the pH of seawater. In very alkaline water (pH > 10.4), the predominant (>50%) form is carbonate. The oceans, being mildly alkaline with typical pH = 8.2–8.5, contain about 120 mg of bicarbonate per liter.
Being diprotic, carbonic acid has two acid dissociation constants, the first one for the dissociation into the bicarbonate (also called hydrogen carbonate) ion ():
Ka1 = 2.5 × 10⁻⁴ mol/L; pKa1 = 3.6 at 25 °C.
This is the true first acid dissociation constant, defined as
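Ka1 = [H+][HCO3−] / [H2CO3]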
where the denominator includes only covalently bound H2CO3 and does not include hydrated CO2(aq). The much smaller and often-quoted value near 4.16 × 10⁻⁷ (or pKa1 = 6.38) is an apparent value calculated on the (incorrect) assumption that all dissolved CO2 is present as carbonic acid, so that
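Ka1(apparent) = [H+][HCO3−] / ([H2CO3] + [CO2(aq)])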
Since most of the dissolved CO2 remains as CO2 molecules rather than carbonic acid, Ka1(apparent) has a much larger denominator and a much smaller value than the true Ka1.
The bicarbonate ion is an amphoteric species that can act as an acid or as a base, depending on pH of the solution. At high pH, it dissociates significantly into the carbonate ion ():
Ka2 = 4.69 × 10⁻¹¹ mol/L; pKa2 = 10.329
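The apparent constants quoted above (pKa1 ≈ 6.38 for dissolved CO2/bicarbonate and pKa2 ≈ 10.33) are enough to reproduce the speciation behaviour described here. A minimal Python sketch, treating the system as a simple closed equilibrium rather than a full seawater model:

```python
def carbonate_fractions(pH, pKa1=6.38, pKa2=10.33):
    """Fractions of dissolved inorganic carbon present as CO2(aq), HCO3- and CO3^2-."""
    h = 10.0 ** (-pH)
    ka1, ka2 = 10.0 ** (-pKa1), 10.0 ** (-pKa2)
    denom = h * h + ka1 * h + ka1 * ka2
    return h * h / denom, ka1 * h / denom, ka1 * ka2 / denom

for pH in (6.5, 8.2, 10.4):
    co2, hco3, co3 = carbonate_fractions(pH)
    print(f"pH {pH}: CO2 {co2:.1%}, HCO3- {hco3:.1%}, CO3^2- {co3:.1%}")
# At pH 8.2 (typical seawater) bicarbonate dominates at well over 90%,
# and above pH ~10.4 carbonate becomes the majority species.
```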
In organisms, carbonic acid production is catalysed by the enzyme known as carbonic anhydrase.
In addition to altering its acidity, the presence of carbon dioxide in water also affects its electrical properties. When carbon dioxide dissolves in desalinated water, the electrical conductivity increases significantly from below 1 μS/cm to nearly 30 μS/cm. When heated, the water begins to gradually lose the conductivity induced by the presence of , especially noticeable as temperatures exceed 30 °C.
By comparison, the temperature dependence of the electrical conductivity of fully deionized water without dissolved CO2 is low.
Chemical reactions
is a potent electrophile having an electrophilic reactivity that is comparable to benzaldehyde or strongly electrophilic α,β-unsaturated carbonyl compounds. However, unlike electrophiles of similar reactivity, the reactions of nucleophiles with are thermodynamically less favored and are often found to be highly reversible. The reversible reaction of carbon dioxide with amines to make carbamates is used in scrubbers and has been suggested as a possible starting point for carbon capture and storage by amine gas treating.
Only very strong nucleophiles, like the carbanions provided by Grignard reagents and organolithium compounds, react with carbon dioxide to give carboxylates:
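R−M + CO2 → R−CO2M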
where M = Li or MgBr and R = alkyl or aryl.
In metal carbon dioxide complexes, serves as a ligand, which can facilitate the conversion of to other chemicals.
The reduction of to CO is ordinarily a difficult and slow reaction:
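CO2 + 2 H+ + 2 e− → CO + H2O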
The redox potential for this reaction near pH 7 is about −0.53 V versus the standard hydrogen electrode. The nickel-containing enzyme carbon monoxide dehydrogenase catalyses this process.
Photoautotrophs (i.e. plants and cyanobacteria) use the energy contained in sunlight to photosynthesize simple sugars from absorbed from the air and water:
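6 CO2 + 6 H2O → C6H12O6 + 6 O2 (in the presence of light)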
Physical properties
Carbon dioxide is colorless. At low concentrations, the gas is odorless; however, at sufficiently high concentrations, it has a sharp, acidic odor. At standard temperature and pressure, the density of carbon dioxide is around 1.98 kg/m3, about 1.53 times that of air.
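The "about 1.53 times" figure follows, to a good approximation, from the ratio of molar masses, since at the same temperature and pressure the density of an ideal gas scales with molar mass. A short illustrative check (the molar masses below are rounded textbook values, not figures from this article):

```python
M_CO2 = 44.01   # g/mol
M_AIR = 28.96   # g/mol, approximate mean molar mass of dry air
print(f"density ratio CO2/air ~ {M_CO2 / M_AIR:.2f}")          # ~1.52

# Absolute density from the ideal-gas law, rho = P*M/(R*T), at 0 degC and 1 atm:
P, R, T = 101_325.0, 8.314, 273.15
print(f"rho(CO2) ~ {P * M_CO2 / 1000 / (R * T):.2f} kg/m^3")    # ~1.96
# The quoted 1.98 kg/m^3 is slightly higher because CO2 is not quite an ideal gas.
```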
Carbon dioxide has no liquid state at pressures below 0.51795(10) MPa (5.11177(99) atm). At a pressure of 1 atm (0.101325 MPa), the gas deposits directly to a solid at temperatures below 194.6855(30) K (−78.4645(30) °C) and the solid sublimes directly to a gas above this temperature. In its solid state, carbon dioxide is commonly called dry ice.
Liquid carbon dioxide forms only at pressures above 0.51795(10) MPa (5.11177(99) atm); the triple point of carbon dioxide is 216.592(3) K (−56.558(3) °C) at 0.51795(10) MPa (5.11177(99) atm) (see phase diagram). The critical point is 304.128(15) K (30.978(15) °C) at 7.3773(30) MPa (72.808(30) atm). Another form of solid carbon dioxide observed at high pressure is an amorphous glass-like solid. This form of glass, called carbonia, is produced by supercooling heated at extreme pressures (40–48 GPa, or about 400,000 atmospheres) in a diamond anvil. This discovery confirmed the theory that carbon dioxide could exist in a glass state similar to other members of its elemental family, like silicon dioxide (silica glass) and germanium dioxide. Unlike silica and germania glasses, however, carbonia glass is not stable at normal pressures and reverts to gas when pressure is released.
At temperatures and pressures above the critical point, carbon dioxide behaves as a supercritical fluid known as supercritical carbon dioxide.
Table of thermal and physical properties of saturated liquid carbon dioxide:
Table of thermal and physical properties of carbon dioxide () at atmospheric pressure:
Biological role
Carbon dioxide is an end product of cellular respiration in organisms that obtain energy by breaking down sugars, fats and amino acids with oxygen as part of their metabolism. This includes all plants, algae and animals and aerobic fungi and bacteria. In vertebrates, the carbon dioxide travels in the blood from the body's tissues to the skin (e.g., amphibians) or the gills (e.g., fish), from where it dissolves in the water, or to the lungs from where it is exhaled. During active photosynthesis, plants can absorb more carbon dioxide from the atmosphere than they release in respiration.
Photosynthesis and carbon fixation
Carbon fixation is a biochemical process by which atmospheric carbon dioxide is incorporated by plants, algae and cyanobacteria into energy-rich organic molecules such as glucose, thus creating their own food by photosynthesis. Photosynthesis uses carbon dioxide and water to produce sugars from which other organic compounds can be constructed, and oxygen is produced as a by-product.
Ribulose-1,5-bisphosphate carboxylase oxygenase, commonly abbreviated to RuBisCO, is the enzyme involved in the first major step of carbon fixation, the production of two molecules of 3-phosphoglycerate from and ribulose bisphosphate, as shown in the diagram at left.
RuBisCO is thought to be the single most abundant protein on Earth.
Phototrophs use the products of their photosynthesis as internal food sources and as raw material for the biosynthesis of more complex organic molecules, such as polysaccharides, nucleic acids, and proteins. These are used for their own growth, and also as the basis of the food chains and webs that feed other organisms, including animals such as ourselves. Some important phototrophs, the coccolithophores, synthesise hard calcium carbonate scales. A globally significant species of coccolithophore is Emiliania huxleyi, whose calcite scales have formed the basis of many sedimentary rocks such as limestone, where what was previously atmospheric carbon can remain fixed for geological timescales.
Plants can grow as much as 50% faster in concentrations of 1,000 ppm when compared with ambient conditions, though this assumes no change in climate and no limitation on other nutrients. Elevated levels cause increased growth reflected in the harvestable yield of crops, with wheat, rice and soybean all showing increases in yield of 12–14% under elevated in FACE experiments.
Increased atmospheric concentrations result in fewer stomata developing on plants which leads to reduced water usage and increased water-use efficiency. Studies using FACE have shown that enrichment leads to decreased concentrations of micronutrients in crop plants. This may have knock-on effects on other parts of ecosystems as herbivores will need to eat more food to gain the same amount of protein.
The concentration of secondary metabolites such as phenylpropanoids and flavonoids can also be altered in plants exposed to high concentrations of .
Plants also emit during respiration, and so the majority of plants and algae, which use C3 photosynthesis, are only net absorbers during the day. Though a growing forest will absorb many tons of each year, a mature forest will produce as much from respiration and decomposition of dead specimens (e.g., fallen branches) as is used in photosynthesis in growing plants. Contrary to the long-standing view that they are carbon neutral, mature forests can continue to accumulate carbon and remain valuable carbon sinks, helping to maintain the carbon balance of Earth's atmosphere. Additionally, and crucially to life on earth, photosynthesis by phytoplankton consumes dissolved in the upper ocean and thereby promotes the absorption of from the atmosphere.
Toxicity
Carbon dioxide content in fresh air (averaged between sea-level and 10 kPa level, i.e., about altitude) varies between 0.036% (360 ppm) and 0.041% (412 ppm), depending on the location.
In humans, exposure to CO2 at concentrations greater than 5% causes the development of hypercapnia and respiratory acidosis. Concentrations of 7% to 10% (70,000 to 100,000 ppm) may cause suffocation, even in the presence of sufficient oxygen, manifesting as dizziness, headache, visual and hearing dysfunction, and unconsciousness within a few minutes to an hour. Concentrations of more than 10% may cause convulsions, coma, and death. CO2 levels of more than 30% act rapidly, leading to loss of consciousness within seconds.
Because it is heavier than air, in locations where the gas seeps from the ground (due to sub-surface volcanic or geothermal activity) in relatively high concentrations, without the dispersing effects of wind, it can collect in sheltered/pocketed locations below average ground level, causing animals located therein to be suffocated. Carrion feeders attracted to the carcasses are then also killed. Children have been killed in the same way near the city of Goma by emissions from the nearby volcano Mount Nyiragongo. The Swahili term for this phenomenon is mazuku.
Adaptation to increased concentrations of CO2 occurs in humans, including modified breathing and kidney bicarbonate production, in order to balance the effects of blood acidification (acidosis). Several studies have suggested that inspired concentrations of 2.0 percent could be tolerated in closed air spaces (e.g., a submarine), since the adaptation is physiological and reversible; deterioration in performance or in normal physical activity does not occur at this level of exposure over five days. Yet other studies show a decrease in cognitive function even at much lower levels. Also, with ongoing respiratory acidosis, adaptation or compensatory mechanisms will be unable to reverse the condition.
Below 1%
There are few studies of the health effects of long-term continuous exposure on humans and animals at levels below 1%. Occupational exposure limits have been set in the United States at 0.5% (5,000 ppm) for an eight-hour period. At this concentration, International Space Station crew experienced headaches, lethargy, mental slowness, emotional irritation, and sleep disruption. Studies in animals at 0.5% have demonstrated kidney calcification and bone loss after eight weeks of exposure. A study of humans exposed in 2.5-hour sessions demonstrated significant negative effects on cognitive abilities at concentrations as low as 0.1% (1,000 ppm), likely due to induced increases in cerebral blood flow. Another study observed a decline in basic activity level and information usage at 1,000 ppm, when compared to 500 ppm.
However, a review of the literature found that a reliable subset of studies on the phenomenon of carbon dioxide-induced cognitive impairment showed only a small effect on high-level decision making (for concentrations below 5,000 ppm). Most of the studies were confounded by inadequate study designs, environmental comfort, uncertainties in exposure doses and differing cognitive assessments used. Similarly, a study on the effects of the concentration of in motorcycle helmets has been criticized for having dubious methodology in not noting the self-reports of motorcycle riders and for taking measurements using mannequins. Further, when normal motorcycle conditions were achieved (such as highway or city speeds) or the visor was raised, the concentration of declined to safe levels (0.2%).
Ventilation
Poor ventilation is one of the main causes of excessive concentrations in closed spaces, leading to poor indoor air quality. The carbon dioxide differential above outdoor concentrations at steady-state conditions (when the occupancy and ventilation system operation have continued long enough that the concentration has stabilized) is sometimes used to estimate ventilation rates per person. Higher concentrations are associated with occupant health, comfort and performance degradation. ASHRAE Standard 62.1–2007 ventilation rates may result in indoor concentrations up to 2,100 ppm above ambient outdoor conditions. Thus if the outdoor concentration is 400 ppm, indoor concentrations may reach 2,500 ppm with ventilation rates that meet this industry consensus standard. Even higher concentrations (in the range of 3,000 to 4,000 ppm) can be found in poorly ventilated spaces.
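As an illustration of how such an estimate works, a steady-state mass balance gives the outdoor-air supply per person as the CO2 generation rate divided by the indoor–outdoor concentration difference. The generation rate used below (roughly 0.005 L/s of CO2 per sedentary adult) is a typical literature value assumed for the example, not a figure taken from this article:

```python
def outdoor_air_per_person(indoor_ppm, outdoor_ppm, co2_gen_L_per_s=0.005):
    """Estimated outdoor-air ventilation rate per person (L/s) at steady state."""
    delta = (indoor_ppm - outdoor_ppm) / 1e6   # ppm -> volume fraction
    return co2_gen_L_per_s / delta

print(f"{outdoor_air_per_person(2500, 400):.1f} L/s per person")  # ~2.4 L/s
print(f"{outdoor_air_per_person(1000, 400):.1f} L/s per person")  # ~8.3 L/s
```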
Miners, who are particularly vulnerable to gas exposure due to insufficient ventilation, referred to mixtures of carbon dioxide and nitrogen as "blackdamp", "choke damp" or "stythe". Before more effective technologies were developed, miners would frequently monitor for dangerous levels of blackdamp and other gases in mine shafts by bringing a caged canary with them as they worked. Canaries are more sensitive to asphyxiant gases than humans; a canary that became unconscious would stop singing and fall off its perch. The Davy lamp could also detect high levels of blackdamp (which sinks, and collects near the floor) by burning less brightly, while methane, another suffocating gas and explosion risk, would make the lamp burn more brightly.
In February 2020, three people died from suffocation at a party in Moscow when dry ice (frozen ) was added to a swimming pool to cool it down. A similar accident occurred in 2018 when a woman died from fumes emanating from the large amount of dry ice she was transporting in her car.
Indoor air
Humans spend more and more time in a confined atmosphere (around 80-90% of the time in a building or vehicle). According to the French Agency for Food, Environmental and Occupational Health & Safety (ANSES) and various actors in France, the rate in the indoor air of buildings (linked to human or animal occupancy and the presence of combustion installations), weighted by air renewal, is "usually between about 350 and 2,500 ppm".
In homes, schools, nurseries and offices, there are no systematic relationships between the levels of and other pollutants, and indoor is statistically not a good predictor of pollutants linked to outdoor road (or air, etc.) traffic. is the parameter that changes the fastest (with hygrometry and oxygen levels when humans or animals are gathered in a closed or poorly ventilated room). In poor countries, many open hearths are sources of and CO emitted directly into the living environment.
Outdoor areas with elevated concentrations
Local concentrations of carbon dioxide can reach high values near strong sources, especially those that are isolated by surrounding terrain. At the Bossoleto hot spring near Rapolano Terme in Tuscany, Italy, situated in a bowl-shaped depression about in diameter, concentrations of rise to above 75% overnight, sufficient to kill insects and small animals. After sunrise the gas is dispersed by convection. High concentrations of produced by disturbance of deep lake water saturated with are thought to have caused 37 fatalities at Lake Monoun, Cameroon in 1984 and 1700 casualties at Lake Nyos, Cameroon in 1986.
Human physiology
Content
The body produces approximately of carbon dioxide per day per person, containing of carbon. In humans, this carbon dioxide is carried through the venous system and is breathed out through the lungs, resulting in lower concentrations in the arteries. The carbon dioxide content of the blood is often given as the partial pressure, which is the pressure which carbon dioxide would have had if it alone occupied the volume. In humans, the blood carbon dioxide contents are shown in the adjacent table.
Transport in the blood
is carried in blood in three different ways. Exact percentages vary between arterial and venous blood.
Majority (about 70% to 80%) is converted to bicarbonate ions by the enzyme carbonic anhydrase in the red blood cells, by the reaction shown after this list
5–10% is dissolved in blood plasma
5–10% is bound to hemoglobin as carbamino compounds
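In overall form, the carbonic anhydrase reaction referred to in the first item is:

CO2 + H2O ⇌ H2CO3 ⇌ HCO3− + H+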
Hemoglobin, the main oxygen-carrying molecule in red blood cells, carries both oxygen and carbon dioxide. However, the bound to hemoglobin does not bind to the same site as oxygen. Instead, it combines with the N-terminal groups on the four globin chains. However, because of allosteric effects on the hemoglobin molecule, the binding of decreases the amount of oxygen that is bound for a given partial pressure of oxygen. This is known as the Haldane Effect, and is important in the transport of carbon dioxide from the tissues to the lungs. Conversely, a rise in the partial pressure of or a lower pH will cause offloading of oxygen from hemoglobin, which is known as the Bohr effect.
Regulation of respiration
Carbon dioxide is one of the mediators of local autoregulation of blood supply. If its concentration is high, the capillaries expand to allow a greater blood flow to that tissue.
Bicarbonate ions are crucial for regulating blood pH. A person's breathing rate influences the level of in their blood. Breathing that is too slow or shallow causes respiratory acidosis, while breathing that is too rapid leads to hyperventilation, which can cause respiratory alkalosis.
Although the body requires oxygen for metabolism, low oxygen levels normally do not stimulate breathing. Rather, breathing is stimulated by higher carbon dioxide levels. As a result, breathing low-pressure air or a gas mixture with no oxygen at all (such as pure nitrogen) can lead to loss of consciousness without ever experiencing air hunger. This is especially perilous for high-altitude fighter pilots. It is also why flight attendants instruct passengers, in case of loss of cabin pressure, to apply the oxygen mask to themselves first before helping others; otherwise, one risks losing consciousness.
The respiratory centers try to maintain an arterial pressure of 40 mmHg. With intentional hyperventilation, the content of arterial blood may be lowered to 10–20 mmHg (the oxygen content of the blood is little affected), and the respiratory drive is diminished. This is why one can hold one's breath longer after hyperventilating than without hyperventilating. This carries the risk that unconsciousness may result before the need to breathe becomes overwhelming, which is why hyperventilation is particularly dangerous before free diving.
Concentrations and role in the environment
Atmosphere
Oceans
Ocean acidification
Carbon dioxide dissolves in the ocean to form carbonic acid (), bicarbonate (), and carbonate (). There is about fifty times as much carbon dioxide dissolved in the oceans as exists in the atmosphere. The oceans act as an enormous carbon sink, and have taken up about a third of emitted by human activity.
Hydrothermal vents
Carbon dioxide is also introduced into the oceans through hydrothermal vents. The Champagne hydrothermal vent, found at the Northwest Eifuku volcano in the Mariana Trench, produces almost pure liquid carbon dioxide, one of only two known sites in the world as of 2004, the other being in the Okinawa Trough. The finding of a submarine lake of liquid carbon dioxide in the Okinawa Trough was reported in 2006.
Sources
The burning of fossil fuels for energy produces 36.8 billion tonnes of per year as of 2023. Nearly all of this goes into the atmosphere, where approximately half is subsequently absorbed into natural carbon sinks. Less than 1% of produced annually is put to commercial use.
Biological processes
Carbon dioxide is a by-product of the fermentation of sugar in the brewing of beer, whisky and other alcoholic beverages and in the production of bioethanol. Yeast metabolizes sugar to produce carbon dioxide and ethanol, also known as alcohol, as follows:
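C6H12O6 → 2 C2H5OH + 2 CO2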
All aerobic organisms produce CO2 when they oxidize carbohydrates, fatty acids, and proteins. The many reactions involved are exceedingly complex and not easily described. Refer to cellular respiration, anaerobic respiration and photosynthesis. The equation for the respiration of glucose and other monosaccharides is:
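C6H12O6 + 6 O2 → 6 CO2 + 6 H2O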
Anaerobic organisms decompose organic material, producing methane and carbon dioxide together with traces of other compounds. Regardless of the type of organic material, the production of gases follows a well-defined kinetic pattern. Carbon dioxide comprises about 40–45% of the gas that emanates from decomposition in landfills (termed "landfill gas"). Most of the remaining 50–55% is methane.
Combustion
The combustion of all carbon-based fuels, such as methane (natural gas), petroleum distillates (gasoline, diesel, kerosene, propane), coal, wood and generic organic matter produces carbon dioxide and, except in the case of pure carbon, water. As an example, the chemical reaction between methane and oxygen:
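CH4 + 2 O2 → CO2 + 2 H2O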
Iron is reduced from its oxides with coke in a blast furnace, producing pig iron and carbon dioxide:
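2 Fe2O3 + 3 C → 4 Fe + 3 CO2

(a simplified overall equation; in practice the reduction proceeds largely through carbon monoxide as an intermediate)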
By-product from hydrogen production
Carbon dioxide is a byproduct of the industrial production of hydrogen by steam reforming and the water gas shift reaction in ammonia production. These processes begin with the reaction of water and natural gas (mainly methane).
Thermal decomposition of limestone
It is produced by thermal decomposition of limestone, by heating (calcining) at about , in the manufacture of quicklime (calcium oxide, CaO), a compound that has many industrial uses:
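CaCO3 → CaO + CO2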
Acids liberate from most metal carbonates. Consequently, it may be obtained directly from natural carbon dioxide springs, where it is produced by the action of acidified water on limestone or dolomite. The reaction between hydrochloric acid and calcium carbonate (limestone or chalk) is shown below:
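CaCO3 + 2 HCl → CaCl2 + H2CO3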
The carbonic acid () then decomposes to water and :
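H2CO3 → H2O + CO2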
Such reactions are accompanied by foaming or bubbling, or both, as the gas is released. They have widespread uses in industry because they can be used to neutralize waste acid streams.
Commercial uses
Around 230 Mt of are used each year, mostly in the fertiliser industry for urea production (130 million tonnes) and in the oil and gas industry for enhanced oil recovery (70 to 80 million tonnes). Other commercial applications include food and beverage production, metal fabrication, cooling, fire suppression and stimulating plant growth in greenhouses.
Technology exists to capture CO2 from industrial flue gas or from the air. Research is ongoing on ways to use captured CO2 in products and some of these processes have been deployed commercially. However, the potential to use products is very small compared to the total volume of CO2 that could foreseeably be captured. The vast majority of captured CO2 is considered a waste product and sequestered in underground geologic formations.
Precursor to chemicals
In the chemical industry, carbon dioxide is mainly consumed as an ingredient in the production of urea, with a smaller fraction being used to produce methanol and a range of other products. Some carboxylic acid derivatives such as sodium salicylate are prepared using by the Kolbe–Schmitt reaction.
Captured CO2 could be used to produce methanol or electrofuels. To be carbon-neutral, the CO2 would need to come from bioenergy production or direct air capture.
Fossil fuel recovery
Carbon dioxide is used in enhanced oil recovery, where it is injected into or adjacent to producing oil wells, usually under supercritical conditions, when it becomes miscible with the oil. This approach can increase original oil recovery by reducing residual oil saturation by 7–23% additional to primary extraction. It acts as a pressurizing agent and, when dissolved into the underground crude oil, significantly reduces the oil's viscosity and changes its surface chemistry, enabling the oil to flow more rapidly through the reservoir to the removal well.
Most CO2 injected in CO2-EOR projects comes from naturally occurring underground CO2 deposits. Some CO2 used in EOR is captured from industrial facilities such as natural gas processing plants, using carbon capture technology and transported to the oilfield in pipelines.
Agriculture
Plants require carbon dioxide to conduct photosynthesis. The atmospheres of greenhouses may (if of large size, must) be enriched with additional to sustain and increase the rate of plant growth. At very high concentrations (100 times atmospheric concentration, or greater), carbon dioxide can be toxic to animal life, so raising the concentration to 10,000 ppm (1%) or higher for several hours will eliminate pests such as whiteflies and spider mites in a greenhouse. Some plants respond more favorably to rising carbon dioxide concentrations than others, which can lead to vegetation regime shifts like woody plant encroachment.
Foods
Carbon dioxide is a food additive used as a propellant and acidity regulator in the food industry. It is approved for usage in the EU (listed as E number E290), US, Australia and New Zealand (listed by its INS number 290).
A candy called Pop Rocks is pressurized with carbon dioxide gas at about . When placed in the mouth, it dissolves (just like other hard candy) and releases the gas bubbles with an audible pop.
Leavening agents cause dough to rise by producing carbon dioxide. Baker's yeast produces carbon dioxide by fermentation of sugars within the dough, while chemical leaveners such as baking powder and baking soda release carbon dioxide when heated or if exposed to acids.
Beverages
Carbon dioxide is used to produce carbonated soft drinks and soda water. Traditionally, the carbonation of beer and sparkling wine came about through natural fermentation, but many manufacturers carbonate these drinks with carbon dioxide recovered from the fermentation process. In the case of bottled and kegged beer, the most common method used is carbonation with recycled carbon dioxide. With the exception of British real ale, draught beer is usually transferred from kegs in a cold room or cellar to dispensing taps on the bar using pressurized carbon dioxide, sometimes mixed with nitrogen.
The taste of soda water (and related taste sensations in other carbonated beverages) is an effect of the dissolved carbon dioxide rather than the bursting bubbles of the gas. Carbonic anhydrase 4 converts carbon dioxide to carbonic acid leading to a sour taste, and also the dissolved carbon dioxide induces a somatosensory response.
Winemaking
Carbon dioxide in the form of dry ice is often used during the cold soak phase in winemaking to cool clusters of grapes quickly after picking to help prevent spontaneous fermentation by wild yeast. The main advantage of using dry ice over water ice is that it cools the grapes without adding any additional water that might decrease the sugar concentration in the grape must, and thus the alcohol concentration in the finished wine. Carbon dioxide is also used to create a hypoxic environment for carbonic maceration, the process used to produce Beaujolais wine.
Carbon dioxide is sometimes used to top up wine bottles or other storage vessels such as barrels to prevent oxidation, though it has the problem that it can dissolve into the wine, making a previously still wine slightly fizzy. For this reason, other gases such as nitrogen or argon are preferred for this process by professional wine makers.
Stunning animals
Carbon dioxide is often used to "stun" animals before slaughter. "Stunning" may be a misnomer, as the animals are not knocked out immediately and may suffer distress.
Inert gas
Carbon dioxide is one of the most commonly used compressed gases for pneumatic (pressurized gas) systems in portable pressure tools. Carbon dioxide is also used as an atmosphere for welding, although in the welding arc it reacts to oxidize most metals. Use in the automotive industry is common despite significant evidence that welds made in carbon dioxide are more brittle than those made in more inert atmospheres. When used for MIG welding, its use is sometimes referred to as MAG welding, for Metal Active Gas, as CO2 can react at these high temperatures. It tends to produce a hotter puddle than truly inert atmospheres, improving the flow characteristics, although this may be due to atmospheric reactions occurring at the puddle site. This is usually the opposite of the desired effect when welding, as it tends to embrittle the site, but may not be a problem for general mild steel welding, where ultimate ductility is not a major concern.
Carbon dioxide is used in many consumer products that require pressurized gas because it is inexpensive and nonflammable, and because it undergoes a phase transition from gas to liquid at room temperature at an attainable pressure of approximately , allowing far more carbon dioxide to fit in a given container than otherwise would. Life jackets often contain canisters of pressurized carbon dioxide for quick inflation. Aluminium capsules of are also sold as supplies of compressed gas for air guns, paintball markers/guns, inflating bicycle tires, and for making carbonated water. High concentrations of carbon dioxide can also be used to kill pests. Liquid carbon dioxide is used in supercritical drying of some food products and technological materials, in the preparation of specimens for scanning electron microscopy and in the decaffeination of coffee beans.
Fire extinguisher
Carbon dioxide can be used to extinguish flames by flooding the environment around the flame with the gas. It does not itself react to extinguish the flame, but starves the flame of oxygen by displacing it. Some fire extinguishers, especially those designed for electrical fires, contain liquid carbon dioxide under pressure. Carbon dioxide extinguishers work well on small flammable liquid and electrical fires, but not on ordinary combustible fires, because they do not cool the burning substances significantly, and when the carbon dioxide disperses, the materials can catch fire again upon exposure to atmospheric oxygen. They are mainly used in server rooms.
Carbon dioxide has also been widely used as an extinguishing agent in fixed fire-protection systems for local application of specific hazards and total flooding of a protected space. International Maritime Organization standards recognize carbon dioxide systems for fire protection of ship holds and engine rooms. Carbon dioxide-based fire-protection systems have been linked to several deaths, because it can cause suffocation in sufficiently high concentrations. A review of systems identified 51 incidents between 1975 and the date of the report (2000), causing 72 deaths and 145 injuries.
Supercritical as solvent
Liquid carbon dioxide is a good solvent for many lipophilic organic compounds and is used to decaffeinate coffee. Carbon dioxide has attracted attention in the pharmaceutical and other chemical processing industries as a less toxic alternative to more traditional solvents such as organochlorides. It is also used by some dry cleaners for this reason. It is used in the preparation of some aerogels because of the properties of supercritical carbon dioxide.
Refrigerant
Liquid and solid carbon dioxide are important refrigerants, especially in the food industry, where they are employed during the transportation and storage of ice cream and other frozen foods. Solid carbon dioxide is called "dry ice" and is used for small shipments where refrigeration equipment is not practical. Solid carbon dioxide is always below at regular atmospheric pressure, regardless of the air temperature.
Liquid carbon dioxide (industry nomenclature R744 or R-744) was used as a refrigerant prior to the use of dichlorodifluoromethane (R12, a chlorofluorocarbon (CFC) compound). might enjoy a renaissance because one of the main substitutes to CFCs, 1,1,1,2-tetrafluoroethane (R134a, a hydrofluorocarbon (HFC) compound) contributes to climate change more than does. physical properties are highly favorable for cooling, refrigeration, and heating purposes, having a high volumetric cooling capacity. Due to the need to operate at pressures of up to , systems require highly mechanically resistant reservoirs and components that have already been developed for mass production in many sectors. In automobile air conditioning, in more than 90% of all driving conditions for latitudes higher than 50°, (R744) operates more efficiently than systems using HFCs (e.g., R134a). Its environmental advantages (GWP of 1, non-ozone depleting, non-toxic, non-flammable) could make it the future working fluid to replace current HFCs in cars, supermarkets, and heat pump water heaters, among others. Coca-Cola has fielded -based beverage coolers and the U.S. Army is interested in refrigeration and heating technology.
Minor uses
Carbon dioxide is the lasing medium in a carbon-dioxide laser, which is one of the earliest types of lasers.
Carbon dioxide can be used as a means of controlling the pH of swimming pools, by continuously adding gas to the water, thus keeping the pH from rising. Among the advantages of this is the avoidance of handling (more hazardous) acids. Similarly, it is also used in maintaining reef aquaria, where it is commonly used in calcium reactors to temporarily lower the pH of water being passed over calcium carbonate in order to allow the calcium carbonate to dissolve into the water more freely, where it is used by some corals to build their skeletons.
Carbon dioxide is used as the primary coolant in the British advanced gas-cooled reactor for nuclear power generation.
Carbon dioxide induction is commonly used for the euthanasia of laboratory research animals. Methods to administer include placing animals directly into a closed, prefilled chamber containing , or exposure to a gradually increasing concentration of . The American Veterinary Medical Association's 2020 guidelines for carbon dioxide induction state that a displacement rate of 30–70% of the chamber or cage volume per minute is optimal for the humane euthanasia of small rodents. Percentages of vary for different species, based on identified optimal percentages to minimize distress.
Carbon dioxide is also used in several related cleaning and surface-preparation techniques.
History of discovery
Carbon dioxide was the first gas to be described as a discrete substance. In about 1640, the Flemish chemist Jan Baptist van Helmont observed that when he burned charcoal in a closed vessel, the mass of the resulting ash was much less than that of the original charcoal. His interpretation was that the rest of the charcoal had been transmuted into an invisible substance he termed a "gas" (from Greek "chaos") or "wild spirit" (spiritus sylvestris).
The properties of carbon dioxide were further studied in the 1750s by the Scottish physician Joseph Black. He found that limestone (calcium carbonate) could be heated or treated with acids to yield a gas he called "fixed air". He observed that the fixed air was denser than air and supported neither flame nor animal life. Black also found that when bubbled through limewater (a saturated aqueous solution of calcium hydroxide), it would precipitate calcium carbonate. He used this phenomenon to illustrate that carbon dioxide is produced by animal respiration and microbial fermentation. In 1772, English chemist Joseph Priestley published a paper entitled Impregnating Water with Fixed Air in which he described a process of dripping sulfuric acid (or oil of vitriol as Priestley knew it) on chalk in order to produce carbon dioxide, and forcing the gas to dissolve by agitating a bowl of water in contact with the gas.
Carbon dioxide was first liquefied (at elevated pressures) in 1823 by Humphry Davy and Michael Faraday. The earliest description of solid carbon dioxide (dry ice) was given by the French inventor Adrien-Jean-Pierre Thilorier, who in 1835 opened a pressurized container of liquid carbon dioxide, only to find that the cooling produced by the rapid evaporation of the liquid yielded a "snow" of solid .
Carbon dioxide in combination with nitrogen was known from earlier times as Blackdamp, stythe or choke damp. Along with the other types of damp it was encountered in mining operations and well sinking. Slow oxidation of coal and biological processes replaced the oxygen to create a suffocating mixture of nitrogen and carbon dioxide.
| Physical sciences | Chemistry | null |
5910 | https://en.wikipedia.org/wiki/Cyanide | Cyanide | In chemistry, cyanide () is a chemical compound that contains a functional group. This group, known as the cyano group, consists of a carbon atom triple-bonded to a nitrogen atom.
In inorganic cyanides, the cyanide group is present as the cyanide anion . This anion is extremely poisonous. Soluble salts such as sodium cyanide (NaCN) and potassium cyanide (KCN) are highly toxic. Hydrocyanic acid, also known as hydrogen cyanide, or HCN, is a highly volatile liquid that is produced on a large scale industrially. It is obtained by acidification of cyanide salts.
Organic cyanides are usually called nitriles. In nitriles, the group is linked by a single covalent bond to carbon. For example, in acetonitrile (), the cyanide group is bonded to methyl (). Although nitriles generally do not release cyanide ions, the cyanohydrins do and are thus toxic.
Bonding
The cyanide ion is isoelectronic with carbon monoxide and with molecular nitrogen N≡N. A triple bond exists between C and N. The negative charge is concentrated on carbon C.
Occurrence
In nature
Cyanides are produced by certain bacteria, fungi, and algae. It is an antifeedant in a number of plants. Cyanides are found in substantial amounts in certain seeds and fruit stones, e.g., those of bitter almonds, apricots, apples, and peaches. Chemical compounds that can release cyanide are known as cyanogenic compounds. In plants, cyanides are usually bound to sugar molecules in the form of cyanogenic glycosides and defend the plant against herbivores. Cassava roots (also called manioc), an important potato-like food grown in tropical countries (and the base from which tapioca is made), also contain cyanogenic glycosides.
The Madagascar bamboo Cathariostachys madagascariensis produces cyanide as a deterrent to grazing. In response, the golden bamboo lemur, which eats the bamboo, has developed a high tolerance to cyanide.
The hydrogenase enzymes contain cyanide ligands attached to iron in their active sites. The biosynthesis of cyanide in the NiFe hydrogenases proceeds from carbamoyl phosphate, which converts to cysteinyl thiocyanate, the donor.
Interstellar medium
The cyanide radical •CN has been identified in interstellar space. Cyanogen, , is used to measure the temperature of interstellar gas clouds.
Pyrolysis and combustion product
Hydrogen cyanide is produced by the combustion or pyrolysis of certain materials under oxygen-deficient conditions. For example, it can be detected in the exhaust of internal combustion engines and tobacco smoke. Certain plastics, especially those derived from acrylonitrile, release hydrogen cyanide when heated or burnt.
Organic derivatives
In IUPAC nomenclature, organic compounds that have a functional group are called nitriles. An example of a nitrile is acetonitrile, . Nitriles usually do not release cyanide ions. A functional group with a hydroxyl and cyanide bonded to the same carbon atom is called cyanohydrin (). Unlike nitriles, cyanohydrins do release poisonous hydrogen cyanide.
Reactions
Protonation
Cyanide is basic. The pKa of hydrogen cyanide is 9.21. Thus, addition of acids stronger than hydrogen cyanide to solutions of cyanide salts releases hydrogen cyanide.
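One practical consequence is that the fraction of cyanide present as volatile HCN depends strongly on pH. A minimal Henderson–Hasselbalch sketch using the pKa quoted above:

```python
def fraction_hcn(pH, pKa=9.21):
    """Fraction of total cyanide present as HCN (rather than CN-) at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for pH in (7.0, 9.21, 11.0):
    print(f"pH {pH}: {fraction_hcn(pH):.1%} of the cyanide is HCN")
# pH 7.0: ~99.4% HCN, pH 9.21: 50.0%, pH 11.0: ~1.6%
```

This is why strongly alkaline cyanide solutions are comparatively safe to handle: at high pH, nearly all of the cyanide remains in solution as the CN− ion rather than escaping as hydrogen cyanide gas.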
Hydrolysis
Cyanide is unstable in water, but the reaction is slow until about 170 °C. It undergoes hydrolysis to give ammonia and formate, which are far less toxic than cyanide:
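CN− + 2 H2O → HCOO− + NH3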
Cyanide hydrolase is an enzyme that catalyzes this reaction.
Alkylation
Because of the cyanide anion's high nucleophilicity, cyano groups are readily introduced into organic molecules by displacement of a halide group (e.g., the chloride on methyl chloride). In general, organic cyanides are called nitriles. In organic synthesis, cyanide is a C-1 synthon; i.e., it can be used to lengthen a carbon chain by one, while retaining the ability to be functionalized.
Redox
The cyanide ion is a reductant and is oxidized by strong oxidizing agents such as molecular chlorine (), hypochlorite (), and hydrogen peroxide (). These oxidizers are used to destroy cyanides in effluents from gold mining.
Metal complexation
The cyanide anion reacts with transition metals to form M-CN bonds. This reaction is the basis of cyanide's toxicity. The high affinities of metals for this anion can be attributed to its negative charge, compactness, and ability to engage in π-bonding.
Among the most important cyanide coordination compounds are the potassium ferrocyanide and the pigment Prussian blue, which are both essentially nontoxic due to the tight binding of the cyanides to a central iron atom.
Prussian blue was first made accidentally around 1706, by heating substances containing iron, carbon, and nitrogen; other cyanides were made subsequently (and named after it). Among its many uses, Prussian blue gives the blue color to blueprints, bluing, and cyanotypes.
Manufacture
The principal process used to manufacture cyanides is the Andrussow process in which gaseous hydrogen cyanide is produced from methane and ammonia in the presence of oxygen and a platinum catalyst.
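The overall reaction can be written as:

2 CH4 + 2 NH3 + 3 O2 → 2 HCN + 6 H2O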
Sodium cyanide, the precursor to most cyanides, is produced by treating hydrogen cyanide with sodium hydroxide:
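HCN + NaOH → NaCN + H2O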
Toxicity
Among the most toxic cyanides are hydrogen cyanide (), sodium cyanide (), potassium cyanide (), and calcium cyanide (). The cyanide anion is an inhibitor of the enzyme cytochrome c oxidase (also known as aa3), the fourth complex of the electron transport chain found in the inner membrane of the mitochondria of eukaryotic cells. It attaches to the iron within this protein. The binding of cyanide to this enzyme prevents transport of electrons from cytochrome c to oxygen. As a result, the electron transport chain is disrupted, meaning that the cell can no longer aerobically produce ATP for energy. Tissues that depend highly on aerobic respiration, such as the central nervous system and the heart, are particularly affected. This is an example of histotoxic hypoxia.
The most hazardous compound is hydrogen cyanide, which is a gas and kills by inhalation. For this reason, working with hydrogen cyanide requires wearing an air respirator supplied by an external oxygen source. Hydrogen cyanide is produced by adding acid to a solution containing a cyanide salt. Alkaline solutions of cyanide are safer to use because they do not evolve hydrogen cyanide gas. Hydrogen cyanide may be produced in the combustion of polyurethanes; for this reason, polyurethanes are not recommended for use in domestic and aircraft furniture. Oral ingestion of a small quantity of solid cyanide or a cyanide solution of as little as 200 mg, or exposure to airborne cyanide of 270 ppm, is sufficient to cause death within minutes.
Organic nitriles do not readily release cyanide ions, and so have low toxicities. By contrast, compounds such as trimethylsilyl cyanide readily release HCN or the cyanide ion upon contact with water.
Antidote
Hydroxocobalamin reacts with cyanide to form cyanocobalamin, which can be safely eliminated by the kidneys. This method has the advantage of avoiding the formation of methemoglobin (see below). This antidote kit is sold under the brand name Cyanokit and was approved by the U.S. FDA in 2006.
An older cyanide antidote kit included administration of three substances: amyl nitrite pearls (administered by inhalation), sodium nitrite, and sodium thiosulfate. The goal of the antidote was to generate a large pool of ferric iron (Fe3+) to compete for cyanide with cytochrome a3 (so that cyanide will bind to the antidote rather than the enzyme). The nitrites oxidize hemoglobin to methemoglobin, which competes with cytochrome oxidase for the cyanide ion. Cyanmethemoglobin is formed and the cytochrome oxidase enzyme is restored. The major mechanism to remove the cyanide from the body is by enzymatic conversion to thiocyanate by the mitochondrial enzyme rhodanese. Thiocyanate is a relatively non-toxic molecule and is excreted by the kidneys. To accelerate this detoxification, sodium thiosulfate is administered to provide a sulfur donor for rhodanese, needed in order to produce thiocyanate.
Sensitivity
Minimum risk levels (MRLs) may not protect for delayed health effects or health effects acquired following repeated sublethal exposure, such as hypersensitivity, asthma, or bronchitis. MRLs may be revised after sufficient data accumulates.
Applications
Mining
Cyanide is mainly produced for the mining of silver and gold: it helps dissolve these metals, allowing separation from the other solids. In the cyanide process, finely ground high-grade ore is mixed with the cyanide (at a ratio of about 1:500 parts NaCN to ore); low-grade ores are stacked into heaps and sprayed with a cyanide solution (at a ratio of about 1:1000 parts NaCN to ore). The precious metals are complexed by the cyanide anions to form soluble derivatives, e.g., [Ag(CN)2]− (dicyanoargentate(I)) and [Au(CN)2]− (dicyanoaurate(I)). Silver is less "noble" than gold and often occurs as the sulfide, in which case redox is not invoked (no O2 is required). Instead, a displacement reaction occurs:
Ag2S + 4 NaCN + H2O -> 2 Na[Ag(CN)2] + NaSH + NaOH
4 Au + 8 NaCN + O2 + 2 H2O -> 4 Na[Au(CN)2] + 4 NaOH
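The second equation above (the Elsner equation) fixes the stoichiometric cyanide demand at 2 mol NaCN per mol Au. Below is a sketch of that calculation with rounded molar masses; real plants dose against ore mass instead (the 1:500 and 1:1000 ratios above), since cyanide is also consumed by side reactions.

```python
# Stoichiometric NaCN demand from
# 4 Au + 8 NaCN + O2 + 2 H2O -> 4 Na[Au(CN)2] + 4 NaOH,
# i.e. 2 mol NaCN per mol Au. Molar masses are rounded.
M_AU = 196.97    # g/mol, gold
M_NACN = 49.01   # g/mol, sodium cyanide

def nacn_for_gold(grams_au: float) -> float:
    """Minimum NaCN mass (g) needed to complex the given mass of gold."""
    return 2 * (grams_au / M_AU) * M_NACN

print(f"{nacn_for_gold(1.0):.3f} g NaCN per g Au")  # ~0.498 g
```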
The "pregnant liquor" containing these ions is separated from the solids, which are discarded to a tailing pond or spent heap, the recoverable gold having been removed. The metal is recovered from the "pregnant solution" by reduction with zinc dust or by adsorption onto activated carbon. This process can result in environmental and health problems. A number of environmental disasters have followed the overflow of tailing ponds at gold mines. Cyanide contamination of waterways has resulted in numerous cases of human and aquatic species mortality.
Aqueous cyanide is hydrolyzed rapidly, especially in sunlight. It can mobilize some heavy metals such as mercury if present. Gold can also be associated with arsenopyrite (FeAsS), which is similar to iron pyrite (fool's gold), wherein half of the sulfur atoms are replaced by arsenic. Gold-containing arsenopyrite ores are similarly reactive toward inorganic cyanide.
Industrial organic chemistry
The second major application of alkali metal cyanides (after mining) is in the production of CN-containing compounds, usually nitriles. Acyl cyanides are produced from acyl chlorides and cyanide. Cyanogen, cyanogen chloride, and the trimer cyanuric chloride are derived from alkali metal cyanides.
Medical uses
The cyanide compound sodium nitroprusside is used in clinical chemistry mainly to measure urine ketone bodies as a follow-up test in diabetic patients. On occasion, it is used in emergency medical situations to produce a rapid decrease in blood pressure in humans; it is also used as a vasodilator in vascular research. The cobalt in artificial vitamin B12 contains a cyanide ligand as an artifact of the purification process; this must be removed by the body before the vitamin molecule can be activated for biochemical use. During World War I, a copper cyanide compound was briefly used by Japanese physicians for the treatment of tuberculosis and leprosy.
Illegal fishing and poaching
Cyanides are illegally used to capture live fish near coral reefs for the aquarium and seafood markets. The practice is controversial, dangerous, and damaging but is driven by the lucrative exotic fish market.
Poachers in Africa have been known to use cyanide to poison waterholes, to kill elephants for their ivory.
Pest control
M44 cyanide devices are used in the United States to kill coyotes and other canids. Cyanide is also used for pest control in New Zealand, particularly for possums, an introduced marsupial that threatens the conservation of native species and spreads tuberculosis amongst cattle. Possums can become bait shy but the use of pellets containing the cyanide reduces bait shyness. Cyanide has been known to kill native birds, including the endangered kiwi. Cyanide is also effective for controlling the dama wallaby, another introduced marsupial pest in New Zealand. A licence is required to store, handle and use cyanide in New Zealand.
Cyanides are used as insecticides for fumigating ships. Cyanide salts are used for killing ants, and have in some places been used as rat poison (the less toxic poison arsenic is more common).
Niche uses
Potassium ferrocyanide is used to achieve a blue color on cast bronze sculptures during the final finishing stage of the sculpture. On its own, it will produce a very dark shade of blue and is often mixed with other chemicals to achieve the desired tint and hue. It is applied using a torch and paint brush while wearing the standard safety equipment used for any patina application: rubber gloves, safety glasses, and a respirator. The actual amount of cyanide in the mixture varies according to the recipes used by each foundry.
Cyanide is also used in jewelry-making and certain kinds of photography such as sepia toning.
Although usually thought to be toxic, cyanide and cyanohydrins increase germination in various plant species.
Human poisoning
Deliberate cyanide poisoning of humans has occurred many times throughout history.
Common salts such as sodium cyanide are involatile but water-soluble, so they are poisonous by ingestion. Hydrogen cyanide is a gas, making it more indiscriminately dangerous; however, it is lighter than air and rapidly disperses up into the atmosphere, which makes it ineffective as a chemical weapon.
Food additive
Because of the high stability of their complexation with iron, ferrocyanides (sodium ferrocyanide E535, potassium ferrocyanide E536, and calcium ferrocyanide E538) do not decompose to lethal levels in the human body and are used in the food industry as, e.g., an anticaking agent in table salt.
Chemical tests for cyanide
Cyanide is quantified by potentiometric titration, a method widely used in gold mining. It can also be determined by titration with silver ion.
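In the silver-ion titration (the classical Liebig method), each Ag+ consumes two CN− to form the [Ag(CN)2]− complex mentioned above, so the endpoint volume fixes the cyanide concentration. A sketch of the arithmetic with hypothetical numbers, not a lab procedure:

```python
# Cyanide by titration with silver ion: Ag+ + 2 CN- -> [Ag(CN)2]-.
# All volumes and concentrations below are hypothetical.
def cyanide_molarity(v_sample_ml: float, v_agno3_ml: float, c_agno3: float) -> float:
    """CN- concentration (mol/L) from the AgNO3 endpoint volume."""
    mol_ag = c_agno3 * v_agno3_ml / 1000.0
    return 2 * mol_ag / (v_sample_ml / 1000.0)  # two CN- per Ag+

# A 25 mL sample reaching the endpoint after 6.0 mL of 0.010 M AgNO3:
print(f"{cyanide_molarity(25.0, 6.0, 0.010):.4f} M CN-")  # 0.0048 M
```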
Some analyses begin with an air-purge of an acidified boiling solution, sweeping the vapors into a basic absorber solution. The cyanide salt absorbed in the basic solution is then analyzed.
Qualitative tests
Because of the notorious toxicity of cyanide, many methods have been investigated. Benzidine gives a blue coloration in the presence of ferricyanide. Iron(II) sulfate added to a solution of cyanide, such as the filtrate from the sodium fusion test, gives Prussian blue. A solution of para-benzoquinone in DMSO reacts with inorganic cyanide to form a cyanophenol, which is fluorescent. Illumination with a UV light gives a green/blue glow if the test is positive.
| Physical sciences | Inorganic compounds | null |
5912 | https://en.wikipedia.org/wiki/Carbonate | Carbonate | A carbonate is a salt of carbonic acid (H2CO3), characterized by the presence of the carbonate ion, a polyatomic ion with the formula CO3^2−. The word "carbonate" may also refer to a carbonate ester, an organic compound containing the carbonate group C(=O)(O−)2.
The term is also used as a verb, to describe carbonation: the process of raising the concentrations of carbonate and bicarbonate ions in water to produce carbonated water and other carbonated beverages, either by the addition of carbon dioxide gas under pressure or by dissolving carbonate or bicarbonate salts into the water.
In geology and mineralogy, the term "carbonate" can refer both to carbonate minerals and to carbonate rock (which is made chiefly of carbonate minerals), and both are dominated by the carbonate ion, CO3^2−. Carbonate minerals are extremely varied and ubiquitous in chemically precipitated sedimentary rock. The most common are calcite, or calcium carbonate, CaCO3, the chief constituent of limestone (as well as the main component of mollusc shells and coral skeletons); dolomite, a calcium-magnesium carbonate, CaMg(CO3)2; and siderite, or iron(II) carbonate, FeCO3, an important iron ore. Sodium carbonate ("soda" or "natron"), Na2CO3, and potassium carbonate ("potash"), K2CO3, have been used since antiquity for cleaning and preservation, as well as for the manufacture of glass. Carbonates are widely used in industry, such as in iron smelting, as a raw material for Portland cement and lime manufacture, in the composition of ceramic glazes, and more. New applications of alkali metal carbonates include thermal energy storage, catalysis, and electrolytes, both in fuel cell technology and in the electrosynthesis of hydrogen peroxide (H2O2) in aqueous media.
Structure and bonding
The carbonate ion is the simplest oxocarbon anion. It consists of one carbon atom surrounded by three oxygen atoms, in a trigonal planar arrangement, with D3h molecular symmetry. It has a molecular mass of 60.01 g/mol and carries a total formal charge of −2. It is the conjugate base of the hydrogencarbonate (bicarbonate) ion, HCO3−, which is itself the conjugate base of H2CO3, carbonic acid.
The Lewis structure of the carbonate ion has two (long) single bonds to negative oxygen atoms, and one short double bond to a neutral oxygen atom.
This structure is incompatible with the observed symmetry of the ion, which implies that the three bonds are the same length and that the three oxygen atoms are equivalent. As in the case of the isoelectronic nitrate ion, the symmetry can be achieved by a resonance among three structures:
This resonance can be summarized by a model with fractional bonds and delocalized charges:
Chemical properties
Metal carbonates generally decompose on heating, liberating carbon dioxide and leaving behind an oxide of the metal. This process is called calcination, after calx, the Latin name of quicklime or calcium oxide, CaO, which is obtained by roasting limestone in a lime kiln:
CaCO3 → CaO + CO2
As illustrated by its affinity for Ca2+, carbonate is a ligand for many metal cations. Transition metal carbonate and bicarbonate complexes feature metal ions covalently bonded to carbonate in a variety of bonding modes.
Lithium, sodium, potassium, rubidium, caesium, and ammonium carbonates are water-soluble salts, but carbonates of 2+ and 3+ ions are often poorly soluble in water. Of the insoluble metal carbonates, calcium carbonate (CaCO3) is important because, in the form of scale, it accumulates in and impedes flow through pipes. Hard water is rich in this material, giving rise to the need for infrastructural water softening.
Acidification of carbonates generally liberates carbon dioxide:
CO3^2− + 2 H+ → CO2 + H2O
Thus, scale can be removed with acid.
In solution the equilibrium between carbonate, bicarbonate, carbon dioxide and carbonic acid is sensitive to pH, temperature, and pressure. Although di- and trivalent carbonates have low solubility, bicarbonate salts are far more soluble. This difference is related to the disparate lattice energies of solids composed of mono- vs dianions, as well as mono- vs dications.
In aqueous solution, carbonate, bicarbonate, carbon dioxide, and carbonic acid participate in a dynamic equilibrium. In strongly basic conditions, the carbonate ion predominates, while in weakly basic conditions, the bicarbonate ion is prevalent. In more acid conditions, aqueous carbon dioxide, CO2(aq), is the main form, which, with water, H2O, is in equilibrium with carbonic acid; the equilibrium lies strongly towards carbon dioxide. Thus sodium carbonate is basic, sodium bicarbonate is weakly basic, while carbon dioxide itself is a weak acid.
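That pH dependence can be sketched quantitatively from the two acid dissociation constants of carbonic acid. The pKa values below (about 6.35 and 10.33 at 25 °C) are assumed textbook values, not taken from this article:

```python
# Fractions of dissolved inorganic carbon as CO2(aq), HCO3- and CO3^2-
# versus pH, using assumed textbook constants pKa1 = 6.35, pKa2 = 10.33.
K1, K2 = 10.0 ** -6.35, 10.0 ** -10.33

def speciation(ph: float):
    """Return the fractions (CO2, HCO3-, CO3^2-) at the given pH."""
    h = 10.0 ** -ph
    denom = h * h + K1 * h + K1 * K2
    return h * h / denom, K1 * h / denom, K1 * K2 / denom

for ph in (4.0, 7.0, 9.0, 12.0):
    co2, hco3, co3 = speciation(ph)
    print(f"pH {ph:4.1f}: CO2 {co2:.2f}  HCO3- {hco3:.2f}  CO3^2- {co3:.2f}")
# Acidic: CO2 dominates; weakly basic: HCO3-; strongly basic: CO3^2-.
```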
Organic carbonates
In organic chemistry a carbonate can also refer to a functional group within a larger molecule that contains a carbon atom bound to three oxygen atoms, one of which is double bonded. These compounds are also known as organocarbonates or carbonate esters, and have the general formula R−O−C(=O)−O−R′. Important organocarbonates include dimethyl carbonate, the cyclic compounds ethylene carbonate and propylene carbonate, and the phosgene replacement, triphosgene.
Buffer
Three reversible reactions control the pH balance of blood and act as a buffer to stabilise it in the range 7.37–7.43:
CO2 + H2O ⇌ H2CO3
H2CO3 ⇌ H+ + HCO3−
HCO3− ⇌ H+ + CO3^2−
Exhaled CO2 depletes dissolved CO2, which in turn consumes H2CO3, causing the equilibrium of the first reaction to try to restore the level of carbonic acid by reacting bicarbonate with a hydrogen ion, an example of Le Châtelier's principle. The result is to make the blood more alkaline (raise pH). By the same principle, when the pH is too high, the kidneys excrete bicarbonate (HCO3−) into urine as urea via the urea cycle (or Krebs–Henseleit ornithine cycle). By removing the bicarbonate, more H+ is generated from carbonic acid (H2CO3), which comes from CO2 produced by cellular respiration.
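The clinical form of this buffer relationship is the Henderson–Hasselbalch equation for plasma. The constants below (pKa′ = 6.1 and a CO2 solubility of 0.03 mmol/L per mmHg) and the sample values are assumed textbook figures, not taken from this article:

```python
# Blood pH from the bicarbonate buffer:
# pH = 6.1 + log10([HCO3-] / (0.03 * pCO2)), with assumed textbook constants.
import math

def blood_ph(hco3_mmol_per_l: float, pco2_mmhg: float) -> float:
    return 6.1 + math.log10(hco3_mmol_per_l / (0.03 * pco2_mmhg))

# Typical arterial values: 24 mmol/L HCO3-, 40 mmHg pCO2
print(f"{blood_ph(24.0, 40.0):.2f}")  # ~7.40, inside the 7.37-7.43 range above
```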
Crucially, a similar buffer operates in the oceans. It is a major factor in climate change and the long-term carbon cycle, due to the large number of marine organisms (especially coral) which are made of calcium carbonate. Increased solubility of carbonate through increased temperatures results in lower production of marine calcite and increased concentration of atmospheric carbon dioxide. This, in turn, increases Earth temperature. The amount of available carbonate is on a geological scale, and substantial quantities may eventually be redissolved into the sea and released to the atmosphere, increasing CO2 levels even more.
Carbonate salts
Carbonate overview:
Presence outside Earth
It is generally thought that the presence of carbonates in rock is strong evidence for the presence of liquid water. Recent observations of the planetary nebula NGC 6302 show evidence for carbonates in space, where aqueous alteration similar to that on Earth is unlikely. Other minerals have been proposed which would fit the observations.
Small amounts of carbonate deposits have been found on Mars via spectral imaging and Martian meteorites also contain small amounts. Groundwater may have existed at Gusev and Meridiani Planum.
| Physical sciences | Salts | null |
5914 | https://en.wikipedia.org/wiki/Catalysis | Catalysis | Catalysis is the increase in rate of a chemical reaction due to an added substance known as a catalyst. Catalysts are not consumed by the reaction and remain unchanged after it. If the reaction is rapid and the catalyst recycles quickly, very small amounts of catalyst often suffice; mixing, surface area, and temperature are important factors in reaction rate. Catalysts generally react with one or more reactants to form intermediates that subsequently give the final reaction product, in the process regenerating the catalyst.
The rate increase occurs because the catalyst allows the reaction to occur by an alternative mechanism which may be much faster than the non-catalyzed mechanism. However the non-catalyzed mechanism does remain possible, so that the total rate (catalyzed plus non-catalyzed) can only increase in the presence of the catalyst and never decrease.
Catalysis may be classified as either homogeneous, whose components are dispersed in the same phase (usually gaseous or liquid) as the reactant, or heterogeneous, whose components are not in the same phase. Enzymes and other biocatalysts are often considered as a third category.
Catalysis is ubiquitous in chemical industry of all kinds. Estimates are that 90% of all commercially produced chemical products involve catalysts at some stage in the process of their manufacture.
The term "catalyst" is derived from Greek , kataluein, meaning "loosen" or "untie". The concept of catalysis was invented by chemist Elizabeth Fulhame, based on her novel work in oxidation-reduction experiments.
General principles
Example
An illustrative example is the use of catalysts to speed the decomposition of hydrogen peroxide into water and oxygen:
2 H2O2 → 2 H2O + O2
This reaction proceeds because the reaction products are more stable than the starting compound, but this decomposition is so slow that hydrogen peroxide solutions are commercially available. In the presence of a catalyst such as manganese dioxide this reaction proceeds much more rapidly. This effect is readily seen by the effervescence of oxygen. The catalyst is not consumed in the reaction, and may be recovered unchanged and re-used indefinitely. Accordingly, manganese dioxide is said to catalyze this reaction. In living organisms, this reaction is catalyzed by enzymes (proteins that serve as catalysts) such as catalase.
Another example is the effect of catalysts on air pollution and reducing the amount of carbon monoxide. Development of active and selective catalysts for the conversion of carbon monoxide into desirable products is one of the most important roles of catalysts. Using catalysts for the hydrogenation of carbon monoxide helps to remove this toxic gas while also yielding useful materials.
Units
The SI derived unit for measuring the catalytic activity of a catalyst is the katal, which is quantified in moles per second. The productivity of a catalyst can be described by the turnover number (TON) and the catalytic activity by the turnover frequency (TOF), which is the TON per unit time. The biochemical equivalent is the enzyme unit. For more information on the efficiency of enzymatic catalysis, see the article on enzymes.
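A quick sketch of how these quantities relate, with made-up numbers for a hypothetical run:

```python
# TON, TOF and catalytic activity (katal = mol/s) for a hypothetical run.
mol_product = 0.50     # mol of product formed (made-up)
mol_catalyst = 1.0e-4  # mol of catalyst charged (made-up)
seconds = 2.0 * 3600   # reaction time: 2 hours

ton = mol_product / mol_catalyst   # turnovers per catalyst (dimensionless)
tof = ton / seconds                # turnover frequency, s^-1
activity = mol_product / seconds   # catalytic activity in katal (mol/s)

print(f"TON = {ton:.0f}, TOF = {tof:.3f} s^-1, activity = {activity:.2e} kat")
```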
Catalytic reaction mechanisms
In general, chemical reactions occur faster in the presence of a catalyst because the catalyst provides an alternative reaction mechanism (reaction pathway) having a lower activation energy than the non-catalyzed mechanism. In catalyzed mechanisms, the catalyst is regenerated.
As a simple example occurring in the gas phase, the reaction 2 SO2 + O2 → 2 SO3 can be catalyzed by adding nitric oxide. The reaction occurs in two steps:
2 NO + O2 → 2 NO2 (rate-determining)
NO2 + SO2 → NO + SO3 (fast)
The NO catalyst is regenerated. The overall rate is the rate of the slow step:
v = 2k1[NO]^2[O2]
An example of heterogeneous catalysis is the reaction of oxygen and hydrogen on the surface of titanium dioxide (TiO2, or titania) to produce water. Scanning tunneling microscopy showed that the molecules undergo adsorption and dissociation. The dissociated, surface-bound O and H atoms diffuse together. The intermediate reaction states are HO2, H2O2, then H3O2, and the reaction product (water molecule dimers), after which the water molecule desorbs from the catalyst surface.
Reaction energetics
Catalysts enable pathways that differ from the uncatalyzed reactions. These pathways have lower activation energy. Consequently, more molecular collisions have the energy needed to reach the transition state. Hence, catalysts can enable reactions that would otherwise be blocked or slowed by a kinetic barrier. The catalyst may increase the reaction rate or selectivity, or enable the reaction at lower temperatures. This effect can be illustrated with an energy profile diagram.
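The size of this effect follows from the Arrhenius factor exp(−Ea/RT): a lower barrier enters the rate exponentially. A sketch with made-up activation energies, assuming equal pre-exponential factors:

```python
# Rate enhancement from a lower activation energy, via the Arrhenius factor.
import math

R = 8.314  # gas constant, J/(mol K)

def rate_ratio(ea_uncat_j: float, ea_cat_j: float, temp_k: float) -> float:
    """k_cat / k_uncat assuming equal pre-exponential factors."""
    return math.exp((ea_uncat_j - ea_cat_j) / (R * temp_k))

# Made-up example: dropping Ea from 100 kJ/mol to 70 kJ/mol at 298 K
print(f"{rate_ratio(100e3, 70e3, 298.0):.1e}x faster")  # ~1.8e5x
```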
In the catalyzed elementary reaction, catalysts do not change the extent of a reaction: they have no effect on the chemical equilibrium of a reaction. The ratio of the forward and the reverse reaction rates is unaffected (see also thermodynamics). The second law of thermodynamics describes why a catalyst does not change the chemical equilibrium of a reaction. Suppose there was such a catalyst that shifted an equilibrium. Introducing the catalyst to the system would result in a reaction to move to the new equilibrium, producing energy. Production of energy is a necessary result since reactions are spontaneous only if Gibbs free energy is produced, and if there is no energy barrier, there is no need for a catalyst. Then, removing the catalyst would also result in a reaction, producing energy; i.e. the addition and its reverse process, removal, would both produce energy. Thus, a catalyst that could change the equilibrium would be a perpetual motion machine, a contradiction to the laws of thermodynamics. Thus, catalysts do not alter the equilibrium constant. (A catalyst can however change the equilibrium concentrations by reacting in a subsequent step. It is then consumed as the reaction proceeds, and thus it is also a reactant. Illustrative is the base-catalyzed hydrolysis of esters, where the produced carboxylic acid immediately reacts with the base catalyst and thus the reaction equilibrium is shifted towards hydrolysis.)
The catalyst stabilizes the transition state more than it stabilizes the starting material. It decreases the kinetic barrier by decreasing the difference in energy between starting material and the transition state. It does not change the energy difference between starting materials and products (thermodynamic barrier), or the available energy (this is provided by the environment as heat or light).
Related concepts
Some so-called catalysts are really precatalysts. Precatalysts convert to catalysts in the reaction. For example, Wilkinson's catalyst RhCl(PPh3)3 loses one triphenylphosphine ligand before entering the true catalytic cycle. Precatalysts are easier to store but are easily activated in situ. Because of this preactivation step, many catalytic reactions involve an induction period.
In cooperative catalysis, chemical species that improve catalytic activity are called cocatalysts or promoters.
In tandem catalysis two or more different catalysts are coupled in a one-pot reaction.
In autocatalysis, the catalyst is a product of the overall reaction, in contrast to all other types of catalysis considered in this article. The simplest example of autocatalysis is a reaction of type A + B → 2 B, in one or in several steps. The overall reaction is just A → B, so that B is a product. But since B is also a reactant, it may be present in the rate equation and affect the reaction rate. As the reaction proceeds, the concentration of B increases and can accelerate the reaction as a catalyst. In effect, the reaction accelerates itself or is autocatalyzed. An example is the hydrolysis of an ester such as aspirin to a carboxylic acid and an alcohol. In the absence of added acid catalysts, the carboxylic acid product catalyzes the hydrolysis.
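The characteristic sigmoidal acceleration of A + B → 2 B can be seen by integrating its rate law d[B]/dt = k[A][B] numerically. A forward-Euler sketch; the rate constant and initial concentrations are made-up illustrative values:

```python
# Autocatalysis A + B -> 2 B with rate law d[B]/dt = k[A][B] (forward Euler).
k = 1.0            # L/(mol s), made-up
a, b = 1.0, 0.001  # mol/L, made-up initial concentrations
dt = 0.01          # time step, s

for i in range(1, 2001):           # simulate 20 s
    rate = k * a * b
    a -= rate * dt
    b += rate * dt
    if i % 500 == 0:               # report every 5 s
        print(f"t = {i*dt:4.0f} s  [B] = {b:.3f} M")
# [B] barely grows at first, then surges once enough B (the catalyst) exists.
```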
Switchable catalysis refers to a type of catalysis where the catalyst can be toggled between different ground states possessing distinct reactivity, typically by applying an external stimulus. This ability to reversibly switch the catalyst allows for spatiotemporal control over catalytic activity and selectivity. The external stimuli used to switch the catalyst can include changes in temperature, pH, light, electric fields, or the addition of chemical agents.
A true catalyst can work in tandem with a sacrificial catalyst. The true catalyst is consumed in the elementary reaction and turned into a deactivated form.
The sacrificial catalyst regenerates the true catalyst for another cycle. The sacrificial catalyst is consumed in the reaction, and as such, it is not really a catalyst, but a reagent. For example, osmium tetroxide (OsO4) is a good reagent for dihydroxylation, but it is highly toxic and expensive. In Upjohn dihydroxylation, the sacrificial catalyst N-methylmorpholine N-oxide (NMMO) regenerates OsO4, and only catalytic quantities of OsO4 are needed.
Classification
Catalysis may be classified as either homogeneous or heterogeneous. A homogeneous catalysis is one whose components are dispersed in the same phase (usually gaseous or liquid) as the reactant's molecules. A heterogeneous catalysis is one where the reaction components are not in the same phase. Enzymes and other biocatalysts are often considered as a third category. Similar mechanistic principles apply to heterogeneous, homogeneous, and biocatalysis.
Heterogeneous catalysis
Heterogeneous catalysts act in a different phase than the reactants. Most heterogeneous catalysts are solids that act on substrates in a liquid or gaseous reaction mixture. Important heterogeneous catalysts include zeolites, alumina, higher-order oxides, graphitic carbon, transition metal oxides, metals such as Raney nickel for hydrogenation, and vanadium(V) oxide for oxidation of sulfur dioxide into sulfur trioxide by the contact process.
Diverse mechanisms for reactions on surfaces are known, depending on how the adsorption takes place (Langmuir-Hinshelwood, Eley-Rideal, and Mars-van Krevelen). The total surface area of a solid has an important effect on the reaction rate. The smaller the catalyst particle size, the larger the surface area for a given mass of particles.
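For idealized spherical particles this size effect is easy to quantify: the surface area per unit mass scales as 1/radius, since A/m = 3/(ρr). The density below is a made-up illustrative value:

```python
# Specific surface area of spherical particles: A/m = 3/(rho*r).
RHO = 5000.0  # kg/m^3, assumed particle density (illustrative)

def specific_area_m2_per_g(radius_m: float) -> float:
    return 3.0 / (RHO * radius_m) / 1000.0  # convert m^2/kg -> m^2/g

for r in (1e-3, 1e-6, 1e-9):  # 1 mm, 1 um, 1 nm
    print(f"r = {r:.0e} m -> {specific_area_m2_per_g(r):.3g} m^2/g")
# 1 mm: 6e-04; 1 um: 0.6; 1 nm: 600 m^2/g - why nanostructuring pays off.
```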
A heterogeneous catalyst has active sites, which are the atoms or crystal faces where the substrate actually binds. Active sites are atoms but are often described as a facet (edge, surface, step, etc.) of a solid. Most of the volume but also most of the surface of a heterogeneous catalyst may be catalytically inactive. Finding out the nature of the active site is technically challenging.
For example, the catalyst for the Haber process for the synthesis of ammonia from nitrogen and hydrogen is often described as iron. But detailed studies and many optimizations have led to catalysts that are mixtures of iron-potassium-calcium-aluminum-oxide. The reacting gases adsorb onto active sites on the iron particles. Once physically adsorbed, the reagents partially or wholly dissociate and form new bonds. In this way the particularly strong triple bond in nitrogen is broken, which would be extremely uncommon in the gas phase due to its high activation energy. Thus, the activation energy of the overall reaction is lowered, and the rate of reaction increases. Another place where a heterogeneous catalyst is applied is in the oxidation of sulfur dioxide on vanadium(V) oxide for the production of sulfuric acid. Many heterogeneous catalysts are in fact nanomaterials.
Heterogeneous catalysts are typically "supported", which means that the catalyst is dispersed on a second material that enhances the effectiveness or minimizes its cost. Supports prevent or minimize agglomeration and sintering of small catalyst particles, exposing more surface area, thus catalysts have a higher specific activity (per gram) on support. Sometimes the support is merely a surface on which the catalyst is spread to increase the surface area. More often, the support and the catalyst interact, affecting the catalytic reaction. Supports can also be used in nanoparticle synthesis by providing sites for individual molecules of catalyst to chemically bind. Supports are porous materials with a high surface area, most commonly alumina, zeolites, or various kinds of activated carbon. Specialized supports include silicon dioxide, titanium dioxide, calcium carbonate, and barium sulfate.
Electrocatalysts
In the context of electrochemistry, specifically in fuel cell engineering, various metal-containing catalysts are used to enhance the rates of the half reactions that comprise the fuel cell. One common type of fuel cell electrocatalyst is based upon nanoparticles of platinum that are supported on slightly larger carbon particles. When in contact with one of the electrodes in a fuel cell, this platinum increases the rate of oxygen reduction either to water or to hydroxide or hydrogen peroxide.
Homogeneous catalysis
Homogeneous catalysts function in the same phase as the reactants. Typically homogeneous catalysts are dissolved in a solvent with the substrates. One example of homogeneous catalysis involves the influence of H+ on the esterification of carboxylic acids, such as the formation of methyl acetate from acetic acid and methanol. High-volume processes requiring a homogeneous catalyst include hydroformylation, hydrosilylation, and hydrocyanation. For inorganic chemists, homogeneous catalysis is often synonymous with organometallic catalysts. Many homogeneous catalysts are, however, not organometallic, as illustrated by the use of cobalt salts that catalyze the oxidation of p-xylene to terephthalic acid.
Organocatalysis
Whereas transition metals sometimes attract most of the attention in the study of catalysis, small organic molecules without metals can also exhibit catalytic properties, as is apparent from the fact that many enzymes lack transition metals. Typically, organic catalysts require a higher loading (amount of catalyst per unit amount of reactant, expressed in mol% amount of substance) than transition metal(-ion)-based catalysts, but these catalysts are usually commercially available in bulk, helping to lower costs. In the early 2000s, these organocatalysts were considered "new generation" and are competitive with traditional metal(-ion)-containing catalysts.
Organocatalysts are supposed to operate akin to metal-free enzymes utilizing, e.g., non-covalent interactions such as hydrogen bonding. The discipline organocatalysis is divided into the application of covalent (e.g., proline, DMAP) and non-covalent (e.g., thiourea organocatalysis) organocatalysts referring to the preferred catalyst-substrate binding and interaction, respectively. The Nobel Prize in Chemistry 2021 was awarded jointly to Benjamin List and David W.C. MacMillan "for the development of asymmetric organocatalysis."
Photocatalysts
Photocatalysis is the phenomenon in which the catalyst absorbs light to generate an excited state that effects redox reactions. Singlet oxygen is usually produced by photocatalysis. Photocatalysts are components of dye-sensitized solar cells.
Enzymes and biocatalysts
In biology, enzymes are protein-based catalysts in metabolism and catabolism. Most biocatalysts are enzymes, but other non-protein-based classes of biomolecules also exhibit catalytic properties including ribozymes, and synthetic deoxyribozymes.
Biocatalysts can be thought of as an intermediate between homogeneous and heterogeneous catalysts, although strictly speaking soluble enzymes are homogeneous catalysts and membrane-bound enzymes are heterogeneous. Several factors affect the activity of enzymes (and other catalysts) including temperature, pH, the concentration of enzymes, substrate, and products. A particularly important reagent in enzymatic reactions is water, which is the product of many bond-forming reactions and a reactant in many bond-breaking processes.
In biocatalysis, enzymes are employed to prepare many commodity chemicals including high-fructose corn syrup and acrylamide.
Some monoclonal antibodies whose binding target is a stable molecule that resembles the transition state of a chemical reaction can function as weak catalysts for that chemical reaction by lowering its activation energy. Such catalytic antibodies are sometimes called "abzymes".
Significance
Estimates are that 90% of all commercially produced chemical products involve catalysts at some stage in the process of their manufacture. In 2005, catalytic processes generated about $900 billion in products worldwide. Catalysis is so pervasive that subareas are not readily classified. Some areas of particular concentration are surveyed below.
Energy processing
Petroleum refining makes intensive use of catalysis for alkylation, catalytic cracking (breaking long-chain hydrocarbons into smaller pieces), naphtha reforming and steam reforming (conversion of hydrocarbons into synthesis gas). Even the exhaust from the burning of fossil fuels is treated via catalysis: Catalytic converters, typically composed of platinum and rhodium, break down some of the more harmful byproducts of automobile exhaust.
2 CO + 2 NO → 2 CO2 + N2
With regard to synthetic fuels, an old but still important process is the Fischer–Tropsch synthesis of hydrocarbons from synthesis gas, which itself is processed via water-gas shift reactions, catalyzed by iron. The Sabatier reaction produces methane from carbon dioxide and hydrogen. Biodiesel and related biofuels require processing via both inorganic and biocatalysts.
Fuel cells rely on catalysts for both the anodic and cathodic reactions.
Catalytic heaters generate flameless heat from a supply of combustible fuel.
Bulk chemicals
Some of the largest-scale chemicals are produced via catalytic oxidation, often using oxygen. Examples include nitric acid (from ammonia), sulfuric acid (from sulfur dioxide to sulfur trioxide by the contact process), terephthalic acid from p-xylene, acrylic acid from propylene or propane and acrylonitrile from propane and ammonia.
The production of ammonia is one of the largest-scale and most energy-intensive processes. In the Haber process nitrogen is combined with hydrogen over an iron oxide catalyst. Methanol is prepared from carbon monoxide or carbon dioxide but using copper-zinc catalysts.
Bulk polymers derived from ethylene and propylene are often prepared using Ziegler–Natta catalyst. Polyesters, polyamides, and isocyanates are derived via acid–base catalysis.
Most carbonylation processes require metal catalysts, examples include the Monsanto acetic acid process and hydroformylation.
Fine chemicals
Many fine chemicals are prepared via catalysis; methods include those of heavy industry as well as more specialized processes that would be prohibitively expensive on a large scale. Examples include the Heck reaction and Friedel–Crafts reactions. Because most bioactive compounds are chiral, many pharmaceuticals are produced by enantioselective catalysis (catalytic asymmetric synthesis). (R)-1,2-Propanediol, the precursor to the antibacterial levofloxacin, can be synthesized efficiently from hydroxyacetone by using catalysts based on BINAP-ruthenium complexes, in Noyori asymmetric hydrogenation:
Food processing
One of the most obvious applications of catalysis is the hydrogenation (reaction with hydrogen gas) of fats using nickel catalyst to produce margarine. Many other foodstuffs are prepared via biocatalysis (see below).
Environment
Catalysis affects the environment by increasing the efficiency of industrial processes, but catalysis also plays a direct role in the environment. A notable example is the catalytic role of chlorine free radicals in the breakdown of ozone. These radicals are formed by the action of ultraviolet radiation on chlorofluorocarbons (CFCs).
Cl + O3 → ClO + O2
ClO + O → Cl + O2
History
The term "catalyst", broadly defined as anything that increases the rate of a process, is derived from Greek καταλύειν, meaning "to annul", or "to untie", or "to pick up". The concept of catalysis was invented by chemist Elizabeth Fulhame and described in a 1794 book, based on her novel work in oxidation–reduction reactions. The first chemical reaction in organic chemistry that knowingly used a catalyst was studied in 1811 by Gottlieb Kirchhoff, who discovered the acid-catalyzed conversion of starch to glucose. The term catalysis was later used by Jöns Jakob Berzelius in 1835 to describe reactions that are accelerated by substances that remain unchanged after the reaction. Fulhame, who predated Berzelius, did work with water as opposed to metals in her reduction experiments. Other 18th century chemists who worked in catalysis were Eilhard Mitscherlich who referred to it as contact processes, and Johann Wolfgang Döbereiner who spoke of contact action. He developed Döbereiner's lamp, a lighter based on hydrogen and a platinum sponge, which became a commercial success in the 1820s that lives on today. Humphry Davy discovered the use of platinum in catalysis. In the 1880s, Wilhelm Ostwald at Leipzig University started a systematic investigation into reactions that were catalyzed by the presence of acids and bases, and found that chemical reactions occur at finite rates and that these rates can be used to determine the strengths of acids and bases. For this work, Ostwald was awarded the 1909 Nobel Prize in Chemistry. Vladimir Ipatieff performed some of the earliest industrial scale reactions, including the discovery and commercialization of oligomerization and the development of catalysts for hydrogenation.
Inhibitors, poisons, and promoters
An added substance that lowers the rate is called a reaction inhibitor if reversible, or a catalyst poison if irreversible. Promoters are substances that increase the catalytic activity, even though they are not catalysts by themselves.
Inhibitors are sometimes referred to as "negative catalysts" since they decrease the reaction rate. However the term inhibitor is preferred since they do not work by introducing a reaction path with higher activation energy; this would not lower the rate since the reaction would continue to occur by the non-catalyzed path. Instead, they act either by deactivating catalysts or by removing reaction intermediates such as free radicals. In heterogeneous catalysis, coking inhibits the catalyst, which becomes covered by polymeric side products.
The inhibitor may modify selectivity in addition to rate. For instance, in the hydrogenation of alkynes to alkenes, a palladium (Pd) catalyst partly "poisoned" with lead(II) acetate (Pb(CH3CO2)2) can be used (Lindlar catalyst). Without the deactivation of the catalyst, the alkene produced would be further hydrogenated to alkane.
The inhibitor can produce this effect by, e.g., selectively poisoning only certain types of active sites. Another mechanism is the modification of surface geometry. For instance, in hydrogenation operations, large planes of metal surface function as sites of hydrogenolysis catalysis while sites catalyzing hydrogenation of unsaturates are smaller. Thus, a poison that covers the surface randomly will tend to lower the number of uncontaminated large planes but leave proportionally smaller sites free, thus changing the hydrogenation vs. hydrogenolysis selectivity. Many other mechanisms are also possible.
Promoters can cover up the surface to prevent the production of a mat of coke, or even actively remove such material (e.g., rhenium on platinum in platforming). They can aid the dispersion of the catalytic material or bind to reagents.
| Physical sciences | Chemistry | null |
5916 | https://en.wikipedia.org/wiki/Circumference | Circumference | In geometry, the circumference (from Latin circumferens, meaning "carrying around") is the perimeter of a circle or ellipse. The circumference is the arc length of the circle, as if it were opened up and straightened out to a line segment. More generally, the perimeter is the curve length around any closed figure.
Circumference may also refer to the circle itself, that is, the locus corresponding to the edge of a disk.
The circumference of a sphere is the circumference, or length, of any one of its great circles.
Circle
The circumference of a circle is the distance around it, but if, as in many elementary treatments, distance is defined in terms of straight lines, this cannot be used as a definition. Under these circumstances, the circumference of a circle may be defined as the limit of the perimeters of inscribed regular polygons as the number of sides increases without bound. The term circumference is used when measuring physical objects, as well as when considering abstract geometric forms.
Relationship with
The circumference of a circle is related to one of the most important mathematical constants. This constant, pi, is represented by the Greek letter π. Its first few decimal digits are 3.141592653589793... Pi is defined as the ratio of a circle's circumference C to its diameter d:
π = C/d
Or, equivalently, as the ratio of the circumference to twice the radius. The above formula can be rearranged to solve for the circumference:
C = πd = 2πr
The ratio of the circle's circumference to its radius is equivalent to 2π. This is also the number of radians in one turn. The use of the mathematical constant π is ubiquitous in mathematics, engineering, and science.
In Measurement of a Circle, written circa 250 BCE, Archimedes showed that this ratio (written as C/d, since he did not use the name π) was greater than 3 10/71 but less than 3 1/7 by calculating the perimeters of an inscribed and a circumscribed regular polygon of 96 sides. This method for approximating π was used for centuries, obtaining more accuracy by using polygons with larger and larger numbers of sides. The last such calculation was performed in 1630 by Christoph Grienberger, who used polygons with 10^40 sides.
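Archimedes' scheme can be reproduced with the classical side-doubling recurrences, starting from regular hexagons around a circle of diameter 1; π itself never enters the computation. A sketch:

```python
# Archimedes' bracketing of pi: inscribed and circumscribed regular polygons
# around a circle of diameter 1, doubling the side count from a hexagon.
import math

inscribed = 3.0                        # perimeter of the inscribed hexagon
circumscribed = 2.0 * math.sqrt(3.0)   # perimeter of the circumscribed hexagon
n = 6
while n < 96:
    # classical doubling recurrences (harmonic mean, then geometric mean)
    circumscribed = 2.0 * circumscribed * inscribed / (circumscribed + inscribed)
    inscribed = math.sqrt(inscribed * circumscribed)
    n *= 2
    print(f"n = {n:2d}: {inscribed:.6f} < pi < {circumscribed:.6f}")
# n = 96 gives 3.141032 < pi < 3.142715, consistent with 3 10/71 < pi < 3 1/7.
```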
Ellipse
Some authors use circumference to denote the perimeter of an ellipse. There is no general formula for the circumference of an ellipse in terms of the semi-major and semi-minor axes that uses only elementary functions. However, there are approximate formulas in terms of these parameters. One such approximation, due to Euler (1773), for the canonical ellipse
x^2/a^2 + y^2/b^2 = 1
is
C ≈ π sqrt(2(a^2 + b^2))
Some lower and upper bounds on the circumference of the canonical ellipse with a ≥ b are
2πb ≤ C ≤ 2πa
π(a + b) ≤ C ≤ 4(a + b)
4 sqrt(a^2 + b^2) ≤ C ≤ π sqrt(2(a^2 + b^2))
Here the upper bound 2πa is the circumference of a circumscribed concentric circle passing through the endpoints of the ellipse's major axis, and the lower bound 4 sqrt(a^2 + b^2) is the perimeter of an inscribed rhombus with vertices at the endpoints of the major and minor axes.
The circumference of an ellipse can be expressed exactly in terms of the complete elliptic integral of the second kind. More precisely,
C = 4a E(e)
where a is the length of the semi-major axis and e = sqrt(1 − b^2/a^2) is the eccentricity.
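A numerical check of the exact formula against the Euler approximation above; note that SciPy's ellipe takes the parameter m = e^2, not the eccentricity itself:

```python
# Ellipse circumference: exact C = 4*a*E(e) vs. Euler's approximation.
import math
from scipy.special import ellipe  # complete elliptic integral E(m), m = e^2

def circumference(a: float, b: float) -> float:
    e2 = 1.0 - (b / a) ** 2   # squared eccentricity (requires a >= b)
    return 4.0 * a * ellipe(e2)

def euler_approx(a: float, b: float) -> float:
    return math.pi * math.sqrt(2.0 * (a * a + b * b))

a, b = 5.0, 3.0
print(f"exact: {circumference(a, b):.4f}")  # ~25.53
print(f"Euler: {euler_approx(a, b):.4f}")   # ~25.91
```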
| Mathematics | Two-dimensional space | null |
5918 | https://en.wikipedia.org/wiki/Continuum%20mechanics | Continuum mechanics | Continuum mechanics is a branch of mechanics that deals with the deformation of and transmission of forces through materials modeled as a continuous medium (also called a continuum) rather than as discrete particles.
Continuum mechanics deals with deformable bodies, as opposed to rigid bodies.
A continuum model assumes that the substance of the object completely fills the space it occupies. While ignoring the fact that matter is made of atoms, this provides a sufficiently accurate description of matter on length scales much greater than that of inter-atomic distances. The concept of a continuous medium allows for intuitive analysis of bulk matter by using differential equations that describe the behavior of such matter according to physical laws, such as mass conservation, momentum conservation, and energy conservation. Information about the specific material is expressed in constitutive relationships.
Continuum mechanics treats the physical properties of solids and fluids independently of any particular coordinate system in which they are observed. These properties are represented by tensors, which are mathematical objects with the salient property of being independent of coordinate systems. This permits definition of physical properties at any point in the continuum, according to mathematically convenient continuous functions. The theories of elasticity, plasticity and fluid mechanics are based on the concepts of continuum mechanics.
Concept of a continuum
The concept of a continuum underlies the mathematical framework for studying large-scale forces and deformations in materials. Although materials are composed of discrete atoms and molecules, separated by empty space or microscopic cracks and crystallographic defects, physical phenomena can often be modeled by considering a substance distributed throughout some region of space. A continuum is a body that can be continually sub-divided into infinitesimal elements with local material properties defined at any particular point. Properties of the bulk material can therefore be described by continuous functions, and their evolution can be studied using the mathematics of calculus.
Apart from the assumption of continuity, two other independent assumptions are often employed in the study of continuum mechanics. These are homogeneity (assumption of identical properties at all locations) and isotropy (assumption of directionally invariant vector properties). If these auxiliary assumptions are not globally applicable, the material may be segregated into sections where they are applicable in order to simplify the analysis. For more complex cases, one or both of these assumptions can be dropped. In these cases, computational methods are often used to solve the differential equations describing the evolution of material properties.
Major areas
An additional area of continuum mechanics comprises elastomeric foams, which exhibit a curious hyperbolic stress-strain relationship. The elastomer is a true continuum, but a homogeneous distribution of voids gives it unusual properties.
Formulation of models
Continuum mechanics models begin by assigning a region in three-dimensional Euclidean space to the material body B being modeled. The points within this region are called particles or material points. Different configurations or states of the body correspond to different regions in Euclidean space. The region corresponding to the body's configuration at time t is labeled κ_t(B).
A particular particle within the body in a particular configuration is characterized by a position vector
x = x_i e_i
where e_i are the coordinate vectors in some frame of reference chosen for the problem (see Figure 1). This vector can be expressed as a function of the particle position X in some reference configuration, for example the configuration at the initial time, so that
x = χ(X, t)
This function needs to have various properties so that the model makes physical sense. χ(·, t) needs to be:
continuous in time, so that the body changes in a way which is realistic,
globally invertible at all times, so that the body cannot intersect itself,
orientation-preserving, as transformations which produce mirror reflections are not possible in nature.
For the mathematical formulation of the model, χ(·, t) is also assumed to be twice continuously differentiable, so that differential equations describing the motion may be formulated.
Forces in a continuum
A solid is a deformable body that possesses shear strength, sc. a solid can support shear forces (forces parallel to the material surface on which they act). Fluids, on the other hand, do not sustain shear forces.
Following the classical dynamics of Newton and Euler, the motion of a material body is produced by the action of externally applied forces, which are assumed to be of two kinds: surface forces F_C and body forces F_B. Thus, the total force F applied to a body or to a portion of the body can be expressed as:
F = F_B + F_C
Surface forces
Surface forces or contact forces, expressed as force per unit area, can act either on the bounding surface of the body, as a result of mechanical contact with other bodies, or on imaginary internal surfaces that bound portions of the body, as a result of the mechanical interaction between the parts of the body to either side of the surface (Euler-Cauchy's stress principle). When a body is acted upon by external contact forces, internal contact forces are then transmitted from point to point inside the body to balance their action, according to Newton's third law of motion of conservation of linear momentum and angular momentum (for continuous bodies these laws are called the Euler's equations of motion). The internal contact forces are related to the body's deformation through constitutive equations. The internal contact forces may be mathematically described by how they relate to the motion of the body, independent of the body's material makeup.
The distribution of internal contact forces throughout the volume of the body is assumed to be continuous. Therefore, there exists a contact force density or Cauchy traction field T(n, x, t) that represents this distribution in a particular configuration of the body at a given time t. It is not a vector field because it depends not only on the position x of a particular material point, but also on the local orientation of the surface element as defined by its normal vector n.
Any differential area dS with normal vector n of a given internal surface area S, bounding a portion of the body, experiences a contact force dF_C arising from the contact between both portions of the body on each side of S, and it is given by
dF_C = T(n) dS
where T(n) is the surface traction, also called stress vector, traction, or traction vector. The stress vector is a frame-indifferent vector (see Euler-Cauchy's stress principle).
The total contact force on the particular internal surface S is then expressed as the sum (surface integral) of the contact forces on all differential surfaces dS:
F_C = ∫_S T(n) dS
In continuum mechanics a body is considered stress-free if the only forces present are those inter-atomic forces (ionic, metallic, and van der Waals forces) required to hold the body together and to keep its shape in the absence of all external influences, including gravitational attraction. Stresses generated during manufacture of the body to a specific configuration are also excluded when considering stresses in a body. Therefore, the stresses considered in continuum mechanics are only those produced by deformation of the body, sc. only relative changes in stress are considered, not the absolute values of stress.
Body forces
Body forces are forces originating from sources outside of the body that act on the volume (or mass) of the body. Saying that body forces are due to outside sources implies that the interaction between different parts of the body (internal forces) are manifested through the contact forces alone. These forces arise from the presence of the body in force fields, e.g. gravitational field (gravitational forces) or electromagnetic field (electromagnetic forces), or from inertial forces when bodies are in motion. As the mass of a continuous body is assumed to be continuously distributed, any force originating from the mass is also continuously distributed. Thus, body forces are specified by vector fields which are assumed to be continuous over the entire volume of the body, i.e. acting on every point in it. Body forces are represented by a body force density (per unit of mass), which is a frame-indifferent vector field.
In the case of gravitational forces, the intensity of the force depends on, or is proportional to, the mass density ρ of the material, and it is specified in terms of force per unit mass (b_i) or per unit volume (p_i). These two specifications are related through the material density by the equation ρ b_i = p_i. Similarly, the intensity of electromagnetic forces depends upon the strength (electric charge) of the electromagnetic field.
The total body force applied to a continuous body is expressed as
F_B = ∫_V ρ b dV
Body forces and contact forces acting on the body lead to corresponding moments of force (torques) relative to a given point. Thus, the total applied torque about the origin is given by
In certain situations, not commonly considered in the analysis of the mechanical behavior of materials, it becomes necessary to include two other types of forces: these are couple stresses (surface couples, contact torques) and body moments. Couple stresses are moments per unit area applied on a surface. Body moments, or body couples, are moments per unit volume or per unit mass applied to the volume of the body. Both are important in the analysis of stress for a polarized dielectric solid under the action of an electric field, materials where the molecular structure is taken into consideration (e.g. bones), solids under the action of an external magnetic field, and the dislocation theory of metals.
Materials that exhibit body couples and couple stresses in addition to moments produced exclusively by forces are called polar materials. Non-polar materials are then those materials with only moments of forces. In the classical branches of continuum mechanics the development of the theory of stresses is based on non-polar materials.
Thus, the sum of all applied forces and torques (with respect to the origin of the coordinate system) in the body can be given by
Kinematics: motion and deformation
A change in the configuration of a continuum body results in a displacement. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration to a current or deformed configuration (Figure 2).
The motion of a continuum body is a continuous time sequence of displacements. Thus, the material body will occupy different configurations at different times so that a particle occupies a series of points in space which describe a path line.
There is continuity during motion or deformation of a continuum body in the sense that:
The material points forming a closed curve at any instant will always form a closed curve at any subsequent time.
The material points forming a closed surface at any instant will always form a closed surface at any subsequent time and the matter within the closed surface will always remain within.
It is convenient to identify a reference configuration or initial condition which all subsequent configurations are referenced from. The reference configuration need not be one that the body will ever occupy. Often, the configuration at t = 0 is considered the reference configuration, κ_0(B). The components X_i of the position vector X of a particle, taken with respect to the reference configuration, are called the material or reference coordinates.
When analyzing the motion or deformation of solids, or the flow of fluids, it is necessary to describe the sequence or evolution of configurations throughout time. One description for motion is made in terms of the material or referential coordinates, called material description or Lagrangian description.
Lagrangian description
In the Lagrangian description the position and physical properties of the particles are described in terms of the material or referential coordinates and time. In this case the reference configuration is the configuration at t = 0. An observer standing in the frame of reference observes the changes in the position and physical properties as the material body moves in space as time progresses. The results obtained are independent of the choice of initial time and reference configuration, κ_0(B). This description is normally used in solid mechanics.
In the Lagrangian description, the motion of a continuum body is expressed by the mapping function (Figure 2)
x = χ(X, t)
which is a mapping of the initial configuration κ_0(B) onto the current configuration κ_t(B), giving a geometrical correspondence between them, i.e. giving the position vector x that a particle X, with a position vector X in the undeformed or reference configuration κ_0(B), will occupy in the current or deformed configuration κ_t(B) at time t. The components x_i are called the spatial coordinates.
Physical and kinematic properties P(X, t), i.e. thermodynamic properties and flow velocity, which describe or characterize features of the material body, are expressed as continuous functions of position and time, i.e. P = P(X, t).
The material derivative of any property of a continuum, which may be a scalar, vector, or tensor, is the time rate of change of that property for a specific group of particles of the moving continuum body. The material derivative is also known as the substantial derivative, or comoving derivative, or convective derivative. It can be thought as the rate at which the property changes when measured by an observer traveling with that group of particles.
In the Lagrangian description, the material derivative of P(X, t) is simply the partial derivative with respect to time, and the position vector X is held constant as it does not change with time. Thus, we have
dP(X, t)/dt = ∂P(X, t)/∂t
The instantaneous position x is a property of a particle, and its material derivative is the instantaneous flow velocity v of the particle. Therefore, the flow velocity field of the continuum is given by
v = dx/dt = ∂χ(X, t)/∂t
Similarly, the acceleration field is given by
a = dv/dt = ∂^2 χ(X, t)/∂t^2
Continuity in the Lagrangian description is expressed by the spatial and temporal continuity of the mapping from the reference configuration to the current configuration of the material points. All physical quantities characterizing the continuum are described this way. In this sense, the functions χ(·) and P(·) are single-valued and continuous, with continuous derivatives with respect to space and time to whatever order is required, usually to the second or third.
Eulerian description
Continuity allows for the inverse of χ(·) to trace backwards where the particle currently located at x was located in the initial or referenced configuration κ_0(B). In this case the description of motion is made in terms of the spatial coordinates, in which case it is called the spatial description or Eulerian description, i.e. the current configuration is taken as the reference configuration.
The Eulerian description, introduced by d'Alembert, focuses on the current configuration $\kappa_t(\mathcal{B})$, giving attention to what is occurring at a fixed point in space as time progresses, instead of giving attention to individual particles as they move through space and time. This approach is conveniently applied in the study of fluid flow where the kinematic property of greatest interest is the rate at which change is taking place rather than the shape of the body of fluid at a reference time.
Mathematically, the motion of a continuum using the Eulerian description is expressed by the mapping function
$$\mathbf{X} = \chi^{-1}(\mathbf{x}, t),$$
which provides a tracing of the particle which now occupies the position $\mathbf{x}$ in the current configuration $\kappa_t(\mathcal{B})$ to its original position $\mathbf{X}$ in the initial configuration $\kappa_0(\mathcal{B})$.
A necessary and sufficient condition for this inverse function to exist is that the determinant of the Jacobian matrix, often referred to simply as the Jacobian, should be different from zero. Thus,
$$J = \left|\frac{\partial \chi_i}{\partial X_J}\right| = \left|\frac{\partial x_i}{\partial X_J}\right| \neq 0.$$
In the Eulerian description, the physical properties $P_{ij\ldots}$ are expressed as
$$P_{ij\ldots} = P_{ij\ldots}(\mathbf{X}, t) = P_{ij\ldots}\bigl(\chi^{-1}(\mathbf{x}, t), t\bigr) = p_{ij\ldots}(\mathbf{x}, t),$$
where the functional form of $P_{ij\ldots}$ in the Lagrangian description is not the same as the form of $p_{ij\ldots}$ in the Eulerian description.
The material derivative of $p_{ij\ldots}(\mathbf{x}, t)$, using the chain rule, is then
$$\frac{D}{Dt}\bigl[p_{ij\ldots}(\mathbf{x}, t)\bigr] = \frac{\partial}{\partial t}\bigl[p_{ij\ldots}(\mathbf{x}, t)\bigr] + \frac{\partial}{\partial x_k}\bigl[p_{ij\ldots}(\mathbf{x}, t)\bigr]\, v_k.$$
The first term on the right-hand side of this equation gives the local rate of change of the property $p_{ij\ldots}(\mathbf{x}, t)$ occurring at position $\mathbf{x}$. The second term of the right-hand side is the convective rate of change and expresses the contribution of the particle changing position in space (motion).
Continuity in the Eulerian description is expressed by the spatial and temporal continuity and continuous differentiability of the flow velocity field. All physical quantities are defined this way at each instant of time, in the current configuration, as a function of the vector position $\mathbf{x}$.
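As a rough numerical illustration (not part of the article, and assuming a one-dimensional scalar property sampled on a uniform grid), the split of the material derivative into a local and a convective term can be checked for a field that is simply carried along by a uniform flow, for which the material derivative should vanish:

```python
import numpy as np

# Hypothetical 1-D example: a Gaussian "blob" of some property phi(x, t)
# advected by a uniform flow with speed v. For phi(x, t) = f(x - v*t) the
# material derivative D(phi)/Dt should vanish, while the local and
# convective parts are individually nonzero.

v = 2.0                         # uniform flow velocity (assumed)
x = np.linspace(0.0, 10.0, 401)
dx = x[1] - x[0]
dt = 1e-4

def phi(x, t):
    """Property field: a Gaussian profile translating with the flow."""
    return np.exp(-((x - v * t - 3.0) ** 2))

t0 = 1.0
local_rate = (phi(x, t0 + dt) - phi(x, t0 - dt)) / (2 * dt)   # d(phi)/dt at fixed x
convective = v * np.gradient(phi(x, t0), dx)                  # v * d(phi)/dx
material_derivative = local_rate + convective

# For a purely advected field the material derivative is ~0 everywhere.
print(np.max(np.abs(material_derivative)))   # small, limited by truncation error
```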
Displacement field
The vector joining the positions of a particle in the undeformed configuration and deformed configuration is called the displacement vector $\mathbf{u}(\mathbf{X}, t) = u_i \mathbf{e}_i$ in the Lagrangian description, or $\mathbf{U}(\mathbf{x}, t) = U_J \mathbf{E}_J$ in the Eulerian description.
A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. It is convenient to do the analysis of deformation or motion of a continuum body in terms of the displacement field. In general, the displacement field is expressed in terms of the material coordinates as
$$\mathbf{u}(\mathbf{X}, t) = \mathbf{b} + \mathbf{x}(\mathbf{X}, t) - \mathbf{X}, \qquad \text{or} \qquad u_i = \alpha_{iJ} b_J + x_i - \alpha_{iJ} X_J,$$
or in terms of the spatial coordinates as
$$\mathbf{U}(\mathbf{x}, t) = \mathbf{b} + \mathbf{x} - \mathbf{X}(\mathbf{x}, t), \qquad \text{or} \qquad U_J = b_J + \alpha_{Ji} x_i - X_J,$$
where $\alpha_{Ji}$ are the direction cosines between the material and spatial coordinate systems with unit vectors $\mathbf{E}_J$ and $\mathbf{e}_i$, respectively, and $\mathbf{b}$ is the vector joining the origins of the two coordinate systems. Thus
$$\mathbf{E}_J \cdot \mathbf{e}_i = \alpha_{Ji} = \alpha_{iJ},$$
and the relationship between $u_i$ and $U_J$ is then given by
$$u_i = \alpha_{iJ} U_J \qquad \text{or} \qquad U_J = \alpha_{Ji} u_i.$$
Knowing that
$$\mathbf{e}_i = \alpha_{iJ} \mathbf{E}_J,$$
then
$$\mathbf{u}(\mathbf{X}, t) = u_i \mathbf{e}_i = u_i (\alpha_{iJ} \mathbf{E}_J) = U_J \mathbf{E}_J = \mathbf{U}(\mathbf{x}, t).$$
It is common to superimpose the coordinate systems for the undeformed and deformed configurations, which results in $\mathbf{b} = \mathbf{0}$, and the direction cosines become Kronecker deltas, i.e.
$$\mathbf{E}_J \cdot \mathbf{e}_i = \delta_{Ji} = \delta_{iJ}.$$
Thus, we have
$$\mathbf{u}(\mathbf{X}, t) = \mathbf{x}(\mathbf{X}, t) - \mathbf{X} \qquad \text{or} \qquad u_i = x_i - \delta_{iJ} X_J,$$
or in terms of the spatial coordinates as
$$\mathbf{U}(\mathbf{x}, t) = \mathbf{x} - \mathbf{X}(\mathbf{x}, t) \qquad \text{or} \qquad U_J = \delta_{Ji} x_i - X_J.$$
Governing equations
Continuum mechanics deals with the behavior of materials that can be approximated as continuous for certain length and time scales. The equations that govern the mechanics of such materials include the balance laws for mass, momentum, and energy. Kinematic relations and constitutive equations are needed to complete the system of governing equations. Physical restrictions on the form of the constitutive relations can be applied by requiring that the second law of thermodynamics be satisfied under all conditions. In the continuum mechanics of solids, the second law of thermodynamics is satisfied if the Clausius–Duhem form of the entropy inequality is satisfied.
The balance laws express the idea that the rate of change of a quantity (mass, momentum, energy) in a volume must arise from three causes:
the physical quantity itself flows through the surface that bounds the volume,
there is a source of the physical quantity on the surface of the volume, and/or,
there is a source of the physical quantity inside the volume.
Let $\Omega$ be the body (an open subset of Euclidean space) and let $\partial\Omega$ be its surface (the boundary of $\Omega$).
Let the motion of material points in the body be described by the map
$$\mathbf{x} = \boldsymbol{\chi}(\mathbf{X}) = \mathbf{x}(\mathbf{X}),$$
where $\mathbf{X}$ is the position of a point in the initial configuration and $\mathbf{x}$ is the location of the same point in the deformed configuration.
The deformation gradient is given by
$$\boldsymbol{F} = \frac{\partial \mathbf{x}}{\partial \mathbf{X}} = \boldsymbol{\nabla}\mathbf{x}.$$
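As a small sketch (illustrative only; the shear-and-stretch map below is an assumed example, not from the article), the deformation gradient and its Jacobian can be computed for a homogeneous deformation and used to recover the reference position, which is possible precisely because the Jacobian is nonzero:

```python
import numpy as np

# Hypothetical deformation map (an assumption for illustration):
#   x1 = X1 + gamma * X2   (simple shear)
#   x2 = lam * X2          (uniform stretch in the 2-direction)
#   x3 = X3
gamma, lam = 0.5, 1.2

def motion(X):
    X1, X2, X3 = X
    return np.array([X1 + gamma * X2, lam * X2, X3])

# Deformation gradient F_ij = d x_i / d X_j; constant here, known in closed form.
F = np.array([[1.0, gamma, 0.0],
              [0.0, lam,   0.0],
              [0.0, 0.0,   1.0]])

J = np.linalg.det(F)              # Jacobian; J != 0, so the motion is invertible
print("J =", J)                   # equals lam = 1.2 for this map

# The inverse map X = chi^{-1}(x) exists because J != 0.
X = np.array([1.0, 2.0, 3.0])
x = motion(X)
X_back = np.linalg.solve(F, x)    # valid since the map is linear (homogeneous)
print(np.allclose(X_back, X))     # True
```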
Balance laws
Let $f(\mathbf{x}, t)$ be a physical quantity that is flowing through the body. Let $g(\mathbf{x}, t)$ be sources on the surface of the body and let $h(\mathbf{x}, t)$ be sources inside the body. Let $\mathbf{n}(\mathbf{x}, t)$ be the outward unit normal to the surface $\partial\Omega$. Let $\mathbf{v}(\mathbf{x}, t)$ be the flow velocity of the physical particles that carry the physical quantity that is flowing. Also, let the speed at which the bounding surface $\partial\Omega$ is moving be $u_n$ (in the direction $\mathbf{n}$).
Then, balance laws can be expressed in the general form
$$\frac{d}{dt}\left[\int_{\Omega} f(\mathbf{x}, t)\,\mathrm{dV}\right] = \int_{\partial\Omega} f(\mathbf{x}, t)\bigl[u_n(\mathbf{x}, t) - \mathbf{v}(\mathbf{x}, t)\cdot\mathbf{n}(\mathbf{x}, t)\bigr]\,\mathrm{dA} + \int_{\partial\Omega} g(\mathbf{x}, t)\,\mathrm{dA} + \int_{\Omega} h(\mathbf{x}, t)\,\mathrm{dV}.$$
The functions $f$, $g$, and $h$ can be scalar valued, vector valued, or tensor valued, depending on the physical quantity that the balance equation deals with. If there are internal boundaries in the body, jump discontinuities also need to be specified in the balance laws.
If we take the Eulerian point of view, it can be shown that the balance laws of mass, momentum, and energy for a solid can be written as (assuming the source term is zero for the mass and angular momentum equations)
$$\dot{\rho} + \rho\,\boldsymbol{\nabla}\cdot\mathbf{v} = 0 \qquad \text{(balance of mass)}$$
$$\rho\,\dot{\mathbf{v}} - \boldsymbol{\nabla}\cdot\boldsymbol{\sigma} - \rho\,\mathbf{b} = 0 \qquad \text{(balance of linear momentum)}$$
$$\boldsymbol{\sigma} = \boldsymbol{\sigma}^{T} \qquad \text{(balance of angular momentum)}$$
$$\rho\,\dot{e} - \boldsymbol{\sigma}:(\boldsymbol{\nabla}\mathbf{v}) + \boldsymbol{\nabla}\cdot\mathbf{q} - \rho\,s = 0 \qquad \text{(balance of energy)}$$
In the above equations $\rho(\mathbf{x}, t)$ is the mass density (current), $\dot{\rho}$ is the material time derivative of $\rho$, $\mathbf{v}(\mathbf{x}, t)$ is the particle velocity, $\dot{\mathbf{v}}$ is the material time derivative of $\mathbf{v}$, $\boldsymbol{\sigma}(\mathbf{x}, t)$ is the Cauchy stress tensor, $\mathbf{b}(\mathbf{x}, t)$ is the body force density, $e(\mathbf{x}, t)$ is the internal energy per unit mass, $\dot{e}$ is the material time derivative of $e$, $\mathbf{q}(\mathbf{x}, t)$ is the heat flux vector, and $s(\mathbf{x}, t)$ is an energy source per unit mass. The operators used are defined below.
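A minimal sketch of the mass-balance (continuity) equation in one dimension follows; the upwind discretization and the initial density profile are assumptions made for illustration, not part of the article. It checks that the total mass in a periodic domain stays constant as the density field is advected:

```python
import numpy as np

# 1-D continuity equation d(rho)/dt + d(rho*v)/dx = 0 with uniform velocity v,
# integrated with a first-order upwind scheme (assumed discretization, for
# illustration only). Total mass on a periodic domain should stay constant.

nx, L, v = 200, 1.0, 1.0
dx = L / nx
dt = 0.4 * dx / v                        # CFL-limited time step
x = np.arange(nx) * dx
rho = 1.0 + 0.5 * np.exp(-200 * (x - 0.5) ** 2)   # initial density bump

mass0 = rho.sum() * dx
for _ in range(500):
    flux = rho * v                        # mass flux rho*v
    # upwind difference (v > 0): use the flux from the left neighbour
    rho = rho - dt / dx * (flux - np.roll(flux, 1))

print("mass drift:", abs(rho.sum() * dx - mass0))   # ~0 (round-off only)
```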
With respect to the reference configuration (the Lagrangian point of view), the balance laws can be written as
$$\rho\,\det(\boldsymbol{F}) - \rho_0 = 0 \qquad \text{(balance of mass)}$$
$$\rho_0\,\ddot{\mathbf{x}} - \boldsymbol{\nabla}_{\circ}\cdot\boldsymbol{P}^{T} - \rho_0\,\mathbf{b} = 0 \qquad \text{(balance of linear momentum)}$$
$$\boldsymbol{F}\cdot\boldsymbol{P}^{T} = \boldsymbol{P}\cdot\boldsymbol{F}^{T} \qquad \text{(balance of angular momentum)}$$
$$\rho_0\,\dot{e} - \boldsymbol{P}^{T}:\dot{\boldsymbol{F}} + \boldsymbol{\nabla}_{\circ}\cdot\mathbf{q} - \rho_0\,s = 0 \qquad \text{(balance of energy)}$$
In the above, $\boldsymbol{P}$ is the first Piola-Kirchhoff stress tensor, and $\rho_0$ is the mass density in the reference configuration. The first Piola-Kirchhoff stress tensor is related to the Cauchy stress tensor by
$$\boldsymbol{P} = J\,\boldsymbol{\sigma}\cdot\boldsymbol{F}^{-T}, \qquad \text{where } J = \det(\boldsymbol{F}).$$
We can alternatively define the nominal stress tensor $\boldsymbol{N}$, which is the transpose of the first Piola-Kirchhoff stress tensor, such that
$$\boldsymbol{N} = \boldsymbol{P}^{T} = J\,\boldsymbol{F}^{-1}\cdot\boldsymbol{\sigma}.$$
Then the balance laws become
$$\rho\,\det(\boldsymbol{F}) - \rho_0 = 0 \qquad \text{(balance of mass)}$$
$$\rho_0\,\ddot{\mathbf{x}} - \boldsymbol{\nabla}_{\circ}\cdot\boldsymbol{N} - \rho_0\,\mathbf{b} = 0 \qquad \text{(balance of linear momentum)}$$
$$\boldsymbol{F}\cdot\boldsymbol{N} = \boldsymbol{N}^{T}\cdot\boldsymbol{F}^{T} \qquad \text{(balance of angular momentum)}$$
$$\rho_0\,\dot{e} - \boldsymbol{N}:\dot{\boldsymbol{F}} + \boldsymbol{\nabla}_{\circ}\cdot\mathbf{q} - \rho_0\,s = 0 \qquad \text{(balance of energy)}$$
Operators
The operators in the above equations are defined as
$$\boldsymbol{\nabla}\mathbf{v} = \sum_{i,j=1}^{3} \frac{\partial v_i}{\partial x_j}\,\mathbf{e}_i\otimes\mathbf{e}_j, \qquad \boldsymbol{\nabla}\cdot\mathbf{v} = \sum_{i=1}^{3} \frac{\partial v_i}{\partial x_i}, \qquad \boldsymbol{\nabla}\cdot\boldsymbol{S} = \sum_{i,j=1}^{3} \frac{\partial S_{ij}}{\partial x_j}\,\mathbf{e}_i,$$
where $\mathbf{v}$ is a vector field, $\boldsymbol{S}$ is a second-order tensor field, and $\mathbf{e}_i$ are the components of an orthonormal basis in the current configuration. Also,
$$\boldsymbol{\nabla}_{\circ}\mathbf{v} = \sum_{i,j=1}^{3} \frac{\partial v_i}{\partial X_j}\,\mathbf{E}_i\otimes\mathbf{E}_j, \qquad \boldsymbol{\nabla}_{\circ}\cdot\mathbf{v} = \sum_{i=1}^{3} \frac{\partial v_i}{\partial X_i}, \qquad \boldsymbol{\nabla}_{\circ}\cdot\boldsymbol{S} = \sum_{i,j=1}^{3} \frac{\partial S_{ij}}{\partial X_j}\,\mathbf{E}_i,$$
where $\mathbf{v}$ is a vector field, $\boldsymbol{S}$ is a second-order tensor field, and $\mathbf{E}_i$ are the components of an orthonormal basis in the reference configuration.
The inner product is defined as
$$\boldsymbol{A}:\boldsymbol{B} = \sum_{i,j=1}^{3} A_{ij} B_{ij} = \operatorname{trace}(\boldsymbol{A}\boldsymbol{B}^{T}).$$
Clausius–Duhem inequality
The Clausius–Duhem inequality can be used to express the second law of thermodynamics for elastic-plastic materials. This inequality is a statement concerning the irreversibility of natural processes, especially when energy dissipation is involved.
Just like in the balance laws in the previous section, we assume that there is a flux of a quantity, a source of the quantity, and an internal density of the quantity per unit mass. The quantity of interest in this case is the entropy. Thus, we assume that there is an entropy flux, an entropy source, an internal mass density and an internal specific entropy (i.e. entropy per unit mass) in the region of interest.
Let $\Omega$ be such a region and let $\partial\Omega$ be its boundary. Then the second law of thermodynamics states that the rate of increase of $\eta$ in this region is greater than or equal to the sum of that supplied to $\Omega$ (as a flux or from internal sources) and the change of the internal entropy density due to material flowing in and out of the region.
Let $\partial\Omega$ move with a flow velocity $u_n$ and let particles inside $\Omega$ have velocities $\mathbf{v}$. Let $\mathbf{n}$ be the unit outward normal to the surface $\partial\Omega$. Let $\rho$ be the density of matter in the region, $\bar{q}$ be the entropy flux at the surface, and $r$ be the entropy source per unit mass.
Then the entropy inequality may be written as
$$\frac{d}{dt}\left(\int_{\Omega} \rho\,\eta\,\mathrm{dV}\right) \ge \int_{\partial\Omega} \rho\,\eta\,(u_n - \mathbf{v}\cdot\mathbf{n})\,\mathrm{dA} + \int_{\partial\Omega} \bar{q}\,\mathrm{dA} + \int_{\Omega} \rho\,r\,\mathrm{dV}.$$
The scalar entropy flux can be related to the vector flux at the surface by the relation $\bar{q} = -\boldsymbol{\psi}\cdot\mathbf{n}$. Under the assumption of incrementally isothermal conditions, we have
$$\boldsymbol{\psi}(\mathbf{x}) = \frac{\mathbf{q}(\mathbf{x})}{T}, \qquad r = \frac{s}{T},$$
where $\mathbf{q}$ is the heat flux vector, $s$ is an energy source per unit mass, and $T$ is the absolute temperature of a material point at $\mathbf{x}$ at time $t$.
We then have the Clausius–Duhem inequality in integral form:
$$\frac{d}{dt}\left(\int_{\Omega} \rho\,\eta\,\mathrm{dV}\right) \ge \int_{\partial\Omega} \rho\,\eta\,(u_n - \mathbf{v}\cdot\mathbf{n})\,\mathrm{dA} - \int_{\partial\Omega} \frac{\mathbf{q}\cdot\mathbf{n}}{T}\,\mathrm{dA} + \int_{\Omega} \frac{\rho\,s}{T}\,\mathrm{dV}.$$
We can show that the entropy inequality may be written in differential form as
$$\rho\,\dot{\eta} \ge -\boldsymbol{\nabla}\cdot\left(\frac{\mathbf{q}}{T}\right) + \frac{\rho\,s}{T}.$$
In terms of the Cauchy stress and the internal energy, the Clausius–Duhem inequality may be written as
$$\rho\,(\dot{e} - T\,\dot{\eta}) - \boldsymbol{\sigma}:\boldsymbol{\nabla}\mathbf{v} \le -\frac{\mathbf{q}\cdot\boldsymbol{\nabla}T}{T}.$$
Validity
The validity of the continuum assumption may be verified by a theoretical analysis, in which either some clear periodicity is identified or statistical homogeneity and ergodicity of the microstructure exist. More specifically, the continuum hypothesis hinges on the concepts of a representative elementary volume and separation of scales based on the Hill–Mandel condition. This condition provides a link between an experimentalist's and a theoretician's viewpoint on constitutive equations (linear and nonlinear elastic/inelastic or coupled fields) as well as a way of spatial and statistical averaging of the microstructure.
When the separation of scales does not hold, or when one wants to establish a continuum of a finer resolution than the size of the representative volume element (RVE), a statistical volume element (SVE) is employed, which results in random continuum fields. The latter then provide a micromechanics basis for stochastic finite elements (SFE). The levels of SVE and RVE link continuum mechanics to statistical mechanics. Experimentally, the RVE can only be evaluated when the constitutive response is spatially homogeneous.
Applications
Continuum mechanics
Solid mechanics
Fluid mechanics
Engineering
Civil engineering
Mechanical engineering
Aerospace engineering
Biomedical engineering
Chemical engineering
| Physical sciences | Basics_10 | null |
5921 | https://en.wikipedia.org/wiki/Color | Color | Color (or colour in Commonwealth English; see spelling differences) is the visual perception based on the electromagnetic spectrum. Though color is not an inherent property of matter, color perception is related to an object's light absorption, reflection, emission spectra, and interference. For most humans, colors are perceived in the visible light spectrum with three types of cone cells (trichromacy). Other animals may have a different number of cone cell types or have eyes sensitive to different wavelengths, such as bees that can distinguish ultraviolet, and thus have a different color sensitivity range. Animal perception of color originates from different light wavelength or spectral sensitivity in cone cell types, which is then processed by the brain.
Colors have perceived properties such as hue, colorfulness (saturation), and luminance. Colors can also be additively mixed (commonly used for actual light) or subtractively mixed (commonly used for materials). If the colors are mixed in the right proportions, because of metamerism, they may look the same as a single-wavelength light. For convenience, colors can be organized in a color space, which when being abstracted as a mathematical color model can assign each region of color with a corresponding set of numbers. As such, color spaces are an essential tool for color reproduction in print, photography, computer monitors, and television. Some of the most well-known color models and color spaces are RGB, CMYK, HSL/HSV, CIE Lab, and YCbCr/YUV.
Because the perception of color is an important aspect of human life, different colors have been associated with emotions, activity, and nationality. Names of color regions in different cultures can have different, sometimes overlapping areas. In visual arts, color theory is used to govern the use of colors in an aesthetically pleasing and harmonious way. The theory of color includes the color complements; color balance; and classification of primary colors (traditionally red, yellow, blue), secondary colors (traditionally orange, green, purple), and tertiary colors. The study of colors in general is called color science.
Physical properties
Electromagnetic radiation is characterized by its wavelength (or frequency) and its intensity. When the wavelength is within the visible spectrum (the range of wavelengths humans can perceive, approximately from 390 nm to 700 nm), it is known as "visible light".
Most light sources emit light at many different wavelengths; a source's spectrum is a distribution giving its intensity at each wavelength. Although the spectrum of light arriving at the eye from a given direction determines the color sensation in that direction, there are many more possible spectral combinations than color sensations. In fact, one may formally define a color as a class of spectra that give rise to the same color sensation, although such classes would vary widely among different species, and to a lesser extent among individuals within the same species. In each such class, the members are called metamers of the color in question. This effect can be visualized by comparing the light sources' spectral power distributions and the resulting colors.
Spectral colors
The familiar colors of the rainbow in the spectrum—named using the Latin word for appearance or apparition by Isaac Newton in 1671—include all those colors that can be produced by visible light of a single wavelength only, the pure spectral or monochromatic colors. The spectrum above shows approximate wavelengths (in nm) for spectral colors in the visible range. Spectral colors have 100% purity, and are fully saturated. Any color can be described as a mixture of spectral colors; such a mixture is what a light's power spectrum specifies.
The spectral colors form a continuous spectrum, and how it is divided into distinct colors linguistically is a matter of culture and historical contingency. Despite the ubiquitous ROYGBIV mnemonic used to remember the spectral colors in English, the inclusion or exclusion of colors is contentious, with disagreement often focused on indigo and cyan. Even if the subset of color terms is agreed, their wavelength ranges and borders between them may not be.
The intensity of a spectral color, relative to the context in which it is viewed, may alter its perception considerably. For example, a low-intensity orange-yellow is brown, and a low-intensity yellow-green is olive green. Additionally, hue shifts towards yellow or blue happen if the intensity of a spectral light is increased; this is called Bezold–Brücke shift. In color models capable of representing spectral colors, such as CIELUV, a spectral color has the maximal saturation. In Helmholtz coordinates, this is described as 100% purity.
Color of objects
The physical color of an object depends on how it absorbs and scatters light. Most objects scatter light to some degree and do not reflect or transmit light specularly like glass or mirrors. A transparent object allows almost all light to transmit or pass through, thus transparent objects are perceived as colorless. Conversely, an opaque object does not allow light to transmit through and instead absorbs or reflects the light it receives. Like transparent objects, translucent objects allow light to transmit through, but translucent objects are seen colored because they scatter or absorb certain wavelengths of light via internal scattering. The absorbed light is often dissipated as heat.
Color vision
Development of theories of color vision
Although Aristotle and other ancient scientists had already written on the nature of light and color vision, it was not until Newton that light was identified as the source of the color sensation. In 1810, Goethe published his comprehensive Theory of Colors in which he provided a rational description of color experience, which 'tells us how it originates, not what it is'. (Schopenhauer)
In 1801 Thomas Young proposed his trichromatic theory, based on the observation that any color could be matched with a combination of three lights. This theory was later refined by James Clerk Maxwell and Hermann von Helmholtz. As Helmholtz puts it, "the principles of Newton's law of mixture were experimentally confirmed by Maxwell in 1856. Young's theory of color sensations, like so much else that this marvelous investigator achieved in advance of his time, remained unnoticed until Maxwell directed attention to it."
At the same time as Helmholtz, Ewald Hering developed the opponent process theory of color, noting that color blindness and afterimages typically come in opponent pairs (red-green, blue-orange, yellow-violet, and black-white). Ultimately these two theories were synthesized in 1957 by Hurvich and Jameson, who showed that retinal processing corresponds to the trichromatic theory, while processing at the level of the lateral geniculate nucleus corresponds to the opponent theory.
In 1931, an international group of experts known as the Commission internationale de l'éclairage (CIE) developed a mathematical color model, which mapped out the space of observable colors and assigned a set of three numbers to each.
Color in the eye
The ability of the human eye to distinguish colors is based upon the varying sensitivity of different cells in the retina to light of different wavelengths. Humans are trichromatic—the retina contains three types of color receptor cells, or cones. One type, relatively distinct from the other two, is most responsive to light that is perceived as blue or blue-violet, with wavelengths around 450 nm; cones of this type are sometimes called short-wavelength cones or S cones (or misleadingly, blue cones). The other two types are closely related genetically and chemically: middle-wavelength cones, M cones, or green cones are most sensitive to light perceived as green, with wavelengths around 540 nm, while the long-wavelength cones, L cones, or red cones, are most sensitive to light that is perceived as greenish yellow, with wavelengths around 570 nm.
Light, no matter how complex its composition of wavelengths, is reduced to three color components by the eye. Each cone type adheres to the principle of univariance, which is that each cone's output is determined by the amount of light that falls on it over all wavelengths. For each location in the visual field, the three types of cones yield three signals based on the extent to which each is stimulated. These amounts of stimulation are sometimes called tristimulus values.
The response curve as a function of wavelength varies for each type of cone. Because the curves overlap, some tristimulus values do not occur for any incoming light combination. For example, it is not possible to stimulate only the mid-wavelength (so-called "green") cones; the other cones will inevitably be stimulated to some degree at the same time. The set of all possible tristimulus values determines the human color space. It has been estimated that humans can distinguish roughly 10 million different colors.
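A sketch of how tristimulus values arise from a spectrum is given below. The Gaussian cone-sensitivity curves and the example spectrum are rough assumptions for illustration (real sensitivity curves are broader and asymmetric); the point is only that each cone type reduces the whole spectrum to a single number, per the principle of univariance:

```python
import numpy as np

# Wavelength grid (nm) covering the visible range.
wl = np.arange(390, 701, 1, dtype=float)

def gaussian(peak, width):
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

# Very rough stand-ins for the cone sensitivity curves (assumed shapes).
# Peaks follow the text: S ~ 450 nm, M ~ 540 nm, L ~ 570 nm.
S_sens = gaussian(450, 20)
M_sens = gaussian(540, 30)
L_sens = gaussian(570, 30)

# Example stimulus: a broadband light with extra power around 600 nm (assumed).
spectrum = 0.4 + 0.6 * gaussian(600, 25)

# Univariance: each cone's output is a single number, the sensitivity-weighted
# integral of the spectrum, regardless of how that spectrum is composed.
tristimulus = [np.trapz(spectrum * sens, wl) for sens in (L_sens, M_sens, S_sens)]
print(["%.1f" % t for t in tristimulus])   # the three L, M, S responses
```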
The other type of light-sensitive cell in the eye, the rod, has a different response curve. In normal situations, when light is bright enough to strongly stimulate the cones, rods play virtually no role in vision at all. On the other hand, in dim light, the cones are understimulated leaving only the signal from the rods, resulting in a colorless response (furthermore, the rods are barely sensitive to light in the "red" range). In certain conditions of intermediate illumination, the rod response and a weak cone response can together result in color discriminations not accounted for by cone responses alone. These effects, combined, are summarized also in the Kruithof curve, which describes the change of color perception and pleasingness of light as a function of temperature and intensity.
Color in the brain
While the mechanisms of color vision at the level of the retina are well-described in terms of tristimulus values, color processing after that point is organized differently. A dominant theory of color vision proposes that color information is transmitted out of the eye by three opponent processes, or opponent channels, each constructed from the raw output of the cones: a red–green channel, a blue–yellow channel, and a black–white "luminance" channel. This theory has been supported by neurobiology, and accounts for the structure of our subjective color experience. Specifically, it explains why humans cannot perceive a "reddish green" or "yellowish blue", and it predicts the color wheel: it is the collection of colors for which at least one of the two color channels measures a value at one of its extremes.
The exact nature of color perception beyond the processing already described, and indeed the status of color as a feature of the perceived world or rather as a feature of our perception of the world—a type of qualia—is a matter of complex and continuing philosophical dispute.
From the V1 blobs, color information is sent to cells in the second visual area, V2. The cells in V2 that are most strongly color tuned are clustered in the "thin stripes" that, like the blobs in V1, stain for the enzyme cytochrome oxidase (separating the thin stripes are interstripes and thick stripes, which seem to be concerned with other visual information like motion and high-resolution form). Neurons in V2 then synapse onto cells in the extended V4. This area includes not only V4, but two other areas in the posterior inferior temporal cortex, anterior to area V3, the dorsal posterior inferior temporal cortex, and posterior TEO. Area V4 was initially suggested by Semir Zeki to be exclusively dedicated to color, and he later showed that V4 can be subdivided into subregions with very high concentrations of color cells separated from each other by zones with lower concentration of such cells, though even the latter cells respond better to some wavelengths than to others, a finding confirmed by subsequent studies. The presence in V4 of orientation-selective cells led to the view that V4 is involved in processing both color and form associated with color, but it is worth noting that the orientation-selective cells within V4 are more broadly tuned than their counterparts in V1, V2, and V3. Color processing in the extended V4 occurs in millimeter-sized color modules called globs. This is the part of the brain in which color is first processed into the full range of hues found in color space.
Nonstandard color perception
Color vision deficiency
A color vision deficiency causes an individual to perceive a smaller gamut of colors than the standard observer with normal color vision. The effect can be mild, having lower "color resolution" (i.e. anomalous trichromacy), moderate, lacking an entire dimension or channel of color (e.g. dichromacy), or complete, lacking all color perception (i.e. monochromacy). Most forms of color blindness derive from one or more of the three classes of cone cells either being missing, having a shifted spectral sensitivity or having lower responsiveness to incoming light. In addition, cerebral achromatopsia is caused by neural anomalies in those parts of the brain where visual processing takes place.
Some colors that appear distinct to an individual with normal color vision will appear metameric to the color blind. The most common form of color blindness is congenital red–green color blindness, affecting ~8% of males. Individuals with the strongest form of this condition (dichromacy) will experience blue and purple, green and yellow, and teal and gray as colors of confusion, i.e. metamers.
Tetrachromacy
Outside of humans, which are mostly trichromatic (having three types of cones), most mammals are dichromatic, possessing only two cones. However, outside of mammals, most vertebrates are tetrachromatic, having four types of cones. This includes most birds, reptiles, amphibians, and bony fish. An extra dimension of color vision means these vertebrates can see two distinct colors that a normal human would view as metamers. Some invertebrates, such as the mantis shrimp, have an even higher number of cones (12) that could lead to a richer color gamut than even imaginable by humans.
The existence of human tetrachromats is a contentious notion. As many as half of all human females have 4 distinct cone classes, which could enable tetrachromacy. However, a distinction must be made between retinal (or weak) tetrachromats, which express four cone classes in the retina, and functional (or strong) tetrachromats, which are able to make the enhanced color discriminations expected of tetrachromats. In fact, there is only one peer-reviewed report of a functional tetrachromat. It is estimated that while the average person is able to see one million colors, someone with functional tetrachromacy could see a hundred million colors.
Synesthesia
In certain forms of synesthesia, perceiving letters and numbers (grapheme–color synesthesia) or hearing sounds (chromesthesia) will evoke a perception of color. Behavioral and functional neuroimaging experiments have demonstrated that these color experiences lead to changes in behavioral tasks and lead to increased activation of brain regions involved in color perception, thus demonstrating their reality, and similarity to real color percepts, albeit evoked through a non-standard route. Synesthesia can occur genetically, with 4% of the population having variants associated with the condition. Synesthesia has also been known to occur with brain damage, drugs, and sensory deprivation.
The philosopher Pythagoras experienced synesthesia and provided one of the first written accounts of the condition in approximately 550 BCE. He created mathematical equations for musical notes that could form part of a scale, such as an octave.
Afterimages
After exposure to strong light in their sensitivity range, photoreceptors of a given type become desensitized. For a few seconds after the light ceases, they will continue to signal less strongly than they otherwise would. Colors observed during that period will appear to lack the color component detected by the desensitized photoreceptors. This effect is responsible for the phenomenon of afterimages, in which the eye may continue to see a bright figure after looking away from it, but in a complementary color. Afterimage effects have also been used by artists, including Vincent van Gogh.
Color constancy
When an artist uses a limited color palette, the human visual system tends to compensate by seeing any gray or neutral color as the color which is missing from the color wheel. For example, in a limited palette consisting of red, yellow, black, and white, a mixture of yellow and black will appear as a variety of green, a mixture of red and black will appear as a variety of purple, and pure gray will appear bluish.
The trichromatic theory is strictly true when the visual system is in a fixed state of adaptation. In reality, the visual system is constantly adapting to changes in the environment and compares the various colors in a scene to reduce the effects of the illumination. If a scene is illuminated with one light, and then with another, as long as the difference between the light sources stays within a reasonable range, the colors in the scene appear relatively constant to us. This was studied by Edwin H. Land in the 1970s and led to his retinex theory of color constancy.
Both phenomena are readily explained and mathematically modeled with modern theories of chromatic adaptation and color appearance (e.g. CIECAM02, iCAM). There is no need to dismiss the trichromatic theory of vision, but rather it can be enhanced with an understanding of how the visual system adapts to changes in the viewing environment.
Reproduction
Color reproduction is the science of creating colors for the human eye that faithfully represent the desired color. It focuses on how to construct a spectrum of wavelengths that will best evoke a certain color in an observer. Most colors are not spectral colors, meaning they are mixtures of various wavelengths of light. However, these non-spectral colors are often described by their dominant wavelength, which identifies the single wavelength of light that produces a sensation most similar to the non-spectral color. Dominant wavelength is roughly akin to hue.
There are many color perceptions that by definition cannot be pure spectral colors due to desaturation or because they are purples (mixtures of red and violet light, from opposite ends of the spectrum). Some examples of necessarily non-spectral colors are the achromatic colors (black, gray, and white) and colors such as pink, tan, and magenta.
Two different light spectra that have the same effect on the three color receptors in the human eye will be perceived as the same color. They are metamers of that color. This is exemplified by the white light emitted by fluorescent lamps, which typically has a spectrum of a few narrow bands, while daylight has a continuous spectrum. The human eye cannot tell the difference between such light spectra just by looking into the light source, although the color rendering index of each light source may affect the color of objects illuminated by these metameric light sources.
Similarly, most human color perceptions can be generated by a mixture of three colors called primaries. This is used to reproduce color scenes in photography, printing, television, and other media. There are a number of methods or color spaces for specifying a color in terms of three particular primary colors. Each method has its advantages and disadvantages depending on the particular application.
No mixture of colors, however, can produce a response truly identical to that of a spectral color, although one can get close, especially for the longer wavelengths, where the CIE 1931 color space chromaticity diagram has a nearly straight edge. For example, mixing green light (530 nm) and blue light (460 nm) produces cyan light that is slightly desaturated, because the response of the red color receptor would be greater to the green and blue light in the mixture than it would be to a pure cyan light at 485 nm that has the same intensity as the mixture of blue and green.
Because of this, and because the primaries in color printing systems generally are not pure themselves, the colors reproduced are never perfectly saturated spectral colors, and so spectral colors cannot be matched exactly. However, natural scenes rarely contain fully saturated colors, thus such scenes can usually be approximated well by these systems. The range of colors that can be reproduced with a given color reproduction system is called the gamut. The CIE chromaticity diagram can be used to describe the gamut.
Another problem with color reproduction systems is connected with the initial measurement of color, or colorimetry. The characteristics of the color sensors in measurement devices (e.g. cameras, scanners) are often very far from the characteristics of the receptors in the human eye.
A color reproduction system "tuned" to a human with normal color vision may give very inaccurate results for other observers whose color vision deviates from that of the standard observer.
The different color response of different devices can be problematic if not properly managed. For color information stored and transferred in digital form, color management techniques, such as those based on ICC profiles, can help to avoid distortions of the reproduced colors. Color management does not circumvent the gamut limitations of particular output devices, but can assist in finding good mapping of input colors into the gamut that can be reproduced.
Additive coloring
Additive color is light created by mixing together light of two or more different colors. Red, green, and blue are the additive primary colors normally used in additive color systems such as projectors, televisions, and computer terminals.
Subtractive coloring
Subtractive coloring uses dyes, inks, pigments, or filters to absorb some wavelengths of light and not others. The color that a surface displays comes from the parts of the visible spectrum that are not absorbed and therefore remain visible. Without pigments or dye, fabric fibers, paint base and paper are usually made of particles that scatter white light (all colors) well in all directions. When a pigment or ink is added, wavelengths are absorbed or "subtracted" from white light, so light of another color reaches the eye.
If the light source is not pure white (as is the case with nearly all forms of artificial lighting), the resulting spectrum will appear a slightly different color. Red paint, viewed under blue light, may appear black. Red paint is red because it scatters only the red components of the spectrum. If red paint is illuminated by blue light, the blue light will be absorbed by the red paint, creating the appearance of a black object.
The subtractive model also predicts the color resulting from a mixture of paints, or similar medium such as fabric dye, whether applied in layers or mixed together prior to application. In the case of paint mixed before application, incident light interacts with many different pigment particles at various depths inside the paint layer before emerging.
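The difference between additive and subtractive mixing can be sketched directly on spectra: lights add, whereas a paint layer multiplies the illuminant by its reflectance. The box-shaped spectra below are crude assumptions for illustration, not a colorimetric calculation; they show why red paint returns very little light under a blue illuminant:

```python
import numpy as np

wl = np.arange(390, 701, 1, dtype=float)    # wavelengths in nm

def band(center, width):
    """Crude box-like spectral band (assumed shape, for illustration)."""
    return np.where(np.abs(wl - center) < width, 1.0, 0.05)

red_paint_reflectance = band(650, 60)   # reflects mainly long (red) wavelengths
blue_light = band(460, 40)              # emits mainly short (blue) wavelengths
white_light = np.ones_like(wl)

# Additive mixing: the spectra of two lights simply add.
additive = blue_light + band(650, 60)   # a blue light plus a red light

# Subtractive colouring: the light reaching the eye is illuminant * reflectance.
under_white = white_light * red_paint_reflectance   # looks red
under_blue = blue_light * red_paint_reflectance     # almost no light returned

print("power under white light: %.1f" % np.trapz(under_white, wl))
print("power under blue light:  %.1f" % np.trapz(under_blue, wl))   # much smaller
```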
Structural color
Structural colors are colors caused by interference effects rather than by pigments. Color effects are produced when a material is scored with fine parallel lines, formed of one or more parallel thin layers, or otherwise composed of microstructures on the scale of the color's wavelength. If the microstructures are spaced randomly, light of shorter wavelengths will be scattered preferentially to produce Tyndall effect colors: the blue of the sky (Rayleigh scattering, caused by structures much smaller than the wavelength of light, in this case, air molecules), the luster of opals, and the blue of human irises. If the microstructures are aligned in arrays, for example, the array of pits in a CD, they behave as a diffraction grating: the grating reflects different wavelengths in different directions due to interference phenomena, separating mixed "white" light into light of different wavelengths. If the structure is one or more thin layers then it will reflect some wavelengths and transmit others, depending on the layers' thickness.
Structural color is studied in the field of thin-film optics. The most ordered or the most changeable structural colors are iridescent. Structural color is responsible for the blues and greens of the feathers of many birds (the blue jay, for example), as well as certain butterfly wings and beetle shells. Variations in the pattern's spacing often give rise to an iridescent effect, as seen in peacock feathers, soap bubbles, films of oil, and mother of pearl, because the reflected color depends upon the viewing angle. Numerous scientists have carried out research in butterfly wings and beetle shells, including Isaac Newton and Robert Hooke. Since 1942, electron micrography has been used, advancing the development of products that exploit structural color, such as "photonic" cosmetics.
Optimal colors
Optimal colors are the most chromatic colors that surfaces can have. That is, optimal colors are the theoretical limit for the color of objects*. For now, we are unable to produce objects with such colors, at least not without resorting to more complex physical phenomena.
*(with classical reflection. Phenomena like fluorescence or structural color may produce objects whose color lies outside the optimal color solid)
The plot of the gamut bounded by optimal colors in a color space is called the optimal color solid or Rösch–MacAdam color solid.
The reflectance spectrum of a color is the amount of light of each wavelength that it reflects, in proportion to a given maximum, which is total reflection of light of that wavelength, and has the value of 1 (100%). If the reflectance spectrum of a color is 0 (0%) or 1 (100%) across the entire visible spectrum, and it has no more than two transitions between 0 and 1, or 1 and 0, then it is an optimal color. With the current state of technology, we are unable to produce any material or pigment with these properties.
Thus four types of "optimal color" spectra are possible:
The reflectance goes from 0 at both ends of the spectrum to 1 in the middle.
It goes from 1 at the ends to 0 in the middle.
It goes from 1 at the start of the visible spectrum to 0 at some point in the middle, remaining 0 until its end.
It goes from 0 at the start of the visible spectrum to 1 at some point in the middle, remaining 1 until its end.
The first type produces colors that are similar to the spectral colors and follow roughly the horseshoe-shaped portion of the CIE xy chromaticity diagram (the spectral locus), but are, in surfaces, more chromatic, although less spectrally pure. The second type produces colors that are similar to (but, in surfaces, more chromatic and less spectrally pure than) the colors on the straight line in the CIE xy chromaticity diagram (the line of purples), leading to magenta or purple-like colors. The third type produces the colors located in the "warm" sharp edge of the optimal color solid (this will be explained later in the article). The fourth type produces the colors located in the "cold" sharp edge of the optimal color solid.
In optimal color solids, the colors of the visible spectrum are theoretically black, because their reflectance spectrum is 1 (100%) in only one wavelength, and 0 in all of the other infinite visible wavelengths that there are, meaning that they have a lightness of 0 with respect to white, and will also have 0 chroma, but, of course, 100% of spectral purity. In short: In optimal color solids, spectral colors are equivalent to black (0 lightness, 0 chroma), but have full spectral purity (they are located in the horseshoe-shaped spectral locus of the chromaticity diagram).
In linear color spaces that contain all colors visible by humans, such as LMS or CIE 1931 XYZ, the set of half-lines that start at the origin (black, (0, 0, 0)) and pass through all the points that represent the colors of the visible spectrum, and the portion of a plane that passes through the violet half-line and the red half-line (both ends of the visible spectrum), generate the "spectrum cone". The black point (coordinates (0, 0, 0)) of the optimal color solid (and only the black point) is tangent to the "spectrum cone", and the white point ((1, 1, 1)) (only the white point) is tangent to the "inverted spectrum cone", with the "inverted spectrum cone" being symmetrical to the "spectrum cone" with respect to the middle gray point ((0.5, 0.5, 0.5)). This means that, in linear color spaces, the optimal color solid is centrally symmetric.
In most color spaces, the surface of the optimal color solid is smooth, except for two points (black and white); and two sharp edges: the "warm" edge, which goes from black, to red, to orange, to yellow, to white; and the "cold" edge, which goes from black, to deep violet, to blue, to cyan, to white. This is due to the following: If the only reflected portion of the spectrum is the extreme spectral red (which is located at one end of the spectrum), the color will be seen as black. If the size of the portion of total reflectance is increased, now covering from the red end of the spectrum to the yellow wavelengths, it will be seen as red or orange. If the portion is expanded even more, covering some green wavelengths, it will be seen as yellow. If it is expanded even more, it will cover more wavelengths than the yellow semichrome does, approaching white, until white is reached when the full spectrum is reflected. The described process is called "cumulation". Cumulation can be started at either end of the visible spectrum (we just described cumulation starting from the red end of the spectrum, generating the "warm" sharp edge); cumulation starting at the violet end of the spectrum will generate the "cold" sharp edge.
On modern computers, it is possible to calculate an optimal color solid with great precision in seconds. Usually, only the MacAdam limits (the optimal colors, the boundary of the Optimal color solid) are computed, because all the other (non-optimal) colors exist inside the boundary.
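Because an optimal color's reflectance takes only the values 0 and 1 with at most two transitions, the four types of optimal spectra described above are easy to generate programmatically. The sketch below is illustrative only; the wavelength grid and transition wavelengths are arbitrary choices:

```python
import numpy as np

wl = np.arange(390, 701, 1, dtype=float)       # visible wavelengths, nm

def optimal_spectrum(lo=None, hi=None, band_pass=True):
    """Reflectance that is 0 or 1 with at most two transitions (an optimal color).

    band_pass=True  -> 1 between lo and hi, 0 elsewhere (type 1)
    band_pass=False -> 0 between lo and hi, 1 elsewhere (type 2)
    Leaving lo or hi as None gives the single-transition "cumulation" spectra
    (types 3 and 4), which are 1 on one side of a single wavelength.
    """
    inside = np.ones_like(wl, dtype=bool)
    if lo is not None:
        inside &= wl >= lo
    if hi is not None:
        inside &= wl <= hi
    return np.where(inside == band_pass, 1.0, 0.0)

type1 = optimal_spectrum(500, 580)                     # 0 -> 1 -> 0
type2 = optimal_spectrum(500, 580, band_pass=False)    # 1 -> 0 -> 1 (purple-like)
type3 = optimal_spectrum(hi=550)                       # single transition, 1 then 0
type4 = optimal_spectrum(lo=550)                       # the complementary spectrum

for name, spec in [("type1", type1), ("type2", type2), ("type3", type3), ("type4", type4)]:
    transitions = np.count_nonzero(np.diff(spec))
    print(name, "transitions:", transitions)           # 2, 2, 1, 1 -- all optimal
```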
Maximum chroma colors, semichromes, or full colors
Each hue has a maximum chroma point, semichrome, or full color; objects cannot have a color of that hue with a higher chroma. They are the most chromatic, vibrant colors that objects can have. They were called semichromes or full colors by the German chemist and philosopher Wilhelm Ostwald in the early 20th century.
If B is the complementary wavelength of wavelength A, then the straight line that connects A and B passes through the achromatic axis in a linear color space, such as LMS or CIE 1931 XYZ. If the reflectance spectrum of a color is 1 (100%) for all the wavelengths between A and B, and 0 for all the wavelengths in the other half of the visible spectrum, then that color is a maximum chroma color, semichrome, or full color (this is the explanation of why they were called semichromes). Thus, maximum chroma colors are a type of optimal color.
As explained, full colors are far from being monochromatic (physically, not perceptually). If the spectral purity of a semichrome is increased, its chroma decreases, because it approaches the spectral colors, which in the optimal color solid are equivalent to black.
In perceptually uniform color spaces, the lightness of the full colors varies from around 30% in the violetish blue hues, to around 90% in the yellowish hues. The chroma of each maximum chroma point also varies depending on the hue; in optimal color solids plotted in perceptually uniform color spaces, semichromes like red, green, violet, and magenta have a high chroma, while semichromes like yellow, orange, and cyan have a slightly lower chroma.
Cultural perspective
The meanings and associations of colors can play a major role in works of art, including literature.
Associations
Individual colors have a variety of cultural associations such as national colors (in general described in individual color articles and color symbolism). The field of color psychology attempts to identify the effects of color on human emotion and activity. Chromotherapy is a form of alternative medicine attributed to various Eastern traditions. Colors have different associations in different countries and cultures.
Different colors have been demonstrated to have effects on cognition. For example, researchers at the University of Linz in Austria demonstrated that the color red significantly decreases cognitive functioning in men. The combination of the colors red and yellow together can induce hunger, which has been capitalized on by a number of chain restaurants.
Color plays a role in memory development too. A photograph that is in black and white is slightly less memorable than one in color. Studies also show that wearing bright colors makes one more memorable to people they meet.
Terminology
Colors vary in several different ways, including hue (shades of red, orange, yellow, green, blue, and violet, etc.), saturation, brightness. Some color words are derived from the name of an object of that color, such as "orange" or "salmon", while others are abstract, like "red".
In the 1969 study Basic Color Terms: Their Universality and Evolution, Brent Berlin and Paul Kay describe a pattern in naming "basic" colors (like "red" but not "red-orange" or "dark red" or "blood red", which are "shades" of red). All languages that have two "basic" color names distinguish dark/cool colors from bright/warm colors. The next colors to be distinguished are usually red and then yellow or green. All languages with six "basic" colors include black, white, red, green, blue, and yellow. The pattern holds up to a set of twelve: black, gray, white, pink, red, orange, yellow, green, blue, purple, brown, and azure (distinct from blue in Russian and Italian, but not English).
Unusual colors
Some colors are objectively unusual or special. For example, orpiment was a pigment used by painters in the 16th century, but is now considered dangerous due to its arsenic content. Sonoluminescence is a blue-purple light created by the energy of sound waves from tiny bubbles in extreme experimental conditions, and was discovered in 1934.
| Physical sciences | Physics | null |
5926 | https://en.wikipedia.org/wiki/Computation | Computation | A computation is any type of arithmetic or non-arithmetic calculation that is well-defined. Common examples of computation are mathematical equation solving and the execution of computer algorithms.
Mechanical or electronic devices (or, historically, people) that perform computations are known as computers.
Computer science is an academic field that involves the study of computation.
Introduction
The notion that mathematical statements should be 'well-defined' had been argued by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive. A candidate definition was proposed independently by several mathematicians in the 1930s. The best-known variant was formalised by the mathematician Alan Turing, who defined a well-defined statement or calculation as any statement that could be expressed in terms of the initialisation parameters of a Turing machine. Other (mathematically equivalent) definitions include Alonzo Church's lambda-definability, Herbrand-Gödel-Kleene's general recursiveness and Emil Post's 1-definability.
Today, any formal statement or calculation that exhibits this quality of well-definedness is termed computable, while the statement or calculation itself is referred to as a computation.
Turing's definition apportioned "well-definedness" to a very large class of mathematical statements, including all well-formed algebraic statements, and all statements written in modern computer programming languages.
Despite the widespread uptake of this definition, there are some mathematical concepts that have no well-defined characterisation under this definition. This includes the halting problem and the busy beaver game. It remains an open question as to whether there exists a more powerful definition of 'well-defined' that is able to capture both computable and 'non-computable' statements.
Some examples of mathematical statements that are computable include:
All statements characterised in modern programming languages, including C++, Python, and Java.
All calculations carried out by an electronic computer, calculator or abacus.
All calculations carried out on an analytical engine.
All calculations carried out on a Turing Machine.
The majority of mathematical statements and calculations given in maths textbooks.
Some examples of mathematical statements that are not computable include:
Calculations or statements which are ill-defined, such that they cannot be unambiguously encoded into a Turing machine: ("Paul loves me twice as much as Joe").
Problem statements which do appear to be well-defined, but for which it can be proved that no Turing machine exists to solve them (such as the halting problem).
The physical process of computation
Computation can be seen as a purely physical process occurring inside a closed physical system called a computer. Turing's 1937 proof, On Computable Numbers, with an Application to the Entscheidungsproblem, demonstrated that there is a formal equivalence between computable statements and particular physical systems, commonly called computers. Examples of such physical systems are: Turing machines, human mathematicians following strict rules, digital computers, mechanical computers, analog computers and others.
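A computation in this sense can be made concrete with a small simulator. The sketch below implements a one-tape Turing machine driven by a transition table; the machine shown (which flips the bits of a binary string), together with its states and alphabet, is an assumption made up for the example, not material from the article:

```python
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left), 0, or +1 (right). The machine halts when it
    enters the state "halt" or when no transition is defined.
    """
    tape = dict(enumerate(tape))       # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# A made-up example machine: scan right, flipping 0 <-> 1, halt at the blank.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(flip_bits, "01011"))   # prints 10100
```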
Alternative accounts of computation
The mapping account
An alternative account of computation is found throughout the works of Hilary Putnam and others. Peter Godfrey-Smith has dubbed this the "simple mapping account." Gualtiero Piccinini's summary of this account states that a physical system can be said to perform a specific computation when there is a mapping between the state of that system and the computation such that the "microphysical states [of the system] mirror the state transitions between the computational states."
The semantic account
Philosophers such as Jerry Fodor have suggested various accounts of computation with the restriction that semantic content be a necessary condition for computation (that is, what differentiates an arbitrary physical system from a computing system is that the operands of the computation represent something). This notion attempts to prevent the logical abstraction of the mapping account of pancomputationalism, the idea that everything can be said to be computing everything.
The mechanistic account
Gualtiero Piccinini proposes an account of computation based on mechanical philosophy. It states that physical computing systems are types of mechanisms that, by design, perform physical computation, or the manipulation (by a functional mechanism) of a "medium-independent" vehicle according to a rule. "Medium-independence" requires that the property can be instantiated by multiple realizers and multiple mechanisms, and that the inputs and outputs of the mechanism also be multiply realizable. In short, medium-independence allows for the use of physical variables with properties other than voltage (as in typical digital computers); this is imperative in considering other types of computation, such as that which occurs in the brain or in a quantum computer. A rule, in this sense, provides a mapping among inputs, outputs, and internal states of the physical computing system.
Mathematical models
In the theory of computation, a diversity of mathematical models of computation has been developed.
Typical mathematical models of computers are the following:
State models including Turing machine, pushdown automaton, finite-state automaton, and PRAM
Functional models including lambda calculus
Logical models including logic programming
Concurrent models including actor model and process calculi
Giunti calls the models studied by computation theory computational systems, and he argues that all of them are mathematical dynamical systems with discrete time and discrete state space. He maintains that a computational system is a complex object which consists of three parts: first, a mathematical dynamical system with discrete time and discrete state space; second, a computational setup, which is made up of a theoretical part and a real part; third, an interpretation, which links the dynamical system with the setup.
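This reading of computational systems as dynamical systems with discrete time and discrete state space is easy to see for the simplest of the state models listed above, a finite-state automaton, whose evolution is just repeated application of a transition function. The automaton below (which accepts binary strings containing an even number of 1s) is an assumed example for illustration:

```python
# A deterministic finite automaton: a dynamical system with a finite state
# space {even, odd} and discrete time steps driven by the input symbols.
# The automaton below (made up for illustration) accepts binary strings
# containing an even number of 1s.

transition = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts(word, start="even", accepting=frozenset({"even"})):
    state = start
    for symbol in word:                 # one discrete time step per input symbol
        state = transition[(state, symbol)]
    return state in accepting

print(accepts("1101"))   # False: three 1s
print(accepts("1001"))   # True: two 1s
```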
| Mathematics | Theoretical computer science | null |
5931 | https://en.wikipedia.org/wiki/Cycling | Cycling | Cycling, also known as bicycling or biking, is the activity of riding a bicycle or other type of cycle. It encompasses the use of human-powered vehicles such as balance bikes, unicycles, tricycles, and quadricycles. Cycling is practised around the world for purposes including transport, recreation, exercise, and competitive sport.
History
Cycling became popularized in Europe and North America in the latter part and especially the last decade of the 19th century. Today, over 50 percent of the human population knows how to ride a bike.
War
The bicycle has been used as a method of reconnaissance as well as transporting soldiers and supplies to combat zones. In this it has taken over many of the functions of horses in warfare. In the Second Boer War, both sides used bicycles for scouting. In World War I, France, Germany, Australia and New Zealand used bicycles to move troops. In its 1937 invasion of China, Japan employed some 50,000 bicycle troops, and similar forces were instrumental in Japan's march or "roll" through Malaya in World War II. Germany used bicycles again in World War II, while the British employed airborne "Cycle-commandos" with folding bikes.
In the Vietnam War, communist forces used bicycles extensively as cargo carriers along the Ho Chi Minh Trail.
The last country known to maintain a regiment of bicycle troops was Switzerland, which disbanded its last unit in 2003.
Equipment
In many countries, the most commonly used vehicle for road transport is a utility bicycle. These have frames with relaxed geometry, protecting the rider from shocks of the road and easing steering at low speeds. Utility bicycles tend to be equipped with accessories such as mudguards, pannier racks and lights, which extend their usefulness on a daily basis. Since the bicycle is so effective as a means of transportation, various companies have developed methods of carrying anything from the weekly shop to children on bicycles. Certain countries rely heavily on bicycles and their culture has developed around the bicycle as a primary form of transport. In Europe, Denmark and the Netherlands have the most bicycles per capita and most often use bicycles for everyday transport.
Road bikes tend to have a more upright shape and a shorter wheelbase, which make the bike more mobile but harder to ride slowly. The design, coupled with low or dropped handlebars, requires the rider to bend forward more, making use of stronger muscles (particularly the gluteus maximus) and reducing air resistance at high speed.
Road bikes are designed for speed and efficiency on paved roads. They are characterized by their lightweight frames, skinny tires, drop handlebars, and narrow saddles. Road bikes are ideal for racing, long-distance riding, and fitness training.
Other common types of bikes include gravel bikes, designed for use on gravel roads or trails, but with the ability to ride well on pavement; mountain bikes, which are designed for more rugged, undulating terrain; and e-bikes, which provide some level of motorized assist for the rider. There are additional variations of bikes and types of biking as well.
The price of a new bicycle can range from US$50 to more than US$20,000 (the highest priced bike in the world is the custom Madone by Damien Hirst, sold at US$500,000), depending on quality, type and weight (the most exotic road bicycles can weigh as little as 3.2 kg (7 lb)). However, UCI regulations stipulate a legal race bike cannot weigh less than 6.8 kg (14.99 lbs). Being measured for a bike and taking it for a test ride are recommended before buying.
The drivetrain components of the bike should also be considered. A middle grade dérailleur is sufficient for a beginner, although many utility bikes are equipped with hub gears. If the rider plans a significant amount of hillclimbing, a triple-chainring crankset gear system may be preferred. Otherwise, the relatively lighter, simpler, and less expensive double chainring is preferred, even on high-end race bikes. Much simpler fixed wheel bikes are also available.
Many road bikes, along with mountain bikes, include clipless pedals to which special shoes attach, via a cleat, enabling the rider to pull on the pedals as well as push. Other possible accessories for the bicycle include front and rear lights, bells or horns, child carrying seats, cycling computers with GPS, locks, bar tape, fenders (mud-guards), baggage racks, baggage carriers and pannier bags, water bottles and bottle cages.
For basic maintenance and repairs cyclists can carry a pump (or a CO2 cartridge), a puncture repair kit, a spare inner tube, tire levers, and a set of allen keys. Cycling can be more efficient and comfortable with special shoes, gloves, and shorts. In wet weather, riding can be more tolerable with waterproof clothes, such as a cape, jacket, trousers (pants) and overshoes; high-visibility clothing is advisable to reduce the risk from motor vehicle users.
Items legally required in some jurisdictions, or voluntarily adopted for safety reasons, include bicycle helmets, generator or battery operated lights, reflectors, and audible signalling devices such as a bell or horn. Extras include studded tires and a bicycle computer.
Bikes can also be heavily customized, with different seat designs and handle bars, for example. Gears can also be customized to better suit the rider's strength in relation to the terrain.
Skills
Many schools and police departments run educational programs to instruct children in bicycle handling skills, especially to introduce them to the rules of the road as they apply to cyclists. In some countries these may be known as bicycle rodeos, or operated as schemes such as Bikeability in the UK. Education for adult cyclists is available from organizations such as the League of American Bicyclists.
Beyond simply riding, another skill is riding efficiently and safely in traffic. One popular approach to riding in motor vehicle traffic is vehicular cycling, occupying road space as a car does. Alternately, in countries such as Denmark and the Netherlands, where cycling is popular, cyclists are often segregated into bike lanes at the side of, or more often separate from, main highways and roads. Many primary schools participate in the national road test in which children individually complete a circuit on roads near the school while being observed by testers.
Infrastructure
Cyclists, pedestrians and motorists make different demands on road design which may lead to conflicts. Some jurisdictions give priority to motorized traffic, for example setting up one-way street systems, free-right turns, high capacity roundabouts, and slip roads. Others share priority with cyclists so as to encourage more cycling by applying varying combinations of traffic calming measures to limit the impact of motorized transport, and by building bike lanes, bike paths and cycle tracks. The provision of cycling infrastructure varies widely between cities and countries, particularly since cycling for transportation almost entirely occurs in public streets. The development of computer vision and street-view imagery has also provided significant potential for assessing infrastructure for cyclists.
In jurisdictions where motor vehicles were given priority, cycling has tended to decline while in jurisdictions where cycling infrastructure was built, cycling rates have remained steady or increased. Occasionally, extreme measures against cycling may occur. In Shanghai, where bicycles were once the dominant mode of transport, bicycle travel on a few city roads was banned temporarily in December 2003.
In areas in which cycling is popular and encouraged, cycle-parking facilities using bicycle stands, lockable mini-garages, and patrolled cycle parks are used to reduce theft. Local governments promote cycling by permitting bicycles to be carried on public transport or by providing external attachment devices on public transport vehicles. Conversely, an absence of secure cycle-parking is a recurring complaint by cyclists from cities with low modal share of cycling.
Extensive cycling infrastructure may be found in some cities. Such dedicated paths in some cities often have to be shared with in-line skaters, scooters, skateboarders, and pedestrians. Dedicated cycling infrastructure is treated differently in the law of every jurisdiction, including the question of liability of users in a collision. There is also some debate about the safety of the various types of separated facilities.
Bicycles are considered a sustainable mode of transport, especially suited for urban use and relatively shorter distances when used for transport (compared to recreation). Case studies and good practices (from European cities and some worldwide examples) that promote and stimulate this kind of functional cycling in cities can be found at Eltis, Europe's portal for local transport.
A number of cities, including Paris, London and Barcelona, now have successful bike hire schemes designed to help people cycle in the city. Typically these feature utilitarian city bikes which lock into docking stations and are released on payment for set time periods. Costs vary from city to city. In London, initial hire access costs £2 per day; the first 30 minutes of each trip are free, with £2 charged for each additional 30 minutes until the bicycle is returned.
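As a rough illustration of the arithmetic behind such a tariff, the sketch below computes the cost of a day's use from the London figures quoted above; the function name and structure are hypothetical and not part of any scheme's actual software.

```python
import math

def london_hire_cost(trip_minutes, access_fee=2.00, block_fee=2.00,
                     free_minutes=30, block_minutes=30):
    """Estimate one day's hire cost in pounds: a flat daily access fee,
    the first 30 minutes of the trip free, then a fee for each started
    30-minute block until the bicycle is returned."""
    extra = max(0, trip_minutes - free_minutes)
    blocks = math.ceil(extra / block_minutes)
    return access_fee + blocks * block_fee

# A 75-minute trip: £2 access + 2 extra blocks x £2 = £6
print(london_hire_cost(75))
```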
In the Netherlands, many roads have one or two separate cycleways alongside them, or cycle lanes marked on the road. On roads where adjacent bike paths or cycle tracks exist, the use of these facilities is compulsory and cycling on the main carriageway is not permitted. Some 35,000 km of cycle track has been physically segregated from motor traffic, equal to a quarter of the country's entire 140,000 km road network. A quarter of all trips in the country are made by bicycle, one quarter of them to work. Even the prime minister goes to work by bicycle when weather permits. This saves the lives of 6,000 citizens per year, prolongs life expectancy by 6 months, saves the country 20 million dollars per year, and prevents about 150 grams of CO2 from being emitted per kilometer cycled.
Types
Utility
Utility cycling refers both to cycling as a mode of daily commuting transport and to the use of a bicycle in commercial activity, mainly to transport goods, mostly in an urban environment.
The postal services of many countries have long relied on bicycles. The British Royal Mail first started using bicycles in 1880; now bicycle delivery fleets include 37,000 in the UK, 25,700 in Germany, 10,500 in Hungary and 7000 in Sweden. In Australia, Australia Post has also reintroduced bicycle postal deliveries on some routes due to an inability to recruit sufficient licensed riders willing to use their uncomfortable motorbikes. The London Ambulance Service has recently introduced bicycling paramedics, who can often get to the scene of an incident in Central London more quickly than a motorized ambulance.
The use of bicycles by police has been increasing, since they provide greater accessibility to bicycle and pedestrian zones and allow access when roads are congested. In some cases, bicycle officers have been used as a supplement or a replacement for horseback officers.
Bicycles enjoy substantial use as general delivery vehicles in many countries. In the UK and North America, as their first jobs, generations of teenagers have worked at delivering newspapers by bicycle. London has many delivery companies that use bicycles with trailers. Most cities in the West, and many outside it, support a sizeable and visible industry of cycle couriers who deliver documents and small packages. In India, many of Mumbai's Dabbawalas use bicycles to deliver home cooked lunches to the city's workers. In Bogotá, Colombia the city's largest bakery recently replaced most of its delivery trucks with bicycles. Even the car industry uses bicycles. At the huge Mercedes-Benz factory in Sindelfingen, Germany workers use bicycles, color-coded by department, to move around the factory.
Recreational
Bicycle touring
Bicycles are used for recreation at all ages. Bicycle touring, also known as cyclotourism, involves touring and exploration or sightseeing by bicycle for leisure, and is one of the most popular forms of recreational cycling. A brevet or randonnée is an organized long-distance ride.
One popular Dutch pleasure is relaxed cycling in the countryside of the Netherlands. The land is very flat and full of public bicycle trails and cycle tracks where cyclists are not bothered by cars and other traffic, which makes it ideal for recreational cycling. Many Dutch people take part every year in an event called a fietsvierdaagse, four days of organised cycling through the local environment. Paris–Brest–Paris (PBP), which began in 1891, is the oldest bicycling event still run on a regular basis on the open road; it covers about 1,200 km and imposes a 90-hour time limit. Similar if smaller institutions exist in many countries.
A study conducted in Taiwan examined improvements to the cycling environment for touring cyclists and found greater health benefits for both tourists and residents. The number of bicyclists in Taiwan increased from 700,000 in 2008 to 5.1 million in 2017, which led to more and safer bicycle routes being established. When deciding whether to ride, cyclists take into account safety on the road, bicycle lanes, smooth roads, diverse scenery, and ride length, so the environment plays a large role in the decision to take up bicycle touring. Using questionnaires and statistical analysis, the study concluded that the top five factors cyclists consider before deciding to ride are safety, lighting facilities, the design of lanes, the surrounding landscape, and the cleanliness of the environment; after these five factors were improved, the recreational benefits of bicycle tourism increased.
Organized rides
Many cycling clubs hold organized rides in which bicyclists of all levels participate. The typical organized ride starts with a large group of riders, called the mass, bunch or even peloton. This will thin out over the course of the ride. Many riders choose to ride together in groups of the same skill level to take advantage of drafting.
Most organized rides, for example cyclosportives (or gran fondos), challenge rides or reliability trials, and hill climbs, include registration requirements and provide information either through the mail or online concerning start times and other requirements. Rides usually consist of several different routes, sorted by mileage, with a certain number of rest stops that usually include refreshments, first aid and maintenance tools. Route lengths can vary considerably.
Some organized rides are entirely social events. One example is the monthly San Jose Bike Party, which can attract one to two thousand riders in the summer months.
Mountain
Mountain biking began in the 1970s, originally as a downhill sport practised on customized cruiser bicycles around Mount Tamalpais. Most mountain biking takes place on dirt roads, trails and in purpose-built parks. Downhill mountain biking has evolved in recent years and is performed at places such as the Whistler Mountain Bike Park. Slopestyle, a form of downhill, is when riders do tricks such as tailwhips, 360s, backflips and front flips.
There are several disciplines of mountain biking besides downhill, including: cross country (often referred to as XC), all mountain, trail, free ride, and newly popular enduro.
In 2020, due to COVID-19, mountain bikes saw a surge in popularity in the US, with some vendors reporting that they were sold out of bikes under US$1000.
Other
The Marching and Cycling Band HHK from Haarlem (the Netherlands) is one of the few marching bands around the world which also performs on bicycles.
Racing
Shortly after the introduction of bicycles, competitions developed independently in many parts of the world. Early races involving boneshaker-style bicycles were predictably fraught with injuries. Large races became popular during the 1890s "Golden Age of Cycling", with events across Europe, and in the U.S. and Japan as well. At one point, almost every major city in the US had a velodrome or two for track racing events; however, since the middle of the 20th century cycling has become a minority sport in the US, whilst in Europe it continues to be a major sport, particularly in the United Kingdom, France, Belgium, Italy and Spain. The most famous of all bicycle races is the Tour de France, which began in 1903 and continues to capture the attention of the sporting world.
In 1899, Charles Minthorn Murphy became the first man to ride his bicycle a mile in under a minute (hence his nickname, Mile-a-Minute Murphy), which he did by drafting a locomotive on New York's Long Island.
As the bicycle evolved into its various forms, different racing formats developed. Road races may involve both team and individual competition, and are contested in various ways. They range from the one-day road race, criterium, and time trial to multi-stage events like the Tour de France and its sister events which make up cycling's Grand Tours. Recumbent bicycles were banned from bike races in 1934 after Marcel Berthet set a new hour record in his Velodyne streamliner (49.992 km on 18 November 1933). Track bicycles are used for track cycling in velodromes, while cyclo-cross races are held on outdoor terrain, including pavement, grass, and mud. Cyclo-cross races feature human-made obstacles such as small barriers which riders either bunny-hop over or dismount and walk over. Time trial races, another form of road racing, require a rider to ride against the clock; time trials can be ridden as a team or as a single rider, and specialized bikes fitted with aero bars are commonly used. In the past decade, mountain bike racing has also reached international popularity and is even an Olympic sport.
Professional racing organizations place limitations on the bicycles that can be used in the races that they sanction. For example, the Union Cycliste Internationale, the governing body of international cycle sport (which sanctions races such as the Tour de France), decided in the late 1990s to create additional rules which prohibit racing bicycles weighing less than 6.8 kilograms (14.96 pounds). The UCI rules also effectively ban some bicycle frame innovations (such as the recumbent bicycle) by requiring a double triangle structure.
Activism
Many broad and correlated themes run through bicycle activism: one advocates the bicycle as an alternative mode of transport, and another concerns the creation of conditions to permit and/or encourage bicycle use, both for utility and recreational cycling. The first, which emphasizes the potential for energy and resource conservation and the health benefits gained from cycling versus automobile use, is relatively undisputed; the second is the subject of much debate.
It is generally agreed that improved local and inter-city rail services and other methods of mass transportation (including greater provision for cycle carriage on such services) create conditions to encourage bicycle use. However, there are different opinions on the role of various types of cycling infrastructure in building bicycle-friendly cities and roads.
Some bicycle activists (including some traffic management advisers) seek the construction of bike paths, cycle tracks and bike lanes for journeys of all lengths and point to their success in promoting safety and encouraging more people to cycle. Some activists, especially those from the vehicular cycling tradition, view the safety, practicality, and intent of such facilities with suspicion. They favor a more holistic approach based on the 4 'E's: education (of everyone involved), encouragement (to apply the education), enforcement (to protect the rights of others), and engineering (to facilitate travel while respecting every person's equal right to do so). Some groups offer training courses to help cyclists integrate themselves with other traffic.
Critical Mass is an event typically held on the last Friday of every month in cities around the world where bicyclists take to the streets en masse. While the ride was founded with the idea of drawing attention to how unfriendly the city was to bicyclists, the leaderless structure of Critical Mass makes it impossible to assign it any one specific goal. In fact, the purpose of Critical Mass is not formalized beyond the direct action of meeting at a set location and time and traveling as a group through city streets.
There is a long-running cycle helmet debate among activists. The most heated controversy surrounds the topic of compulsory helmet use.
It is paradoxical that in many developing countries cycling is in decline as bicycles are replaced by motorbikes and cars, while in many developed countries cycling is on the rise.
Equality
Within Western societies, the demographic of those who cycle is often not representative of broader society. Research by TfL in London, UK, suggests that cyclists in London are typically "white, under 40, male, with medium to high household income." Studies from large-scale representative data from Germany show that people with higher levels of education cycle substantially more often than those with lower levels of education. Even for trips of the same distance and among people from the same city with the same income level, those with higher education cycle more. As a result, there are various forms of activism focused on diversifying the cycling community. Inspired by the Black Lives Matter movement, organizations such as Street Riders NYC protest on bicycles against systemic racism and police brutality. Participants in Street Riders NYC protests also encounter the uneven distribution of safe bicycling infrastructure across neighbourhoods, which is interpreted as a form of classism within cycling and urbanism. The bicycle has acted as a means for women's liberation and thus has links to feminism.
Associations
Cyclists form associations, both for specific interests (trails development, road maintenance, bike maintenance, urban design, racing clubs, touring clubs, etc.) and for more global goals (energy conservation, pollution reduction, promotion of fitness). Some bicycle clubs and national associations became prominent advocates for improvements to roads and highways. In the United States, the League of American Wheelmen lobbied for the improvement of roads in the last part of the 19th century, founding and leading the national Good Roads Movement. Their model for political organization, as well as the paved roads for which they argued, facilitated the growth of the automobile.
In Europe, the European Cyclists' Federation represents around 70 local, regional and national civil society organisations across more than 40 countries that work to promote cycling as a mode of transport and leisure.
As a sport, cycling is governed internationally by the Union Cycliste Internationale in Switzerland for upright bicycles and by the International Human Powered Vehicle Association for other human-powered vehicles (HPVs); in the United States it is governed by USA Cycling, which merged with the United States Cycling Federation in 1995. Cycling for transport and touring is promoted on a European level by the European Cyclists' Federation, with associated members from Great Britain, Japan and elsewhere. Regular conferences on cycling as transport are held under the auspices of Velo City; global conferences are coordinated by Velo Mondial.
Cycling as a means of transportation
Cycling is widely regarded as an effective and efficient mode of transportation optimal for short to moderate distances.
Bicycles provide numerous possible benefits in comparison with motor vehicles, including the sustained physical exercise involved in cycling, easier parking, increased maneuverability, and access to roads, bike paths and rural trails. Cycling also offers a reduced consumption of fossil fuels, less air and noise pollution, reduced greenhouse gas emissions, and greatly reduced traffic congestion. These have a lower financial cost for users as well as for society at large (negligible damage to roads, less road area required). By fitting bicycle racks on the front of buses, transit agencies can significantly increase the areas they can serve.
Among the disadvantages of cycling are the requirement that the rider of a bicycle (excepting tricycles or quadricycles) have a certain level of basic skill to remain upright, the reduced protection in crashes in comparison to motor vehicles, often longer travel time (except in densely populated areas), vulnerability to weather conditions, difficulty in transporting passengers, and the fact that a basic level of fitness is required for cycling moderate to long distances.
Health effects
Cycling provides a variety of health benefits and reduces the risk of cancers, heart disease, and diabetes that are prevalent in sedentary lifestyles. Cycling on stationary bikes has also been used as part of rehabilitation for lower limb injuries, particularly after hip surgery. Individuals who cycle regularly have also reported mental health improvements, including less perceived stress and better vitality.
The health benefits of cycling outweigh the risks when cycling is compared to a sedentary lifestyle. A Dutch study found that cycling can extend lifespans by up to 14 months, while the associated risks equated to a reduced lifespan of 40 days or less. The reduction in mortality rate was found to be directly correlated with the average time spent cycling, amounting to approximately 6,500 deaths prevented by cycling. Cycling in the Netherlands is often safer than in other parts of the world, so the risk-benefit ratio will be different in other regions. Overall, the benefits of cycling or walking have been shown to exceed the risks by ratios of 9:1 to 96:1 when compared with no exercise at all, across a wide variety of physical and mental outcomes.
Exercise
The physical exercise gained from cycling is generally linked with increased health and well-being. According to the World Health Organization (WHO), physical inactivity is second only to tobacco smoking as a health risk in developed countries, and is associated with 20-30% increased risk of various cancers, heart disease, and diabetes and tens of billions of dollars of healthcare costs. The WHO's 2009 report suggests that increasing physical activity is a public health "best buy", and that cycling is a "highly suitable activity" for this purpose. The charity Sustrans reports that investment in cycling provision can give a 20:1 return from health and other benefits. It has been estimated that, on average, approximately 20 life-years are gained from the health benefits of road bicycling for every life-year lost through injury.
Bicycles are often used by people seeking to improve their fitness and cardiovascular health. Recent studies on the use of cycling for commutes have shown that it reduces the risk of cardiovascular outcomes by 11%, with slightly more risk reduction in women than in men. In addition, cycling is especially helpful for those with arthritis of the lower limbs who are unable to pursue sports that cause impact to the knees and other joints. Since cycling can be used for the practical purpose of transportation, there can be less need for self-discipline to exercise.
Cycling while seated is a relatively non-weight-bearing exercise that, like swimming, does little to promote bone density. Cycling up and out of the saddle, on the other hand, does a better job by transferring more of the rider's body weight to the legs. However, excessive cycling while standing can cause knee damage. It used to be thought that cycling while standing was less energy efficient, but recent research has shown this not to be true. Other than air resistance, there is no wasted energy from cycling while standing, if it is done correctly.
Cycling on a stationary cycle is frequently advocated as a suitable exercise for rehabilitation, particularly for lower limb injury, owing to the low impact it has on the joints. In particular, cycling is commonly used within knee rehabilitation programs to strengthen the quadriceps muscles with minimal stress on the knee ligaments. Further stress on the knee can be relieved by adjusting seat height and pedal position. Cycling is also used for rehabilitation after hip surgery to manage soft-tissue healing, control swelling and pain, and allow a larger range of motion in the nearby muscles earlier during recovery. As a result, many institutions have established rehabilitation protocols that involve stationary cycling as part of the recovery process. One such protocol offered by the Mayo Clinic recommends 2–4 weeks of cycling on an upright stationary bike following hip arthroscopy, starting at 5 minutes per session and slowly increasing to 30 minutes per session. The goal of these sessions is to reduce joint inflammation and maintain the widest range of motion possible with limited pain.
In response to increasingly sedentary lifestyles worldwide and the consequent rise in overweight and obesity, many organizations concerned with health and the environment promote active travel, which seeks to establish walking and cycling as safe and attractive alternatives to motorized transport. Given that many journeys are over relatively short distances, there is considerable scope to replace car use with walking or cycling, though in many settings this may require some infrastructure modification, particularly to attract less experienced and confident riders.
An Italian study assessed the impact of commuter cycling on major non-communicable diseases and public healthcare costs. Using a health economic assessment model, the study found a lower incidence of type 2 diabetes, acute myocardial infarction, and stroke in individuals who cycled compared with those who did not actively commute. The model estimated that public healthcare costs would fall by 5% over a 10-year period.
Illinois designated cycling as its official state exercise in 2007.
Mental health
The effects of cycling on overall mental health have often been studied. A European study surveying participants from seven cities about self-perceived health based on their primary mode of transportation reported favorable results in the bicycle-use population. The bicycle-use group reported predominantly good self-perceived health, less perceived stress, better mental health, better vitality, and less loneliness. The study attributed these results to possible economic benefits and to senses of both independence and identity as a member of a cyclist community. An English study recruited non-cyclist older adults aged 50 to 83 to participate as conventional pedal-bike cyclists, electrically assisted e-bike cyclists, or a non-cycling control group on outdoor trails, measuring cognitive function through executive function, spatial reasoning, and memory tests, and well-being through questionnaires. The study did not find significant differences in the spatial reasoning or memory tests. It did, however, find that both cycling groups had improved executive function and well-being, with greater improvement in the e-bike group. This suggested that non-physical factors of cycling, such as independence, engagement with the outdoor environment, and mobility, play a greater role in improving mental health.
A 15-month randomized controlled trial in the U.S. examined the impact of self-paced cycling on cognitive function in institutionalized older adults without cognitive impairment. Researchers used three cognitive assessments: Mini-Mental State Examination (MMSE), Fuld object memory evaluation, and symbol digit modality test. The study found that long-term cycling for at least 15 minutes per day in older adults without cognitive impairment had a protective effect on cognition and attention.
Cycling has also been shown to be an effective adjunct therapy for certain mental health conditions.
Bicycle safety
Cycling suffers from a perception that it is unsafe. This perception is not always backed by hard numbers, because of underreporting of crashes and a lack of bicycle use data (amount of cycling, kilometers cycled), which makes it hard to assess the risk and monitor changes in risks.
In the UK, fatality rates per mile or kilometre are slightly lower than those for walking. In the US, bicycling fatality rates are less than two-thirds of those for walking the same distance. However, in the UK, for example, the fatality and serious injury rates per hour of travel are just over double those for walking.
Despite the risk factors associated with bicycling, cyclists have a lower overall mortality rate when compared to other groups. A Danish study in 2000 found that even after adjustment for other risk factors, including leisure time physical activity, those who did not cycle to work experienced a 39% higher mortality rate than those who did.
Injuries (to cyclists, from cycling) can be divided into two types:
Physical trauma (extrinsic)
Overuse (intrinsic)
Physical trauma
Acute physical trauma includes injuries to the head and extremities resulting from falls and collisions. Most cycle deaths result from a collision with a car or heavy goods vehicle. Drivers are at fault in the majority of these crashes. Segregated cycling infrastructure reduces the rate of crashes between bicycles and motor vehicles.
Although a majority of bicycle collisions occur during the day, bicycle lighting is recommended for safety when bicycling at night to increase visibility.
Overuse injuries
In a study of 518 cyclists, a large majority reported at least one overuse injury, with over one third requiring medical treatment. The most common injury sites were the neck (48.8%) and the knees (41.7%), followed by the groin/buttocks (36.1%), hands (31.1%), and back (30.3%). Women were more likely than men to suffer from neck and shoulder pain.
Many cyclists suffer from overuse injuries to the knees, affecting cyclists at all levels. These are caused by many factors:
Incorrect bicycle fit or adjustment, particularly the saddle.
Incorrect adjustment of clipless pedals.
Too many hills, or too many miles, too early in the training season.
Poor training preparation for long touring rides.
Selecting too high a gear. A lower gear for uphill climbs protects the knees, even though the muscles may be well able to handle a higher gear.
Overuse injuries, including chronic nerve damage at weight bearing locations, can occur as a result of repeatedly riding a bicycle for extended periods of time. Damage to the ulnar nerve in the palm, carpal tunnel in the wrist, the genitourinary tract or bicycle seat neuropathy may result from overuse. Recumbent bicycles are designed on different ergonomic principles and eliminate pressure from the saddle and handlebars, due to the relaxed riding position.
Note that overuse is a relative term, and capacity varies greatly between individuals. Someone starting out in cycling must be careful to increase length and frequency of cycling sessions slowly, starting for example at an hour or two per day, or a hundred miles or kilometers per week. Bilateral muscular pain is a normal by-product of the training process, whereas unilateral pain may reveal "exercise-induced arterial endofibrosis". Joint pain and numbness are also early signs of overuse injury.
A Spanish study of top triathletes found those who cover more than 186 miles (300 km) a week on their bikes have less than 4% normal looking sperm, where normal adult males would be expected to have from 15% to 20%.
Saddle related
Much work has been done to investigate optimal bicycle saddle shape, size and position, and negative effects of extended use of less than optimal seats or configurations.
Excessive saddle height can cause posterior knee pain, while setting the saddle too low can cause pain in the anterior of the knee. An incorrectly fitted saddle may eventually lead to muscle imbalance. A 25 to 35-degree knee angle is recommended to avoid an overuse injury.
Although cycling is beneficial to health, men can be negatively affected by cycling more than three hours a week because of the significant weight placed on the perineum, the area between the scrotum and the anus that carries some of the nerves and arteries supplying the penis. Sustained pressure over many hours a week can cause numbness or tingling and, through reduced blood flow, can lead to difficulty achieving an erection; 13% of males experienced this in a study by Norwegian researchers who gathered data from 160 men participating in a long-distance bike tour. Fitting a properly sized seat can prevent this effect. In extreme cases, pudendal nerve entrapment can be a source of intractable perineal pain. Some cyclists with induced pudendal nerve pressure neuropathy gained relief from improvements in saddle position and riding techniques.
The National Institute for Occupational Safety and Health (NIOSH) has investigated the potential health effects of prolonged bicycling in police bicycle patrol units, including the possibility that some bicycle saddles exert excessive pressure on the urogenital area of cyclists, restricting blood flow to the genitals. Their study found that using bicycle seats without protruding noses reduced pressure on the groin by at least 65% and significantly reduced the number of cases of urogenital paresthesia. A follow-up found that 90% of bicycle officers who tried the no-nose seat were using it six months later. NIOSH recommends that riders use a no-nose bicycle seat for workplace bicycling.
Despite rumors to the contrary, there is no scientific evidence linking cycling with testicular cancer.
Exposure to air pollution
One concern is that riding in traffic may expose the cyclist to higher levels of air pollution, especially on or along busy roads. Some authors have claimed this to be untrue, showing that the pollutant and irritant count within cars is consistently higher, presumably because of limited air circulation within the car and an air intake positioned directly in the stream of other traffic. Other authors have found small or inconsistent differences in concentrations but claim that cyclists' exposure is higher due to increased minute ventilation and is associated with minor biological changes. A 2010 study estimated that the life expectancy gained from the health benefits of cycling (approximately 3–14 months) greatly exceeded the life expectancy lost to air pollution (approximately 0.8–40 days). A systematic review comparing the effects of air pollution exposure on the health of cyclists concluded, however, that the differing methodologies and measurement parameters of each study made results difficult to compare, and suggested that a more holistic approach was needed. The significance of the associated health effect, if any, is unclear but is probably much smaller than the health impacts associated with accidents and the health benefits derived from additional physical activity.
| Technology | Road transport | null |
5932 | https://en.wikipedia.org/wiki/Carbohydrate | Carbohydrate | A carbohydrate () is a biomolecule consisting of carbon (C), hydrogen (H) and oxygen (O) atoms, usually with a hydrogen–oxygen atom ratio of 2:1 (as in water) and thus with the empirical formula (where m may or may not be different from n), which does not mean the H has covalent bonds with O (for example with , H has a covalent bond with C but not with O). However, not all carbohydrates conform to this precise stoichiometric definition (e.g., uronic acids, deoxy-sugars such as fucose), nor are all chemicals that do conform to this definition automatically classified as carbohydrates (e.g., formaldehyde and acetic acid).
The term is most common in biochemistry, where it is a synonym of saccharide (), a group that includes sugars, starch, and cellulose. The saccharides are divided into four chemical groups: monosaccharides, disaccharides, oligosaccharides, and polysaccharides. Monosaccharides and disaccharides, the smallest (lower molecular weight) carbohydrates, are commonly referred to as sugars. While the scientific nomenclature of carbohydrates is complex, the names of the monosaccharides and disaccharides very often end in the suffix -ose, which was originally taken from the word glucose (), and is used for almost all sugars (e.g., fructose (fruit sugar), sucrose (cane or beet sugar), ribose, lactose (milk sugar)).
Carbohydrates perform numerous roles in living organisms. Polysaccharides serve as an energy store (e.g., starch and glycogen) and as structural components (e.g., cellulose in plants and chitin in arthropods and fungi). The 5-carbon monosaccharide ribose is an important component of coenzymes (e.g., ATP, FAD and NAD) and the backbone of the genetic molecule known as RNA. The related deoxyribose is a component of DNA. Saccharides and their derivatives include many other important biomolecules that play key roles in the immune system, fertilization, preventing pathogenesis, blood clotting, and development.
Carbohydrates are central to nutrition and are found in a wide variety of natural and processed foods. Starch is a polysaccharide and is abundant in cereals (wheat, maize, rice), potatoes, and processed food based on cereal flour, such as bread, pizza or pasta. Sugars appear in human diet mainly as table sugar (sucrose, extracted from sugarcane or sugar beets), lactose (abundant in milk), glucose and fructose, both of which occur naturally in honey, many fruits, and some vegetables. Table sugar, milk, or honey is often added to drinks and many prepared foods such as jam, biscuits and cakes.
Cellulose, a polysaccharide found in the cell walls of all plants, is one of the main components of insoluble dietary fiber. Although it is not digestible by humans, cellulose and insoluble dietary fiber generally help maintain a healthy digestive system by facilitating bowel movements. Other polysaccharides contained in dietary fiber include resistant starch and inulin, which feed some bacteria in the microbiota of the large intestine, and are metabolized by these bacteria to yield short-chain fatty acids.
Terminology
In scientific literature, the term "carbohydrate" has many synonyms, like "sugar" (in the broad sense), "saccharide", "ose", "glucide", "hydrate of carbon" or "polyhydroxy compounds with aldehyde or ketone". Some of these terms, especially "carbohydrate" and "sugar", are also used with other meanings.
In food science and in many informal contexts, the term "carbohydrate" often means any food that is particularly rich in the complex carbohydrate starch (such as cereals, bread and pasta) or simple carbohydrates, such as sugar (found in candy, jams, and desserts). This informality is sometimes confusing since it confounds chemical structure and digestibility in humans.
The term "carbohydrate" (or "carbohydrate by difference") refers also to dietary fiber, which is a carbohydrate, but, unlike sugars and starches, fibers are not hydrolyzed by human digestive enzymes. Fiber generally contributes little food energy in humans, but is often included in the calculation of total food energy. The fermentation of soluble fibers by gut microflora can yield short-chain fatty acids, and soluble fiber is estimated to provide about 2 kcal/g.
History
The history of the discovery of carbohydrates dates back around 10,000 years, to the cultivation of sugarcane in Papua New Guinea during the Neolithic agricultural revolution. The term "carbohydrate" was first proposed by the German chemist Carl Schmidt in 1844. In 1856, glycogen, a form of carbohydrate storage in animal livers, was discovered by the French physiologist Claude Bernard.
Structure
Formerly the name "carbohydrate" was used in chemistry for any compound with the formula Cm (H2O)n. Following this definition, some chemists considered formaldehyde (CH2O) to be the simplest carbohydrate, while others claimed that title for glycolaldehyde. Today, the term is generally understood in the biochemistry sense, which excludes compounds with only one or two carbons and includes many biological carbohydrates which deviate from this formula. For example, while the above representative formulas would seem to capture the commonly known carbohydrates, ubiquitous and abundant carbohydrates often deviate from this. For example, carbohydrates often display chemical groups such as: N-acetyl (e.g., chitin), sulfate (e.g., glycosaminoglycans), carboxylic acid and deoxy modifications (e.g., fucose and sialic acid).
Natural saccharides are generally built of simple carbohydrates called monosaccharides with general formula (CH2O)n where n is three or more. A typical monosaccharide has the structure H–(CHOH)x(C=O)–(CHOH)y–H, that is, an aldehyde or ketone with many hydroxyl groups added, usually one on each carbon atom that is not part of the aldehyde or ketone functional group. Examples of monosaccharides are glucose, fructose, and glyceraldehydes. However, some biological substances commonly called "monosaccharides" do not conform to this formula (e.g., uronic acids and deoxy-sugars such as fucose) and there are many chemicals that do conform to this formula but are not considered to be monosaccharides (e.g., formaldehyde CH2O and inositol (CH2O)6).
The open-chain form of a monosaccharide often coexists with a closed ring form where the aldehyde/ketone carbonyl group carbon (C=O) and hydroxyl group (–OH) react forming a hemiacetal with a new C–O–C bridge.
Monosaccharides can be linked together into what are called polysaccharides (or oligosaccharides) in a large variety of ways. Many carbohydrates contain one or more modified monosaccharide units that have had one or more groups replaced or removed. For example, deoxyribose, a component of DNA, is a modified version of ribose; chitin is composed of repeating units of N-acetyl glucosamine, a nitrogen-containing form of glucose.
Division
Carbohydrates are polyhydroxy aldehydes, ketones, alcohols, acids, their simple derivatives and their polymers having linkages of the acetal type. They may be classified according to their degree of polymerization, and may be divided initially into three principal groups, namely sugars, oligosaccharides and polysaccharides.
Monosaccharides
Monosaccharides are the simplest carbohydrates in that they cannot be hydrolyzed to smaller carbohydrates. They are aldehydes or ketones with two or more hydroxyl groups. The general chemical formula of an unmodified monosaccharide is (C•H2O)n, literally a "carbon hydrate". Monosaccharides are important fuel molecules as well as building blocks for nucleic acids. The smallest monosaccharides, for which n=3, are dihydroxyacetone and D- and L-glyceraldehydes.
Classification of monosaccharides
Figure: the α and β anomers of glucose. The position of the hydroxyl group on the anomeric carbon relative to the CH2OH group bound to carbon 5 distinguishes them: they either have identical absolute configurations (R,R or S,S) (α), or opposite absolute configurations (R,S or S,R) (β).
Monosaccharides are classified according to three different characteristics: the placement of its carbonyl group, the number of carbon atoms it contains, and its chiral handedness. If the carbonyl group is an aldehyde, the monosaccharide is an aldose; if the carbonyl group is a ketone, the monosaccharide is a ketose. Monosaccharides with three carbon atoms are called trioses, those with four are called tetroses, five are called pentoses, six are hexoses, and so on. These two systems of classification are often combined. For example, glucose is an aldohexose (a six-carbon aldehyde), ribose is an aldopentose (a five-carbon aldehyde), and fructose is a ketohexose (a six-carbon ketone).
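The combined naming scheme can be stated mechanically. The sketch below is only illustrative (the function name and lookup tables are hypothetical); it joins the carbonyl-type prefix with the carbon-count stem described above:

```python
def monosaccharide_class(carbonyl, carbons):
    """Build the class name from carbonyl type and carbon count,
    e.g. ('aldehyde', 6) -> 'aldohexose', ('ketone', 6) -> 'ketohexose'."""
    prefix = {"aldehyde": "aldo", "ketone": "keto"}[carbonyl]
    stem = {3: "tri", 4: "tetr", 5: "pent", 6: "hex", 7: "hept"}[carbons]
    return prefix + stem + "ose"

print(monosaccharide_class("aldehyde", 6))  # glucose  -> aldohexose
print(monosaccharide_class("aldehyde", 5))  # ribose   -> aldopentose
print(monosaccharide_class("ketone", 6))    # fructose -> ketohexose
```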
Each carbon atom bearing a hydroxyl group (-OH), with the exception of the first and last carbons, is asymmetric, making it a stereocenter with two possible configurations (R or S). Because of this asymmetry, a number of isomers may exist for any given monosaccharide formula. Using the Le Bel–van't Hoff rule, the aldohexose D-glucose, for example, has the formula (C·H2O)6, of which four of its six carbon atoms are stereogenic, making D-glucose one of 2^4 = 16 possible stereoisomers. In the case of glyceraldehyde, an aldotriose, there is one pair of possible stereoisomers, which are enantiomers and epimers. 1,3-Dihydroxyacetone, the ketose corresponding to the aldose glyceraldehyde, is a symmetric molecule with no stereocenters. The assignment of D or L is made according to the orientation of the asymmetric carbon furthest from the carbonyl group: in a standard Fischer projection, if the hydroxyl group is on the right the molecule is a D sugar, otherwise it is an L sugar. The "D-" and "L-" prefixes should not be confused with "d-" or "l-", which indicate the direction that the sugar rotates plane-polarized light. This usage of "d-" and "l-" is no longer followed in carbohydrate chemistry.
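The Le Bel–van't Hoff rule applied here can be written compactly: a monosaccharide with $n$ stereogenic carbon atoms has at most

$$N = 2^{n}$$

stereoisomers, so D-glucose ($n = 4$) is one of $2^4 = 16$, while glyceraldehyde ($n = 1$) has only the two ($2^1$) enantiomeric forms.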
Ring-straight chain isomerism
The aldehyde or ketone group of a straight-chain monosaccharide will react reversibly with a hydroxyl group on a different carbon atom to form a hemiacetal or hemiketal, forming a heterocyclic ring with an oxygen bridge between two carbon atoms. Rings with five and six atoms are called furanose and pyranose forms, respectively, and exist in equilibrium with the straight-chain form.
During the conversion from the straight-chain form to the cyclic form, the carbon atom containing the carbonyl oxygen, called the anomeric carbon, becomes a stereogenic center with two possible configurations: the oxygen atom may take a position either above or below the plane of the ring. The resulting pair of possible stereoisomers are called anomers. In the α anomer, the -OH substituent on the anomeric carbon rests on the opposite side (trans) of the ring from the CH2OH side branch. The alternative form, in which the CH2OH substituent and the anomeric hydroxyl are on the same side (cis) of the plane of the ring, is called the β anomer.
Use in living organisms
Monosaccharides are the major fuel source for metabolism, and glucose is an energy-rich molecule used to generate ATP in almost all living organisms. Glucose is a high-energy substrate produced in plants through photosynthesis by combining energy-poor water and carbon dioxide in an endothermic reaction fueled by solar energy. When monosaccharides are not immediately needed, they are often converted to more space-efficient (i.e., less water-soluble) forms, often polysaccharides. In animals, glucose circulating in the blood is a major metabolic substrate and is oxidized in the mitochondria to produce ATP for useful cellular work. In humans and other animals, serum glucose levels must be regulated carefully to keep glucose within acceptable limits and prevent the deleterious effects of hypo- or hyperglycemia. Hormones such as insulin and glucagon keep glucose levels in balance: insulin stimulates glucose uptake into muscle and fat cells when glucose levels are high, whereas glucagon helps to raise glucose levels when they dip too low by stimulating hepatic glucose synthesis. In many animals, including humans, the storage form is glycogen, especially in liver and muscle cells; in plants, starch is used for the same purpose.

The most abundant carbohydrate, cellulose, is a structural component of the cell wall of plants and many forms of algae. Ribose is a component of RNA, and deoxyribose is a component of DNA. Lyxose is a component of lyxoflavin found in the human heart. Ribulose and xylulose occur in the pentose phosphate pathway. Galactose, a component of the milk sugar lactose, is found in galactolipids in plant cell membranes and in glycoproteins in many tissues. Mannose occurs in human metabolism, especially in the glycosylation of certain proteins. Fructose, or fruit sugar, is found in many plants; in humans it is absorbed directly in the intestines during digestion, metabolized in the liver, and also found in semen. Trehalose, a major sugar of insects, is rapidly hydrolyzed into two glucose molecules to support continuous flight.
Disaccharides
Two joined monosaccharides are called a disaccharide, the simplest kind of polysaccharide. Examples include sucrose and lactose. They are composed of two monosaccharide units bound together by a covalent bond known as a glycosidic linkage formed via a dehydration reaction, resulting in the loss of a hydrogen atom from one monosaccharide and a hydroxyl group from the other. The formula of unmodified disaccharides is C12H22O11. Although there are numerous kinds of disaccharides, a handful of disaccharides are particularly notable.
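The C12H22O11 formula follows directly from the dehydration described above; for two unmodified hexoses the atom bookkeeping is

$$2\,\mathrm{C_6H_{12}O_6} \longrightarrow \mathrm{C_{12}H_{22}O_{11}} + \mathrm{H_2O},$$

with twelve carbons, twenty-four hydrogens and twelve oxygens on each side.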
Sucrose is the most abundant disaccharide, and the main form in which carbohydrates are transported in plants. It is composed of one D-glucose molecule and one D-fructose molecule. The systematic name for sucrose, O-α-D-glucopyranosyl-(1→2)-D-fructofuranoside, indicates four things:
Its monosaccharides: glucose and fructose
Their ring types: glucose is a pyranose and fructose is a furanose
How they are linked together: the oxygen on carbon number 1 (C1) of α-D-glucose is linked to the C2 of D-fructose.
The -oside suffix indicates that the anomeric carbon of both monosaccharides participates in the glycosidic bond.
Lactose, a disaccharide composed of one D-galactose molecule and one D-glucose molecule, occurs naturally in mammalian milk. The systematic name for lactose is O-β-D-galactopyranosyl-(1→4)-D-glucopyranose. Other notable disaccharides include maltose (two D-glucoses linked α-1,4) and cellobiose (two D-glucoses linked β-1,4). Disaccharides can be classified into two types: reducing and non-reducing. If one monosaccharide unit retains a free hemiacetal (anomeric) group that can act as a reducing agent, the molecule is a reducing disaccharide, or biose; if both anomeric carbons are engaged in the glycosidic bond, the disaccharide is non-reducing.
Oligosaccharides and polysaccharides
Oligosaccharides
Oligosaccharides are saccharide polymers composed of three to ten monosaccharide units connected by glycosidic linkages, similar to disaccharides. They are usually linked through oxygen or nitrogen atoms (O- or N-glycosidic linkages) to lipids or to amino acids in proteins, forming glycolipids and glycoproteins, though some, like the raffinose series and the fructooligosaccharides, are not. They have roles in cell recognition and cell adhesion.
Polysaccharides
Nutrition
Carbohydrate consumed in food yields 3.87 kilocalories of energy per gram for simple sugars, and 3.57 to 4.12 kilocalories per gram for complex carbohydrate in most other foods. Relatively high levels of carbohydrate are associated with processed foods or refined foods made from plants, including sweets, cookies and candy, table sugar, honey, soft drinks, breads and crackers, jams and fruit products, pastas and breakfast cereals. Refined carbohydrates from processed foods such as white bread or rice, soft drinks, and desserts are readily digestible, and many are known to have a high glycemic index, which reflects a rapid assimilation of glucose. By contrast, the digestion of whole, unprocessed, fiber-rich foods such as beans, peas, and whole grains produces a slower and steadier release of glucose and energy into the body. Animal-based foods generally have the lowest carbohydrate levels, although milk does contain a high proportion of lactose.
Organisms typically cannot metabolize all types of carbohydrate to yield energy. Glucose is a nearly universal and accessible source of energy. Many organisms also have the ability to metabolize other monosaccharides and disaccharides but glucose is often metabolized first. In Escherichia coli, for example, the lac operon will express enzymes for the digestion of lactose when it is present, but if both lactose and glucose are present, the lac operon is repressed, resulting in the glucose being used first (see: Diauxie). Polysaccharides are also common sources of energy. Many organisms can easily break down starches into glucose; most organisms, however, cannot metabolize cellulose or other polysaccharides such as chitin and arabinoxylans. These carbohydrate types can be metabolized by some bacteria and protists. Ruminants and termites, for example, use microorganisms to process cellulose, fermenting it to caloric short-chain fatty acids. Even though humans lack the enzymes to digest fiber, dietary fiber represents an important dietary element for humans. Fibers promote healthy digestion, help regulate postprandial glucose and insulin levels, reduce cholesterol levels, and promote satiety.
The Institute of Medicine recommends that American and Canadian adults get between 45 and 65% of dietary energy from whole-grain carbohydrates. The Food and Agriculture Organization and World Health Organization jointly recommend that national dietary guidelines set a goal of 55–75% of total energy from carbohydrates, but only 10% directly from sugars (their term for simple carbohydrates). A 2017 Cochrane Systematic Review concluded that there was insufficient evidence to support the claim that whole grain diets can affect cardiovascular disease.
Classification
The term complex carbohydrate was first used in the U.S. Senate Select Committee on Nutrition and Human Needs publication Dietary Goals for the United States (1977) where it was intended to distinguish sugars from other carbohydrates (which were perceived to be nutritionally superior). However, the report put "fruit, vegetables and whole-grains" in the complex carbohydrate column, despite the fact that these may contain sugars as well as polysaccharides. The standard usage, however, is to classify carbohydrates chemically: simple if they are sugars (monosaccharides and disaccharides) and complex if they are polysaccharides (or oligosaccharides). Carbohydrates are sometimes divided into "available carbohydrates", which are absorbed in the small intestine and "unavailable carbohydrates", which pass to the large intestine, where they are subject to fermentation by the gastrointestinal microbiota.
Glycemic index
The glycemic index (GI) and glycemic load concepts characterize the potential for carbohydrates in food to raise blood glucose compared to a reference food (generally pure glucose). Expressed numerically as GI, carbohydrate-containing foods can be grouped as high-GI (score more than 70), moderate-GI (56-69), or low-GI (less than 55) relative to pure glucose (GI=100). Consumption of carbohydrate-rich, high-GI foods causes an abrupt increase in blood glucose concentration that declines rapidly following the meal, whereas low-GI foods with lower carbohydrate content produces a lower blood glucose concentration that returns gradually after the meal.
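A minimal sketch of the grouping rule just described (thresholds taken from the text; the function name is hypothetical, and boundary values such as 55 and 70 are assigned to the lower band by assumption):

```python
def gi_category(gi):
    """Group a glycemic index value relative to pure glucose (GI = 100)."""
    if gi > 70:
        return "high-GI"
    elif gi > 55:
        return "moderate-GI"
    return "low-GI"

print(gi_category(100))  # pure glucose -> high-GI
print(gi_category(60))   # -> moderate-GI
print(gi_category(40))   # -> low-GI
```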
Glycemic load combines the quality of the carbohydrate in a food (its GI) with the quantity of carbohydrate in a single serving of that food.
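In its usual quantitative form (an assumption here, since the sentence above gives only a verbal definition), glycemic load is

$$\mathrm{GL} = \frac{\mathrm{GI} \times \text{grams of available carbohydrate per serving}}{100},$$

so a serving containing 30 g of available carbohydrate in a food with GI 70 has a glycemic load of 21.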
Health effects of dietary carbohydrate restriction
Low-carbohydrate diets may miss the health advantages – such as increased intake of dietary fiber and phytochemicals – afforded by high-quality plant foods such as legumes and pulses, whole grains, fruits, and vegetables. A meta-analysis of moderate quality listed halitosis, headache and constipation among the adverse effects of the diet.
Carbohydrate-restricted diets can be as effective as low-fat diets in helping achieve weight loss over the short term when overall calorie intake is reduced. An Endocrine Society scientific statement said that "when calorie intake is held constant [...] body-fat accumulation does not appear to be affected by even very pronounced changes in the amount of fat vs carbohydrate in the diet." In the long term, low-carbohydrate diets do not appear to confer a "metabolic advantage," and effective weight loss or maintenance depends on the level of calorie restriction, not the ratio of macronutrients in a diet. Diet advocates reason that carbohydrates cause undue fat accumulation by increasing blood insulin levels, yet a more balanced diet that restricts refined carbohydrates can also reduce serum glucose and insulin levels and may also suppress lipogenesis and promote fat oxidation. Moreover, as far as energy expenditure itself is concerned, the claim that low-carbohydrate diets have a "metabolic advantage" is not supported by clinical evidence. Further, it is not clear how low-carbohydrate dieting affects cardiovascular health, although two reviews showed that carbohydrate restriction may improve lipid markers of cardiovascular disease risk.
Carbohydrate-restricted diets are no more effective than a conventional healthy diet in preventing the onset of type 2 diabetes, but for people with type 2 diabetes, they are a viable option for losing weight or helping with glycemic control. There is limited evidence to support routine use of low-carbohydrate dieting in managing type 1 diabetes. The American Diabetes Association recommends that people with diabetes should adopt a generally healthy diet, rather than a diet focused on carbohydrate or other macronutrients.
An extreme form of low-carbohydrate diet – the ketogenic diet – is established as a medical diet for treating epilepsy. Through celebrity endorsement during the early 21st century, it became a fad diet as a means of weight loss, but with risks of undesirable side effects, such as low energy levels and increased hunger, insomnia, nausea, and gastrointestinal discomfort. The British Dietetic Association named it one of the "top 5 worst celeb diets to avoid in 2018".
Sources
Most dietary carbohydrates contain glucose, either as their only building block (as in the polysaccharides starch and glycogen), or together with another monosaccharide (as in the hetero-polysaccharides sucrose and lactose). Unbound glucose is one of the main ingredients of honey. Glucose is extremely abundant and has been isolated from a variety of natural sources across the world, including male cones of the coniferous tree Wollemia nobilis in Rome, the roots of Ilex asprella plants in China, and straws from rice in California.
The carbohydrate value is calculated in the USDA database and does not always correspond to the sum of the sugars, the starch, and the "dietary fiber".
Metabolism
Carbohydrate metabolism is the series of biochemical processes responsible for the formation, breakdown and interconversion of carbohydrates in living organisms.
The most important carbohydrate is glucose, a simple sugar (monosaccharide) that is metabolized by nearly all known organisms. Glucose and other carbohydrates are part of a wide variety of metabolic pathways across species: plants synthesize carbohydrates from carbon dioxide and water by photosynthesis storing the absorbed energy internally, often in the form of starch or lipids. Plant components are consumed by animals and fungi, and used as fuel for cellular respiration. Oxidation of one gram of carbohydrate yields approximately 16 kJ (4 kcal) of energy, while the oxidation of one gram of lipids yields about 38 kJ (9 kcal). The human body stores between 300 and 500 g of carbohydrates depending on body weight, with the skeletal muscle contributing to a large portion of the storage. Energy obtained from metabolism (e.g., oxidation of glucose) is usually stored temporarily within cells in the form of ATP. Organisms capable of anaerobic and aerobic respiration metabolize glucose and oxygen (aerobic) to release energy, with carbon dioxide and water as byproducts.
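A worked comparison using the per-gram figures above (illustrative arithmetic only):

$$50\,\mathrm{g\ carbohydrate} \times 16\,\mathrm{kJ/g} = 800\,\mathrm{kJ}\ (\approx 200\ \mathrm{kcal}), \qquad 50\,\mathrm{g\ lipid} \times 38\,\mathrm{kJ/g} = 1900\,\mathrm{kJ}\ (\approx 450\ \mathrm{kcal}).$$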
Catabolism
Catabolism is the metabolic reaction which cells undergo to break down larger molecules, extracting energy. There are two major metabolic pathways of monosaccharide catabolism: glycolysis and the citric acid cycle.
In glycolysis, oligo- and polysaccharides are cleaved first to smaller monosaccharides by enzymes called glycoside hydrolases. The monosaccharide units can then enter into monosaccharide catabolism. An investment of 2 ATP is required in the early steps of glycolysis to phosphorylate glucose to glucose 6-phosphate (G6P) and fructose 6-phosphate (F6P) to fructose 1,6-bisphosphate (FBP), thereby pushing the reaction forward irreversibly. In some cases, as with humans, not all carbohydrate types are usable, as the digestive and metabolic enzymes necessary are not present.
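As a rough illustration of this ATP bookkeeping, the sketch below tallies the standard textbook accounting for glycolysis of one glucose molecule (the enzyme steps and values are assumptions from general biochemistry, not figures given in this text): two ATP are invested early and four are produced later, for a net yield of two.

```python
# Standard textbook ATP bookkeeping for glycolysis of one glucose molecule
# (assumed general-biochemistry values, not figures given in this text).
atp_changes = {
    "hexokinase (glucose -> G6P)": -1,        # investment phase
    "phosphofructokinase (F6P -> FBP)": -1,   # investment phase
    "phosphoglycerate kinase (x2)": +2,       # payoff phase
    "pyruvate kinase (x2)": +2,               # payoff phase
}
net_atp = sum(atp_changes.values())
print(f"Net ATP per glucose from glycolysis: {net_atp}")  # prints 2
```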
Carbohydrate chemistry
Carbohydrate chemistry is a large and economically important branch of organic chemistry. Some of the main organic reactions that involve carbohydrates are:
Amadori rearrangement
Carbohydrate acetalisation
Carbohydrate digestion
Cyanohydrin reaction
Koenigs–Knorr reaction
Lobry de Bruyn–Van Ekenstein transformation
Nef reaction
Wohl degradation
Tipson-Cohen reaction
Ferrier rearrangement
Ferrier II reaction
Chemical synthesis
Carbohydrate synthesis is a sub-field of organic chemistry concerned specifically with the generation of natural and unnatural carbohydrate structures. This can include the synthesis of monosaccharide residues or structures containing more than one monosaccharide, known as oligosaccharides. Selective formation of glycosidic linkages and selective reactions of hydroxyl groups are very important, and the usage of protecting groups is extensive.
Common reactions for glycosidic bond formation are as follows:
Chemical glycosylation
Fischer glycosidation
Koenigs-Knorr reaction
Crich beta-mannosylation
While some common protection methods are as below:
Carbohydrate acetalisation
Trimethylsilyl
Benzyl ether
p-Methoxybenzyl ether
| Biology and health sciences | Chemistry | null |
5933 | https://en.wikipedia.org/wiki/CSS%20Virginia | CSS Virginia | CSS Virginia was the first steam-powered ironclad warship built by the Confederate States Navy during the first year of the American Civil War; she was constructed as a casemate ironclad using the razéed (cut down) original lower hull and engines of the scuttled steam frigate USS Merrimack. Virginia was one of the participants in the Battle of Hampton Roads, opposing the Union's USS Monitor in March 1862. The battle is chiefly significant in naval history as the first battle between ironclads.
USS Merrimack becomes CSS Virginia
When the Commonwealth of Virginia seceded from the Union in 1861, one of the important US military bases threatened was Gosport Navy Yard (now Norfolk Naval Shipyard) in Portsmouth, Virginia. Accordingly, orders were sent to destroy the base rather than allow it to fall into Confederate hands. On the afternoon of 17 April, the day Virginia seceded, Engineer in Chief B. F. Isherwood managed to get the frigate's engines lit. However, the previous night secessionists had sunk light boats between Craney Island and Sewell's Point, blocking the channel. On 20 April, before evacuating the Navy Yard, the U. S. Navy burned Merrimack to the waterline and sank her to preclude capture. When the Confederate government took possession of the fully provisioned yard, the base's new commander, Flag Officer French Forrest, contracted on May 18 to salvage the wreck of the frigate. This was completed by May 30, and she was towed into the shipyard's only dry dock (today known as Drydock Number One), where the burned structures were removed.
The wreck was surveyed and her lower hull and machinery were discovered to be undamaged. Stephen Mallory, the Confederate Secretary of the Navy, decided to convert Merrimack into an ironclad, since she was the only large ship with intact engines available in the Chesapeake Bay area. Preliminary sketch designs were submitted by Lieutenants John Mercer Brooke and John L. Porter, each of whom envisaged the ship as a casemate ironclad. Brooke's general design showed the bow and stern portions submerged, and his design was the one finally selected. The detailed design work would be completed by Porter, who was a trained naval constructor. Porter had overall responsibility for the conversion, but Brooke was responsible for her iron plate and heavy ordnance, while William P. Williamson, Chief Engineer of the Navy, was responsible for the ship's machinery.
Reconstruction as an ironclad
The hull's burned timbers were cut down past the vessel's original waterline, leaving just enough clearance to accommodate her large, twin-bladed screw propeller. A new fantail and armored casemate were built atop a new main deck, and a v-shaped (bulwark) was added to her bow, which attached to the armored casemate. This forward and aft main deck and fantail were designed to stay submerged and were covered in iron plate, built up in two layers. The casemate was built of oak and pine in several layers, topped with two layers of iron plating oriented perpendicular to each other, and angled at 36 degrees from horizontal to deflect enemy shells.
From reports in Northern newspapers, Virginia's designers were aware of the Union plans to build an ironclad and assumed their similar ordnance would be unable to do much serious damage to such a ship. It was decided to equip their ironclad with a ram, an anachronism on a 19th-century warship. Merrimack's steam engines, now part of Virginia, were in poor working order; they had been slated for replacement when the decision was made to abandon the Norfolk naval yard. The salty Elizabeth River water, together with the tons of iron armor and pig iron ballast added to the hull's unused spaces for stability after her initial refloat and to submerge her unarmored lower levels, only worsened her engines' propulsion problems. As completed, Virginia had a turning radius of about and required 45 minutes to complete a full circle, which would later prove to be a major handicap in battle with the far more nimble Monitor.
The ironclad's casemate had 14 gun ports, three each in the bow and stern, one firing directly along the ship's centerline, the two others angled at 45° from the center line; these six bow and stern gun ports had exterior iron shutters installed to protect their cannon. There were four gun ports on each broadside; their protective iron shutters remained uninstalled during both days of the Battle of Hampton Roads. Virginia's battery consisted of four muzzle-loading single-banded Brooke rifles and six smoothbore Dahlgren guns salvaged from the old Merrimack. Two of the rifles, the bow and stern pivot guns, were caliber and weighed each. They fired a shell. The other two were cannon of about , one on each broadside. The 9-inch Dahlgrens were mounted three to a side; each weighed approximately and could fire a shell up to a range of (or 1.9 miles) at an elevation of 15°. Both amidship Dahlgrens nearest the boiler furnaces were fitted out to fire heated shot. On her upper casemate deck were positioned two anti-boarding/anti-personnel 12-pounder howitzers. Virginia's commanding officer, Flag Officer Franklin Buchanan, arrived to take command only a few days before her first sortie; the ironclad was placed in commission and equipped by her executive officer, Lieutenant Catesby ap Roger Jones.
Battle of Hampton Roads
The Battle of Hampton Roads began on March 8, 1862, when Virginia engaged the blockading Union fleet. Despite an all-out effort to complete her, the new ironclad still had workmen on board when she sailed into Hampton Roads with her flotilla of five CSN support ships, one of which served as Virginia's tender.
The first Union ship to be engaged by Virginia was the all-wood, sail-powered USS Cumberland, which was first crippled during a furious cannon exchange, and then rammed in her forward starboard bow by Virginia. As Cumberland began to sink, the port side half of Virginia's iron ram was broken off, causing a bow leak in the ironclad. Seeing what had happened to Cumberland, the captain of USS Congress ordered his frigate into shallower water, where she soon grounded. Congress and Virginia traded cannon fire for an hour, after which the badly damaged Congress finally surrendered. While the surviving crewmen of Congress were being ferried off the ship, a Union battery on the north shore opened fire on Virginia. Outraged at such a breach of war protocol, Virginia's now angry captain, Commodore Franklin Buchanan, gave the order to open fire with hot shot on the surrendered Congress in retaliation as he rushed to Virginia's exposed upper casemate deck, where he was injured by enemy rifle fire. Congress, now set ablaze by the retaliatory shelling, burned for many hours into the night, a symbol of Confederate naval power and a costly wake-up call for the all-wood Union blockading squadron. Virginia did not emerge from the battle unscathed, however. Her hanging port side anchor was lost after ramming Cumberland; the bow was leaking from the loss of the ram's port side half; shot from Cumberland, Congress, and the shore-based Union batteries had riddled her smokestack, reducing her boilers' draft and already slow speed; two of her broadside cannon (without shutters) were put out of commission by shell hits; a number of her armor plates had been loosened; both of Virginia's cutters had been shot away, as had both 12-pounder anti-boarding/anti-personnel howitzers, most of the deck stanchions, railings, and both flagstaffs. Even so, the now-injured Buchanan ordered an attack on , which had run aground on a sandbar trying to escape Virginia. However, because of the ironclad's draft (fully loaded), she was unable to get close enough to do any significant damage. It being late in the day, Virginia retired from the conflict with the expectation of returning the next day and completing the destruction of the remaining Union blockaders.
Later that night, USS Monitor arrived at Union-held Fort Monroe. She had been rushed to Hampton Roads, still not quite complete, all the way from the Brooklyn Navy Yard, in hopes of defending the force of wooden ships and preventing "the rebel monster" from further threatening the Union's blockading fleet and nearby cities, like Washington, D.C. While under tow, she nearly foundered twice during heavy storms on her voyage south, arriving in Hampton Roads by the bright firelight from the still-burning triumph of Virginia's first day of handiwork.
The next day, on March 9, 1862, the world's first battle between ironclads took place. The smaller, nimbler, and faster Monitor was able to outmaneuver the larger, slower Virginia, but neither ship proved able to do any severe damage to the other, despite numerous shell hits by both combatants, many fired at virtually point-blank range. Monitor had a much lower freeboard, with only her single, rotating, two-cannon gun turret and forward pilothouse sitting above her deck, and thus was much harder to hit with Virginia's heavy cannon. After hours of shell exchanges, Monitor finally retreated into shallower water after a direct shell hit to her armored pilothouse forced her away from the conflict to assess the damage. The captain of the Monitor, Lieutenant John L. Worden, had taken a direct gunpowder explosion to his face and eyes, blinding him, while looking through the pilothouse's narrow, horizontal viewing slits. Monitor remained in the shallows, but as it was late in the day, Virginia steamed for her home port, the battle ending without a clear victor. The captain of Virginia that day, Lieutenant Catesby ap Roger Jones, received advice from his pilots to depart over the sandbar toward Norfolk until the next day. Lieutenant Jones wanted to continue the fight, but the pilots emphasized that the Virginia had "nearly three miles to run to the bar" and that she could not remain and "take the ground on a falling tide." To prevent running aground, Lieutenant Jones reluctantly moved the ironclad back toward port. Virginia retired to the Gosport Naval Yard at Portsmouth, Virginia, and remained in drydock for repairs until April 4, 1862.
In the following month, the crew of Virginia were unsuccessful in their attempts to break the Union blockade. The blockade had been bolstered by a hastily ram-fitted paddle steamer and SS Illinois, as well as two other warships which had been repaired. Virginia made several sorties back over to Hampton Roads hoping to draw Monitor into battle. Monitor, however, was under strict orders not to re-engage; the two combatants would never battle again.
On April 11, the Confederate Navy sent Lieutenant Joseph Nicholson Barney, in command of the paddle side-wheeler , along with Virginia and five other ships in full view of the Union squadron, enticing them to fight. When it became clear that Union Navy ships were unwilling to fight, the CS Navy squadron moved in and captured three merchant ships, the brigs Marcus and Sabout and the schooner Catherine T. Dix. Their ensigns were then hoisted "Union-side down" to further taunt the Union Navy into a fight, as they were towed back to Norfolk, with the help of .
By late April, the new Union ironclads USRC E. A. Stevens and had also joined the blockade. On May 8, 1862, Virginia and the James River Squadron ventured out when the Union ships began shelling the Confederate fortifications near Norfolk, but the Union ships retired under the shore batteries on the north side of the James River and on Rip Raps island.
Destruction
On May 10, 1862, advancing Union troops occupied Norfolk. Since Virginia was now a steam-powered heavy battery and no longer an ocean-going cruiser, her pilots judged her not seaworthy enough to enter the Atlantic, even if she were able to pass the Union blockade. Virginia was also unable to retreat further up the James River due to her deep draft (fully loaded). In an attempt to reduce it, supplies and coal were dumped overboard, even though this exposed the ironclad's unarmored lower hull, but this was still not enough to make a difference. Without a home port and no place to go, Virginia's new captain, Flag Officer Josiah Tattnall III, reluctantly ordered her destruction in order to keep the ironclad from being captured. The ship was destroyed by Catesby Jones and John Taylor Wood, who set fire to scattered gunpowder and cotton strewn across the ship's deck. Early on the morning of May 11, 1862, off Craney Island, the fire reached the ironclad's magazine, leading to a massive explosion that obliterated the ship. What remained of Virginia then sank to the harbor floor.
After the war, the government determined that the wreck of Virginia needed to be removed from the channel. In 1867, Captain D. A. Underdown salvaged 290,000 pounds of iron from the site, much of which was taken from the ship's ram and cannons. The following year, Underdown detonated explosives under the Virginia's hulk to fully clear the river, but the attempt did not totally remove the wreck. In 1871, E.J. Griffith recovered an additional 102,883 pounds of iron from the seabed, and in 1876, the "remaining timbers" of the ship were raised. In 1982, the National Underwater and Marine Agency explored the area around Craney Island and found that "there are no large areas of either concentrated or scattered debris associated with the Virginia lying on the river bottom within the survey area."
Most of the recovered iron was melted down and sold for scrap (notably, some of the ship's iron was used to craft Pokahuntas Bell in 1907; Richmond Times-Dispatch, "Pokahuntas Bell for Exposition", April 13, 1907). Other pieces of the ship have been preserved in museums: The ship's brass bell is held at the Hampton Roads Naval Museum, and one of the Virginia's anchors now rests in front of the American Civil War Museum in Richmond. Numerous souvenirs, ostensibly made from salvaged iron and wood raised from Virginia's sunken hulk, have found a ready and willing market among Civil War enthusiasts and eastern seaboard residents. However, the provenance of many of these artifacts is impossible to prove, which has given rise to the humorous adage that "if you took all the iron and all the wood supposedly collected from the [wreck of the CSS Virginia], you'd have enough to outfit a fleet of ironclads."
Historical names
Although the Confederacy renamed the ship, she is still frequently referred to by her Union name. When she was first commissioned into the United States Navy in 1856, her name was Merrimack, with the K; the name was derived from the Merrimack River near where she was built. She was the second ship of the U. S. Navy to be named for the Merrimack River, which is formed by the confluence of the Pemigewasset and Winnipesaukee rivers at Franklin, New Hampshire. The Merrimack flows south across New Hampshire, then eastward across northeastern Massachusetts before finally emptying in the Atlantic at Newburyport, Massachusetts.
After raising, restoring, and outfitting as an ironclad warship, the Confederacy bestowed on her the name Virginia. Nonetheless, the Union continued to refer to the Confederate ironclad by either its original name, Merrimack, or by the nickname "The Rebel Monster". In the aftermath of the Battle of Hampton Roads, the names Virginia and Merrimack were used interchangeably by both sides, as attested to by various newspapers and correspondence of the day. Navy reports and pre-1900 historians frequently misspelled the name as "Merrimac", which was actually an unrelated ship, hence "the Battle of the Monitor and the Merrimac". Both spellings are still in use in the Hampton Roads area.
Memorial, heritage
A large exhibit at the Jamestown Exposition held in 1907 at Sewell's Point was the "Battle of the Merrimac and Monitor," a large diorama that was housed in a special building.
A small community in Montgomery County, Virginia, near where the coal burned by the Confederate ironclad was mined, is now known as Merrimac.
The Monitor-Merrimac Memorial Bridge-Tunnel, built in Hampton Roads in the general vicinity of the famous engagement with both Virginia and federal funds, takes its name from the two ironclads.
| Technology | Specific seacraft | null |
5936 | https://en.wikipedia.org/wiki/Chemical%20thermodynamics | Chemical thermodynamics | Chemical thermodynamics is the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics. Chemical thermodynamics involves not only laboratory measurements of various thermodynamic properties, but also the application of mathematical methods to the study of chemical questions and the spontaneity of processes.
The structure of chemical thermodynamics is based on the first two laws of thermodynamics. Starting from the first and second laws of thermodynamics, four equations called the "fundamental equations of Gibbs" can be derived. From these four, a multitude of equations, relating the thermodynamic properties of the thermodynamic system can be derived using relatively simple mathematics. This outlines the mathematical framework of chemical thermodynamics.
History
In 1865, the German physicist Rudolf Clausius, in his Mechanical Theory of Heat, suggested that the principles of thermochemistry, e.g. the heat evolved in combustion reactions, could be applied to the principles of thermodynamics. Building on the work of Clausius, between the years 1873-76 the American mathematical physicist Willard Gibbs published a series of three papers, the most famous one being the paper On the Equilibrium of Heterogeneous Substances. In these papers, Gibbs showed how the first two laws of thermodynamics could be measured graphically and mathematically to determine both the thermodynamic equilibrium of chemical reactions as well as their tendencies to occur or proceed. Gibbs’ collection of papers provided the first unified body of thermodynamic theorems from the principles developed by others, such as Clausius and Sadi Carnot.
During the early 20th century, two major publications successfully applied the principles developed by Gibbs to chemical processes and thus established the foundation of the science of chemical thermodynamics. The first was the 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall. This book was responsible for supplanting the chemical affinity with the term free energy in the English-speaking world. The second was the 1933 book Modern Thermodynamics by the methods of Willard Gibbs written by E. A. Guggenheim. In this manner, Lewis, Randall, and Guggenheim are considered as the founders of modern chemical thermodynamics because of the major contribution of these two books in unifying the application of thermodynamics to chemistry.
Overview
The primary objective of chemical thermodynamics is the establishment of a criterion for determination of the feasibility or spontaneity of a given transformation. In this manner, chemical thermodynamics is typically used to predict the energy exchanges that occur in the following processes:
Chemical reactions
Phase changes
The formation of solutions
The following state functions are of primary concern in chemical thermodynamics:
Internal energy (U)
Enthalpy (H)
Entropy (S)
Gibbs free energy (G)
Most identities in chemical thermodynamics arise from application of the first and second laws of thermodynamics, particularly the law of conservation of energy, to these state functions.
The three laws of thermodynamics (global, unspecific forms):
1. The energy of the universe is constant.
2. In any spontaneous process, there is always an increase in entropy of the universe.
3. The entropy of a perfect crystal (well ordered) at 0 Kelvin is zero.
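Combining the first and second laws with the state functions listed above gives the familiar spontaneity criterion for a process at constant temperature and pressure, ΔG = ΔH − TΔS. The sketch below is a minimal illustration; the numerical values (roughly those for melting ice at 1 atm) are assumptions chosen for demonstration, not data from this text.

```python
def gibbs_change(delta_h_kj: float, delta_s_j_per_k: float, temp_k: float) -> float:
    """Return ΔG in kJ from ΔH (kJ), ΔS (J/K) and T (K) via ΔG = ΔH - T*ΔS."""
    return delta_h_kj - temp_k * (delta_s_j_per_k / 1000.0)  # ΔS converted to kJ/K

# Illustrative values of roughly the magnitude for melting ice at 1 atm:
# ΔH ≈ +6.01 kJ/mol, ΔS ≈ +22.0 J/(mol·K). ΔG ≈ 0 at the melting point (equilibrium).
for T in (263.15, 273.15, 283.15):
    dG = gibbs_change(6.01, 22.0, T)
    verdict = "spontaneous" if dG < 0 else "not spontaneous"
    print(f"T = {T:.2f} K: ΔG = {dG:+.2f} kJ/mol ({verdict})")
```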
Chemical energy
Chemical energy is the energy that can be released when chemical substances undergo a transformation through a chemical reaction. Breaking and making chemical bonds involves energy release or uptake, often as heat that may be either absorbed by or evolved from the chemical system.
Energy released (or absorbed) because of a reaction between chemical substances ("reactants") is equal to the difference between the energy content of the products and the reactants. This change in energy is called the change in internal energy of a chemical system. It can be calculated from ΔUf(reactants), the internal energy of formation of the reactant molecules related to the bond energies of the molecules under consideration, and ΔUf(products), the internal energy of formation of the product molecules. The change in internal energy is equal to the heat change if it is measured under conditions of constant volume (at STP condition), as in a closed rigid container such as a bomb calorimeter. However, at constant pressure, as in reactions in vessels open to the atmosphere, the measured heat is usually not equal to the internal energy change, because pressure-volume work also releases or absorbs energy. (The heat change at constant pressure is called the enthalpy change; in this case the widely tabulated enthalpies of formation are used.)
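The difference between constant-volume and constant-pressure heats can be made concrete with the ideal-gas approximation ΔH ≈ ΔU + Δn_gas·R·T, where Δn_gas is the change in the number of moles of gas. The sketch below converts a hypothetical bomb-calorimeter result into an enthalpy change; the input values are assumptions for illustration only.

```python
R = 8.314  # gas constant, J/(mol*K)

def delta_h_from_delta_u(delta_u_kj: float, delta_n_gas: float, temp_k: float = 298.15) -> float:
    """Approximate ΔH (kJ) from ΔU (kJ) using ΔH ≈ ΔU + Δn_gas * R * T (ideal-gas assumption)."""
    return delta_u_kj + delta_n_gas * R * temp_k / 1000.0

# Hypothetical example: a constant-volume (bomb calorimeter) measurement gives
# ΔU = -2802 kJ/mol for a reaction that consumes one mole of gas overall (Δn_gas = -1).
print(f"ΔH ≈ {delta_h_from_delta_u(-2802.0, -1):.1f} kJ/mol")
```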
A related term is the heat of combustion, which is the chemical energy released due to a combustion reaction and of interest in the study of fuels. Food is similar to hydrocarbon and carbohydrate fuels, and when it is oxidized, its energy release is similar (though assessed differently than for a hydrocarbon fuel — see food energy).
In chemical thermodynamics, the term used for the chemical potential energy is chemical potential, and sometimes the Gibbs-Duhem equation is used.
Chemical reactions
In most cases of interest in chemical thermodynamics there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which create entropy in the universe unless they are at equilibrium or are maintained at a "running equilibrium" through "quasi-static" changes by being coupled to constraining devices, such as pistons or electrodes, to deliver and receive external work. Even for homogeneous "bulk" systems, the free-energy functions depend on the composition, as do all the extensive thermodynamic potentials, including the internal energy. If the quantities { Ni }, the numbers of particles of each chemical species, are omitted from the formulae, it is impossible to describe compositional changes.
Gibbs function or Gibbs Energy
For an unstructured, homogeneous "bulk" system, there are still various extensive compositional variables { Ni } that G depends on, which specify the composition (the amounts of each chemical substance, expressed as the numbers of molecules present or the numbers of moles). Explicitly,
G = G(T, P, {Ni}).
For the case where only PV work is possible,
dG = −S dT + V dP + Σi μi dNi,
a restatement of the fundamental thermodynamic relation, in which μi is the chemical potential for the i-th component in the system.
The expression for dG is especially useful at constant T and P, conditions which are easy to achieve experimentally and which approximate the conditions in living creatures: (dG)T,P = Σi μi dNi.
Chemical affinity
While this formulation is mathematically defensible, it is not particularly transparent since one does not simply add or remove molecules from a system. There is always a process involved in changing the composition; e.g., a chemical reaction (or many), or movement of molecules from one phase (liquid) to another (gas or solid). We should find a notation which does not seem to imply that the amounts of the components ( Ni ) can be changed independently. All real processes obey conservation of mass, and in addition, conservation of the numbers of atoms of each kind.
Consequently, we introduce an explicit variable to represent the degree of advancement of a process, a progress variable ξ for the extent of reaction (Prigogine & Defay, p. 18; Prigogine, pp. 4–7; Guggenheim, p. 37.62), and we use the partial derivative ∂G/∂ξ (in place of the widely used "ΔG", since the quantity at issue is not a finite change). The result is an understandable expression for the dependence of dG on chemical reactions (or other processes). If there is just one reaction, (dG)T,P = (∂G/∂ξ)T,P dξ.
If we introduce the stoichiometric coefficient for the i-th component in the reaction, νi = ∂Ni/∂ξ (negative for reactants), which tells how many molecules of i are produced or consumed, we obtain an algebraic expression for the partial derivative
(∂G/∂ξ)T,P = Σi μi νi = −A,
where we introduce a concise and historical name for this quantity, the "affinity", symbolized by A, as introduced by Théophile de Donder in 1923. (De Donder; Prigogine & Defay, p. 69; Guggenheim, pp. 37, 240) The minus sign ensures that in a spontaneous change, when the change in the Gibbs free energy of the process is negative, the chemical species have a positive affinity for each other. The differential of G takes on a simple form that displays its dependence on composition change:
dG = −S dT + V dP − A dξ.
If there are a number of chemical reactions going on simultaneously, as is usually the case,
dG = −S dT + V dP − Σk Ak dξk,
with a set of reaction coordinates { ξj }, avoiding the notion that the amounts of the components ( Ni ) can be changed independently. The expressions above are equal to zero at thermodynamic equilibrium, while they are negative when chemical reactions proceed at a finite rate, producing entropy. This can be made even more explicit by introducing the reaction rates dξj/dt. For every physically independent process (Prigogine & Defay, p. 38; Prigogine, p. 24),
A dξ/dt ≥ 0.
This is a remarkable result since the chemical potentials are intensive system variables, depending only on the local molecular milieu. They cannot "know" whether temperature and pressure (or any other system variables) are going to be held constant over time. It is a purely local criterion and must hold regardless of any such constraints. Of course, it could have been obtained by taking partial derivatives of any of the other fundamental state functions, but nonetheless is a general criterion for (−T times) the entropy production from that spontaneous process; or at least any part of it that is not captured as external work. (See Constraints below.)
We now relax the requirement of a homogeneous "bulk" system by letting the chemical potentials and the affinity apply to any locality in which a chemical reaction (or any other process) is occurring. By accounting for the entropy production due to irreversible processes, the equality for dG is now replaced by
dG ≤ −S dT + V dP,
or, at constant temperature and pressure,
(dG)T,P ≤ 0.
Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings, or it may simply be dissipated, appearing as T times a corresponding increase in the entropy of the system and its surrounding. Or it may go partly toward doing external work and partly toward creating entropy. The important point is that the extent of reaction for a chemical reaction may be coupled to the displacement of some external mechanical or electrical quantity in such a way that one can advance only if the other also does. The coupling may occasionally be rigid, but it is often flexible and variable.
Solutions
In solution chemistry and biochemistry, the Gibbs free energy decrease (∂G/∂ξ, in molar units, denoted cryptically by ΔG) is commonly used as a surrogate for (−T times) the global entropy produced by spontaneous chemical reactions in situations where no work is being done; or at least no "useful" work; i.e., other than perhaps ± P dV. The assertion that all spontaneous reactions have a negative ΔG is merely a restatement of the second law of thermodynamics, giving it the physical dimensions of energy and somewhat obscuring its significance in terms of entropy. When no useful work is being done, it would be less misleading to use the Legendre transforms of the entropy appropriate for constant T, or for constant T and P, the Massieu functions −F/T and −G/T, respectively.
Non-equilibrium
Generally the systems treated with the conventional chemical thermodynamics are either at equilibrium or near equilibrium. Ilya Prigogine developed the thermodynamic treatment of open systems that are far from equilibrium. In doing so he has discovered phenomena and structures of completely new and completely unexpected types. His generalized, nonlinear and irreversible thermodynamics has found surprising applications in a wide variety of fields.
Non-equilibrium thermodynamics has been applied to explain how ordered structures, e.g. biological systems, can develop from disorder. Even when Onsager's relations are utilized, the classical principles of equilibrium thermodynamics still show that linear systems close to equilibrium always develop into states of disorder which are stable to perturbations, and so cannot explain the occurrence of ordered structures.
Prigogine called these systems dissipative systems, because they are formed and maintained by the dissipative processes which take place because of the exchange of energy between the system and its environment and because they disappear if that exchange ceases. They may be said to live in symbiosis with their environment.
The method which Prigogine used to study the stability of the dissipative structures to perturbations is of very great general interest. It makes it possible to study the most varied problems, such as city traffic problems, the stability of insect communities, the development of ordered biological structures and the growth of cancer cells to mention but a few examples.
System constraints
In this regard, it is crucial to understand the role of walls and other constraints, and the distinction between independent processes and coupling. Contrary to the clear implications of many reference sources, the previous analysis is not restricted to homogeneous, isotropic bulk systems which can deliver only PdV work to the outside world, but applies even to the most structured systems. There are complex systems with many chemical "reactions" going on at the same time, some of which are really only parts of the same, overall process. An independent process is one that could proceed even if all others were unaccountably stopped in their tracks. Understanding this is perhaps a "thought experiment" in chemical kinetics, but actual examples exist.
A gas-phase reaction at constant temperature and pressure which results in an increase in the number of molecules will lead to an increase in volume. Inside a cylinder closed with a piston, it can proceed only by doing work on the piston. The extent variable for the reaction can increase only if the piston moves out, and conversely if the piston is pushed inward, the reaction is driven backwards.
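For the piston example above, the magnitude of the work delivered can be estimated from the ideal-gas relation w = P·ΔV = Δn_gas·R·T at constant temperature and pressure. The sketch below is illustrative only; the reaction and numbers are assumptions, not taken from this text.

```python
R = 8.314  # gas constant, J/(mol*K)

def expansion_work_joules(delta_n_gas: float, temp_k: float) -> float:
    """Work done by the reacting system on the piston at constant T and P,
    assuming ideal-gas behaviour: w = P * ΔV = Δn_gas * R * T."""
    return delta_n_gas * R * temp_k

# Hypothetical reaction producing one extra mole of gas at 298.15 K:
print(f"Work delivered to the piston: {expansion_work_joules(1.0, 298.15):.0f} J")  # ≈ 2479 J
```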
Similarly, a redox reaction might occur in an electrochemical cell with the passage of current through a wire connecting the electrodes. The half-cell reactions at the electrodes are constrained if no current is allowed to flow. The current might be dissipated as Joule heating, or it might in turn run an electrical device like a motor doing mechanical work. An automobile lead-acid battery can be recharged, driving the chemical reaction backwards. In this case as well, the reaction is not an independent process. Some, perhaps most, of the Gibbs free energy of reaction may be delivered as external work.
The hydrolysis of ATP to ADP and phosphate can drive the force-times-distance work delivered by living muscles, and synthesis of ATP is in turn driven by a redox chain in mitochondria and chloroplasts, which involves the transport of ions across the membranes of these cellular organelles. The coupling of processes here, and in the previous examples, is often not complete. Gas can leak slowly past a piston, just as it can slowly leak out of a rubber balloon. Some reaction may occur in a battery even if no external current is flowing. There is usually a coupling coefficient, which may depend on relative rates, which determines what percentage of the driving free energy is turned into external work, or captured as "chemical work", a misnomer for the free energy of another chemical process.
| Physical sciences | Thermodynamics | Chemistry |
5962 | https://en.wikipedia.org/wiki/Comet | Comet | A comet is an icy, small Solar System body that warms and begins to release gases when passing close to the Sun, a process called outgassing. This produces an extended, gravitationally unbound atmosphere or coma surrounding the nucleus, and sometimes a tail of gas and dust blown out from the coma. These phenomena are due to the effects of solar radiation and the outstreaming solar wind plasma acting upon the nucleus of the comet. Comet nuclei range from a few hundred meters to tens of kilometers across and are composed of loose collections of ice, dust, and small rocky particles. The coma may be up to 15 times Earth's diameter, while the tail may stretch beyond one astronomical unit. If sufficiently close and bright, a comet may be seen from Earth without the aid of a telescope and can subtend an arc of up to 30° (60 Moons) across the sky. Comets have been observed and recorded since ancient times by many cultures and religions.
Comets usually have highly eccentric elliptical orbits, and they have a wide range of orbital periods, ranging from several years to potentially several millions of years. Short-period comets originate in the Kuiper belt or its associated scattered disc, which lie beyond the orbit of Neptune. Long-period comets are thought to originate in the Oort cloud, a spherical cloud of icy bodies extending from outside the Kuiper belt to halfway to the nearest star. Long-period comets are set in motion towards the Sun by gravitational perturbations from passing stars and the galactic tide. Hyperbolic comets may pass once through the inner Solar System before being flung to interstellar space. The appearance of a comet is called an apparition.
Extinct comets that have passed close to the Sun many times have lost nearly all of their volatile ices and dust and may come to resemble small asteroids. Asteroids are thought to have a different origin from comets, having formed inside the orbit of Jupiter rather than in the outer Solar System. However, the discovery of main-belt comets and active centaur minor planets has blurred the distinction between asteroids and comets. In the early 21st century, some minor bodies discovered with long-period comet orbits but with the characteristics of inner Solar System asteroids were called Manx comets. They are still classified as comets, such as C/2014 S3 (PANSTARRS). Twenty-seven Manx comets were found from 2013 to 2017.
There are 4,584 known comets. However, this represents a very small fraction of the total potential comet population, as the reservoir of comet-like bodies in the outer Solar System (in the Oort cloud) is about one trillion. Roughly one comet per year is visible to the naked eye, though many of those are faint and unspectacular. Particularly bright examples are called "great comets". Comets have been visited by uncrewed probes such as NASA's Deep Impact, which blasted a crater on Comet Tempel 1 to study its interior, and the European Space Agency's Rosetta, which became the first to land a robotic spacecraft on a comet.
Etymology
The word comet derives from the Old English from the Latin or . That, in turn, is a romanization of the Greek 'wearing long hair', and the Oxford English Dictionary notes that the term () already meant 'long-haired star, comet' in Greek. was derived from () 'to wear the hair long', which was itself derived from () 'the hair of the head' and was used to mean 'the tail of a comet'.
The astronomical symbol for comets (represented in Unicode) is ☄, consisting of a small disc with three hairlike extensions.
Physical characteristics
Nucleus
The solid, core structure of a comet is known as the nucleus. Cometary nuclei are composed of an amalgamation of rock, dust, water ice, and frozen carbon dioxide, carbon monoxide, methane, and ammonia. As such, they are popularly described as "dirty snowballs" after Fred Whipple's model. Comets with a higher dust content have been called "icy dirtballs". The term "icy dirtballs" arose after observation of Comet 9P/Tempel 1's collision with an "impactor" probe sent by NASA's Deep Impact mission in July 2005. Research conducted in 2014 suggests that comets are like "deep fried ice cream", in that their surfaces are formed of dense crystalline ice mixed with organic compounds, while the interior ice is colder and less dense.
The surface of the nucleus is generally dry, dusty or rocky, suggesting that the ices are hidden beneath a surface crust several metres thick. The nucleus contains a variety of organic compounds, which may include methanol, hydrogen cyanide, formaldehyde, ethanol, ethane, and perhaps more complex molecules such as long-chain hydrocarbons and amino acids. In 2009, it was confirmed that the amino acid glycine had been found in the comet dust recovered by NASA's Stardust mission. In August 2011, a report, based on NASA studies of meteorites found on Earth, was published suggesting DNA and RNA components (adenine, guanine, and related organic molecules) may have been formed on asteroids and comets.
The outer surfaces of cometary nuclei have a very low albedo, making them among the least reflective objects found in the Solar System. The Giotto space probe found that the nucleus of Halley's Comet (1P/Halley) reflects about four percent of the light that falls on it, and Deep Space 1 discovered that Comet Borrelly's surface reflects less than 3.0%; by comparison, asphalt reflects seven percent. The dark surface material of the nucleus may consist of complex organic compounds. Solar heating drives off lighter volatile compounds, leaving behind larger organic compounds that tend to be very dark, like tar or crude oil. The low reflectivity of cometary surfaces causes them to absorb the heat that drives their outgassing processes.
Comet nuclei with radii of up to have been observed, but ascertaining their exact size is difficult. The nucleus of 322P/SOHO is probably only in diameter. A lack of smaller comets being detected despite the increased sensitivity of instruments has led some to suggest that there is a real lack of comets smaller than across. Known comets have been estimated to have an average density of . Because of their low mass, comet nuclei do not become spherical under their own gravity and therefore have irregular shapes.
Roughly six percent of the near-Earth asteroids are thought to be the extinct nuclei of comets that no longer experience outgassing, including 14827 Hypnos and 3552 Don Quixote.
Results from the Rosetta and Philae spacecraft show that the nucleus of 67P/Churyumov–Gerasimenko has no magnetic field, which suggests that magnetism may not have played a role in the early formation of planetesimals. Further, the ALICE spectrograph on Rosetta determined that electrons (within above the comet nucleus) produced from photoionization of water molecules by solar radiation, and not photons from the Sun as thought earlier, are responsible for the degradation of water and carbon dioxide molecules released from the comet nucleus into its coma. Instruments on the Philae lander found at least sixteen organic compounds at the comet's surface, four of which (acetamide, acetone, methyl isocyanate and propionaldehyde) have been detected for the first time on a comet.
Coma
The streams of dust and gas thus released form a huge and extremely thin atmosphere around the comet called the "coma". The force exerted on the coma by the Sun's radiation pressure and solar wind cause an enormous "tail" to form pointing away from the Sun.
The coma is generally made of water and dust, with water making up to 90% of the volatiles that outflow from the nucleus when the comet is within 3 to 4 astronomical units (450,000,000 to 600,000,000 km; 280,000,000 to 370,000,000 mi) of the Sun. The parent molecule is destroyed primarily through photodissociation and to a much smaller extent photoionization, with the solar wind playing a minor role in the destruction of water compared to photochemistry. Larger dust particles are left along the comet's orbital path whereas smaller particles are pushed away from the Sun into the comet's tail by light pressure.
Although the solid nucleus of comets is generally less than across, the coma may be thousands or millions of kilometers across, sometimes becoming larger than the Sun. For example, about a month after an outburst in October 2007, comet 17P/Holmes briefly had a tenuous dust atmosphere larger than the Sun. The Great Comet of 1811 had a coma roughly the diameter of the Sun. Even though the coma can become quite large, its size can decrease about the time it crosses the orbit of Mars, around 1.5 astronomical units from the Sun. At this distance the solar wind becomes strong enough to blow the gas and dust away from the coma, thereby enlarging the tail. Ion tails have been observed to extend one astronomical unit (150 million km) or more.
Both the coma and tail are illuminated by the Sun and may become visible when a comet passes through the inner Solar System: the dust reflects sunlight directly while the gases glow from ionisation. Most comets are too faint to be visible without the aid of a telescope, but a few each decade become bright enough to be visible to the naked eye. Occasionally a comet may experience a huge and sudden outburst of gas and dust, during which the size of the coma greatly increases for a period of time. This happened in 2007 to Comet Holmes.
In 1996, comets were found to emit X-rays. This greatly surprised astronomers because X-ray emission is usually associated with very high-temperature bodies. The X-rays are generated by the interaction between comets and the solar wind: when highly charged solar wind ions fly through a cometary atmosphere, they collide with cometary atoms and molecules, "stealing" one or more electrons from the atom in a process called "charge exchange". This exchange or transfer of an electron to the solar wind ion is followed by its de-excitation into the ground state of the ion by the emission of X-rays and far ultraviolet photons.
Bow shock
Bow shocks form as a result of the interaction between the solar wind and the cometary ionosphere, which is created by the ionization of gases in the coma. As the comet approaches the Sun, increasing outgassing rates cause the coma to expand, and the sunlight ionizes gases in the coma. When the solar wind passes through this ion coma, the bow shock appears.
The first observations were made in the 1980s and 1990s as several spacecraft flew by comets 21P/Giacobini–Zinner, 1P/Halley, and 26P/Grigg–Skjellerup. It was then found that the bow shocks at comets are wider and more gradual than the sharp planetary bow shocks seen at, for example, Earth. These observations were all made near perihelion when the bow shocks already were fully developed.
The Rosetta spacecraft observed the bow shock at comet 67P/Churyumov–Gerasimenko at an early stage of bow shock development when the outgassing increased during the comet's journey toward the Sun. This young bow shock was called the "infant bow shock". The infant bow shock is asymmetric and, relative to the distance to the nucleus, wider than fully developed bow shocks.
Tails
In the outer Solar System, comets remain frozen and inactive and are extremely difficult or impossible to detect from Earth due to their small size. Statistical detections of inactive comet nuclei in the Kuiper belt have been reported from observations by the Hubble Space Telescope but these detections have been questioned. As a comet approaches the inner Solar System, solar radiation causes the volatile materials within the comet to vaporize and stream out of the nucleus, carrying dust away with them.
The streams of dust and gas each form their own distinct tail, pointing in slightly different directions. The tail of dust is left behind in the comet's orbit in such a manner that it often forms a curved tail called the type II or dust tail. At the same time, the ion or type I tail, made of gases, always points directly away from the Sun because this gas is more strongly affected by the solar wind than is dust, following magnetic field lines rather than an orbital trajectory. On occasions—such as when Earth passes through a comet's orbital plane—the antitail, pointing in the opposite direction to the ion and dust tails, may be seen.
The observation of antitails contributed significantly to the discovery of solar wind. The ion tail is formed as a result of the ionization by solar ultra-violet radiation of particles in the coma. Once the particles have been ionized, they attain a net positive electrical charge, which in turn gives rise to an "induced magnetosphere" around the comet. The comet and its induced magnetic field form an obstacle to outward flowing solar wind particles. Because the relative orbital speed of the comet and the solar wind is supersonic, a bow shock is formed upstream of the comet in the flow direction of the solar wind. In this bow shock, large concentrations of cometary ions (called "pick-up ions") congregate and act to "load" the solar magnetic field with plasma, such that the field lines "drape" around the comet forming the ion tail.
If the ion tail loading is sufficient, the magnetic field lines are squeezed together to the point where, at some distance along the ion tail, magnetic reconnection occurs. This leads to a "tail disconnection event". This has been observed on a number of occasions, one notable event being recorded on 20 April 2007, when the ion tail of Encke's Comet was completely severed while the comet passed through a coronal mass ejection. This event was observed by the STEREO space probe.
In 2013, ESA scientists reported that the ionosphere of the planet Venus streams outwards in a manner similar to the ion tail seen streaming from a comet under similar conditions.
Jets
Uneven heating can cause newly generated gases to break out of a weak spot on the surface of a comet's nucleus, like a geyser. These streams of gas and dust can cause the nucleus to spin, and even split apart. In 2010 it was revealed that dry ice (frozen carbon dioxide) can power jets of material flowing out of a comet nucleus. Infrared imaging of Hartley 2 shows such jets exiting and carrying dust grains with them into the coma.
Orbital characteristics
Most comets are small Solar System bodies with elongated elliptical orbits that take them close to the Sun for a part of their orbit and then out into the further reaches of the Solar System for the remainder. Comets are often classified according to the length of their orbital periods: The longer the period the more elongated the ellipse.
Short period
Periodic comets or short-period comets are generally defined as those having orbital periods of less than 200 years. They usually orbit more-or-less in the ecliptic plane in the same direction as the planets. Their orbits typically take them out to the region of the outer planets (Jupiter and beyond) at aphelion; for example, the aphelion of Halley's Comet is a little beyond the orbit of Neptune. Comets whose aphelia are near a major planet's orbit are called its "family". Such families are thought to arise from the planet capturing formerly long-period comets into shorter orbits.
At the shorter orbital period extreme, Encke's Comet has an orbit that does not reach the orbit of Jupiter, and is known as an Encke-type comet. Short-period comets with orbital periods less than 20 years and low inclinations (up to 30 degrees) to the ecliptic are called traditional Jupiter-family comets (JFCs). Those like Halley, with orbital periods of between 20 and 200 years and inclinations extending from zero to more than 90 degrees, are called Halley-type comets (HTCs). A total of 70 Encke-type comets, 100 HTCs, and 755 JFCs have been reported.
Recently discovered main-belt comets form a distinct class, orbiting in more circular orbits within the asteroid belt.
Because their elliptical orbits frequently take them close to the giant planets, comets are subject to further gravitational perturbations. Short-period comets have a tendency for their aphelia to coincide with a giant planet's semi-major axis, with the JFCs being the largest group. It is clear that comets coming in from the Oort cloud often have their orbits strongly influenced by the gravity of giant planets as a result of a close encounter. Jupiter is the source of the greatest perturbations, being more than twice as massive as all the other planets combined. These perturbations can deflect long-period comets into shorter orbital periods.
Based on their orbital characteristics, short-period comets are thought to originate from the centaurs and the Kuiper belt/scattered disc—a disk of objects in the trans-Neptunian region—whereas the source of long-period comets is thought to be the far more distant spherical Oort cloud (after the Dutch astronomer Jan Hendrik Oort who hypothesized its existence). Vast swarms of comet-like bodies are thought to orbit the Sun in these distant regions in roughly circular orbits. Occasionally the gravitational influence of the outer planets (in the case of Kuiper belt objects) or nearby stars (in the case of Oort cloud objects) may throw one of these bodies into an elliptical orbit that takes it inwards toward the Sun to form a visible comet. Unlike the return of periodic comets, whose orbits have been established by previous observations, the appearance of new comets by this mechanism is unpredictable. When a comet is flung into an orbit that carries it close to the Sun and is repeatedly dragged towards it, tons of matter are stripped away, which greatly influences its lifetime; the more material stripped, the shorter it lives, and vice versa.
Long period
Long-period comets have highly eccentric orbits and periods ranging from 200 years to thousands or even millions of years. An eccentricity greater than 1 when near perihelion does not necessarily mean that a comet will leave the Solar System. For example, Comet McNaught had a heliocentric osculating eccentricity of 1.000019 near its perihelion passage epoch in January 2007 but is bound to the Sun with roughly a 92,600-year orbit because the eccentricity drops below 1 as it moves farther from the Sun. The future orbit of a long-period comet is properly obtained when the osculating orbit is computed at an epoch after leaving the planetary region and is calculated with respect to the center of mass of the Solar System. By definition long-period comets remain gravitationally bound to the Sun; those comets that are ejected from the Solar System due to close passes by major planets are no longer properly considered as having "periods". The orbits of long-period comets take them far beyond the outer planets at aphelia, and the plane of their orbits need not lie near the ecliptic. Long-period comets such as C/1999 F1 and C/2017 T2 (PANSTARRS) can have aphelion distances of nearly with orbital periods estimated around 6 million years.
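The connection between orbit size and period for such comets follows Kepler's third law, P[years] ≈ a[AU]^3/2, with the semi-major axis obtained from the perihelion distance q and eccentricity e as a = q/(1 − e) for e < 1. The sketch below is a minimal illustration; the input comet is hypothetical, chosen so the period comes out near the several-million-year scale mentioned above.

```python
def semi_major_axis_au(perihelion_au: float, eccentricity: float) -> float:
    """Semi-major axis from perihelion distance q and eccentricity e (e < 1): a = q / (1 - e)."""
    return perihelion_au / (1.0 - eccentricity)

def orbital_period_years(a_au: float) -> float:
    """Kepler's third law for a body orbiting the Sun: P [years] = a[AU] ** 1.5."""
    return a_au ** 1.5

# Hypothetical long-period comet: perihelion 1.0 AU, eccentricity 0.99997
a = semi_major_axis_au(1.0, 0.99997)
print(f"a ≈ {a:,.0f} AU, period ≈ {orbital_period_years(a):,.0f} years")  # ≈ 33,000 AU, ≈ 6 million years
```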
Single-apparition or non-periodic comets are similar to long-period comets because they have parabolic or slightly hyperbolic trajectories when near perihelion in the inner Solar System. However, gravitational perturbations from giant planets cause their orbits to change. Single-apparition comets have a hyperbolic or parabolic osculating orbit which allows them to permanently exit the Solar System after a single pass of the Sun. The Sun's Hill sphere has an unstable maximum boundary of . Only a few hundred comets have been seen to reach a hyperbolic orbit (e > 1) when near perihelion, which, using a heliocentric unperturbed two-body best fit, suggests they may escape the Solar System.
Only two objects have been discovered with an eccentricity significantly greater than one: 1I/ʻOumuamua and 2I/Borisov, indicating an origin outside the Solar System. While ʻOumuamua, with an eccentricity of about 1.2, showed no optical signs of cometary activity during its passage through the inner Solar System in October 2017, changes to its trajectory—which suggests outgassing—indicate that it is probably a comet. On the other hand, 2I/Borisov, with an estimated eccentricity of about 3.36, has been observed to have the coma feature of comets, and is considered the first detected interstellar comet. Comet C/1980 E1 had an orbital period of roughly 7.1 million years before the 1982 perihelion passage, but a 1980 encounter with Jupiter accelerated the comet giving it the largest eccentricity (1.057) of any known solar comet with a reasonable observation arc. Comets not expected to return to the inner Solar System include C/1980 E1, C/2000 U5, C/2001 Q4 (NEAT), C/2009 R1, C/1956 R1, and C/2007 F1 (LONEOS).
Some authorities use the term "periodic comet" to refer to any comet with a periodic orbit (that is, all short-period comets plus all long-period comets), whereas others use it to mean exclusively short-period comets. Similarly, although the literal meaning of "non-periodic comet" is the same as "single-apparition comet", some use it to mean all comets that are not "periodic" in the second sense (that is, to include all comets with a period greater than 200 years).
Early observations have revealed a few genuinely hyperbolic (i.e. non-periodic) trajectories, but no more than could be accounted for by perturbations from Jupiter. Comets from interstellar space are moving with velocities of the same order as the relative velocities of stars near the Sun (a few tens of km per second). When such objects enter the Solar System, they have a positive specific orbital energy resulting in a positive velocity at infinity () and have notably hyperbolic trajectories. A rough calculation shows that there might be four hyperbolic comets per century within Jupiter's orbit, give or take one and perhaps two orders of magnitude.
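The "positive velocity at infinity" mentioned here follows from the specific orbital energy ε = v²/2 − μ/r: when ε > 0 the trajectory is hyperbolic and v∞ = √(2ε). The sketch below is illustrative; the speed and distance used are assumptions, not observational values from this text.

```python
import math

MU_SUN = 1.32712440018e20  # standard gravitational parameter of the Sun, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m

def v_infinity_km_s(speed_km_s: float, distance_au: float) -> float:
    """Hyperbolic excess speed (km/s) from heliocentric speed and distance.
    Returns 0.0 if the specific orbital energy is non-positive (bound orbit)."""
    v = speed_km_s * 1e3
    r = distance_au * AU
    eps = v * v / 2.0 - MU_SUN / r  # specific orbital energy, J/kg
    return math.sqrt(2.0 * eps) / 1e3 if eps > 0 else 0.0

# Hypothetical object moving at 50 km/s when 1 AU from the Sun:
print(f"v_infinity ≈ {v_infinity_km_s(50.0, 1.0):.1f} km/s")  # ≈ 27 km/s
```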
Oort cloud and Hills cloud
The Oort cloud is thought to occupy a vast space starting from between to as far as from the Sun. This cloud surrounds the rest of the Solar System, from the Sun out to the outer limits of the Kuiper Belt. The Oort cloud consists of the kind of material from which celestial bodies are formed. The Solar System's planets exist only because of the planetesimals (chunks of leftover material that assisted in the creation of planets) that were condensed and formed by the gravity of the Sun. Planetesimals scattered onto distant, eccentric orbits during this process are thought to be the origin of the Oort cloud. Some estimates place the outer edge at between . The region can be subdivided into a spherical outer Oort cloud of , and a doughnut-shaped inner cloud, the Hills cloud, of . The outer cloud is only weakly bound to the Sun and supplies the long-period (and possibly Halley-type) comets that fall to inside the orbit of Neptune. The inner Oort cloud is also known as the Hills cloud, named after Jack G. Hills, who proposed its existence in 1981. Models predict that the inner cloud should have tens or hundreds of times as many cometary nuclei as the outer halo; it is seen as a possible source of new comets that resupply the relatively tenuous outer cloud as the latter's numbers are gradually depleted. The Hills cloud explains the continued existence of the Oort cloud after billions of years.
Exocomets
Exocomets beyond the Solar System have been detected and may be common in the Milky Way. The first exocomet system detected was around Beta Pictoris, a very young A-type main-sequence star, in 1987. A total of 11 such exocomet systems have been identified, using the absorption spectrum caused by the large clouds of gas emitted by comets when passing close to their star. For ten years the Kepler space telescope searched for planets and other objects outside the Solar System. The first transiting exocomets were found in February 2018 by a group of professional astronomers and citizen scientists in light curves recorded by the Kepler space telescope. After Kepler retired in October 2018, the TESS telescope took over its mission. Since the launch of TESS, astronomers have discovered the transits of comets around the star Beta Pictoris using its light curves, and have been able to better distinguish exocomets with the spectroscopic method. Transiting planets are detected as a symmetrical dip in a star's light curve when the planet passes in front of its parent star; further evaluation of some light curves revealed asymmetrical dips, which are caused by the tail of a comet or of hundreds of comets.
Effects of comets
Connection to meteor showers
As a comet is heated during close passes to the Sun, outgassing of its icy components releases solid debris too large to be swept away by radiation pressure and the solar wind. If Earth's orbit carries it through that trail of debris, which is composed mostly of fine grains of rocky material, a meteor shower is likely as Earth passes through. Denser trails of debris produce quick but intense meteor showers, while less dense trails create longer but less intense showers. Typically, the density of the debris trail is related to how long ago the parent comet released the material. The Perseid meteor shower, for example, occurs every year between 9 and 13 August, when Earth passes through the orbit of Comet Swift–Tuttle. Halley's Comet is the source of the Orionid shower in October.
Comets and impact on life
Many comets and asteroids collided with Earth in its early stages. Many scientists think that comets bombarding the young Earth about 4 billion years ago brought the vast quantities of water that now fill Earth's oceans, or at least a significant portion of that water. Others have cast doubt on this idea. The detection of organic molecules, including polycyclic aromatic hydrocarbons, in significant quantities in comets has led to speculation that comets or meteorites may have brought the precursors of life—or even life itself—to Earth. In 2013, it was suggested that impacts between rocky and icy surfaces, such as comets, had the potential to create the amino acids that make up proteins through shock synthesis. The speed at which the comets entered the atmosphere, combined with the magnitude of energy created after initial contact, allowed smaller molecules to condense into the larger macromolecules that served as the foundation for life. In 2015, scientists found significant amounts of molecular oxygen in the outgassings of comet 67P, suggesting that the molecule may occur more often than had been thought, and thus may be less of an indicator of life than has been supposed.
It is suspected that comet impacts have, over long timescales, delivered significant quantities of water to Earth's Moon, some of which may have survived as lunar ice. Comet and meteoroid impacts are thought to be responsible for the existence of tektites and australites.
Fear of comets
Fear of comets as acts of God and signs of impending doom was highest in Europe from AD 1200 to 1650. The year after the Great Comet of 1618, for example, Gotthard Arthusius published a pamphlet stating that it was a sign that the Day of Judgment was near. He listed ten pages of comet-related disasters, including "earthquakes, floods, changes in river courses, hail storms, hot and dry weather, poor harvests, epidemics, war and treason and high prices".
By 1700 most scholars concluded that such events occurred whether a comet was seen or not. Using Edmond Halley's records of comet sightings, however, William Whiston in 1711 wrote that the Great Comet of 1680 had a periodicity of 574 years and was responsible for the worldwide flood in the Book of Genesis, by pouring water on Earth. His announcement revived for another century fear of comets, now as direct threats to the world instead of signs of disasters. Spectroscopic analysis in 1910 found the toxic gas cyanogen in the tail of Halley's Comet, causing panicked buying of gas masks and quack "anti-comet pills" and "anti-comet umbrellas" by the public.
Fate of comets
Departure (ejection) from Solar System
If a comet is traveling fast enough, it may leave the Solar System. Such comets follow the open path of a hyperbola, and as such, they are called hyperbolic comets. Solar comets are only known to be ejected by interacting with another object in the Solar System, such as Jupiter. An example of this is Comet C/1980 E1, which was shifted from an orbit with a period of about 7.1 million years around the Sun to a hyperbolic trajectory after a close pass by Jupiter in 1980. Interstellar comets such as 1I/ʻOumuamua and 2I/Borisov never orbited the Sun and therefore do not require a third-body interaction to be ejected from the Solar System.
Extinction
Jupiter-family comets and long-period comets appear to follow very different fading laws. The JFCs are active over a lifetime of about 10,000 years or ~1,000 orbits whereas long-period comets fade much faster. Only 10% of the long-period comets survive more than 50 passages to small perihelion and only 1% of them survive more than 2,000 passages. Eventually most of the volatile material contained in a comet nucleus evaporates, and the comet becomes a small, dark, inert lump of rock or rubble that can resemble an asteroid. Some asteroids in elliptical orbits are now identified as extinct comets. Roughly six percent of the near-Earth asteroids are thought to be extinct comet nuclei.
Breakup and collisions
The nucleus of some comets may be fragile, a conclusion supported by the observation of comets splitting apart. A significant cometary disruption was that of Comet Shoemaker–Levy 9, which was discovered in 1993. A close encounter in July 1992 had broken it into pieces, and over a period of six days in July 1994, these pieces fell into Jupiter's atmosphere—the first time astronomers had observed a collision between two objects in the Solar System. Other splitting comets include 3D/Biela in 1846 and 73P/Schwassmann–Wachmann from 1995 to 2006. Greek historian Ephorus reported that a comet split apart as far back as the winter of 372–373 BC. Comets are suspected of splitting due to thermal stress, internal gas pressure, or impact.
Comets 42P/Neujmin and 53P/Van Biesbroeck appear to be fragments of a parent comet. Numerical integrations have shown that both comets had a rather close approach to Jupiter in January 1850, and that, before 1850, the two orbits were nearly identical. Another group of comets that is the result of fragmentation episodes is the Liller comet family made of C/1988 A1 (Liller), C/1996 Q1 (Tabur), C/2015 F3 (SWAN), C/2019 Y1 (ATLAS), and C/2023 V5 (Leonard).
Some comets have been observed to break up during their perihelion passage, including great comets West and Ikeya–Seki. Biela's Comet was one significant example, breaking into two pieces during its perihelion passage in 1846. The two fragments were seen separately in 1852, but never again afterward. Instead, spectacular meteor showers were seen in 1872 and 1885 when the comet should have been visible. A minor meteor shower, the Andromedids, occurs annually in November, and it is caused when Earth crosses the orbit of Biela's Comet.
Some comets meet a more spectacular end – either falling into the Sun or colliding with a planet or other body. Collisions between comets and planets or moons were common in the early Solar System: some of the many craters on the Moon, for example, may have been caused by comets. A recent collision of a comet with a planet occurred in July 1994 when Comet Shoemaker–Levy 9 broke up into pieces and collided with Jupiter.
Nomenclature
The names given to comets have followed several different conventions over the past two centuries. Prior to the early 20th century, most comets were referred to by the year when they appeared, sometimes with additional adjectives for particularly bright comets; thus, the "Great Comet of 1680", the "Great Comet of 1882", and the "Great January Comet of 1910".
After Edmond Halley demonstrated that the comets of 1531, 1607, and 1682 were the same body and successfully predicted its return in 1759 by calculating its orbit, that comet became known as Halley's Comet. Similarly, the second and third known periodic comets, Encke's Comet and Biela's Comet, were named after the astronomers who calculated their orbits rather than their original discoverers. Later, periodic comets were usually named after their discoverers, but comets that had appeared only once continued to be referred to by the year of their appearance.
In the early 20th century, the convention of naming comets after their discoverers became common, and this remains so today. A comet can be named after its discoverers or an instrument or program that helped to find it. For example, in 2019, astronomer Gennadiy Borisov observed a comet that appeared to have originated outside of the solar system; the comet was named 2I/Borisov after him.
History of study
Early observations and thought
From ancient sources, such as Chinese oracle bones, it is known that comets have been noticed by humans for millennia. Until the sixteenth century, comets were usually considered bad omens of deaths of kings or noble men, or coming catastrophes, or even interpreted as attacks by heavenly beings against terrestrial inhabitants.
Aristotle (384–322 BC) was the first known scientist to use various theories and observational facts to construct a consistent, structured cosmological theory of comets. He believed that comets were atmospheric phenomena, in part because they could appear outside of the zodiac and vary in brightness over the course of a few days. Aristotle's cometary theory arose from his observations and from his cosmological theory that everything in the cosmos is arranged in a distinct configuration. Part of this configuration was a clear separation between the celestial and the terrestrial, and he believed comets to be strictly associated with the latter. According to Aristotle, comets must be within the sphere of the Moon and clearly separated from the heavens. Also in the 4th century BC, Apollonius of Myndus supported the idea that comets moved like the planets. Aristotelian theory on comets continued to be widely accepted throughout the Middle Ages, despite several discoveries by various individuals challenging aspects of it.
In the 1st century AD, Seneca the Younger questioned Aristotle's logic concerning comets. He argued that, because of their regular movement and imperviousness to wind, comets cannot be atmospheric, and that they are more permanent than suggested by their brief flashes across the sky. He pointed out that only the tails are transparent and thus cloudlike, and argued that there is no reason to confine their orbits to the zodiac. In criticizing Apollonius of Myndus, Seneca argues, "A comet cuts through the upper regions of the universe and then finally becomes visible when it reaches the lowest point of its orbit." While Seneca did not author a substantial theory of his own, his arguments would spark much debate among Aristotle's critics in the 16th and 17th centuries.
In the 1st century AD, Pliny the Elder believed that comets were connected with political unrest and death. Pliny observed comets as "human like", often describing their tails with "long hair" or "long beard". His system for classifying comets according to their color and shape was used for centuries.
In India, by the 6th century AD, astronomers believed that comets were apparitions that re-appeared periodically. This was the view expressed by the astronomers Varāhamihira and Bhadrabahu, and the 10th-century astronomer Bhaṭṭotpala listed the names and estimated periods of certain comets, but it is not known how these figures were calculated or how accurate they were.
There is a claim that an Arab scholar in 1258 noted several recurrent appearances of a comet (or a type of comet), and though it's not clear if he considered it to be a single periodic comet, it might have been a comet with a period of around 63 years.
In 1301, the Italian painter Giotto was the first person to accurately and anatomically portray a comet. In his work Adoration of the Magi, Giotto's depiction of Halley's Comet in the place of the Star of Bethlehem would go unmatched in accuracy until the 19th century and be bested only with the invention of photography.
Astrological interpretations of comets continued to take precedence well into the 15th century, even as modern scientific astronomy was beginning to take root. Comets continued to forewarn of disaster, as seen in the Luzerner Schilling chronicles and in the warnings of Pope Callixtus III. In 1578, German Lutheran bishop Andreas Celichius defined comets as "the thick smoke of human sins ... kindled by the hot and fiery anger of the Supreme Heavenly Judge". The next year, Andreas Dudith stated that "If comets were caused by the sins of mortals, they would never be absent from the sky."
Scientific approach
Crude attempts at a parallax measurement of Halley's Comet were made in 1456, but were erroneous. Regiomontanus was the first to attempt to calculate diurnal parallax by observing the Great Comet of 1472. His predictions were not very accurate, but they were conducted in the hopes of estimating the distance of a comet from Earth.
In the 16th century, Tycho Brahe and Michael Maestlin demonstrated that comets must exist outside of Earth's atmosphere by measuring the parallax of the Great Comet of 1577. Within the precision of the measurements, this implied the comet must be at least four times farther away than the Moon is from Earth. Based on observations in 1664, Giovanni Borelli recorded the longitudes and latitudes of comets that he observed, and suggested that cometary orbits may be parabolic. Despite being a skilled astronomer, Galileo Galilei, in his 1623 book The Assayer, rejected Brahe's theories on the parallax of comets and claimed that they may be a mere optical illusion, despite having made few observations of his own. In 1625, Maestlin's student Johannes Kepler upheld that Brahe's view of cometary parallax was correct. Additionally, the mathematician Jacob Bernoulli published a treatise on comets in 1682.
During the early modern period comets were studied for their astrological significance in medical disciplines. Many healers of this time considered medicine and astronomy to be inter-disciplinary and employed their knowledge of comets and other astrological signs for diagnosing and treating patients.
Isaac Newton, in his Principia Mathematica of 1687, proved that an object moving under the influence of gravity by an inverse square law must trace out an orbit shaped like one of the conic sections, and he demonstrated how to fit a comet's path through the sky to a parabolic orbit, using the comet of 1680 as an example.
Newton described comets as compact and durable solid bodies moving in oblique orbits, and their tails as thin streams of vapor emitted by their nuclei, ignited or heated by the Sun. He suspected that comets were the origin of the life-supporting component of air. He pointed out that comets usually appear near the Sun, and therefore most likely orbit it. On their luminosity, he stated, "The comets shine by the Sun's light, which they reflect," with their tails illuminated by "the Sun's light reflected by a smoke arising from [the coma]".
In 1705, Edmond Halley (1656–1742) applied Newton's method to 23 cometary apparitions that had occurred between 1337 and 1698. He noted that three of these, the comets of 1531, 1607, and 1682, had very similar orbital elements, and he was further able to account for the slight differences in their orbits in terms of gravitational perturbation caused by Jupiter and Saturn. Confident that these three apparitions had been three appearances of the same comet, he predicted that it would appear again in 1758–59. Halley's predicted return date was later refined by a team of three French mathematicians: Alexis Clairaut, Joseph Lalande, and Nicole-Reine Lepaute, who predicted the date of the comet's 1759 perihelion to within one month's accuracy. When the comet returned as predicted, it became known as Halley's Comet.
As early as the 18th century, some scientists had made correct hypotheses as to comets' physical composition. In 1755, Immanuel Kant hypothesized in his Universal Natural History that comets were condensed from "primitive matter" beyond the known planets, which is "feebly moved" by gravity, then orbit at arbitrary inclinations, and are partially vaporized by the Sun's heat as they near perihelion. In 1836, the German mathematician Friedrich Wilhelm Bessel, after observing streams of vapor during the appearance of Halley's Comet in 1835, proposed that the jet forces of evaporating material could be great enough to significantly alter a comet's orbit, and he argued that the non-gravitational movements of Encke's Comet resulted from this phenomenon.
In the 19th century, the Astronomical Observatory of Padova was an epicenter in the observational study of comets. Led by Giovanni Santini (1787–1877) and later by Giuseppe Lorenzoni (1843–1914), the observatory was devoted to classical astronomy, mainly to calculating the orbits of newly discovered comets and planets, with the goal of compiling a catalog of almost ten thousand stars. Situated in the northern portion of Italy, the observatory produced measurements that were key in establishing important geodetic, geographic, and astronomical results, such as the difference of longitude between Milan and Padua and between Padua and Fiume. Correspondence within the observatory, particularly between Santini and fellow astronomer Giuseppe Toaldo, mentioned the importance of comet and planetary orbital observations.
In 1950, Fred Lawrence Whipple proposed that rather than being rocky objects containing some ice, comets were icy objects containing some dust and rock. This "dirty snowball" model soon became accepted and appeared to be supported by the observations of an armada of spacecraft (including the European Space Agency's Giotto probe and the Soviet Union's Vega 1 and Vega 2) that flew through the coma of Halley's Comet in 1986, photographed the nucleus, and observed jets of evaporating material.
On 22 January 2014, ESA scientists reported the detection, for the first definitive time, of water vapor on the dwarf planet Ceres, the largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids." On 11 August 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN, HNC, , and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON).
Spacecraft missions
The Halley Armada is the name given to the collection of spacecraft missions that visited and/or made observations of Halley's Comet during its 1986 perihelion passage. The space shuttle Challenger was intended to carry out a study of Halley's Comet in 1986, but it exploded shortly after launch.
Deep Impact. Debate continues about how much ice is in a comet. In 2001, the Deep Space 1 spacecraft obtained high-resolution images of the surface of Comet Borrelly. It was found that the surface of comet Borrelly is hot and dry, with a temperature of between , and extremely dark, suggesting that the ice has been removed by solar heating and maturation, or is hidden by the soot-like material that covers Borrelly. In July 2005, the Deep Impact probe blasted a crater on Comet Tempel 1 to study its interior. The mission yielded results suggesting that the majority of a comet's water ice is below the surface and that these reservoirs feed the jets of vaporized water that form the coma of Tempel 1. The Deep Impact spacecraft, renamed EPOXI, made a flyby of Comet Hartley 2 on 4 November 2010.
Ulysses. In 2007, the Ulysses probe unexpectedly passed through the tail of the comet C/2006 P1 (McNaught) which was discovered in 2006. Ulysses was launched in 1990 and the intended mission was for Ulysses to orbit around the Sun for further study at all latitudes.
Stardust. Data from the Stardust mission show that materials retrieved from the tail of Wild 2 were crystalline and could only have been "born in fire", at extremely high temperatures of over . Although comets formed in the outer Solar System, radial mixing of material during the early formation of the Solar System is thought to have redistributed material throughout the proto-planetary disk. As a result, comets contain crystalline grains that formed in the early, hot inner Solar System. This is seen in comet spectra as well as in sample return missions. More recently, the materials retrieved demonstrate that the "comet dust resembles asteroid materials". These new results have forced scientists to rethink the nature of comets and their distinction from asteroids.
Rosetta. The Rosetta probe orbited Comet Churyumov–Gerasimenko. On 12 November 2014, its lander Philae successfully landed on the comet's surface, the first time a spacecraft had ever landed on a comet nucleus.
Classification
Great comets
Approximately once a decade, a comet becomes bright enough to be noticed by a casual observer, leading such comets to be designated as great comets. Predicting whether a comet will become a great comet is notoriously difficult, as many factors may cause a comet's brightness to depart drastically from predictions. Broadly speaking, if a comet has a large and active nucleus, will pass close to the Sun, and is not obscured by the Sun as seen from Earth when at its brightest, it has a chance of becoming a great comet. However, Comet Kohoutek in 1973 fulfilled all the criteria and was expected to become spectacular but failed to do so. Comet West, which appeared three years later, had much lower expectations but became an extremely impressive comet.
The Great Comet of 1577 is a well-known example of a great comet. It passed near Earth as a non-periodic comet and was seen by many, including well-known astronomers Tycho Brahe and Taqi ad-Din. Observations of this comet led to several significant findings regarding cometary science, especially for Brahe.
The late 20th century saw a lengthy gap without the appearance of any great comets, followed by the arrival of two in quick succession—Comet Hyakutake in 1996, followed by Hale–Bopp, which reached maximum brightness in 1997 having been discovered two years earlier. The first great comet of the 21st century was C/2006 P1 (McNaught), which became visible to naked eye observers in January 2007. It was the brightest in over 40 years.
Sungrazing comets
A sungrazing comet is a comet that passes extremely close to the Sun at perihelion, generally within a few million kilometers. Although small sungrazers can be completely evaporated during such a close approach to the Sun, larger sungrazers can survive many perihelion passages. However, the strong tidal forces they experience often lead to their fragmentation.
About 90% of the sungrazers observed with SOHO are members of the Kreutz group, which all originate from one giant comet that broke up into many smaller comets during its first passage through the inner Solar System. The remainder contains some sporadic sungrazers, but four other related groups of comets have been identified among them: the Kracht, Kracht 2a, Marsden, and Meyer groups. The Marsden and Kracht groups both appear to be related to Comet 96P/Machholz, which is the parent of two meteor streams, the Quadrantids and the Arietids.
Unusual comets
Of the thousands of known comets, some exhibit unusual properties. Comet Encke (2P/Encke) orbits from outside the asteroid belt to just inside the orbit of the planet Mercury whereas the Comet 29P/Schwassmann–Wachmann currently travels in a nearly circular orbit entirely between the orbits of Jupiter and Saturn. 2060 Chiron, whose unstable orbit is between Saturn and Uranus, was originally classified as an asteroid until a faint coma was noticed. Similarly, Comet Shoemaker–Levy 2 was originally designated asteroid .
Largest
The largest known periodic comet is 95P/Chiron, about 200 km in diameter, which comes to perihelion every 50 years just inside of Saturn's orbit at 8 AU. The largest known Oort cloud comet is suspected to be Comet Bernardinelli–Bernstein, at ≈150 km, which will not come to perihelion until January 2031, just outside of Saturn's orbit at 11 AU. The Comet of 1729 is estimated to have been ≈100 km in diameter and came to perihelion inside of Jupiter's orbit at 4 AU.
Centaurs
Centaurs typically exhibit characteristics of both asteroids and comets. Centaurs can be classified as comets, such as 60558 Echeclus and 166P/NEAT. 166P/NEAT was discovered while it exhibited a coma, and so is classified as a comet despite its orbit, while 60558 Echeclus was discovered without a coma but later became active, and was then classified as both a comet and an asteroid (174P/Echeclus). One plan for Cassini involved sending it to a centaur, but NASA decided to destroy it instead.
Observation
A comet may be discovered photographically using a wide-field telescope or visually with binoculars. However, even without access to optical equipment, it is still possible for the amateur astronomer to discover a sungrazing comet online by downloading images accumulated by some satellite observatories such as SOHO. SOHO's 2000th comet was discovered by Polish amateur astronomer Michał Kusiak on 26 December 2010 and both discoverers of Hale–Bopp used amateur equipment (although Hale was not an amateur).
Lost
A number of periodic comets discovered in earlier decades or previous centuries are now lost comets. Their orbits were never known well enough to predict future appearances or the comets have disintegrated. However, occasionally a "new" comet is discovered, and calculation of its orbit shows it to be an old "lost" comet. An example is Comet 11P/Tempel–Swift–LINEAR, discovered in 1869 but unobservable after 1908 because of perturbations by Jupiter. It was not found again until accidentally rediscovered by LINEAR in 2001. There are at least 18 comets that fit this category.
In popular culture
The depiction of comets in popular culture is firmly rooted in the long Western tradition of seeing comets as harbingers of doom and as omens of world-altering change. Halley's Comet alone has caused a slew of sensationalist publications of all sorts at each of its reappearances. It was especially noted that the birth and death of some notable persons coincided with separate appearances of the comet, such as with writers Mark Twain (who correctly speculated that he'd "go out with the comet" in 1910) and Eudora Welty, to whose life Mary Chapin Carpenter dedicated the song "Halley Came to Jackson".
In times past, bright comets often inspired panic and hysteria in the general population, being thought of as bad omens. More recently, during the passage of Halley's Comet in 1910, Earth passed through the comet's tail, and erroneous newspaper reports inspired a fear that cyanogen in the tail might poison millions, whereas the appearance of Comet Hale–Bopp in 1997 triggered the mass suicide of the Heaven's Gate cult.
In science fiction, the impact of comets has been depicted as a threat overcome by technology and heroism (as in the 1998 films Deep Impact and Armageddon), or as a trigger of global apocalypse (Lucifer's Hammer, 1979) or zombies (Night of the Comet, 1984). In Jules Verne's Off on a Comet a group of people are stranded on a comet orbiting the Sun, while a large crewed space expedition visits Halley's Comet in Sir Arthur C. Clarke's novel 2061: Odyssey Three.
In literature
The long-period comet first recorded by Pons in Florence on 15 July 1825 inspired Lydia Sigourney's humorous poem in which all the celestial bodies argue over the comet's appearance and purpose.
| Physical sciences | Astronomy | null |
5966 | https://en.wikipedia.org/wiki/Compost | Compost | Compost is a mixture of ingredients used as plant fertilizer and to improve soil's physical, chemical, and biological properties. It is commonly prepared by decomposing plant and food waste, recycling organic materials, and manure. The resulting mixture is rich in plant nutrients and beneficial organisms, such as bacteria, protozoa, nematodes, and fungi. Compost improves soil fertility in gardens, landscaping, horticulture, urban agriculture, and organic farming, reducing dependency on commercial chemical fertilizers. The benefits of compost include providing nutrients to crops as fertilizer, acting as a soil conditioner, increasing the humus or humic acid contents of the soil, and introducing beneficial microbes that help to suppress pathogens in the soil and reduce soil-borne diseases.
At the simplest level, composting requires gathering a mix of green waste (nitrogen-rich materials such as leaves, grass, and food scraps) and brown waste (woody materials rich in carbon, such as stalks, paper, and wood chips). The materials break down into humus in a process taking months. Composting can be a multistep, closely monitored process with measured inputs of water, air, and carbon- and nitrogen-rich materials. The decomposition process is aided by shredding the plant matter, adding water, and ensuring proper aeration by regularly turning the mixture in a process using open piles or windrows. Fungi, earthworms, and other detritivores further break up the organic material. Aerobic bacteria and fungi manage the chemical process by converting the inputs into heat, carbon dioxide, and ammonium ions.
Composting is an important part of waste management, since food and other compostable materials make up about 20% of waste in landfills, where anaerobic conditions cause these materials to biodegrade slowly. Composting offers an environmentally superior alternative to sending organic material to landfill because it avoids the methane emissions produced by anaerobic decomposition, and it provides economic and environmental co-benefits. For example, compost can also be used for land and stream reclamation, wetland construction, and landfill cover.
Fundamentals
Composting is an aerobic method of decomposing organic solid wastes, so it can be used to recycle organic material. The process involves decomposing organic material into a humus-like material, known as compost, which is a good fertilizer for plants.
Composting organisms require four equally important ingredients to work effectively:
Carbon is needed for energy; the microbial oxidation of carbon produces the heat required for other parts of the composting process. High carbon materials tend to be brown and dry.
Nitrogen is needed to grow and reproduce more organisms to oxidize the carbon. High nitrogen materials tend to be green and wet. They can also include colourful fruits and vegetables.
Oxygen is required for oxidizing the carbon, the decomposition process. Aerobic bacteria need oxygen levels above 5% to perform the processes needed for composting.
Water is necessary in the right amounts to maintain activity without causing locally anaerobic conditions.
Certain ratios of these materials allow microorganisms to work at a rate that will heat up the compost pile. Active management of the pile (e.g., turning over the compost heap) is needed to maintain sufficient oxygen and the right moisture level. The air/water balance is critical to maintaining high temperatures until the materials are broken down.
Composting is most efficient with a carbon-to-nitrogen ratio of about 25:1. Hot composting focuses on retaining heat to increase the decomposition rate, thus producing compost more quickly. Rapid composting is favored by a carbon-to-nitrogen ratio of about 30:1 or less. Above 30:1, the substrate is nitrogen starved; below 15:1, it is likely to outgas a portion of its nitrogen as ammonia.
Nearly all dead plant and animal materials have both carbon and nitrogen in different amounts. Fresh grass clippings have an average ratio of about 15:1 and dry autumn leaves about 50:1 depending upon species. Composting is an ongoing and dynamic process; adding new sources of carbon and nitrogen consistently, as well as active management, is important.
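As an illustration of how these figures combine, the following minimal Python sketch (not from the source; all ingredient quantities and compositions are illustrative assumptions) estimates the blended C:N ratio of a mix from each ingredient's dry mass, carbon content, and nitrogen content:

```python
# Estimate the blended carbon-to-nitrogen (C:N) ratio of a compost mix.
# The ingredient figures below are illustrative assumptions only.

def blended_cn_ratio(ingredients):
    """ingredients: list of (dry_mass_kg, carbon_fraction, nitrogen_fraction)."""
    total_c = sum(m * c for m, c, n in ingredients)
    total_n = sum(m * n for m, c, n in ingredients)
    return total_c / total_n

# Hypothetical mix: fresh grass clippings (C:N ~15:1) and dry leaves (C:N ~50:1).
mix = [
    (10.0, 0.45, 0.030),  # 10 kg grass clippings, ~45% C, ~3% N -> C:N ~15:1
    (10.0, 0.50, 0.010),  # 10 kg dry leaves,      ~50% C, ~1% N -> C:N ~50:1
]

print(f"Blended C:N ratio ~ {blended_cn_ratio(mix):.0f}:1")  # ~24:1
```

Under these assumed values, roughly equal dry masses of grass clippings and dry leaves land near the 25:1 optimum mentioned above.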
Organisms
Organisms can break down organic matter in compost if provided with the correct mixture of water, oxygen, carbon, and nitrogen. They fall into two broad categories: chemical decomposers, which perform chemical processes on the organic waste, and physical decomposers, which process the waste into smaller pieces through methods such as grinding, tearing, chewing, and digesting.
Chemical decomposers
Bacteria are the most abundant and important of all the microorganisms found in compost. Bacteria process carbon and nitrogen and excrete plant-available nutrients such as nitrogen, phosphorus, and magnesium. Depending on the phase of composting, mesophilic or thermophilic bacteria may be the most prominent.
Mesophilic bacteria get compost to the thermophilic stage through oxidation of organic material. Afterwards they cure it, which makes the fresh compost more bioavailable for plants.
Thermophilic bacteria do not reproduce and are not active between , yet are found throughout soil. They activate once the mesophilic bacteria have begun to break down organic matter and increase the temperature to their optimal range. They have been shown to enter soils via rainwater. They are present so broadly because of many factors, including their spores being resilient. Thermophilic bacteria thrive at higher temperatures, reaching in typical mixes. Large-scale composting operations, such as windrow composting, may exceed this temperature, potentially killing beneficial soil microorganisms but also pasteurizing the waste.
Actinomycetota are needed to break down paper products such as newspaper, bark, etc., and other large molecules such as lignin and cellulose that are more difficult to decompose. The "pleasant, earthy smell of compost" is attributed to Actinomycetota. They make carbon, ammonia, and nitrogen nutrients available to plants.
Fungi such as molds and yeasts help break down materials that bacteria cannot, especially cellulose and lignin in woody material.
Protozoa contribute to biodegradation of organic matter and consume inactive bacteria, fungi, and micro-organic particulates.
Physical decomposers
Ants create nests, making the soil more porous and transporting nutrients to different areas of the compost.
Beetles as grubs feed on decaying vegetables.
Earthworms ingest partly composted material and excrete worm castings, making nitrogen, calcium, phosphorus, and magnesium available to plants. The tunnels they create as they move through the compost also increase aeration and drainage.
Flies feed on almost all organic material and put bacteria into the compost. Their population is kept in check by mites and the thermophilic temperatures that are unsuitable for fly larvae.
Millipedes break down plant material.
Rotifers feed on plant particles.
Snails and slugs feed on living or fresh plant material. They should be removed from compost before use, as they can damage plants and crops.
Sow bugs feed on rotting wood and decaying vegetation.
Springtails feed on fungi, molds, and decomposing plants.
Phases of composting
Under ideal conditions, composting proceeds through three major phases:
Mesophilic phase: The initial, mesophilic phase is when the decomposition is carried out under moderate temperatures by mesophilic microorganisms.
Thermophilic phase: As the temperature rises, a second, thermophilic phase starts, in which various thermophilic bacteria carry out the decomposition under higher temperatures.
Maturation phase: As the supply of high-energy compounds dwindles, the temperature starts to decrease, and the mesophilic bacteria once again predominate in the maturation phase.
Hot and cold composting – impact on timing
The time required to compost material relates to the volume of material, the particle size of the inputs (e.g. wood chips break down faster than branches), and the amount of mixing and aeration. Generally, larger piles reach higher temperatures and remain in a thermophilic stage for days or weeks. This is hot composting and is the usual method for large-scale municipal facilities and agricultural operations.
The Berkeley method produces finished compost in 18 days. It requires assembly of at least of material at the outset and needs turning every two days after an initial four-day phase. Such short processes involve some changes to traditional methods, including smaller, more homogenized particle sizes in the input materials, controlling carbon-to-nitrogen ratio (C:N) at 30:1 or less, and careful monitoring of the moisture level.
Cold composting is a slower process that can take up to a year to complete. It results from smaller piles, including many residential compost piles that receive small amounts of kitchen and garden waste over extended periods. Piles smaller than tend not to reach and maintain high temperatures. Turning is not necessary with cold composting, although a risk exists that parts of the pile may go anaerobic as it becomes compacted or waterlogged.
Pathogen removal
Composting can destroy some pathogens and seeds, by reaching temperatures above .
Dealing with stabilized compost – i.e. composted material in which microorganisms have finished digesting the organic matter and the temperature has reached between – poses very little risk, as these temperatures kill pathogens and even make oocysts unviable. The temperature at which a pathogen dies depends on the pathogen, how long the temperature is maintained (seconds to weeks), and pH.
Compost products such as compost tea and compost extracts have been found to have an inhibitory effect on Fusarium oxysporum, Rhizoctonia species, and Pythium debaryanum, plant pathogens that can cause crop diseases. Aerated compost teas are more effective than compost extracts. The microbiota and enzymes present in compost extracts also have a suppressive effect on fungal plant pathogens. Compost is a good source of biocontrol agents like B. subtilis, B. licheniformis, and P. chrysogenum that fight plant pathogens. Sterilizing the compost, compost tea, or compost extracts reduces the effect of pathogen suppression.
Diseases that can be contracted from handling compost
When turning compost that has not gone through phases where temperatures above are reached, a mouth mask and gloves must be worn to protect from diseases that can be contracted from handling compost, including:
Aspergillosis
Farmer's lung
Histoplasmosis – a fungus that grows in guano and bird droppings
Legionnaires' disease
Paronychia – via infection around the fingernails and toenails
Tetanus – a central nervous system disease
Oocysts are rendered unviable by temperatures over .
Environmental benefits
Compost adds organic matter to the soil and increases the nutrient content and biodiversity of microbes in soil. Composting at home reduces the amount of green waste being hauled to dumps or composting facilities. The reduced volume of materials being picked up by trucks results in fewer trips, which in turn lowers the overall emissions from the waste-management fleet.
Materials that can be composted
Potential sources of compostable materials, or feedstocks, include residential, agricultural, and commercial waste streams. Residential food or yard waste can be composted at home, or collected for inclusion in a large-scale municipal composting facility. In some regions, it could also be included in a local or neighborhood composting project.
Organic solid waste
The two broad categories of organic solid waste are green and brown. Green waste is generally considered a source of nitrogen and includes pre- and post-consumer food waste, grass clippings, garden trimmings, and fresh leaves. Animal carcasses, roadkill, and butcher residue can also be composted, and these are considered nitrogen sources.
Brown waste is a carbon source. Typical examples are dried vegetation and woody material such as fallen leaves, straw, woodchips, limbs, logs, pine needles, sawdust, and wood ash, but not charcoal ash. Products derived from wood such as paper and plain cardboard are also considered carbon sources.
Animal manure and bedding
On many farms, the basic composting ingredients are animal manure generated on the farm as a nitrogen source, and bedding as the carbon source. Straw and sawdust are common bedding materials. Nontraditional bedding materials are also used, including newspaper and chopped cardboard. The amount of manure composted on a livestock farm is often determined by cleaning schedules, land availability, and weather conditions. Each type of manure has its own physical, chemical, and biological characteristics. Cattle and horse manures, when mixed with bedding, possess good qualities for composting. Swine manure, which is very wet and usually not mixed with bedding material, must be mixed with straw or similar raw materials. Poultry manure must be blended with high-carbon, low-nitrogen materials.
Human excreta
Human excreta, sometimes called "humanure" in the composting context, can be added as an input to the composting process since it is a nutrient-rich organic material. Nitrogen, which serves as a building block for important plant amino acids, is found in solid human waste. Phosphorus, which helps plants convert sunlight into energy in the form of ATP, can be found in liquid human waste.
Solid human waste can be collected directly in composting toilets, or indirectly in the form of sewage sludge after it has undergone treatment in a sewage treatment plant. Both processes require capable design, as potential health risks need to be managed. In the case of home composting, a wide range of microorganisms, including bacteria, viruses, and parasitic worms, can be present in feces, and improper processing can pose significant health risks. In the case of large sewage treatment facilities that collect wastewater from a range of residential, commercial and industrial sources, there are additional considerations. The composted sewage sludge, referred to as biosolids, can be contaminated with a variety of metals and pharmaceutical compounds. Insufficient processing of biosolids can also lead to problems when the material is applied to land.
Urine can be put on compost piles or used directly as fertilizer. Adding urine to compost can increase temperatures, and so can increase its ability to destroy pathogens and unwanted seeds. Unlike feces, urine does not attract disease-spreading flies (such as houseflies or blowflies), and it does not contain the most hardy of pathogens, such as parasitic worm eggs.
Animal remains
Animal carcasses may be composted as a disposal option. Such material is rich in nitrogen.
Human bodies
Composting technologies
Industrial-scale composting
In-vessel composting
Aerated static-pile composting
Windrow composting
Other systems at household level
Hügelkultur (raised garden beds or mounds)
The practice of making raised garden beds or mounds filled with rotting wood is also called Hügelkultur in German. It is in effect creating a nurse log that is covered with soil.
Benefits of Hügelkultur garden beds include water retention and warming of soil. Buried wood acts like a sponge as it decomposes, able to capture water and store it for later use by crops planted on top of the bed.
Composting toilets
Related technologies
Vermicompost (also called worm castings, worm humus, worm manure, or worm faeces) is the end product of the breakdown of organic matter by earthworms. These castings have been shown to contain reduced levels of contaminants and a higher saturation of nutrients than the organic materials before vermicomposting.
Black soldier fly (Hermetia illucens) larvae are able to rapidly consume large amounts of organic material and can be used to treat human waste. The resulting compost still contains nutrients and can be used for biogas production, or for further traditional composting or vermicomposting.
Bokashi is a fermentation process rather than a decomposition process, and so retains the feedstock's energy, nutrient and carbon contents. There must be sufficient carbohydrate for fermentation to complete and therefore the process is typically applied to food waste, including noncompostable items. Carbohydrate is transformed into lactic acid, which dissociates naturally to form lactate, a biological energy carrier. The preserved result is therefore readily consumed by soil microbes and from there by the entire soil food web, leading to a significant increase in soil organic carbon and turbation. The process completes in weeks and returns soil acidity to normal.
Co-composting is a technique that processes organic solid waste together with other input materials such as dewatered fecal sludge or sewage sludge.
Anaerobic digestion combined with mechanical sorting of mixed waste streams is increasingly being used in developed countries due to regulations controlling the amount of organic matter allowed in landfills. Treating biodegradable waste before it enters a landfill reduces global warming from fugitive methane; untreated waste breaks down anaerobically in a landfill, producing landfill gas that contains methane, a potent greenhouse gas. The methane produced in an anaerobic digester can be used as biogas.
Uses
Agriculture and gardening
On open ground for growing wheat, corn, soybeans, and similar crops, compost can be broadcast across the top of the soil using spreader trucks or spreaders pulled behind a tractor. It is expected that the spread layer is very thin (approximately ) and worked into the soil prior to planting. Application rates of or more are not unusual when trying to rebuild poor soils or control erosion. Due to the extremely high cost of compost per unit of nutrients in the United States, on-farm use is relatively rare since rates over 4 tons/acre may not be affordable. This results from an over-emphasis on "recycling organic matter" rather than on "sustainable nutrients". In countries such as Germany, where compost distribution and spreading are partially subsidized through the original waste fees, compost is used more frequently on open ground on the premise of nutrient "sustainability".
In plasticulture, strawberries, tomatoes, peppers, melons, and other fruits and vegetables are grown under plastic to control temperature, retain moisture and control weeds. Compost may be banded (applied in strips along rows) and worked into the soil prior to bedding and planting, be applied at the same time the beds are constructed and plastic laid down, or used as a top dressing.
Many crops are not seeded directly in the field but are started in seed trays in a greenhouse. When the seedlings reach a certain stage of growth, they are transplanted in the field. Compost may be part of the mix used to grow the seedlings, but is not normally used as the only planting substrate. The particular crop and the seeds' sensitivity to nutrients, salts, etc. dictate the ratio of the blend, and maturity is important to ensure that oxygen deprivation will not occur and that no lingering phytotoxins remain.
Compost can be added to soil, coir, or peat, as a tilth improver, supplying humus and nutrients. It provides a rich growing medium as absorbent material. This material contains moisture and soluble minerals, which provide support and nutrients. Although it is rarely used alone, plants can flourish from mixed soil that includes a mix of compost with other additives such as sand, grit, bark chips, vermiculite, perlite, or clay granules to produce loam. Compost can be tilled directly into the soil or growing medium to boost the level of organic matter and the overall fertility of the soil. Compost that is ready to be used as an additive is dark brown or even black with an earthy smell.
Generally, direct seeding into a compost is not recommended due to the speed with which it may dry, the possible presence of phytotoxins in immature compost that may inhibit germination, and the possible tie up of nitrogen by incompletely decomposed lignin. It is very common to see blends of 20–30% compost used for transplanting seedlings.
Compost can be used to increase plant immunity to diseases and pests.
Compost tea
Compost tea is made up of extracts of fermented water leached from composted materials. Compost teas can be either aerated or non-aerated, depending on the fermentation process. They are generally produced by adding compost to water in a ratio of 1:4–1:10, occasionally stirring to release microbes.
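For concreteness, here is a tiny Python sketch (not from the source; the batch size is a hypothetical example) of the compost-to-water ratios mentioned above:

```python
# Compute the water needed for a compost tea batch at a given compost:water ratio.
# Quantities are illustrative; the 1:4 to 1:10 range follows the text above.

def water_for_compost_tea(compost_volume_l, water_parts):
    """Litres of water for 1 part compost to `water_parts` parts water, by volume."""
    return compost_volume_l * water_parts

for parts in (4, 10):
    litres = water_for_compost_tea(2.0, parts)  # hypothetical 2 L of compost
    print(f"1:{parts} ratio -> add {litres:.0f} L of water to 2 L of compost")
```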
There is debate about the benefits of aerating the mixture. Non-aerated compost tea is cheaper and less labor-intensive, but there are conflicting studies regarding the risks of phytotoxicity and human pathogen regrowth. Aerated compost tea brews faster and generates more microbes, but has potential for human pathogen regrowth, particularly when one adds additional nutrients to the mixture.
Field studies have shown the benefits of adding compost teas to crops due to organic matter input, increased nutrient availability, and increased microbial activity. They have also been shown to have a suppressive effect on plant pathogens and soil-borne diseases. The efficacy is influenced by a number of factors, such as the preparation process, the type of source compost, the conditions of the brewing process, and the environment of the crops. Adding nutrients to compost tea can be beneficial for disease suppression, although it can trigger the regrowth of human pathogens like E. coli and Salmonella.
Compost extract
Compost extracts are unfermented or non-brewed extracts of leached compost contents dissolved in any solvent.
Commercial sale
Compost is sold as bagged potting mixes in garden centers and other outlets. These may include composted materials such as manure and peat but are also likely to contain loam, fertilizers, sand, grit, etc. Varieties include multi-purpose composts designed for most aspects of planting, John Innes formulations, and grow bags, which are designed to have crops such as tomatoes planted directly into them. There is also a range of specialist composts available, e.g. for vegetables, orchids, houseplants, hanging baskets, roses, ericaceous plants, seedlings, potting on, etc.
Other
Compost can also be used for land and stream reclamation, wetland construction, and landfill cover.
The temperatures generated by compost can be used to heat greenhouses, such as by being placed around the outside edges.
Regulations
There are process and product guidelines in Europe that date to the early 1980s (Germany, the Netherlands, Switzerland) and only more recently in the UK and the US. In both these countries, private trade associations within the industry have established loose standards, some say as a stop-gap measure to discourage independent government agencies from establishing tougher consumer-friendly standards. Compost is regulated in Canada and Australia as well.
EPA Class A and B guidelines in the United States were developed solely to manage the processing and beneficial reuse of sludge, also now called biosolids, following the US EPA ban on ocean dumping. About 26 American states now require composts to be processed according to these federal protocols for pathogen and vector control, even though their application to non-sludge materials has not been scientifically tested. For example, green waste composts are applied at much higher rates than sludge composts were ever anticipated to be applied at. U.K. guidelines also exist regarding compost quality, as do those of Canada, Australia, and various European states.
In the United States, some compost manufacturers participate in a testing program offered by a private lobbying organization called the U.S. Composting Council. The USCC was originally established in 1991 by Procter & Gamble to promote composting of disposable diapers, following state mandates to ban diapers in landfills, which caused a national uproar. Ultimately the idea of composting diapers was abandoned, partly since it was not proven scientifically to be possible, and mostly because the concept was a marketing stunt in the first place. After this, composting emphasis shifted back to recycling organic wastes previously destined for landfills. There are no bona fide quality standards in America, but the USCC sells a seal called "Seal of Testing Assurance" (also called "STA"). For a considerable fee, the applicant may display the USCC logo on products, agreeing to provide customers, on request, a current laboratory analysis that includes parameters such as nutrients, respiration rate, salt content, pH, and a limited set of other indicators.
Many countries such as Wales and some individual cities such as Seattle and San Francisco require food and yard waste to be sorted for composting (San Francisco Mandatory Recycling and Composting Ordinance).
The USA is the only Western country that does not distinguish sludge-source compost from green-composts, and by default 50% of US states expect composts to comply in some manner with the federal EPA 503 rule promulgated in 1984 for sludge products.
There are health risk concerns about PFAS ("forever chemicals") levels in compost derived from sewage sludge-sourced biosolids, and the EPA has not set health risk standards for them. The Sierra Club recommends that home gardeners avoid the use of sewage sludge-based fertilizer and compost, in part due to potentially high levels of PFAS. The EPA PFAS Strategic Roadmap initiative, running from 2021 to 2024, will consider the full lifecycle of PFAS, including health risks of PFAS in wastewater sludge.
History
Composting dates back to at least the early Roman Empire and was mentioned as early as Cato the Elder's 160 BCE piece . Traditionally, composting involved piling organic materials until the next planting season, at which time the materials would have decayed enough to be ready for use in the soil. Methodologies for organic composting were part of traditional agricultural systems around the world.
Composting began to modernize somewhat in the 1920s in Europe as a tool for organic farming. The first industrial station for the transformation of urban organic materials into compost was set up in Wels, Austria, in 1921. Early proponents of composting in farming include Rudolf Steiner, founder of a farming method called biodynamics, and Annie Francé-Harrar, who was appointed on behalf of the Mexican government and from 1950 to 1958 helped the country set up a large humus organization to fight erosion and soil degradation. Sir Albert Howard, who worked extensively in India on sustainable practices, and Lady Eve Balfour were also major proponents of composting. Modern scientific composting was brought to America by the likes of J. I. Rodale, founder of Rodale, Inc. and its Organic Gardening magazine, and others involved in the organic farming movement.
| Technology | Agronomical techniques | null |
5974 | https://en.wikipedia.org/wiki/Corundum | Corundum | Corundum is a crystalline form of aluminium oxide () typically containing traces of iron, titanium, vanadium, and chromium. It is a rock-forming mineral. It is a naturally transparent material, but can have different colors depending on the presence of transition metal impurities in its crystalline structure. Corundum has two primary gem varieties: ruby and sapphire. Rubies are red due to the presence of chromium, and sapphires exhibit a range of colors depending on what transition metal is present. A rare type of sapphire, padparadscha sapphire, is pink-orange.
The name "corundum" is derived from the Tamil-Dravidian word kurundam (ruby-sapphire) (appearing in Sanskrit as kuruvinda).
Because of corundum's hardness (pure corundum is defined to have 9.0 on the Mohs scale), it can scratch almost all other minerals. It is commonly used as an abrasive on sandpaper and on large tools used in machining metals, plastics, and wood. Emery, a variety of corundum with no value as a gemstone, is commonly used as an abrasive. It is a black granular form of corundum, in which the mineral is intimately mixed with magnetite, hematite, or hercynite.
In addition to its hardness, corundum has a density of , which is unusually high for a transparent mineral composed of the low-atomic mass elements aluminium and oxygen.
Geology and occurrence
Corundum occurs as a mineral in mica schist, gneiss, and some marbles in metamorphic terranes. It also occurs in low-silica igneous syenite and nepheline syenite intrusives. Other occurrences are as masses adjacent to ultramafic intrusives, associated with lamprophyre dikes and as large crystals in pegmatites. It commonly occurs as a detrital mineral in stream and beach sands because of its hardness and resistance to weathering. The largest documented single crystal of corundum measured about , and weighed . The record has since been surpassed by certain synthetic boules.
Corundum for abrasives is mined in Zimbabwe, Pakistan, Afghanistan, Russia, Sri Lanka, and India. Historically it was mined from deposits associated with dunites in North Carolina, US, and from a nepheline syenite in Craigmont, Ontario. Emery-grade corundum is found on the Greek island of Naxos and near Peekskill, New York, US. Abrasive corundum is synthetically manufactured from bauxite.
Four corundum axes dating to 2500 BC from the Liangzhu culture and Sanxingcun culture (the latter of which is located in Jintan District) have been discovered in China.
Synthetic corundum
In 1837, Marc Antoine Gaudin made the first synthetic rubies by reacting alumina at a high temperature with a small amount of chromium as a colourant.
In 1847, J. J. Ebelmen made white synthetic sapphires by reacting alumina in boric acid.
In 1877, Frémy and Feil made crystal corundum from which small stones could be cut. Frémy and Auguste Verneuil manufactured artificial ruby by fusing and with a little chromium at temperatures above .
In 1903, Verneuil announced that he could produce synthetic rubies on a commercial scale using this flame fusion process.
The Verneuil process allows the production of flawless single-crystal sapphire and ruby gems of much larger size than normally found in nature. It is also possible to grow gem-quality synthetic corundum by flux-growth and hydrothermal synthesis. Because of the simplicity of the methods involved in corundum synthesis, large quantities of these crystals have become available on the market at a fraction of the cost of natural stones.
Synthetic corundum has a lower environmental impact than natural corundum by avoiding destructive mining and conserving resources. However, its production is energy-intensive, contributing to carbon emissions if fossil fuels are used, and involves chemicals that can pose risks.
Apart from ornamental uses, synthetic corundum is also used to produce mechanical parts (tubes, rods, bearings, and other machined parts), scratch-resistant optics, scratch-resistant watch crystals, instrument windows for satellites and spacecraft (because of its transparency in the ultraviolet to infrared range), and laser components. For example, the KAGRA gravitational wave detector's main mirrors are sapphires, and Advanced LIGO considered sapphire mirrors. Corundum has also found use in the development of ceramic armour thanks to its high hardness.
Structure and physical properties
Corundum crystallizes with trigonal symmetry in the space group and has the lattice parameters and at standard conditions. The unit cell contains six formula units.
The toughness of corundum is sensitive to surface roughness and crystallographic orientation. It may be 6–7 MPa·m^1/2 for synthetic crystals, and around 4 MPa·m^1/2 for natural.
In the lattice of corundum, the oxygen atoms form a slightly distorted hexagonal close packing, in which two-thirds of the octahedral sites between the oxygen ions are occupied by aluminium ions. The absence of aluminium ions from one of the three sites breaks the symmetry of the hexagonal close packing, reducing the space group symmetry to and the crystal class to trigonal. The structure of corundum is sometimes described as a pseudohexagonal structure.
The Young's modulus of corundum (sapphire) has been reported by many different sources with values varying between 300 and 500 GPa, but a commonly cited value used for calculations is 345 GPa. The Young's modulus is temperature dependent, and has been reported in the [0001] direction as 435 GPa at 323 K and 386 GPa at 1,273 K. The shear modulus of corundum is 145 GPa, and the bulk modulus is 240 GPa.
Single crystal corundum fibers have potential applications in high temperature composites, and the Young's modulus is highly dependent on the crystallographic orientation along the fiber axis. The fiber exhibits a max modulus of 461 GPa when the crystallographic c-axis [0001] is aligned with the fiber axis, and minimum moduli ~373 GPa when a direction 45° away from the c-axis is aligned with the fiber axis.
The hardness of corundum measured by indentation at low loads of 1-2 N has been reported as 22-23 GPa in major crystallographic planes: (0001) (basal plane), (100) (rhombohedral plane), (110) (prismatic plane), and (102). The hardness can drop significantly under high indentation loads. The drop with respect to load varies with the crystallographic plane due to the difference in crack resistance and propagation between directions. One extreme case is seen in the (0001) plane, where the hardness under high load (~1 kN) is nearly half the value under low load (1-2 N).
Polycrystalline corundum formed through sintering and treated with a hot isostatic press process can achieve grain sizes in the range of 0.55-0.7 μm, and has been measured to have four-point bending strength between 600 and 700 MPa and three-point bending strength between 750 and 900 MPa.
Structure type
Because of its prevalence, corundum has also become the name of a major structure type (corundum type) found in various binary and ternary compounds.
| Physical sciences | Minerals | Earth science |
5987 | https://en.wikipedia.org/wiki/Coal | Coal | Coal is a combustible black or brownish-black sedimentary rock, formed as rock strata called coal seams. Coal is mostly carbon with variable amounts of other elements, chiefly hydrogen, sulfur, oxygen, and nitrogen.
Coal is a type of fossil fuel, formed when dead plant matter decays into peat which is converted into coal by the heat and pressure of deep burial over millions of years. Vast deposits of coal originate in former wetlands called coal forests that covered much of the Earth's tropical land areas during the late Carboniferous (Pennsylvanian) and Permian times.
Coal is used primarily as a fuel. While coal has been known and used for thousands of years, its usage was limited until the Industrial Revolution. With the invention of the steam engine, coal consumption increased. In 2020, coal supplied about a quarter of the world's primary energy and over a third of its electricity. Some iron and steel-making and other industrial processes burn coal.
The extraction and burning of coal damages the environment, causing premature death and illness, and it is the largest anthropogenic source of carbon dioxide contributing to climate change. Fourteen billion tonnes of carbon dioxide were emitted by burning coal in 2020, which is 40% of total fossil fuel emissions and over 25% of total global greenhouse gas emissions. As part of worldwide energy transition, many countries have reduced or eliminated their use of coal power. The United Nations Secretary General asked governments to stop building new coal plants by 2020.
Global coal use was 8.3 billion tonnes in 2022, and is set to remain at record levels in 2023. To meet the Paris Agreement target of keeping global warming below coal use needs to halve from 2020 to 2030, and "phasing down" coal was agreed upon in the Glasgow Climate Pact.
The largest consumer and importer of coal in 2020 was China, which accounts for almost half the world's annual coal production, followed by India with about a tenth. Indonesia and Australia export the most, followed by Russia.
Etymology
The word originally took the form col in Old English, from reconstructed Proto-Germanic *kula(n), from Proto-Indo-European root *g(e)u-lo- "live coal". Germanic cognates include the Old Frisian , Middle Dutch , Dutch , Old High German , German and Old Norse . Irish is also a cognate via the Indo-European root.
Formation of coal
The conversion of dead vegetation into coal is called coalification. At various times in the geologic past, the Earth had dense forests in low-lying areas. In these wetlands, the process of coalification began when dead plant matter was protected from oxidation, usually by mud or acidic water, and was converted into peat. The resulting peat bogs, which trapped immense amounts of carbon, were eventually deeply buried by sediments. Then, over millions of years, the heat and pressure of deep burial caused the loss of water, methane and carbon dioxide and increased the proportion of carbon. The grade of coal produced depended on the maximum pressure and temperature reached, with lignite (also called "brown coal") produced under relatively mild conditions, and sub-bituminous coal, bituminous coal, or anthracite coal (also called "hard coal" or "black coal") produced in turn with increasing temperature and pressure.
Of the factors involved in coalification, temperature is much more important than either pressure or time of burial. Subbituminous coal can form at temperatures as low as while anthracite requires a temperature of at least .
Although coal is known from most geologic periods, 90% of all coal beds were deposited in the Carboniferous and Permian periods. Paradoxically, this was during the Late Paleozoic icehouse, a time of global glaciation. However, the drop in global sea level accompanying the glaciation exposed continental shelves that had previously been submerged, and to these were added wide river deltas produced by increased erosion due to the drop in base level. These widespread areas of wetlands provided ideal conditions for coal formation. The rapid formation of coal ended with the coal gap in the Permian–Triassic extinction event, where coal is rare.
Favorable geography alone does not explain the extensive Carboniferous coal beds. Other factors contributing to rapid coal deposition were high oxygen levels, above 30%, that promoted intense wildfires and formation of charcoal that was all but indigestible by decomposing organisms; high carbon dioxide levels that promoted plant growth; and the nature of Carboniferous forests, which included lycophyte trees whose determinate growth meant that carbon was not tied up in heartwood of living trees for long periods.
One theory suggested that about 360 million years ago, some plants evolved the ability to produce lignin, a complex polymer that made their cellulose stems much harder and more woody. The ability to produce lignin led to the evolution of the first trees. But bacteria and fungi did not immediately evolve the ability to decompose lignin, so the wood did not fully decay but became buried under sediment, eventually turning into coal. About 300 million years ago, mushrooms and other fungi developed this ability, ending the main coal-formation period of earth's history. Although some authors pointed at some evidence of lignin degradation during the Carboniferous, and suggested that climatic and tectonic factors were a more plausible explanation, reconstruction of ancestral enzymes by phylogenetic analysis corroborated a hypothesis that lignin degrading enzymes appeared in fungi approximately 200 MYa.
One likely tectonic factor was the Central Pangean Mountains, an enormous range running along the equator that reached its greatest elevation near this time. Climate modeling suggests that the Central Pangean Mountains contributed to the deposition of vast quantities of coal in the late Carboniferous. The mountains created an area of year-round heavy precipitation, with no dry season typical of a monsoon climate. This is necessary for the preservation of peat in coal swamps.
Coal is known from Precambrian strata, which predate land plants. This coal is presumed to have originated from residues of algae.
Sometimes coal seams (also known as coal beds) are interbedded with other sediments in a cyclothem. Cyclothems are thought to have their origin in glacial cycles that produced fluctuations in sea level, which alternately exposed and then flooded large areas of continental shelf.
Chemistry of coalification
The woody tissue of plants is composed mainly of cellulose, hemicellulose, and lignin. Modern peat is mostly lignin, with a content of cellulose and hemicellulose ranging from 5% to 40%. Various other organic compounds, such as waxes and nitrogen- and sulfur-containing compounds, are also present. Lignin has a weight composition of about 54% carbon, 6% hydrogen, and 30% oxygen, while cellulose has a weight composition of about 44% carbon, 6% hydrogen, and 49% oxygen. Bituminous coal has a composition of about 84.4% carbon, 5.4% hydrogen, 6.7% oxygen, 1.7% nitrogen, and 1.8% sulfur, on a weight basis. The low oxygen content of coal shows that coalification removed most of the oxygen and much of the hydrogen, a process called carbonization.
Carbonization proceeds primarily by dehydration, decarboxylation, and demethanation. Dehydration removes water molecules from the maturing coal via reactions such as
2 R–OH → R–O–R + H2O
Decarboxylation removes carbon dioxide from the maturing coal:
RCOOH → RH + CO2
while demethanation proceeds by reaction such as
2 R-CH3 → R-CH2-R + CH4
R-CH2-CH2-CH2-R → R-CH=CH-R + CH4
In these formulas, R represents the remainder of a cellulose or lignin molecule to which the reacting groups are attached.
Dehydration and decarboxylation take place early in coalification, while demethanation begins only after the coal has already reached bituminous rank. The effect of decarboxylation is to reduce the percentage of oxygen, while demethanation reduces the percentage of hydrogen. Dehydration does both, and (together with demethanation) reduces the saturation of the carbon backbone (increasing the number of carbon–carbon double bonds).
As carbonization proceeds, aliphatic compounds convert to aromatic compounds. Similarly, aromatic rings fuse into polyaromatic compounds (linked rings of carbon atoms). The structure increasingly resembles graphene, the structural element of graphite.
Chemical changes are accompanied by physical changes, such as decrease in average pore size.
Macerals
Macerals are coalified plant parts that retain the morphology and some properties of the original plant. In many coals, individual macerals can be identified visually. Some macerals include:
vitrinite, derived from woody parts
liptinite, derived from spores and algae
inertinite, derived from woody parts that had been burnt in prehistoric times
huminite, a precursor to vitrinite.
In coalification huminite is replaced by vitreous (shiny) vitrinite. Maturation of bituminous coal is characterized by bitumenization, in which part of the coal is converted to bitumen, a hydrocarbon-rich gel. Maturation to anthracite is characterized by debitumenization (from demethanation) and the increasing tendency of the anthracite to break with a conchoidal fracture, similar to the way thick glass breaks.
Types
As geological processes apply pressure to dead biotic material over time, under suitable conditions, its metamorphic grade or rank increases successively into:
Peat, a precursor of coal
Lignite, or brown coal, the lowest rank of coal, most harmful to health when burned, used almost exclusively as fuel for electric power generation
Sub-bituminous coal, whose properties range between those of lignite and those of bituminous coal, is used primarily as fuel for steam-electric power generation.
Bituminous coal, a dense sedimentary rock, usually black, but sometimes dark brown, often with well-defined bands of bright and dull material. It is used primarily as fuel in steam-electric power generation and to make coke. Known as steam coal in the UK, and historically used to raise steam in steam locomotives and ships
Anthracite coal, the highest rank of coal, is a harder, glossy black coal used primarily for residential and commercial space heating.
Graphite, a difficult-to-ignite coal which is mostly used in pencils or, in powdered form, for lubrication.
Cannel coal (sometimes called "candle coal"), a variety of fine-grained, high-rank coal with significant hydrogen content, which consists primarily of liptinite. It is related to boghead coal.
There are several international standards for coal. The classification of coal is generally based on the content of volatiles. However the most important distinction is between thermal coal (also known as steam coal), which is burnt to generate electricity via steam; and metallurgical coal (also known as coking coal), which is burnt at high temperature to make steel.
Hilt's law is a geological observation that (within a small area) the deeper the coal is found, the higher its rank (or grade). It applies if the thermal gradient is entirely vertical; however, metamorphism may cause lateral changes of rank, irrespective of depth. For example, some of the coal seams of the Madrid, New Mexico coal field were partially converted to anthracite by contact metamorphism from an igneous sill while the remainder of the seams remained as bituminous coal.
History
The earliest recognized use is from the Shenyang area of China where by 4000 BC Neolithic inhabitants had begun carving ornaments from black lignite. Coal from the Fushun mine in northeastern China was used to smelt copper as early as 1000 BC. Marco Polo, the Italian who traveled to China in the 13th century, described coal as "black stones ... which burn like logs", and said coal was so plentiful, people could take three hot baths a week. In Europe, the earliest reference to the use of coal as fuel is from the geological treatise On Stones (Lap. 16) by the Greek scientist Theophrastus (c. 371–287 BC).
Outcrop coal was used in Britain during the Bronze Age (3000–2000 BC), where it formed part of funeral pyres. In Roman Britain, with the exception of two modern fields, "the Romans were exploiting coals in all the major coalfields in England and Wales by the end of the second century AD". Evidence of trade in coal, dated to about AD 200, has been found at the Roman settlement at Heronbridge, near Chester; and in the Fenlands of East Anglia, where coal from the Midlands was transported via the Car Dyke for use in drying grain. Coal cinders have been found in the hearths of villas and Roman forts, particularly in Northumberland, dated to around AD 400. In the west of England, contemporary writers described the wonder of a permanent brazier of coal on the altar of Minerva at Aquae Sulis (modern day Bath), although in fact easily accessible surface coal from what became the Somerset coalfield was in common use in quite lowly dwellings locally. Evidence of coal's use for iron-working in the city during the Roman period has been found. In Eschweiler, Rhineland, deposits of bituminous coal were used by the Romans for the smelting of iron ore.
No evidence exists of coal being of great importance in Britain before about AD 1000, the High Middle Ages. Coal came to be referred to as "seacoal" in the 13th century; the wharf where the material arrived in London was known as Seacoal Lane, so identified in a charter of King Henry III granted in 1253. Initially, the name was given because much coal was found on the shore, having fallen from the exposed coal seams on cliffs above or washed out of underwater coal outcrops, but by the time of Henry VIII, it was understood to derive from the way it was carried to London by sea. In 1257–1259, coal from Newcastle upon Tyne was shipped to London for the smiths and lime-burners building Westminster Abbey. Seacoal Lane and Newcastle Lane, where coal was unloaded at wharves along the River Fleet, still exist.
These easily accessible sources had largely become exhausted (or could not meet the growing demand) by the 13th century, when underground extraction by shaft mining or adits was developed. The alternative name was "pitcoal", because it came from mines.
Cooking and home heating with coal (in addition to firewood or instead of it) has been done in various times and places throughout human history, especially in times and places where ground-surface coal was available and firewood was scarce, but a widespread reliance on coal for home hearths probably never existed until such a switch in fuels happened in London in the late sixteenth and early seventeenth centuries. Historian Ruth Goodman has traced the socioeconomic effects of that switch and its later spread throughout Britain and suggested that its importance in shaping the industrial adoption of coal has been previously underappreciated.
The development of the Industrial Revolution led to the large-scale use of coal, as the steam engine took over from the water wheel. In 1700, five-sixths of the world's coal was mined in Britain. Britain would have run out of suitable sites for watermills by the 1830s if coal had not been available as a source of energy. In 1947 there were some 750,000 miners in Britain, but the last deep coal mine in the UK closed in 2015.
A grade between bituminous coal and anthracite was once known as "steam coal" as it was widely used as a fuel for steam locomotives. In this specialized use, it is sometimes known as "sea coal" in the United States. Small "steam coal", also called dry small steam nuts (DSSN), was used as a fuel for domestic water heating.
Coal played an important role in industry in the 19th and 20th century. The predecessor of the European Union, the European Coal and Steel Community, was based on the trading of this commodity.
Coal continues to arrive on beaches around the world from both natural erosion of exposed coal seams and windswept spills from cargo ships. Many homes in such areas gather this coal as a significant, and sometimes primary, source of home heating fuel.
Composition
Coal consists mainly of a black mixture of diverse organic compounds and polymers. Several kinds of coal exist, with variable dark colors and variable compositions. Young coals (brown coal, lignite) are not black. The two main black coals are bituminous coal, which is more abundant, and anthracite. The percentage of carbon in coal follows the order anthracite > bituminous > lignite > brown coal, and the fuel value of coal varies in the same order. Some anthracite deposits contain pure carbon in the form of graphite.
For bituminous coal, the elemental composition on a dry, ash-free basis is about 84.4% carbon, 5.4% hydrogen, 6.7% oxygen, 1.7% nitrogen, and 1.8% sulfur, on a weight basis. This composition partly reflects the composition of the precursor plants. The second main fraction of coal is ash, an undesirable, noncombustible mixture of inorganic minerals. The composition of ash is often discussed in terms of the oxides obtained after combustion in air.
Of particular interest is the sulfur content of coal, which can vary from less than 1% to as much as 4%. Most of the sulfur and most of the nitrogen are incorporated into the organic fraction in the form of organosulfur and organonitrogen compounds. This sulfur and nitrogen are strongly bound within the hydrocarbon matrix and are released as SO2 and NOx upon combustion; they cannot be removed beforehand, at least not economically. Some coals contain inorganic sulfur, mainly in the form of iron pyrite (FeS2). Being a dense mineral, pyrite can be removed from coal by mechanical means, e.g. by froth flotation. Some sulfate occurs in coal, especially in weathered samples; it is not volatilized and can be removed by washing.
Minor components include trace elements such as mercury (Hg), arsenic (As), and selenium (Se).
As minerals, Hg, As, and Se are not problematic to the environment, especially since they are only trace components. However, they become mobile (volatile or water-soluble) when the coal is combusted.
Uses
Most coal is used as fuel. 27.6% of world energy was supplied by coal in 2017 and Asia used almost three-quarters of it. Other large-scale applications also exist. The energy density of coal is roughly 24 megajoules per kilogram (approximately 6.7 kilowatt-hours per kg). For a coal power plant with a 40% efficiency, it takes an estimated of coal to power a 100 W lightbulb for one year.
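As a rough, illustrative check using only the figures given here (24 MJ/kg and 40% conversion efficiency) together with the 8,760 hours in a year, the coal required is on the order of 330 kg:
100 W × 8,760 h ≈ 876 kWh ≈ 3,150 MJ of electricity per year
3,150 MJ ÷ 0.40 ≈ 7,900 MJ of coal energy required
7,900 MJ ÷ 24 MJ/kg ≈ 330 kg of coal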
Electricity generation
In 2022, 68% of global coal use was for electricity generation.
Coal burnt in coal power stations to generate electricity is called thermal coal. It is usually pulverized and then burned in a furnace with a boiler. The furnace heat converts boiler water to steam, which is then used to spin turbines which turn generators and create electricity. The thermodynamic efficiency of this process varies between about 25% and 50% depending on the pre-combustion treatment, turbine technology (e.g. supercritical steam generator) and the age of the plant.
A few integrated gasification combined cycle (IGCC) power plants have been built, which burn coal more efficiently. Instead of pulverizing the coal and burning it directly as fuel in the steam-generating boiler, the coal is gasified to create syngas, which is burned in a gas turbine to produce electricity (just like natural gas is burned in a turbine). Hot exhaust gases from the turbine are used to raise steam in a heat recovery steam generator which powers a supplemental steam turbine. The overall plant efficiency when used to provide combined heat and power can reach as much as 94%. IGCC power plants emit less local pollution than conventional pulverized coal-fueled plants. Other ways to use coal are as coal-water slurry fuel (CWS), which was developed in the Soviet Union, or in an MHD topping cycle. However these are not widely used due to lack of profit.
In 2017, 38% of the world's electricity came from coal, the same percentage as 30 years previously. In 2018, global installed capacity was 2 TW (of which 1 TW was in China), which was 30% of total electricity generation capacity. The most dependent major country is South Africa, with over 80% of its electricity generated by coal; but China alone generates more than half of the world's coal-generated electricity. Efforts around the world to reduce the use of coal have led some regions to switch to natural gas and renewable energy. In 2018, coal-fired power stations' capacity factor averaged 51%, that is, they operated for about half their available operating hours.
Coke
Coke is a solid carbonaceous residue that is used in manufacturing steel and other iron-containing products. Coke is made when metallurgical coal (also known as coking coal) is baked in an oven without oxygen at temperatures as high as 1,000 °C, driving off the volatile constituents and fusing together the fixed carbon and residual ash. Metallurgical coke is used as a fuel and as a reducing agent in smelting iron ore in a blast furnace. The carbon monoxide produced by its combustion reduces hematite (an iron oxide) to iron.
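In simplified overall form, this reduction of hematite by carbon monoxide can be summarized (omitting intermediate steps) as:
Fe2O3 + 3 CO → 2 Fe + 3 CO2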
The resulting pig iron is too rich in dissolved carbon and must be processed further to make steel.
The coke must be strong enough to resist the weight of overburden in the blast furnace, which is why coking coal is so important in making steel using the conventional route. Coke from coal is grey, hard, and porous and has a heating value of 29.6 MJ/kg. Some coke-making processes produce byproducts, including coal tar, ammonia, light oils, and coal gas.
Petroleum coke (petcoke) is the solid residue obtained in oil refining, which resembles coke but contains too many impurities to be useful in metallurgical applications.
Production of chemicals
Chemicals have been produced from coal since the 1950s. Coal can be used as a feedstock in the production of a wide range of chemical fertilizers and other chemical products. The main route to these products was coal gasification to produce syngas. Primary chemicals that are produced directly from the syngas include methanol, hydrogen, and carbon monoxide, which are the chemical building blocks from which a whole spectrum of derivative chemicals are manufactured, including olefins, acetic acid, formaldehyde, ammonia, urea, and others. The versatility of syngas as a precursor to primary chemicals and high-value derivative products provides the option of using coal to produce a wide range of commodities. In the 21st century, however, the use of coal bed methane is becoming more important.
Because the slate of chemical products that can be made via coal gasification can in general also use feedstocks derived from natural gas and petroleum, the chemical industry tends to use whatever feedstocks are most cost-effective. Therefore, interest in using coal tended to increase for higher oil and natural gas prices and during periods of high global economic growth that might have strained oil and gas production.
Coal-to-chemicals processes require substantial quantities of water. Much coal-to-chemicals production takes place in China, where coal-dependent provinces such as Shanxi struggle to control the resulting pollution.
Liquefaction
Coal can be converted directly into synthetic fuels equivalent to gasoline or diesel by hydrogenation or carbonization. Coal liquefaction emits more carbon dioxide than liquid fuel production from crude oil. Mixing in biomass and using carbon capture and storage (CCS) would emit slightly less than the oil process but at a high cost. State owned China Energy Investment runs a coal liquefaction plant and plans to build 2 more.
Coal liquefaction may also refer to the cargo hazard when shipping coal.
Gasification
Coal gasification, as part of an integrated gasification combined cycle (IGCC) coal-fired power station, is used to produce syngas, a mixture of carbon monoxide (CO) and hydrogen (H2) gas to fire gas turbines to produce electricity. Syngas can also be converted into transportation fuels, such as gasoline and diesel, through the Fischer–Tropsch process; alternatively, syngas can be converted into methanol, which can be blended into fuel directly or converted to gasoline via the methanol to gasoline process. Gasification combined with Fischer–Tropsch technology was used by the Sasol chemical company of South Africa to make chemicals and motor vehicle fuels from coal.
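In general form, the Fischer–Tropsch synthesis of alkanes from syngas can be summarized by the overall reaction:
n CO + (2n+1) H2 → CnH2n+2 + n H2O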
During gasification, the coal is mixed with oxygen and steam while also being heated and pressurized. During the reaction, oxygen and water molecules oxidize the coal into carbon monoxide (CO), while also releasing hydrogen gas (H2). This used to be done in underground coal mines, and also to make town gas, which was piped to customers to burn for illumination, heating, and cooking.
3C (as Coal) + O2 + H2O → H2 + 3CO
If the refiner wants to produce gasoline, the syngas is routed into a Fischer–Tropsch reaction. This is known as indirect coal liquefaction. If hydrogen is the desired end-product, however, the syngas is fed into the water gas shift reaction, where more hydrogen is liberated:
CO + H2O → CO2 + H2
Coal industry
Mining
About 8,000 Mt of coal are produced annually, about 90% of which is hard coal and 10% lignite. Just over half is from underground mines. The coal mining industry employs almost 2.7 million workers. More accidents occur during underground mining than surface mining. Not all countries publish mining accident statistics so worldwide figures are uncertain, but it is thought that most deaths occur in coal mining accidents in China: in 2017 there were 375 coal mining-related deaths in China. Most coal mined is thermal coal (also called steam coal as it is used to make steam to generate electricity) but metallurgical coal (also called "metcoal" or "coking coal" as it is used to make coke to make iron) accounts for 10% to 15% of global coal use.
As a traded commodity
China mines almost half the world's coal, followed by India with about a tenth. At 471 Mt and a 34% share of global exports, Indonesia was the largest exporter by volume in 2022, followed by Australia with 344 Mt and Russia with 224 Mt. Other major exporters of coal are the United States, South Africa, Colombia, and Canada. In 2022, China, India, and Japan were the biggest importers of coal, importing 301, 228, and 184 Mt respectively. Russia is increasingly orienting its coal exports from Europe to Asia as Europe transitions to renewable energy and subjects Russia to sanctions over its invasion of Ukraine.
The price of metallurgical coal is volatile and much higher than the price of thermal coal because metallurgical coal must be lower in sulfur and requires more cleaning. Coal futures contracts provide coal producers and the electric power industry an important tool for hedging and risk management.
In some countries, new onshore wind or solar generation already costs less than coal power from existing plants.
However, for China this is forecast for the early 2020s and for southeast Asia not until the late 2020s. In India, building new plants is uneconomic and, despite being subsidized, existing plants are losing market share to renewables.
In many countries in the Global North, there is a move away from the use of coal and former mine sites are being used as a tourist attraction.
Market trends
In 2022, China used 4520 Mt of coal, comprising more than half of global coal consumption. India, the European Union, and the United States, were the next largest consumers of coal, using 1162, 461, and 455 Mt respectively. Over the past decade, China has almost always accounted for the lion's share of the global growth in coal demand. Therefore, international market trends depend on Chinese energy policy.
Although the government effort to reduce air pollution in China means that the global long-term trend is to burn less coal, the short and medium term trends may differ, in part due to Chinese financing of new coal-fired power plants in other countries.
Preliminary analysis by International Energy Agency (IEA) indicates that global coal exports reached an all-time high in 2023. Through to 2026, the IEA expects global coal trade to decline by about 12%, driven by growing domestic production in coal-intensive economies such as China and India and coal phase-out plans elsewhere, such as in Europe. While thermal coal exports are expected to decline by about 16% by 2026, exports of metallurgical coal are expected to slightly increase by almost 2%.
Damage to human health
The use of coal as fuel causes health problems and deaths. The mining and processing of coal causes air and water pollution. Coal-powered plants emit nitrogen oxides, sulfur dioxide, particulate pollution, and heavy metals, which adversely affect human health. Coal bed methane extraction is important to avoid mining accidents.
The deadly London smog was caused primarily by the heavy use of coal. Globally coal is estimated to cause 800,000 premature deaths every year, mostly in India and China.
Burning coal is a major contributor to sulfur dioxide emissions, which creates PM2.5 particulates, the most dangerous form of air pollution.
Coal smokestack emissions cause asthma, strokes, reduced intelligence, artery blockages, heart attacks, congestive heart failure, cardiac arrhythmias, mercury poisoning, arterial occlusion, and lung cancer.
Annual health costs in Europe from use of coal to generate electricity are estimated at up to €43 billion.
In China, early deaths due to air pollution from coal plants have been estimated at 200 per GW-year; however, the figure may be higher around power plants where scrubbers are not used, or lower if plants are far from cities. Improvements to China's air quality and human health would grow with more stringent climate policies, mainly because the country's energy is so heavily reliant on coal, and there would be a net economic benefit.
A 2017 study in the Economic Journal found that for Britain during the period 1851–1860, "a one standard deviation increase in coal use raised infant mortality by 6–8% and that industrial coal use explains roughly one-third of the urban mortality penalty observed during this period."
Breathing in coal dust causes coalworker's pneumoconiosis or "black lung", so called because the coal dust literally turns the lungs black. In the US alone, it is estimated that 1,500 former employees of the coal industry die every year from the effects of breathing in coal mine dust.
Use of coal generates hundreds of millions of tons of ash and other waste products every year. These include fly ash, bottom ash, and flue-gas desulfurization sludge, which contain mercury, uranium, thorium, arsenic, and other heavy metals, along with non-metals such as selenium.
Around 10% of coal is ash. Coal ash is hazardous and toxic to human beings and some other living things. Coal ash contains the radioactive elements uranium and thorium. Coal ash and other solid combustion byproducts are stored locally and escape in various ways that expose those living near coal plants to radiation and environmental toxics.
Damage to the environment
Coal mining, coal combustion wastes, and flue gas are causing major environmental damage.
Water systems are affected by coal mining. For example, the mining of coal affects groundwater and water table levels and acidity. Spills of fly ash, such as the Kingston Fossil Plant coal fly ash slurry spill, can also contaminate land and waterways, and destroy homes. Power stations that burn coal also consume large quantities of water. This can affect the flows of rivers, and has consequential impacts on other land uses. In areas of water scarcity, such as the Thar Desert in Pakistan, coal mining and coal power plants contribute to the depletion of water resources.
One of the earliest known impacts of coal on the water cycle was acid rain. In 2014, approximately 100 Tg/S of sulfur dioxide (SO2) was released, over half of which was from burning coal. After release, the sulfur dioxide is oxidized to H2SO4 which scatters solar radiation, hence its increase in the atmosphere exerts a cooling effect on the climate. This beneficially masks some of the warming caused by increased greenhouse gases. However, the sulfur is precipitated out of the atmosphere as acid rain in a matter of weeks, whereas carbon dioxide remains in the atmosphere for hundreds of years. Release of SO2 also contributes to the widespread acidification of ecosystems.
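In simplified net form, the atmospheric oxidation of sulfur dioxide to sulfuric acid can be written as:
2 SO2 + O2 + 2 H2O → 2 H2SO4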
Disused coal mines can also cause issues. Subsidence can occur above tunnels, causing damage to infrastructure or cropland. Coal mining can also cause long lasting fires, and it has been estimated that thousands of coal seam fires are burning at any given time. For example, Brennender Berg has been burning since 1668, and is still burning in the 21st century.
The production of coke from coal produces ammonia, coal tar, and gaseous compounds as byproducts which if discharged to land, air or waterways can pollute the environment. The Whyalla steelworks is one example of a coke producing facility where liquid ammonia was discharged to the marine environment.
Climate change
The largest and most long-term effect of coal use is the release of carbon dioxide, a greenhouse gas that causes climate change. Coal-fired power plants were the single largest contributor to the growth in global CO2 emissions in 2018, 40% of the total fossil fuel emissions, and more than a quarter of total emissions. Coal mining can emit methane, another greenhouse gas.
In 2016 world gross carbon dioxide emissions from coal usage were 14.5 gigatonnes. For every megawatt-hour generated, coal-fired electric power generation emits around a tonne of carbon dioxide, which is double the approximately 500 kg of carbon dioxide released by a natural gas-fired electric plant. The emission intensity of coal varies with type and generator technology and exceeds 1200 g per kWh in some countries. In 2013, the head of the UN climate agency advised that most of the world's coal reserves should be left in the ground to avoid catastrophic global warming. To keep global warming below 1.5 °C or 2 °C hundreds, or possibly thousands, of coal-fired power plants will need to be retired early.
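Converting the per-unit figures above to a common basis:
1 tonne CO2 per MWh = 1,000 kg per 1,000 kWh ≈ 1,000 g CO2 per kWh (typical coal generation)
500 kg CO2 per MWh ≈ 500 g CO2 per kWh (typical gas-fired generation)
so emission intensities above 1,200 g per kWh are roughly 20% higher than the typical coal figure.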
Underground fires
Thousands of coal fires are burning around the world. Those burning underground can be difficult to locate and many cannot be extinguished. Fires can cause the ground above to subside, their combustion gases are dangerous to life, and breaking out to the surface can initiate surface wildfires. Coal seams can be set on fire by spontaneous combustion or contact with a mine fire or surface fire. Lightning strikes are an important source of ignition. The coal continues to burn slowly back into the seam until oxygen (air) can no longer reach the flame front. A grass fire in a coal area can set dozens of coal seams on fire. Coal fires in China burn an estimated 120 million tons of coal a year, emitting 360 million metric tons of CO2, amounting to 2–3% of the annual worldwide production of CO2 from fossil fuels.
Pollution mitigation and carbon capture
Systems and technologies exist to mitigate the health and environmental impact of burning coal for energy.
Precombustion treatment
Refined coal is the product of a coal-upgrading technology that removes moisture and certain pollutants from lower-rank coals such as sub-bituminous and lignite (brown) coals. It is one form of several precombustion treatments and processes for coal that alter coal's characteristics before it is burned. Thermal efficiency improvements are achievable by improved pre-drying (especially relevant with high-moisture fuel such as lignite or biomass). The goals of precombustion coal technologies are to increase efficiency and reduce emissions when the coal is burned. Precombustion technology can sometimes be used as a supplement to postcombustion technologies to control emissions from coal-fueled boilers.
Post combustion approaches
Post combustion approaches to mitigate pollution include flue-gas desulfurization, selective catalytic reduction, electrostatic precipitators, and fly ash reduction.
Carbon capture and storage
Carbon capture and storage (CCS) can be used to capture carbon dioxide from the flue gas of coal power plants and bury it securely in an underground reservoir. Between 1972 and 2017, plans were made to add CCS to enough coal and gas power plants to sequester 161 million tonnes of CO2 per year, but by 2021 98% of these plans had failed. Cost, the absence of measures to address long-term liability for stored CO2, and limited social acceptability have all contributed to project cancellations. As of 2024, CCS is in operation at only four coal power plants and one gas power plant worldwide.
"Clean coal" and "abated coal"
Since the mid-1980s, the term "clean coal" has been widely used with various meanings. Initially, "clean coal technology" referred to scrubbers and catalytic converters that reduced the pollutants that cause acid rain. The scope then expanded to include reduction of other pollutants such as mercury. Recently, the term has come to encompass the use of CCS to reduce greenhouse gas emissions (GHG). In political discourse, the phrase "clean coal" is sometimes used to suggest that coal itself can be clean. This suggestion is false: Technologies to mitigate emissions are implemented in the plants where coal is processed and burned, but coal as a product is intrinsically dirty.
In discussions on greenhouse gas emissions, another common term is "abatement" of coal use. At the 2023 United Nations Climate Change Conference, an agreement was reached to phase down unabated coal use. Since the term "abated" was not defined, the agreement was criticized for being open to abuse. Without a clear definition, it is possible for fossil fuel use to be called "abated" even if it uses CCS only in a minimal fashion, such as capturing only 30% of the emissions from a plant.
The IPCC considers fossil fuels to be unabated if they are "produced and used without interventions that substantially reduce the amount of GHG emitted throughout the life-cycle; for example, capturing 90% or more from power plants."
Economics
In 2018, investment in coal supply went almost entirely to sustaining production levels rather than opening new mines.
In the long term coal and oil could cost the world trillions of dollars per year. Coal alone may cost Australia billions, whereas costs to some smaller companies or cities could be on the scale of millions of dollars. The economies most damaged by coal (via climate change) may be India and the US as they are the countries with the highest social cost of carbon. Bank loans to finance coal are a risk to the Indian economy.
China is the largest producer of coal in the world. It is the world's largest energy consumer, and coal in China supplies 60% of its primary energy. However two fifths of China's coal power stations are estimated to be loss-making.
Air pollution from coal storage and handling costs the US almost 200 dollars for every extra ton stored, due to PM2.5. Coal pollution costs the each year. Measures to cut air pollution benefit individuals financially and the economies of countries such as China.
Subsidies
Subsidies for coal in 2021 have been estimated at , not including electricity subsidies, and are expected to rise in 2022. G20 countries provide at least of government support per year for the production of coal, including coal-fired power: many subsidies are impossible to quantify but they include in domestic and international public finance, in fiscal support, and in state-owned enterprise (SOE) investments per year. In the EU state aid to new coal-fired plants is banned from 2020, and to existing coal-fired plants from 2025. As of 2018, government funding for new coal power plants was supplied by Exim Bank of China, the Japan Bank for International Cooperation and Indian public sector banks. Coal in Kazakhstan was the main recipient of coal consumption subsidies totalling US$2 billion in 2017. Coal in Turkey benefited from substantial subsidies in 2021.
Stranded assets
Some coal-fired power stations could become stranded assets, for example China Energy Investment, the world's largest power company, risks losing half its capital. However, state-owned electricity utilities such as Eskom in South Africa, Perusahaan Listrik Negara in Indonesia, Sarawak Energy in Malaysia, Taipower in Taiwan, EGAT in Thailand, Vietnam Electricity and EÜAŞ in Turkey are building or planning new plants. As of 2021 this may be helping to cause a carbon bubble which could cause financial instability if it bursts.
Politics
Countries building or financing new coal-fired power stations, such as China, India, Indonesia, Vietnam, Turkey and Bangladesh, face mounting international criticism for obstructing the aims of the Paris Agreement. In 2019, the Pacific Island nations (in particular Vanuatu and Fiji) criticized Australia for not cutting its emissions at a faster rate, citing concerns about coastal inundation and erosion. In May 2021, the G7 members agreed to end new direct government support for international coal power generation.
Cultural usage
Coal is the official state mineral of Kentucky, and the official state rock of Utah and West Virginia. These US states have a historic link to coal mining.
Some cultures hold that children who misbehave will receive only a lump of coal from Santa Claus for Christmas in their stockings instead of presents.
It is also customary and considered lucky in Scotland to give coal as a gift on New Year's Day. This occurs as part of first-footing and represents warmth for the year to come.
| Technology | Energy | null |
5992 | https://en.wikipedia.org/wiki/Traditional%20Chinese%20medicine | Traditional Chinese medicine | Traditional Chinese medicine (TCM) is an alternative medical practice drawn from traditional medicine in China. A large share of its claims are pseudoscientific, with the majority of treatments having no robust evidence of effectiveness or logical mechanism of action.
Medicine in traditional China encompassed a range of sometimes competing health and healing practices, folk beliefs, literati theory and Confucian philosophy, herbal remedies, food, diet, exercise, medical specializations, and schools of thought. TCM as it exists today has been described as a largely 20th century invention. In the early twentieth century, Chinese cultural and political modernizers worked to eliminate traditional practices as backward and unscientific. Traditional practitioners then selected elements of philosophy and practice and organized them into what they called "Chinese medicine" (Chinese: 中医 Zhongyi). In the 1950s, the Chinese government sought to revive traditional medicine (including legalizing previously banned practices) and sponsored the integration of TCM and Western medicine, and in the Cultural Revolution of the 1960s, promoted TCM as inexpensive and popular. The creation of modern TCM was largely spearheaded by Mao Zedong, despite the fact that, according to The Private Life of Chairman Mao, he did not believe in its effectiveness. After the opening of relations between the United States and China after 1972, there was great interest in the West for what is now called traditional Chinese medicine (TCM).
TCM is said to be based on such texts as Huangdi Neijing (The Inner Canon of the Yellow Emperor), and Compendium of Materia Medica, a sixteenth-century encyclopedic work, and includes various forms of herbal medicine, acupuncture, cupping therapy, gua sha, massage (tui na), bonesetting (die-da), exercise (qigong), and dietary therapy. TCM is widely used in the Sinosphere. One of the basic tenets is that the body's qi is circulating through channels called meridians having branches connected to bodily organs and functions. There is no evidence that meridians or vital energy exist. Concepts of the body and of disease used in TCM reflect its ancient origins and its emphasis on dynamic processes over material structure, similar to the humoral theory of ancient Greece and ancient Rome.
The demand for traditional medicines in China is a major generator of illegal wildlife smuggling, linked to the killing and smuggling of endangered animals. The Chinese authorities have engaged in attempts to crack down on illegal TCM-related wildlife smuggling.
Ancient history
Scholars in the history of medicine in China distinguish its doctrines and practice from those of present-day TCM. J. A. Jewell and S. M. Hillier state that the term "Traditional Chinese Medicine" became an established term due to the work of Dr. Kan-Wen Ma, a Western-trained medical doctor who was persecuted during the Cultural Revolution and immigrated to Britain, joining the University of London's Wellcome Institute for the History of Medicine. Ian Johnson says, on the other hand, that the English-language term "traditional Chinese medicine" was coined by "party propagandists" in 1955.
Nathan Sivin criticizes attempts to treat medicine and medical practices in traditional China as if they were a single system. Instead, he says, there were 2,000 years of "medical system in turmoil" and speaks of a "myth of an unchanging medical tradition". He urges that "Traditional medicine translated purely into terms of modern medicine becomes partly nonsensical, partly irrelevant, and partly mistaken; that is also true the other way around, a point easily overlooked." TJ Hinrichs observes that people in modern Western societies divide healing practices into biomedicine for the body, psychology for the mind, and religion for the spirit, but these distinctions are inadequate to describe medical concepts among Chinese historically and to a considerable degree today.
The medical anthropologist Charles Leslie writes that Chinese, Greco-Arabic, and Indian traditional medicines were all grounded in systems of correspondence that aligned the organization of society, the universe, and the human body and other forms of life into an "all-embracing order of things". Each of these traditional systems was organized with such qualities as heat and cold, wet and dry, light and darkness, qualities that also align the seasons, compass directions, and the human cycle of birth, growth, and death. They provided, Leslie continued, a "comprehensive way of conceiving patterns that ran through all of nature," and they "served as a classificatory and mnemonic device to observe health problems and to reflect upon, store, and recover empirical knowledge," but they were also "subject to stultifying theoretical elaboration, self-deception, and dogmatism."
The doctrines of Chinese medicine are rooted in books such as the Yellow Emperor's Inner Canon and the Treatise on Cold Damage, as well as in cosmological notions such as yin–yang and the five phases. The documentation of Chinese materia medica (CMM) dates back to around 1,100 BCE, when only a few dozen drugs were described. By the end of the 16th century, the number of drugs documented had reached close to 1,900, and by the end of the last century, published records of CMM had reached 12,800 drugs. Starting in the 1950s, these precepts were standardized in the People's Republic of China, including attempts to integrate them with modern notions of anatomy and pathology. In the 1950s, the Chinese government promoted a systematized form of TCM.
Shang dynasty
Traces of therapeutic activities in China date from the Shang dynasty (14th–11th centuries BCE). Though the Shang did not have a concept of "medicine" as distinct from other health practices, their oracular inscriptions on bones and tortoise shells refer to illnesses that affected the Shang royal family: eye disorders, toothaches, bloated abdomen, and such. Shang elites usually attributed them to curses sent by their ancestors. There is currently no evidence that the Shang nobility used herbal remedies.
Stone and bone needles found in ancient tombs led Joseph Needham to speculate that acupuncture might have been carried out in the Shang dynasty. This being said, most historians now make a distinction between medical lancing (or bloodletting) and acupuncture in the narrower sense of using metal needles to attempt to treat illnesses by stimulating points along circulation channels ("meridians") in accordance with beliefs related to the circulation of "Qi". The earliest evidence for acupuncture in this sense dates to the second or first century BCE.
Han dynasty
The Yellow Emperor's Inner Canon (Huangdi Neijing), the oldest received work of Chinese medical theory, was compiled during the Han dynasty around the first century BCE on the basis of shorter texts from different medical lineages. Written in the form of dialogues between the legendary Yellow Emperor and his ministers, it offers explanations on the relation between humans, their environment, and the cosmos, on the contents of the body, on human vitality and pathology, on the symptoms of illness, and on how to make diagnostic and therapeutic decisions in light of all these factors. Unlike earlier texts like Recipes for Fifty-Two Ailments, which was excavated in the 1970s from the Mawangdui tomb that had been sealed in 168 BCE, the Inner Canon rejected the influence of spirits and the use of magic. It was also one of the first books in which the cosmological doctrines of Yinyang and the Five Phases were brought to a mature synthesis.
The Treatise on Cold Damage Disorders and Miscellaneous Illnesses (Shang Han Lun) was collated by Zhang Zhongjing sometime between 196 and 220 CE; at the end of the Han dynasty. Focusing on drug prescriptions rather than acupuncture, it was the first medical work to combine Yinyang and the Five Phases with drug therapy. This formulary was also the earliest public Chinese medical text to group symptoms into clinically useful "patterns" (zheng ) that could serve as targets for therapy. Having gone through numerous changes over time, the formulary now circulates as two distinct books: the Treatise on Cold Damage Disorders and the Essential Prescriptions of the Golden Casket, which were edited separately in the eleventh century, under the Song dynasty.
The Nanjing, or "Classic of Difficult Issues", originally called "The Yellow Emperor's Eighty-One Nan Jing", is ascribed to Bian Que of the Eastern Han dynasty. The book was compiled in the form of question-and-answer explanations; a total of 81 questions are discussed, and it is therefore also called the "Eighty-One Nan". The book is based on basic theory and also analyzes some disease patterns. Questions one to twenty-two concern pulse study, questions twenty-three to twenty-nine meridian study, questions thirty to forty-seven urgent illnesses, questions forty-eight to sixty-one serious diseases, questions sixty-two to sixty-eight acupuncture points, and questions sixty-nine to eighty-one needling methods.
The book is credited as developing its own path, while also inheriting the theories from Huangdi Neijing. The content includes physiology, pathology, diagnosis, treatment contents, and a more essential and specific discussion of pulse diagnosis. It has become one of the four classics for Chinese medicine practitioners to learn from and has impacted the medical development in China.
Shennong Ben Cao Jing is one of the earliest written medical books in China. Written during the Eastern Han dynasty between 200 and 250 CE, it was the combined effort of practitioners in the Qin and Han dynasties who summarized, collected and compiled the results of the pharmacological experience of their time. It was the first systematic summary of Chinese herbal medicine. Most of its pharmacological theories and compatibility rules, as well as the proposed "seven emotions and harmony" principle, have played a role in the practice of medicine for thousands of years, and it has served as a textbook for medical workers in modern China. The full text of Shennong Ben Cao Jing in English can be found online.
Post-Han dynasty
In the centuries that followed, several shorter books tried to summarize or systematize the contents of the Yellow Emperor's Inner Canon. The Canon of Problems (probably second century CE) tried to reconcile divergent doctrines from the Inner Canon and developed a complete medical system centered on needling therapy. The AB Canon of Acupuncture and Moxibustion (Zhenjiu jiayi jing , compiled by Huangfu Mi sometime between 256 and 282 CE) assembled a consistent body of doctrines concerning acupuncture; whereas the Canon of the Pulse (Maijing ; c. 280) presented itself as a "comprehensive handbook of diagnostics and therapy."
Around 900–1000 AD, the Chinese were the first to develop a form of vaccination, known as variolation or inoculation, to prevent smallpox. Chinese physicians had realised that when healthy people were exposed to smallpox scab tissue, they had a smaller chance of being infected by the disease later on. The common method of inoculation at the time was to crush smallpox scabs into powder and breathe it in through the nose.
Prominent medical scholars of the post-Han period included Tao Hongjing (456–536), Sun Simiao of the Sui and Tang dynasties, Zhang Jiegu (–1234), and Li Shizhen (1518–1593).
Modern history
Chinese communities under colonial rule
Chinese communities living in colonial port cities were influenced by the diverse cultures they encountered, which also led to evolving understandings of medical practices in which Chinese forms of medicine were combined with Western medical knowledge. For example, the Tung Wah Hospital was established in Hong Kong in 1869 amid widespread preference for pre-existing medical practices over Western medicine, although Western medicine would still be practiced in the hospital alongside Chinese medicinal practices. The Tung Wah Hospital was likely connected to another Chinese medical institution, the Kwong Wai Shiu Hospital of Singapore, which had earlier community links to Tung Wah, was established for similar reasons, and also provided both Western and Chinese medical care. By 1935, English-language newspapers in colonial Singapore were already using the term "Traditional Chinese Medicine" to label Chinese ethnic medical practices.
People's Republic
In 1950, Chinese Communist Party (CCP) chairman Mao Zedong announced support of traditional Chinese medicine; this was despite the fact that Mao did not personally believe in and did not use TCM, according to his personal physician Li Zhisui. In 1952, the president of the Chinese Medical Association said that, "This One Medicine, will possess a basis in modern natural sciences, will have absorbed the ancient and the new, the Chinese and the foreign, all medical achievements – and will be China's New Medicine!"
During the Cultural Revolution (1966–1976) the CCP and the government emphasized modernity, cultural identity, and China's social and economic reconstruction, contrasting them with the colonial and feudal past. The government established a grassroots health care system as a step in the search for a new national identity, and made large investments in traditional medicine in an effort to revitalize it and to develop affordable medical care and public health facilities. The Ministry of Health directed health care throughout China and established primary care units. Chinese physicians trained in Western medicine were required to learn traditional medicine, while traditional healers received training in modern methods. This strategy aimed to integrate modern medical concepts and methods and to revitalize appropriate aspects of traditional medicine. In this way, traditional Chinese medicine was re-created in response to Western medicine.
In 1968, the CCP supported a new system of health care delivery for rural areas. Each village was assigned a barefoot doctor (a medical worker with basic medical skills and knowledge to deal with minor illnesses) responsible for basic medical care. These workers combined the values of traditional China with modern methods to provide health and medical care to poor farmers in remote rural areas. The barefoot doctors became a symbol of the Cultural Revolution for introducing modern medicine into villages where traditional Chinese medicine services were used.
The State Intellectual Property Office (now known as CNIPA) established a database of patents granted for traditional Chinese medicine.
In the second decade of the twenty-first century, Chinese Communist Party general secretary Xi Jinping strongly supported TCM, calling it a "gem". As of May 2011, in order to promote TCM worldwide, China had signed TCM partnership agreements with over 70 countries. Xi's government pushed to increase the use of TCM and the number of TCM-trained doctors, and announced that students of TCM would no longer be required to pass examinations in Western medicine. Chinese scientists and researchers, however, expressed concern that TCM training and therapies would receive support equal to that of Western medicine. They also criticized a reduction in government testing and regulation of the production of TCMs, some of which were toxic. Government censors have removed Internet posts that question TCM. In 2020 Beijing drafted a local regulation outlawing criticism of TCM; according to Caixin, the regulation was later passed with the provision outlawing criticism removed.
Hong Kong
At the beginning of Hong Kong's opening up, Western medicine was not yet popular and Western-trained doctors were mostly foreigners; local residents mostly relied on Chinese medicine practitioners. In 1841, the British government of Hong Kong issued an announcement pledging to govern Hong Kong residents in accordance with all their original rituals, customs and private legal property rights. As traditional Chinese medicine had always been used in China, its use was not regulated.
The establishment of the Tung Wah Hospital in 1870 marked the first use of Chinese medicine in a Chinese hospital providing free medical services. After the British government began promoting Western medicine in 1940, it became increasingly popular among the Hong Kong population. In 1959, Hong Kong researched the use of traditional Chinese medicine to replace Western medicine.
Historiography of Chinese medicine
Historians have noted two key aspects of Chinese medical history: understanding the conceptual differences involved in translating the classical Chinese term for the body, and observing the history from the perspective of cosmology rather than biology.
In Chinese classical texts, this term is the closest historical equivalent to the English word "body", because it sometimes refers to the physical human body in terms of being weighed or measured, but it is to be understood as an "ensemble of functions" encompassing both the human psyche and emotions. This concept of the human body is opposed to the European duality of a separate mind and body. It is critical for scholars to understand these fundamental differences in concepts of the body in order to connect the medical theory of the classics to the "human organism" it is explaining.
Chinese scholars established a correlation between the cosmos and the "human organism". The basic components of cosmology, qi, yin yang and the Five Phase theory, were used to explain health and disease in texts such as Huangdi neijing. Yin and yang are the changing factors in cosmology, with qi as the vital force or energy of life. The Five Phase theory (Wuxing) of the Han dynasty contains the elements wood, fire, earth, metal, and water. By understanding medicine from a cosmology perspective, historians better understand Chinese medical and social classifications such as gender, which was defined by a domination or remission of yang in terms of yin.
These two distinctions are imperative when analyzing the history of traditional Chinese medical science.
A majority of Chinese medical history written after the classical canons comes in the form of primary source case studies, in which academic physicians record the illness of a particular person and the healing techniques used, as well as their effectiveness. Historians have noted that Chinese scholars wrote these studies instead of "books of prescriptions or advice manuals"; in their historical and environmental understanding, no two illnesses were alike, so the healing strategies of the practitioner were unique every time to the specific diagnosis of the patient. Medical case studies existed throughout Chinese history, but "individually authored and published case history" was a prominent creation of the Ming dynasty. An example of such case studies is the literati physician Cheng Congzhou's collection of 93 cases published in 1644.
Critique
Historians of science have developed the study of medicine in traditional China into a field with its own scholarly associations, journals, graduate programs, and debates with each other. Many distinguish "medicine in traditional China" from the recent traditional Chinese medicine (TCM), which took elements from traditional texts and practices to construct a systematized body of practice. Paul Unschuld, for instance, sees a "departure of TCM from its historical origins"; in his view, what is called "Traditional Chinese Medicine" and practiced today in China and the West is not thousands of years old, but was recently constructed using selected traditional terms, some of which have been taken out of context, some badly misunderstood. He has criticized Chinese and Western popular books for selective use of evidence, choosing only those works or parts of historical works that seem to lead to modern medicine, and ignoring those elements that do not now seem to be effective.
Critics say that TCM theory and practice have no basis in modern science, and TCM practitioners do not agree on what diagnosis and treatments should be used for any given person. A 2007 editorial in the journal Nature wrote that TCM "remains poorly researched and supported, and most of its treatments have no logical mechanism of action." It also described TCM as "fraught with pseudoscience". A review of the literature in 2008 found that scientists are "still unable to find a shred of evidence" according to standards of science-based medicine for traditional Chinese concepts such as qi, meridians, and acupuncture points, and that the traditional principles of acupuncture are deeply flawed. "Acupuncture points and meridians are not a reality", the review continued, but "merely the product of an ancient Chinese philosophy". In June 2019, the World Health Organization included traditional Chinese medicine in a global diagnostic compendium, but a spokesman said this was "not an endorsement of the scientific validity of any Traditional Medicine practice or the efficacy of any Traditional Medicine intervention."
A 2012 review of cost-effectiveness research for TCM found that studies had low levels of evidence, with no beneficial outcomes. Pharmaceutical research on the potential for creating new drugs from traditional remedies has produced few successful results. Proponents suggest that research has so far missed key features of the art of TCM, such as unknown interactions between various ingredients and complex interactive biological systems. One of the basic tenets of TCM is that the body's qi (sometimes translated as vital energy) circulates through channels called meridians, which have branches connected to bodily organs and functions. The concept of vital energy is pseudoscientific. Concepts of the body and of disease used in TCM reflect its ancient origins and its emphasis on dynamic processes over material structure, similar to classical humoral theory.
TCM has also been controversial within China. In 2006, the Chinese philosopher Zhang Gongyao triggered a national debate with an article entitled "Farewell to Traditional Chinese Medicine", arguing that TCM was a pseudoscience that should be abolished in public healthcare and academia. The Chinese government took the stance that TCM is a science and continued to encourage its development.
There are concerns over a number of potentially toxic plants, animal parts, and minerals used in Chinese medicinal compounds, as well as over the facilitation of disease. Trafficked and farm-raised animals used in TCM are a source of several fatal zoonotic diseases. There are additional concerns over the illegal trade and transport of endangered species, including rhinoceroses and tigers, and over the welfare of specially farmed animals, including bears.
Philosophical background
Traditional Chinese medicine (TCM) is a broad range of medicine practices sharing common concepts which have been developed in China and are based on a tradition of more than 2,000 years, including various forms of herbal medicine, acupuncture, massage (), exercise (), and dietary therapy. It is primarily used as a complementary alternative medicine approach. TCM is widely used in China and it is also used in the West. Its philosophy is based on Yinyangism (i.e., the combination of Five Phases theory with Yin–Yang theory), which was later absorbed by Daoism. Philosophical texts influenced TCM, mostly by being grounded in the same theories of qi, yin-yang and wuxing and microcosm-macrocosm analogies.
Yin and yang
Yin and yang are ancient Chinese deductive reasoning concepts used within Chinese medical diagnosis which can be traced back to the Shang dynasty (1600–1100 BCE). They represent two abstract and complementary aspects that every phenomenon in the universe can be divided into. Primordial analogies for these aspects are the sun-facing (yang) and the shady (yin) side of a hill. Two other commonly used representational allegories of yin and yang are water and fire. In the yin–yang theory, detailed attributions are made regarding the yin or yang character of things.
The concept of yin and yang is also applicable to the human body; for example, the upper part of the body and the back are assigned to yang, while the lower part of the body is believed to have the yin character. Yin and yang characterization also extends to the various body functions, and – more importantly – to disease symptoms (e.g., cold and heat sensations are assumed to be yin and yang symptoms, respectively). Thus, yin and yang of the body are seen as phenomena whose lack (or over-abundance) comes with characteristic symptom combinations:
Yin vacuity (also termed "vacuity-heat"): heat sensations, possible sweating at night, insomnia, dry pharynx, dry mouth, dark urine, and a "fine" and rapid pulse.
Yang vacuity ("vacuity-cold"): aversion to cold, cold limbs, bright white complexion, long voidings of clear urine, diarrhea, pale and enlarged tongue, and a slightly weak, slow and fine pulse.
TCM also identifies drugs believed to treat these specific symptom combinations, i.e., to reinforce yin and yang.
Strict rules are held to govern the relationships between the Five Phases in terms of sequence, of acting on each other, of counteraction, and so on. All these aspects of Five Phases theory constitute the basis of the zàng-fǔ concept, and thus have great influence on the TCM model of the body. Five Phase theory is also applied in diagnosis and therapy.
Correspondences between the body and the universe have historically been seen not only in terms of the Five Elements, but also of the "Great Numbers" (). For example, the number of acu-points has at times been seen to be 365, corresponding with the number of days in a year, and the number of main meridians (twelve) has been seen as corresponding with the number of rivers flowing through the ancient Chinese empire.
Model of the body
TCM "holds that the body's vital energy (chi or qi) circulates through channels, called meridians, that have branches connected to bodily organs and functions." Its view of the human body is only marginally concerned with anatomical structures, but focuses primarily on the body's functions (such as digestion, breathing, temperature maintenance, etc.):
These functions are aggregated and then associated with a primary functional entity – for instance, nourishment of the tissues and maintenance of their moisture are seen as connected functions, and the entity postulated to be responsible for these functions is xiě (blood). These functional entities thus constitute concepts rather than something with biochemical or anatomical properties.
The primary functional entities used by traditional Chinese medicine are qì, xuě, the five zàng organs, the six fǔ organs, and the meridians which extend through the organ systems. These are all theoretically interconnected: each zàng organ is paired with a fǔ organ, which are nourished by the blood and concentrate qi for a particular function, with meridians being extensions of those functional systems throughout the body.
Concepts of the body and of disease used in TCM are pseudoscientific, similar to Mediterranean humoral theory. TCM's model of the body is characterized as full of pseudoscience. Some practitioners no longer consider yin and yang and the idea of an energy flow to apply. Scientific investigation has not found any histological or physiological evidence for traditional Chinese concepts such as qi, meridians, and acupuncture points. It is a generally held belief within the acupuncture community that acupuncture points and meridian structures are special conduits for electrical signals, but no research has established any consistent anatomical structure or function for either acupuncture points or meridians. The scientific evidence for the anatomical existence of either meridians or acupuncture points is not compelling. Stephen Barrett of Quackwatch writes that, "TCM theory and practice are not based upon the body of knowledge related to health, disease, and health care that has been widely accepted by the scientific community. TCM practitioners disagree among themselves about how to diagnose patients and which treatments should go with which diagnoses. Even if they could agree, the TCM theories are so nebulous that no amount of scientific study will enable TCM to offer rational care."
Qi
Qi is a polysemous word; traditional Chinese medicine holds that it can transform into many different qualities of qi (). In a general sense, qi is something that is defined by five "cardinal functions":
Actuation () – of all physical processes in the body, especially the circulation of all body fluids such as blood in their vessels. This includes actuation of the functions of the zang-fu organs and meridians.
Warming () – the body, especially the limbs.
Defense () – against exogenous pathogenic factors.
Containment () – of body fluids, i.e., keeping blood, sweat, urine, semen, etc. from leakage or excessive emission.
Transformation () – of food, drink, and breath into qi, xue (blood), and jinye ("fluids"), and/or transformation of all of the latter into each other.
A lack of qi will be characterized especially by pale complexion, lassitude of spirit, lack of strength, spontaneous sweating, laziness to speak, non-digestion of food, shortness of breath (especially on exertion), and a pale and enlarged tongue.
Qi is believed to be partially generated from food and drink, and partially from air (by breathing). Another considerable part of it is inherited from the parents and will be consumed in the course of life.
TCM uses special terms for qi running inside of the blood vessels and for qi that is distributed in the skin, muscles, and tissues between them. The former is called yingqi (); its function is to complement xuè and its nature has a strong yin aspect (although qi in general is considered to be yang). The latter is called weiqi (); its main function is defence and it has pronounced yang nature.
Qi is said to circulate in the meridians. Like the qi held by each of the zang-fu organs, the qi circulating in the meridians is considered to be part of the "principal" qi of the body.
Xie
In contrast to the majority of other functional entities, xuě or xiě (, "blood") is correlated with a physical form – the red liquid running in the blood vessels. Its concept is, nevertheless, defined by its functions: nourishing all parts and tissues of the body, safeguarding an adequate degree of moisture, and sustaining and soothing both consciousness and sleep.
Typical symptoms of a lack of xuě (usually termed "blood vacuity" []) are described as: pale-white or withered-yellow complexion, dizziness, flowery vision, palpitations, insomnia, numbness of the extremities; pale tongue; "fine" pulse.
Jinye
Closely related to xuě are the jinye (, usually translated as "body fluids"), and just like xuě they are considered to be yin in nature, and defined first and foremost by the functions of nurturing and moisturizing the different structures of the body. Their other functions are to harmonize yin and yang, and to help with the secretion of waste products.
Jinye are ultimately extracted from food and drink, and constitute the raw material for the production of xuě; conversely, xuě can also be transformed into jinye. Their palpable manifestations are all bodily fluids: tears, sputum, saliva, gastric acid, joint fluid, sweat, urine, etc.
Zangfu
The zangfu () are the collective name of eleven entities (similar to organs) that constitute the centrepiece of TCM's systematization of bodily functions. The term zang refers to the five entities considered to be yin in nature – Heart, Liver, Spleen, Lung, Kidney – while fu refers to the six associated with yang – Small Intestine, Large Intestine, Gallbladder, Urinary Bladder, Stomach and San Jiao. Despite having the names of organs, they are only loosely tied to (rudimentary) anatomical assumptions; instead, they are primarily understood to be certain "functions" of the body. To highlight the fact that they are not equivalent to anatomical organs, their names are usually capitalized.
The zàng's essential functions consist in the production and storage of qi and xuě; they are said to regulate digestion, breathing, water metabolism, the musculoskeletal system, the skin, the sense organs, aging, emotional processes, and mental activity, among other structures and processes. The fǔ organs' main purpose is merely to transmit and digest () substances such as waste and food.
Since their concept was developed on the basis of Wǔ Xíng philosophy, each zàng is paired with a fǔ, and each zàng-fǔ pair is assigned to one of five elemental qualities (i.e., the Five Elements or Five Phases). These correspondences are stipulated as:
Fire () = Heart () and Small Intestine () (and, secondarily, Sānjiaō [, "Triple Burner"] and Pericardium [])
Earth () = Spleen () and Stomach ()
Metal () = Lung () and Large Intestine ()
Water () = Kidney () and Bladder ()
Wood () = Liver () and Gallbladder ()
The zàng-fǔ are also connected to the twelve standard meridians – each yang meridian is attached to a fǔ organ, and five of the yin meridians are attached to a zàng. As there are only five zàng but six yin meridians, the sixth is assigned to the Pericardium, a peculiar entity almost similar to the Heart zàng.
Jing-luo
The meridians (, ) are believed to be channels running from the zàng-fǔ in the interior (, ) of the body to the limbs and joints ("the surface" [, ]), transporting qi and xuĕ. TCM identifies 12 "regular" and 8 "extraordinary" meridians; the Chinese terms are (, lit. "the Twelve Vessels") and () respectively. There are also a number of less customary channels branching from the "regular" meridians.
Gender in traditional medicine
Fuke () is the traditional Chinese term for women's medicine (corresponding to gynecology and obstetrics in modern medicine). There are, however, few ancient works on the subject apart from Fu Qingzhu's Fu Qingzhu Nu Ke (Fu Qingzhu's Gynecology). In traditional China, as in many other cultures, the health and medicine of female bodies was less understood than that of male bodies. Women's bodies were often considered secondary to male bodies, since women were thought of as the weaker, sicklier sex.
In clinical encounters, women and men were treated differently, and diagnosing women was not as simple as diagnosing men. First, when a woman fell ill, an appropriate adult man was to call the doctor and remain present during the examination, for the woman could not be left alone with the doctor; the physician would discuss the woman's problems and diagnosis only through the man. However, in certain cases, when a woman dealt with complications of pregnancy or birth, older women assumed the role of the formal authority, and men in these situations would not have much power to interfere. Second, women were often silent about their issues with doctors due to the societal expectation of female modesty when a male figure was in the room. Third, patriarchal society also caused doctors to record women and children patients as "the anonymous category of family members (Jia Ren) or household (Ju Jia)" in their journals. This anonymity and lack of conversation between the doctor and the woman patient made inquiry, one of the Four Diagnostic Methods, the most challenging. Doctors used a medical doll known as a Doctor's lady, on which female patients could indicate the location of their symptoms.
Cheng Maoxian (b. 1581), who practiced medicine in Yangzhou, described the difficulties doctors had with the norm of female modesty. One of his case studies was that of Fan Jisuo's teenage daughter, who could not be diagnosed because she was unwilling to speak about her symptoms, since the illness involved discharge from her intimate areas. As Cheng describes, there were four standard methods of diagnosis – looking, asking, listening and smelling, and touching (for pulse-taking). To maintain some form of modesty, women would often stay hidden behind curtains and screens. The doctor was allowed to touch enough of her body to complete his examination, often just for the pulse-taking. This would lead to situations where the symptoms and the doctor's diagnosis did not agree, and the doctor would have to ask to view more of the patient.
These social and cultural beliefs were often barriers to learning more about female health, with women themselves often being the most formidable barrier. Women were often uncomfortable talking about their illnesses, especially in front of the male chaperones that attended medical examinations. Women would choose to omit certain symptoms as a means of upholding their chastity and honor. One such example is the case in which a teenage girl was unable to be diagnosed because she failed to mention her symptom of vaginal discharge. Silence was their way of maintaining control in these situations, but it often came at the expense of their health and the advancement of female health and medicine. This silence and control were most obviously seen when the health problem was related to the core of Ming fuke, or the sexual body. It was often in these diagnostic settings that women would choose silence. In addition, there would be a conflict between patient and doctor on the probability of her diagnosis. For example, a woman who thought herself to be past the point of child-bearing age, might not believe a doctor who diagnoses her as pregnant. This only resulted in more conflict.
Yin yang and gender
Yin and yang were critical to the understanding of women's bodies, but understood only in conjunction with male bodies. Yin and yang ruled the body, the body being a microcosm of the universe and the earth. In addition, gender in the body was understood as homologous, the two genders operating in synchronization. Gender was presumed to influence the movement of energy and a well-trained physician would be expected to read the pulse and be able to identify two dozen or more energy flows. Yin and yang concepts were applied to the feminine and masculine aspects of all bodies, implying that the differences between men and women begin at the level of this energy flow. According to Bequeathed Writings of Master Chu the male's yang pulse movement follows an ascending path in "compliance [with cosmic direction] so that the cycle of circulation in the body and the Vital Gate are felt...The female's yin pulse movement follows a defending path against the direction of cosmic influences, so that the nadir and the Gate of Life are felt at the inch position of the left hand". In sum, classical medicine marked yin and yang as high and low on bodies which in turn would be labeled normal or abnormal and gendered either male or female.
Bodily functions could be categorized through systems, not organs. In many drawings and diagrams, the twelve channels and their visceral systems were organized by yin and yang, an organization that was identical in female and male bodies. Female and male bodies were no different on the plane of yin and yang. Their gendered differences were not acknowledged in diagrams of the human body. Medical texts such as the Yuzuan yizong jinjian were filled with illustrations of male bodies or androgynous bodies that did not display gendered characteristics.
As in other cultures, fertility and menstruation dominate female health concerns. Since male and female bodies were governed by the same forces, traditional Chinese medicine did not recognize the womb as the place of reproduction. The abdominal cavity presented pathologies that were similar in both men and women, which included tumors, growths, hernias, and swellings of the genitals. The "master system", as Charlotte Furth calls it, is the kidney visceral system, which governed reproductive functions. Therefore, it was not the anatomical structures that allowed for pregnancy, but the difference in processes that allowed for the condition of pregnancy to occur.
Pregnancy
Traditional Chinese medicine's dealings with pregnancy are documented from at least the seventeenth century. According to Charlotte Furth, "a pregnancy (in the seventeenth century) as a known bodily experience emerged [...] out of the liminality of menstrual irregularity, as uneasy digestion, and a sense of fullness". These symptoms were common among other illnesses as well, so the diagnosis of pregnancy often came late in the term. The Canon of the Pulse, which described the use of pulse in diagnosis, stated that pregnancy was "a condition marked by symptoms of the disorder in one whose pulse is normal" or "where the pulse and symptoms do not agree". Women were often silent about suspected pregnancy, which meant that many men did not know that their wife or daughter was pregnant until complications arose. Complications arising from misdiagnosis and from a woman's reluctance to speak often led to medically induced abortions. Cheng, Furth wrote, "was unapologetic about endangering a fetus when pregnancy risked a mother's well being". The method of abortion was the ingestion of certain herbs and foods. Disappointment at the loss of the fetus often led to family discord.
Postpartum
If the baby and mother survived the term of the pregnancy, childbirth was the next step. The tools provided for birth were: towels to catch the blood, a container for the placenta, a pregnancy sash to support the belly, and an infant swaddling wrap. With these tools, the baby was born, cleaned, and swaddled; however, the mother then immediately became the focus of the doctor, who sought to replenish her qi. In his writings, Cheng places a large amount of emphasis on the Four Diagnostic Methods in dealing with postpartum issues and instructs all physicians to "not neglect any [of the four methods]". The process of birthing was thought to deplete a woman's blood level and qi, so the most common treatments for the postpartum period were food (commonly garlic and ginseng), medicine, and rest. This process was followed up by a month-long check-in with the physician, a practice known as zuo yuezi.
Infertility
Infertility, which was not well understood, carried serious social and cultural repercussions. The seventh-century scholar Sun Simiao is often quoted: "those who have prescriptions for women's distinctiveness take their differences of pregnancy, childbirth and [internal] bursting injuries as their basis." Even in contemporary fuke, the emphasis placed on reproductive functions, rather than on the whole health of the woman, suggests that its main function is to produce children.
Once again, the kidney visceral system governs the "source Qi", which governs the reproductive systems in both sexes. This source Qi was thought to "be slowly depleted through sexual activity, menstruation and childbirth." It was also understood that depletion of source Qi could result from an external pathology moving through the outer visceral systems before causing more permanent damage to the home of source Qi, the kidney system. In addition, the view that only very serious ailments ended in damage to this system meant that those who had trouble with their reproductive systems or fertility were considered seriously ill.
According to traditional Chinese medical texts, infertility can be summarized into different syndrome types: spleen and kidney depletion (yang depletion), liver and kidney depletion (yin depletion), blood depletion, phlegm damp, liver oppression, and damp heat. This is significant because, while most other issues were complex in Chinese medical physiology, women's fertility issues were simple. Most syndrome types revolved around menstruation, or the lack thereof. The patient was entrusted with recording not only the frequency, but also the "volume, color, consistency, and odor of menstrual flow." This placed the responsibility for symptom recording on the patient, and was compounded by the issue, discussed earlier, of female chastity and honor. As a result, diagnosing female infertility was difficult, because the only symptoms recorded and monitored by the physician were the pulse and the color of the tongue.
Concept of disease
In general, disease is perceived as a disharmony (or imbalance) in the functions or interactions of yin, yang, qi, xuĕ, zàng-fǔ, meridians etc. and/or of the interaction between the human body and the environment. Therapy is based on which "pattern of disharmony" can be identified. Thus, "pattern discrimination" is the most important step in TCM diagnosis. It is also known to be the most difficult aspect of practicing TCM.
To determine which pattern is at hand, practitioners will examine things like the color and shape of the tongue, the relative strength of pulse-points, the smell of the breath, the quality of breathing or the sound of the voice. For example, depending on tongue and pulse conditions, a TCM practitioner might diagnose bleeding from the mouth and nose as: "Liver fire rushes upwards and scorches the Lung, injuring the blood vessels and giving rise to reckless pouring of blood from the mouth and nose." He might then go on to prescribe treatments designed to clear heat or supplement the Lung.
Disease entities
In TCM, a disease has two aspects: "bìng" and "zhèng". The former is often translated as "disease entity", "disease category", "illness", or simply "diagnosis". The latter, and more important one, is usually translated as "pattern" (or sometimes also as "syndrome"). For example, the disease entity of a common cold might present with a pattern of wind-cold in one person, and with the pattern of wind-heat in another.
From a scientific point of view, most of the disease entities () listed by TCM constitute symptoms. Examples include headache, cough, abdominal pain, constipation etc.
Since therapy will not be chosen according to the disease entity but according to the pattern, two people with the same disease entity but different patterns will receive different therapy. Vice versa, people with similar patterns might receive similar therapy even if their disease entities are different. This is called yì bìng tóng zhì, tóng bìng yì zhì ().
Patterns
In TCM, "pattern" () refers to a "pattern of disharmony" or "functional disturbance" within the functional entities of which the TCM model of the body is composed. There are disharmony patterns of qi, xuě, the body fluids, the zàng-fǔ, and the meridians. They are ultimately defined by their symptoms and signs (i.e., for example, pulse and tongue findings).
In clinical practice, the identified pattern usually involves a combination of affected entities (compare with typical examples of patterns). The concrete pattern identified should account for all the symptoms a person has.
Six Excesses
The Six Excesses (, sometimes also translated as "Pathogenic Factors", or "Six Pernicious Influences"; with the alternative term of , – "Six Evils" or "Six Devils") are allegorical terms used to describe disharmony patterns displaying certain typical symptoms. These symptoms resemble the effects of six climatic factors. In the allegory, these symptoms can occur because one or more of those climatic factors (called , "the six qi") were able to invade the body surface and to proceed to the interior. This is sometimes used to draw causal relationships (i.e., prior exposure to wind/cold/etc. is identified as the cause of a disease), while other authors explicitly deny a direct cause-effect relationship between weather conditions and disease, pointing out that the Six Excesses are primarily descriptions of a certain combination of symptoms translated into a pattern of disharmony. It is undisputed, though, that the Six Excesses can manifest inside the body without an external cause. In this case, they might be denoted "internal", e.g., "internal wind" or "internal fire (or heat)".
The Six Excesses and their characteristic clinical signs are:
Wind (): rapid onset of symptoms, wandering location of symptoms, itching, nasal congestion, "floating" pulse; tremor, paralysis, convulsion.
Cold (): cold sensations, aversion to cold, relief of symptoms by warmth, watery/clear excreta, severe pain, abdominal pain, contracture/hypertonicity of muscles, (slimy) white tongue fur, "deep"/"hidden" or "string-like" pulse, or slow pulse.
Fire/Heat (): aversion to heat, high fever, thirst, concentrated urine, red face, red tongue, yellow tongue fur, rapid pulse. (Fire and heat are basically seen to be the same)
Dampness (): sensation of heaviness, sensation of fullness, symptoms of Spleen dysfunction, greasy tongue fur, "slippery" pulse.
Dryness (): dry cough, dry mouth, dry throat, dry lips, nosebleeds, dry skin, dry stools.
Summerheat (): either heat or mixed damp-heat symptoms.
Six-Excesses-patterns can consist of only one or a combination of Excesses (e.g., wind-cold, wind-damp-heat). They can also transform from one into another.
Typical examples of patterns
For each of the functional entities (qi, xuĕ, zàng-fǔ, meridians etc.), typical disharmony patterns are recognized; for example: qi vacuity and qi stagnation in the case of qi; blood vacuity, blood stasis, and blood heat in the case of xuĕ; Spleen qi vacuity, Spleen yang vacuity, Spleen qi vacuity with down-bearing qi, Spleen qi vacuity with lack of blood containment, cold-damp invasion of the Spleen, damp-heat invasion of Spleen and Stomach in case of the Spleen zàng; wind/cold/damp invasion in the case of the meridians.
TCM gives detailed prescriptions of these patterns regarding their typical symptoms, mostly including characteristic tongue and/or pulse findings. For example:
"Upflaming Liver fire" (): Headache, red face, reddened eyes, dry mouth, nosebleeds, constipation, dry or hard stools, profuse menstruation, sudden tinnitus or deafness, vomiting of sour or bitter fluids, expectoration of blood, irascibility, impatience; red tongue with dry yellow fur; slippery and string-like pulse.
Eight principles of diagnosis
The process of determining which pattern is actually present is called (, usually translated as "pattern diagnosis", "pattern identification" or "pattern discrimination"). Generally, the first and most important step in pattern diagnosis is an evaluation of the present signs and symptoms on the basis of the "Eight Principles" (). These eight principles refer to four pairs of fundamental qualities of a disease: exterior/interior, heat/cold, vacuity/repletion, and yin/yang. Of these, heat/cold and vacuity/repletion have the greatest clinical importance. The yin/yang quality, on the other hand, has the least importance and is somewhat set apart from the other three pairs, since it merely presents a general and vague conclusion regarding what other qualities are found. In detail, the Eight Principles refer to the following:
Yin and yang are universal aspects under which all things can be classified; this includes diseases in general as well as the first three pairs of the Eight Principles. For example, cold is identified as a yin aspect, while heat is attributed to yang. Since descriptions of patterns in terms of yin and yang lack complexity and clinical practicality, though, patterns are usually no longer labeled this way. Exceptions are vacuity-cold and repletion-heat patterns, which are sometimes referred to as "yin patterns" and "yang patterns" respectively.
Exterior () refers to a disease manifesting in the superficial layers of the body – skin, hair, flesh, and meridians. It is characterized by aversion to cold and/or wind, headache, muscle ache, mild fever, a "floating" pulse, and a normal tongue appearance.
Interior () refers to disease manifestation in the zàng-fǔ, or (in a wider sense) to any disease that cannot be counted as exterior. There are no generalized characteristic symptoms of interior patterns, since they are determined by the affected zàng or fǔ entity.
Cold () is generally characterized by aversion to cold, absence of thirst, and a white tongue fur. More detailed characterization depends on whether cold is coupled with vacuity or repletion.
Heat () is characterized by an absence of aversion to cold, a red and painful throat, a dry tongue fur and a rapid and floating pulse if it falls together with an exterior pattern. In all other cases, symptoms depend on whether heat is coupled with vacuity or repletion.
Deficiency () can be further differentiated into deficiency of qi, xuě, yin and yang, with all their respective characteristic symptoms. Yin deficiency can also cause "empty-heat".
Excess () generally refers to any disease that cannot be identified as a deficient pattern, and usually indicates the presence of one of the Six Excesses, or a pattern of stagnation (of qi, xuě, etc.). In a concurrent exterior pattern, excess is characterized by the absence of sweating.
After the fundamental nature of a disease in terms of the Eight Principles is determined, the investigation focuses on more specific aspects. By evaluating the present signs and symptoms against the background of typical disharmony patterns of the various entities, evidence is collected whether or how specific entities are affected. This evaluation can be done
in respect of the meridians ()
in respect of qi ()
in respect of xuè ()
in respect of the body fluids ()
in respect of the zàng-fǔ () – very similar to this, though less specific, is disharmony pattern description in terms of the Five Elements ()
There are also three special pattern diagnosis systems used in case of febrile and infectious diseases only ("Six Channel system" or "six division pattern" []; "Wei Qi Ying Xue system" or "four division pattern" []; "San Jiao system" or "three burners pattern" []).
Considerations of disease causes
Although TCM and its concept of disease do not strongly differentiate between cause and effect, pattern discrimination can include considerations regarding the disease cause; this is called (, "disease-cause pattern discrimination").
There are three fundamental categories of disease causes () recognized:
external causes: these include the Six Excesses and "Pestilential Qi".
internal causes: the "Seven Affects" (, sometimes also translated as "Seven Emotions") – joy, anger, brooding, sorrow, fear, fright and grief. These are believed to be able to cause damage to the functions of the zàng-fú, especially of the Liver.
non-external-non-internal causes: dietary irregularities (especially: too much raw, cold, spicy, fatty or sweet food; voracious eating; too much alcohol), fatigue, sexual intemperance, trauma, and parasites ().
Diagnostics
In TCM, there are five major diagnostic methods: inspection, auscultation, olfaction, inquiry, and palpation. These are grouped into what is known as the "Four pillars" of diagnosis, which are Inspection, Auscultation/Olfaction, Inquiry, and Palpation ().
Inspection focuses on the face and particularly on the tongue, including analysis of the tongue size, shape, tension, color and coating, and the absence or presence of teeth marks around the edge.
Auscultation refers to listening for particular sounds (such as wheezing).
Olfaction refers to attending to body odor.
Inquiry focuses on the "seven inquiries", which involve asking the person about the regularity, severity, or other characteristics of: chills, fever, perspiration, appetite, thirst, taste, defecation, urination, pain, sleep, menses, leukorrhea.
Palpation, which includes feeling the body for tender A-shi points, palpation of the wrist pulses as well as various other pulses, and palpation of the abdomen.
Tongue and pulse
Examination of the tongue and the pulse are among the principal diagnostic methods in TCM. Details of the tongue, including shape, size, color, texture, cracks, teeth marks, as well as tongue coating are all considered as part of tongue diagnosis. Various regions of the tongue's surface are believed to correspond to the zàng-fŭ organs. For example, redness on the tip of the tongue might indicate heat in the Heart, while redness on the sides of the tongue might indicate heat in the Liver.
Pulse palpation involves measuring the pulse both at a superficial and at a deep level at three different locations on the radial artery (Cun, Guan, Chi, located two fingerbreadths from the wrist crease, one fingerbreadth from the wrist crease, and right at the wrist crease, respectively, usually palpated with the index, middle and ring finger) of each arm, for a total of twelve pulses, all of which are thought to correspond with certain zàng-fŭ. The pulse is examined for several characteristics including rhythm, strength and volume, and described with qualities like "floating, slippery, bolstering-like, feeble, thready and quick"; each of these qualities indicates certain disease patterns. Learning TCM pulse diagnosis can take several years.
Herbal medicine
The term "herbal medicine" is somewhat misleading in that, while plant elements are by far the most commonly used substances in TCM, other, non-botanic substances are used as well: animal, human, fungi, and mineral products are also used. Thus, the term "medicinal" (instead of herb) may be used. A 2019 review of traditional herbal treatments found they are widely used but lacking in scientific evidence, and urged a more rigorous approach by which genuinely useful medicinals might be identified.
Raw materials
There are roughly 13,000 compounds used in China and over 100,000 TCM recipes recorded in the ancient literature. Plant elements and extracts are by far the most common elements used. In the classic Handbook of Traditional Drugs from 1941, 517 drugs were listed – out of these, 45 were animal parts, and 30 were minerals.
Animal substances
Some animal parts used include cow gallstones, hornet nests, leeches, and scorpion. Other examples of animal parts include horn of the antelope or buffalo, deer antlers, testicles and penis bone of the dog, and snake bile. Some TCM textbooks still recommend preparations containing animal tissues, but there has been little research to justify the claimed clinical efficacy of many TCM animal products.
Some compounds can include the parts of endangered species, including tiger bones and rhinoceros horn, which is used for many ailments (though not as an aphrodisiac, as is commonly misunderstood in the West). The black market in rhinoceros horns (driven not just by TCM but also by unrelated status-seeking) has reduced the world's rhino population by more than 90 percent over the past 40 years.
Concerns have also arisen over the use of pangolin scales, turtle plastron, seahorses, and the gill plates of mobula and manta rays.
Poachers hunt restricted or endangered species to supply the black market with TCM products. There is no scientific evidence of efficacy for tiger medicines. Concern that China was considering legalizing the trade in tiger parts prompted the 171-nation Convention on International Trade in Endangered Species (CITES) to endorse a decision opposing the resurgence of trade in tigers. Fewer than 30,000 saiga antelopes remain; their horns are exported to China for use in traditional fever therapies, and organized gangs illegally export the horn of the antelopes to China. The pressures on seahorses (Hippocampus spp.) used in traditional medicine are enormous; tens of millions of animals are unsustainably caught annually. Many species of syngnathid are currently on the IUCN Red List of Threatened Species or national equivalents.
Since TCM recognizes bear bile as a treatment compound, more than 12,000 Asiatic black bears are held in bear farms. The bile is collected from live bears via a surgical procedure, through a permanent hole in the abdomen leading to the gall bladder, which can cause severe pain and can lead to bears trying to kill themselves. As of 2012, approximately 10,000 bears were farmed in China for their bile, a practice that has spurred public outcry across the country. As of March 2020, bear bile as an ingredient of Tan Re Qing injection remained on the list of remedies recommended for treatment of "severe cases" of COVID-19 by the National Health Commission of China and the National Administration of Traditional Chinese Medicine.
The deer penis is believed to have therapeutic benefits according to traditional Chinese medicine. Tiger parts from poached animals include tiger penis, believed to improve virility, and tiger eyes. The illegal trade in tiger parts in China has driven the species to near-extinction because of its popularity in traditional medicine. Laws protecting even critically endangered species such as the Sumatran tiger fail to stop the display and sale of these items in open markets. Shark fins have been a part of traditional Chinese medicine for centuries, and shark fin soup is traditionally regarded in East Asia as beneficial for health; its status as an elite dish has led to huge demand with the increase of affluence in China, devastating shark populations. Shark finning is banned in many countries, but the trade is thriving in Hong Kong and China, where the fins are part of shark fin soup, a dish considered a delicacy, and are used in some types of traditional Chinese medicine.
The tortoise (freshwater turtle, guiban) and turtle (Chinese softshell turtle, biejia) species used in traditional Chinese medicine are raised on farms, while restrictions are placed on the accumulation and export of other endangered species. However, issues concerning the overexploitation of Asian turtles in China have not been completely solved. Australian scientists have developed methods to identify medicines containing DNA traces of endangered species. Finally, although the donkey is not an endangered species, sharp rises in exports of donkeys and donkey hide from Africa to China to make the traditional remedy ejiao have prompted export restrictions by some African countries.
Human body parts
Traditional Chinese medicine also includes some human parts: the classic Materia medica (Bencao Gangmu) describes (and also criticizes) the use of 35 human body parts and excreta in medicines, including bones, fingernails, hair, dandruff, earwax, impurities on the teeth, feces, urine, sweat, and organs, but most are no longer in use. (Tsuyoshi Awaya, "The Human Body as a New Commodity", The Review of Tokuyama, June 1999.)
Human placenta has been used as an ingredient in certain traditional Chinese medicines, including dried human placenta, known as "Ziheche", used to treat infertility, impotence and other conditions. The consumption of the human placenta is a potential source of infection.
Traditional categorization
The traditional categorizations and classifications that can still be found today are:
The classification according to the Four Natures (): hot, warm, cool, or cold (or neutral in terms of temperature). Hot and warm herbs are used to treat cold diseases, while cool and cold herbs are used to treat heat diseases.
The classification according to the Five Flavors (, sometimes also translated as Five Tastes): acrid, sweet, bitter, sour, and salty. Substances may also have more than one flavor, or none (i.e., a "bland" flavor). Each of the Five Flavors corresponds to one of the zàng organs, which in turn corresponds to one of the Five Phases. A flavor implies certain properties and therapeutic actions of a substance; e.g., saltiness drains downward and softens hard masses, while sweetness is supplementing, harmonizing, and moistening.
The classification according to the meridian – more precisely, the zàng-fu organ including its associated meridian – which can be expected to be primarily affected by a given compound.
The categorization according to the specific function mainly includes: exterior-releasing or exterior-resolving, heat-clearing, downward-draining or precipitating, wind-damp-dispelling, dampness-transforming, promoting the movement of water and percolating dampness or dampness-percolating, interior-warming, qi-regulating or qi-rectifying, dispersing food accumulation or food-dispersing, worm-expelling, stopping bleeding or blood-stanching, quickening the Blood and dispelling stasis or blood-quickening, transforming phlegm, stopping coughing and calming wheezing or phlegm-transforming and cough- and panting-suppressing, Spirit-quieting, calming the liver and expelling wind or liver-calming and wind-extinguishing, orifice-opening, supplementing (which includes qi-supplementing, blood-nourishing, yin-enriching, and yang-fortifying), astriction-promoting or securing and astringing, vomiting-inducing, and substances for external application. (Xu, L. & Wang, W. (2002). Chinese Materia Medica: Combinations and Applications. Donica Publishing Ltd., 1st edition.)
Efficacy
There have not been enough good-quality trials of herbal therapies to allow their effectiveness to be determined. A high percentage of relevant studies on traditional Chinese medicine are in Chinese databases. Fifty percent of systematic reviews on TCM did not search Chinese databases, which could lead to a bias in the results. Many systematic reviews of TCM interventions published in Chinese journals are incomplete, and some contain errors or are misleading. The herbs recommended by traditional Chinese practitioners in the US are unregulated.
A 2013 review found the data too weak to support use of Chinese herbal medicine (CHM) for benign prostatic hyperplasia.
A 2013 review found the research on the benefit and safety of CHM for idiopathic sudden sensorineural hearing loss is of poor quality and cannot be relied upon to support their use.
A 2013 Cochrane review found inconclusive evidence that CHM reduces the severity of eczema.
The traditional medicine ginger, which has shown anti-inflammatory properties in laboratory experiments, has been used to treat rheumatism, headache and digestive and respiratory issues, though there is no firm evidence supporting these uses.
A 2012 Cochrane review found no difference in mortality rate among 640 SARS patients when Chinese herbs were used alongside Western medicine versus Western medicine exclusively, although they concluded some herbs may have improved symptoms and decreased corticosteroid doses.
A 2012 Cochrane review found insufficient evidence to support the use of TCM for people with adhesive small bowel obstruction.
A 2011 review found low quality evidence that suggests CHM improves the symptoms of Sjögren's syndrome.
A 2011 Cochrane review found inconclusive evidence to support the use of TCM herbal medicines for treatment of hypercholesterolemia.
A 2011 Cochrane review did not find improvement in fasting C-peptide when CHM was compared to insulin treatment for latent autoimmune diabetes in adults after 3 months; the studies available for inclusion in this review had considerable flaws in quality and design.
A 2010 review found TCM seems to be effective for the treatment of fibromyalgia but the findings were of insufficient methodological rigor.
A 2008 Cochrane review found promising evidence for the use of Chinese herbal medicine in relieving painful menstruation, but the trials assessed were of such low methodological quality that no conclusion could be drawn about the remedies' suitability as a recommendable treatment option.
Turmeric has been used in traditional Chinese medicine for centuries to treat various conditions, including jaundice and hepatic disorders, rheumatism, anorexia, diabetic wounds, and menstrual complications. Most of its effects have been attributed to curcumin. Research indicating that curcumin shows strong anti-inflammatory and antioxidant activity has prompted mechanism-of-action studies on its possible use in the prevention and treatment of cancer and inflammatory diseases. It also exhibits immunomodulatory effects.
A 2005 Cochrane review found insufficient evidence for the use of CHM in HIV-infected people and people with AIDS.
A 2010 Cochrane review found insufficient evidence to support the use of Traditional Chinese Herbal Products (THCP) in the treatment of angina.
A 2010 Cochrane review found no evidence supporting the use of TCHM for stopping bleeding from haemorrhoids. There was some weak evidence of pain relief.
Drug research
With an eye to the enormous Chinese market, pharmaceutical companies have explored creating new drugs from traditional remedies. The journal Nature commented that "claims made on behalf of an uncharted body of knowledge should be treated with the customary skepticism that is the bedrock of both science and medicine."
There had been success in the 1970s, however, with the development of the antimalarial drug artemisinin, which is a processed extract of Artemisia annua, a herb traditionally used as a fever treatment. Artemisia annua has been used by Chinese herbalists in traditional Chinese medicines for 2,000 years. In 1596, Li Shizhen recommended tea made from qinghao specifically to treat malaria symptoms in his Compendium of Materia Medica. Researcher Tu Youyou discovered that a low-temperature extraction process could isolate an effective antimalarial substance from the plant. Tu says she was influenced by a traditional Chinese herbal medicine source, The Handbook of Prescriptions for Emergency Treatments, written in 340 by Ge Hong, which states that this herb should be steeped in cold water. The extracted substance, once subject to detoxification and purification processes, is a usable antimalarial drug – a 2012 review found that artemisinin-based remedies were the most effective drugs for the treatment of malaria. For her work on malaria, Tu received the 2015 Nobel Prize in Physiology or Medicine. Despite global efforts in combating malaria, it remains a large burden for the population. Although WHO recommends artemisinin-based remedies for treating uncomplicated malaria, resistance to the drug can no longer be ignored.
Also in the 1970s Chinese researcher Zhang TingDong and colleagues investigated the potential use of the traditionally used substance arsenic trioxide to treat acute promyelocytic leukemia (APL). Building on his work, research both in China and the West eventually led to the development of the drug Trisenox, which was approved for leukemia treatment by the FDA in 2000.
Huperzine A, an extract from the herb, Huperzia serrata, is under preliminary research as a possible therapeutic for Alzheimer's disease, but poor methodological quality of the research restricts conclusions about its effectiveness.
Ephedrine in its natural form, known as má huáng () in TCM, has been documented in China since the Han dynasty (206 BCE – 220 CE) as an antiasthmatic and stimulant. In 1885, the chemical synthesis of ephedrine was first accomplished by Japanese organic chemist Nagai Nagayoshi based on his research on Japanese and Chinese traditional herbal medicines
Pien tze huang was first documented in the Ming dynasty.
Cost-effectiveness
A 2012 systematic review found there is a lack of available cost-effectiveness evidence in TCM.
Safety
From the earliest records regarding the use of compounds to today, the toxicity of certain substances has been described in all Chinese materiae medicae. Since TCM has become more popular in the Western world, there are increasing concerns about the potential toxicity of many traditional Chinese plants, animal parts and minerals. Traditional Chinese herbal remedies are conveniently available from grocery stores in most Chinese neighborhoods; some of these items may contain toxic ingredients, are imported into the U.S. illegally, and are associated with claims of therapeutic benefit without evidence. For most compounds, efficacy and toxicity testing are based on traditional knowledge rather than laboratory analysis. The toxicity in some cases could be confirmed by modern research (i.e., in scorpion); in some cases it could not (i.e., in Curculigo). Traditional herbal medicines can contain extremely toxic chemicals and heavy metals, and naturally occurring toxins, which can cause illness, exacerbate pre-existing poor health or result in death. Botanical misidentification of plants can cause toxic reactions in humans. The description of some plants used in TCM has changed, leading to unintended poisoning by using the wrong plants. A concern is also contaminated herbal medicines with microorganisms and fungal toxins, including aflatoxin. Traditional herbal medicines are sometimes contaminated with toxic heavy metals, including lead, arsenic, mercury and cadmium, which inflict serious health risks to consumers. Also, adulteration of some herbal medicine preparations with conventional drugs which may cause serious adverse effects, such as corticosteroids, phenylbutazone, phenytoin, and glibenclamide, has been reported.
Substances known to be potentially dangerous include Aconitum, secretions from the Asiatic toad, powdered centipede, the Chinese beetle (Mylabris phalerata), certain fungi, Aristolochia, arsenic sulfide (realgar), mercury sulfide, and cinnabar. Asbestos ore (Actinolite, Yang Qi Shi, 阳起石) is used to treat impotence in TCM. Due to galena's (litharge, lead(II) oxide) high lead content, it is known to be toxic. Lead, mercury, arsenic, copper, cadmium, and thallium have been detected in TCM products sold in the U.S. and China.
To avoid its toxic adverse effects Xanthium sibiricum must be processed. Hepatotoxicity has been reported with products containing Reynoutria multiflora (synonym Polygonum multiflorum), glycyrrhizin, Senecio and Symphytum. The herbs indicated as being hepatotoxic included Dictamnus dasycarpus, Astragalus membranaceus, and Paeonia lactiflora. Contrary to popular belief, Ganoderma lucidum mushroom extract, as an adjuvant for cancer immunotherapy, appears to have the potential for toxicity. A 2013 review suggested that although the antimalarial herb Artemisia annua may not cause hepatotoxicity, haematotoxicity, or hyperlipidemia, it should be used cautiously during pregnancy due to a potential risk of embryotoxicity at a high dose.
However, many adverse reactions are due to misuse or abuse of Chinese medicine. For example, the misuse of the dietary supplement Ephedra (containing ephedrine) can lead to adverse events including gastrointestinal problems as well as sudden death from cardiomyopathy. Products adulterated with pharmaceuticals for weight loss or erectile dysfunction are one of the main concerns. Chinese herbal medicine has been a major cause of acute liver failure in China.
The harvesting of guano from bat caves (yemingsha) brings workers into close contact with these animals, increasing the risk of zoonosis. The Chinese virologist Shi Zhengli has identified dozens of SARS-like coronaviruses in samples of bat droppings.
Acupuncture and moxibustion
Acupuncture is the insertion of needles into superficial structures of the body (skin, subcutaneous tissue, muscles) – usually at acupuncture points (acupoints) – and their subsequent manipulation; this aims at influencing the flow of qi. According to TCM it relieves pain and treats (and prevents) various diseases. The US FDA classifies single-use acupuncture needles as Class II medical devices, under CFR 21.
Acupuncture is often accompanied by moxibustion – the Chinese characters for acupuncture () literally meaning "acupuncture-moxibustion" – which involves burning mugwort on or near the skin at an acupuncture point. According to the American Cancer Society, "available scientific evidence does not support claims that moxibustion is effective in preventing or treating cancer or any other disease".
In electroacupuncture, an electric current is applied to the needles once they are inserted, to further stimulate the respective acupuncture points.
A recent historian of Chinese medicine remarked that it is "nicely ironic that the specialty of acupuncture – arguably the most questionable part of their medical heritage for most Chinese at the start of the twentieth century – has become the most marketable aspect of Chinese medicine." She found that acupuncture as we know it today has hardly been in existence for sixty years. Moreover, the fine, filiform needle we think of as the acupuncture needle today was not widely used a century ago. Present day acupuncture was developed in the 1930s and put into wide practice only as late as the 1960s.
Efficacy
A 2013 editorial in the American journal Anesthesia and Analgesia stated that acupuncture studies produced inconsistent results, (i.e. acupuncture relieved pain in some conditions but had no effect in other very similar conditions) which suggests the presence of false positive results. These may be caused by factors like biased study design, poor blinding, and the classification of electrified needles (a type of TENS) as a form of acupuncture. The inability to find consistent results despite more than 3,000 studies, the editorial continued, suggests that the treatment seems to be a placebo effect and the existing equivocal positive results are the type of noise one expects to see after a large number of studies are performed on an inert therapy. The editorial concluded that the best controlled studies showed a clear pattern, in which the outcome does not rely upon needle location or even needle insertion, and since "these variables are those that define acupuncture, the only sensible conclusion is that acupuncture does not work."
According to the US NIH National Cancer Institute, a review of 17,922 patients reported that real acupuncture relieved muscle and joint pain, caused by aromatase inhibitors, much better than sham acupuncture. Regarding cancer patients, the review hypothesized that acupuncture may cause physical responses in nerve cells, the pituitary gland, and the brain – releasing proteins, hormones, and chemicals that are proposed to affect blood pressure, body temperature, immune activity, and endorphin release.
A 2012 meta-analysis concluded that the mechanisms of acupuncture "are clinically relevant, but that an important part of these total effects is not due to issues considered to be crucial by most acupuncturists, such as the correct location of points and depth of needling ... [but is] ... associated with more potent placebo or context effects". Commenting on this meta-analysis, both Edzard Ernst and David Colquhoun said the results were of negligible clinical significance. Reply to:
A 2011 overview of Cochrane reviews found evidence that suggests acupuncture is effective for some but not all kinds of pain. A 2010 systematic review found that there is evidence "that acupuncture provides a short-term clinically relevant effect when compared with a waiting list control or when acupuncture is added to another intervention" in the treatment of chronic low back pain. Two review articles discussing the effectiveness of acupuncture, from 2008 and 2009, have concluded that there is not enough evidence to conclude that it is effective beyond the placebo effect.
Acupuncture is generally safe when administered using Clean Needle Technique (CNT). Although serious adverse effects are rare, acupuncture is not without risk. Severe adverse effects, including very rarely death (five case reports), have been reported.
Tui na
Tui na () is a form of massage, based on the assumptions of TCM, from which shiatsu is thought to have evolved. Techniques employed may include thumb presses, rubbing, percussion, and assisted stretching.
Qigong
Qìgōng () is a TCM system of exercise and meditation that combines regulated breathing, slow movement, and focused awareness, purportedly to cultivate and balance qi. One branch of qigong is qigong massage, in which the practitioner combines massage techniques with awareness of the acupuncture channels and points.
Qi is air, breath, energy, or primordial life source that is neither matter or spirit. While Gong is a skillful movement, work, or exercise of the qi.
Forms
Neigong: introspective and meditative
Waigong: external energy and motion
Donggong: dynamic or active
Jinggong: tranquil or passive
Other therapies
Cupping
Cupping () is a type of Chinese massage, consisting of placing several glass "cups" (open spheres) on the body. A match is lit and placed inside the cup and then removed before placing the cup against the skin. As the air in the cup is heated, it expands, and after placing in the skin, cools, creating lower pressure inside the cup that allows the cup to stick to the skin via suction. When combined with massage oil, the cups can be slid around the back, offering "reverse-pressure massage".
Gua shaGua sha () is abrading the skin with pieces of smooth jade, bone, animal tusks or horns or smooth stones; until red spots then bruising cover the area to which it is done. It is believed that this treatment is for almost any ailment. The red spots and bruising take three to ten days to heal, there is often some soreness in the area that has been treated.
Die-da Diē-dǎ () or Dit Da, is a traditional Chinese bone-setting technique, usually practiced by martial artists who know aspects of Chinese medicine that apply to the treatment of trauma and injuries such as bone fractures, sprains, and bruises. Some of these specialists may also use or recommend other disciplines of Chinese medical therapies if serious injury is involved. Such practice of bone-setting () is not common in the West.
Chinese food therapy
The concepts yin and yang are associated with different classes of foods, and tradition considers it important to consume them in a balanced fashion. However, there is no scientific evidence supporting such claims, nor their implied notions.
Regulations
Many governments have enacted laws to regulate TCM practice.
Australia
From 1 July 2012 Chinese medicine practitioners must be registered under the national registration and accreditation scheme with the Chinese Medicine Board of Australia and meet the Board's Registration Standards, to practice in Australia.
Canada
TCM is regulated in five provinces in Canada: Alberta, British Columbia, Ontario, Quebec, and Newfoundland & Labrador.
China (mainland)
The National Administration of Traditional Chinese Medicine was created in 1949, which then absorbed existing TCM management in 1986 with major changes in 1998.
China's National People's Congress Standing Committee passed the country's first law on TCM in 2016, which came into effect on 1 July 2017. The new law standardized TCM certifications by requiring TCM practitioners to (i) pass exams administered by provincial-level TCM authorities, and (ii) obtain recommendations from two certified practitioners. TCM products and services can be advertised only with approval from the local TCM authority.
Hong Kong
During British rule, Chinese medicine practitioners in Hong Kong were not recognized as "medical doctors", which means they could not issue prescription drugs, give injections, etc. However, TCM practitioners could register and operate TCM as "herbalists". The Chinese Medicine Council of Hong Kong was established in 1999. It regulates the compounds and professional standards for TCM practitioners. All TCM practitioners in Hong Kong are required to register with the council. The eligibility for registration includes a recognised 5-year university degree of TCM, a 30-week minimum supervised clinical internship, and passing the licensing exam.
Currently, the approved Chinese medicine institutions are HKU, CUHK and HKBU.
Macau
The Portuguese Macau government seldom interfered in the affairs of Chinese society, including with regard to regulations on the practice of TCM. There were a few TCM pharmacies in Macau during the colonial period. In 1994, the Portuguese Macau government published Decree-Law no. 53/94/M that officially started to regulate the TCM market. After the sovereign handover, the Macau S.A.R. government also published regulations on the practice of TCM. In 2000, Macau University of Science and Technology and Nanjing University of Traditional Chinese Medicine established the Macau College of Traditional Chinese Medicine to offer a degree course in Chinese medicine.
In 2022, a new law regulating TCM, Law no. 11/2021, came into effect. The same law also repealed Decree-Law no. 53/94/M.
Indonesia
All traditional medicines, including TCM, are regulated by Indonesian Minister of Health Regulation of 2013 on traditional medicine. Traditional medicine license (Surat Izin Pengobatan Tradisional – SIPT) is granted to the practitioners whose methods are recognized as safe and may benefit health. The TCM clinics are registered but there is no explicit regulation for it. The only TCM method which is accepted by medical logic and is empirically proofed is acupuncture. The acupuncturists can get SIPT and participate in health care facilities.
Japan
Under modern Japanese medical law, it is possible for doctors to perform acupuncture and massage, but because there is a separate law regarding acupuncture and massage, these treatments are mainly performed by massage therapists, acupuncturists, and moxibustion practitioners.
Korea
Under the Medical Service Act (의료법/醫療法), an oriental medical doctor, whose obligation is to administer oriental medical treatment and provide guidance for health based on oriental medicine''', shall be treated in the same manner as a medical doctor or dentist.
The Korea Institute of Oriental Medicine is the top research center of TCM in Korea.
Malaysia
The Traditional and Complementary Medicine Bill was passed by parliament in 2012 establishing the Traditional and Complementary Medicine Council to register and regulate traditional and complementary medicine practitioners, including TCM practitioners as well as other traditional and complementary medicine practitioners such as those in traditional Malay medicine and traditional Indian medicine.
Netherlands
There are no specific regulations in the Netherlands on TCM; TCM is neither prohibited nor recognised by the government of the Netherlands. Chinese herbs as well as Chinese herbal products that are used in TCM are classified as foods and food supplements, and these Chinese herbs can be imported into the Netherlands as well as marketed as such without any type registration or notification to the government.
Despite its status, some private health insurance companies reimburse a certain amount of annual costs for acupuncture treatments, this depends on one's insurance policy, as not all insurance policies cover it, and if the acupuncture practitioner is or is not a member of one of the professional organisations that are recognised by private health insurance companies. The recognized professional organizations include the Nederlandse Vereniging voor Acupunctuur (NVA), Nederlandse Artsen Acupunctuur Vereniging (NAAV), ZHONG, (Nederlandse Vereniging voor Traditionele Chinese Geneeskunde), Nederlandse Beroepsvereniging Chinese Geneeswijzen Yi (NBCG Yi), and Wetenschappelijke Artsen Vereniging voor Acupunctuur in Nederland (WAVAN).
New Zealand
Although there are no regulatory standards for the practice of TCM in New Zealand, in the year 1990, acupuncture was included in the Governmental Accident Compensation Corporation (ACC) Act. This inclusion granted qualified and professionally registered acupuncturists to provide subsidised care and treatment to citizens, residents, and temporary visitors for work or sports related injuries that occurred within and upon the land of New Zealand. The two bodies for the regulation of acupuncture and attainment of ACC treatment provider status in New Zealand are Acupuncture NZ and The New Zealand Acupuncture Standards Authority.
Singapore
The TCM Practitioners Act was passed by Parliament in 2000 and the TCM Practitioners Board was established in 2001 as a statutory board under the Ministry of Health, to register and regulate TCM practitioners. The requirements for registration include possession of a diploma or degree from a TCM educational institution/university on a gazetted list, either structured TCM clinical training at an approved local TCM educational institution or foreign TCM registration together with supervised TCM clinical attachment/practice at an approved local TCM clinic, and upon meeting these requirements, passing the Singapore TCM Physicians Registration Examination (STRE) conducted by the TCM Practitioners Board.
In 2024, Nanyang Technological University will offer the four-year Bachelor of Chinese Medicine programme, which is the first local programme accredited by the Ministry of Health.
Taiwan
In Taiwan, TCM practitioners are physicians and are regulated by the Physicians Act. They possess the authority to independently diagnose medical conditions, issue prescriptions, dispense Traditional Chinese Medicine, and prescribe a variety of diagnostic tests including X-rays, ECG, and blood and urine test.
Under current law, those who wish to qualify for the Chinese medicine exam must have obtained a 7-year university degree in TCM.
The National Research Institute of Chinese Medicine, established in 1963, is the largest Chinese herbal medicine research center in Taiwan.
United States
As of July 2012, only six states lack legislation to regulate the professional practice of TCM: Alabama, Kansas, North Dakota, South Dakota, Oklahoma, and Wyoming. In 1976, California established an Acupuncture Board and became the first state licensing professional acupuncturists.
| Biology and health sciences | Alternative and traditional medicine | null |
5993 | https://en.wikipedia.org/wiki/Chemical%20bond | Chemical bond | A chemical bond is the association of atoms or ions to form molecules, crystals, and other structures. The bond may result from the electrostatic force between oppositely charged ions as in ionic bonds or through the sharing of electrons as in covalent bonds, or some combination of these effects. Chemical bonds are described as having different strengths: there are "strong bonds" or "primary bonds" such as covalent, ionic and metallic bonds, and "weak bonds" or "secondary bonds" such as dipole–dipole interactions, the London dispersion force, and hydrogen bonding.
Since opposite electric charges attract, the negatively charged electrons surrounding the nucleus and the positively charged protons within a nucleus attract each other. Electrons shared between two nuclei will be attracted to both of them. "Constructive quantum mechanical wavefunction interference" stabilizes the paired nuclei (see Theories of chemical bonding). Bonded nuclei maintain an optimal distance (the bond distance) balancing attractive and repulsive effects explained quantitatively by quantum theory.
The atoms in molecules, crystals, metals and other forms of matter are held together by chemical bonds, which determine the structure and properties of matter.
All bonds can be described by quantum theory, but, in practice, simplified rules and other theories allow chemists to predict the strength, directionality, and polarity of bonds. The octet rule and VSEPR theory are examples. More sophisticated theories are valence bond theory, which includes orbital hybridization and resonance, and molecular orbital theory which includes the linear combination of atomic orbitals and ligand field theory. Electrostatics are used to describe bond polarities and the effects they have on chemical substances.
Overview of main types of chemical bonds
A chemical bond is an attraction between atoms. This attraction may be seen as the result of different behaviors of the outermost or valence electrons of atoms. These behaviors merge into each other seamlessly in various circumstances, so that there is no clear line to be drawn between them. However it remains useful and customary to differentiate between different types of bond, which result in different properties of condensed matter.
In the simplest view of a covalent bond, one or more electrons (often a pair of electrons) are drawn into the space between the two atomic nuclei. Energy is released by bond formation. This is not as a result of reduction in potential energy, because the attraction of the two electrons to the two protons is offset by the electron-electron and proton-proton repulsions. Instead, the release of energy (and hence stability of the bond) arises from the reduction in kinetic energy due to the electrons being in a more spatially distributed (i.e. longer de Broglie wavelength) orbital compared with each electron being confined closer to its respective nucleus. These bonds exist between two particular identifiable atoms and have a direction in space, allowing them to be shown as single connecting lines between atoms in drawings, or modeled as sticks between spheres in models.
In a polar covalent bond, one or more electrons are unequally shared between two nuclei. Covalent bonds often result in the formation of small collections of better-connected atoms called molecules, which in solids and liquids are bound to other molecules by forces that are often much weaker than the covalent bonds that hold the molecules internally together. Such weak intermolecular bonds give organic molecular substances, such as waxes and oils, their soft bulk character, and their low melting points (in liquids, molecules must cease most structured or oriented contact with each other). When covalent bonds link long chains of atoms in large molecules, however (as in polymers such as nylon), or when covalent bonds extend in networks through solids that are not composed of discrete molecules (such as diamond or quartz or the silicate minerals in many types of rock) then the structures that result may be both strong and tough, at least in the direction oriented correctly with networks of covalent bonds. Also, the melting points of such covalent polymers and networks increase greatly.
In a simplified view of an ionic bond, the bonding electron is not shared at all, but transferred. In this type of bond, the outer atomic orbital of one atom has a vacancy which allows the addition of one or more electrons. These newly added electrons potentially occupy a lower energy-state (effectively closer to more nuclear charge) than they experience in a different atom. Thus, one nucleus offers a more tightly bound position to an electron than does another nucleus, with the result that one atom may transfer an electron to the other. This transfer causes one atom to assume a net positive charge, and the other to assume a net negative charge. The bond then results from electrostatic attraction between the positive and negatively charged ions. Ionic bonds may be seen as extreme examples of polarization in covalent bonds. Often, such bonds have no particular orientation in space, since they result from equal electrostatic attraction of each ion to all ions around them. Ionic bonds are strong (and thus ionic substances require high temperatures to melt) but also brittle, since the forces between ions are short-range and do not easily bridge cracks and fractures. This type of bond gives rise to the physical characteristics of crystals of classic mineral salts, such as table salt.
A less often mentioned type of bonding is metallic bonding. In this type of bonding, each atom in a metal donates one or more electrons to a "sea" of electrons that reside between many metal atoms. In this sea, each electron is free (by virtue of its wave nature) to be associated with a great many atoms at once. The bond results because the metal atoms become somewhat positively charged due to loss of their electrons while the electrons remain attracted to many atoms, without being part of any given atom. Metallic bonding may be seen as an extreme example of delocalization of electrons over a large system of covalent bonds, in which every atom participates. This type of bonding is often very strong (resulting in the tensile strength of metals). However, metallic bonding is more collective in nature than other types, and so it allows metal crystals to deform more easily, because they are composed of atoms attracted to each other, but not in any particularly-oriented ways. This results in the malleability of metals. The cloud of electrons in metallic bonding causes the characteristically good electrical and thermal conductivity of metals, and also their shiny lustre that reflects most frequencies of white light.
History
Early speculations about the nature of the chemical bond, from as early as the 12th century, supposed that certain types of chemical species were joined by a type of chemical affinity. In 1704, Sir Isaac Newton famously outlined his atomic bonding theory, in "Query 31" of his Opticks, whereby atoms attach to each other by some "force". Specifically, after acknowledging the various popular theories in vogue at the time, of how atoms were reasoned to attach to each other, i.e. "hooked atoms", "glued together by rest", or "stuck together by conspiring motions", Newton states that he would rather infer from their cohesion, that "particles attract one another by some force, which in immediate contact is exceedingly strong, at small distances performs the chemical operations, and reaches not far from the particles with any sensible effect."
In 1819, on the heels of the invention of the voltaic pile, Jöns Jakob Berzelius developed a theory of chemical combination stressing the electronegative and electropositive characters of the combining atoms. By the mid 19th century, Edward Frankland, F.A. Kekulé, A.S. Couper, Alexander Butlerov, and Hermann Kolbe, building on the theory of radicals, developed the theory of valency, originally called "combining power", in which compounds were joined owing to an attraction of positive and negative poles. In 1904, Richard Abegg proposed his rule that the difference between the maximum and minimum valencies of an element is often eight. At this point, valency was still an empirical number based only on chemical properties.
However, the nature of the atom became clearer with Ernest Rutherford's 1911 discovery of an atomic nucleus surrounded by electrons, a paper in which he cited Nagaoka, who had rejected Thomson's model on the grounds that opposite charges are impenetrable. In 1904, Nagaoka had proposed an alternative planetary model of the atom in which a positively charged center is surrounded by a number of revolving electrons, in the manner of Saturn and its rings.
Nagaoka's model made two predictions:
a very massive atomic center (in analogy to a very massive planet)
electrons revolving around the nucleus, bound by electrostatic forces (in analogy to the rings revolving around Saturn, bound by gravitational forces.)
Rutherford mentions Nagaoka's model in his 1911 paper in which the atomic nucleus is proposed.
At the 1911 Solvay Conference, in the discussion of what could regulate energy differences between atoms, Max Planck stated: "The intermediaries could be the electrons." These nuclear models suggested that electrons determine chemical behavior.
Next came Niels Bohr's 1913 model of a nuclear atom with electron orbits. In 1916, chemist Gilbert N. Lewis developed the concept of electron-pair bonds, in which two atoms may share one to six electrons, thus forming the single electron bond, a single bond, a double bond, or a triple bond; in Lewis's own words, "An electron may form a part of the shell of two different atoms and cannot be said to belong to either one exclusively."
Also in 1916, Walther Kossel put forward a theory similar to Lewis' only his model assumed complete transfers of electrons between atoms, and was thus a model of ionic bonding. Both Lewis and Kossel structured their bonding models on that of Abegg's rule (1904).
Niels Bohr also proposed a model of the chemical bond in 1913. According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei. The dynamic equilibrium of the molecular system is achieved through the balance of forces between the forces of attraction of nuclei to the plane of the ring of electrons and the forces of mutual repulsion of the nuclei. The Bohr model of the chemical bond took into account the Coulomb repulsion – the electrons in the ring are at the maximum distance from each other.
In 1927, the first mathematically complete quantum description of a simple chemical bond, i.e. that produced by one electron in the hydrogen molecular ion, H2+, was derived by the Danish physicist Øyvind Burrau. This work showed that the quantum approach to chemical bonds could be fundamentally and quantitatively correct, but the mathematical methods used could not be extended to molecules containing more than one electron. A more practical, albeit less quantitative, approach was put forward in the same year by Walter Heitler and Fritz London. The Heitler–London method forms the basis of what is now called valence bond theory. In 1929, the linear combination of atomic orbitals molecular orbital method (LCAO) approximation was introduced by Sir John Lennard-Jones, who also suggested methods to derive electronic structures of molecules of F2 (fluorine) and O2 (oxygen) molecules, from basic quantum principles. This molecular orbital theory represented a covalent bond as an orbital formed by combining the quantum mechanical Schrödinger atomic orbitals which had been hypothesized for electrons in single atoms. The equations for bonding electrons in multi-electron atoms could not be solved to mathematical perfection (i.e., analytically), but approximations for them still gave many good qualitative predictions and results. Most quantitative calculations in modern quantum chemistry use either valence bond or molecular orbital theory as a starting point, although a third approach, density functional theory, has become increasingly popular in recent years.
In 1933, H. H. James and A. S. Coolidge carried out a calculation on the dihydrogen molecule that, unlike all previous calculations which used functions only of the distance of the electron from the atomic nucleus, used functions which also explicitly added the distance between the two electrons. With up to 13 adjustable parameters they obtained a result very close to the experimental result for the dissociation energy. Later extensions have used up to 54 parameters and gave excellent agreement with experiments. This calculation convinced the scientific community that quantum theory could give agreement with experiment. However this approach has none of the physical pictures of the valence bond and molecular orbital theories and is difficult to extend to larger molecules.
Bonds in chemical formulas
Because atoms and molecules are three-dimensional, it is difficult to use a single method to indicate orbitals and bonds. In molecular formulas the chemical bonds (binding orbitals) between atoms are indicated in different ways depending on the type of discussion. Sometimes, some details are neglected. For example, in organic chemistry one is sometimes concerned only with the functional group of the molecule. Thus, the molecular formula of ethanol may be written in conformational form, three-dimensional form, full two-dimensional form (indicating every bond with no three-dimensional directions), compressed two-dimensional form (CH3–CH2–OH), by separating the functional group from another part of the molecule (C2H5OH), or by its atomic constituents (C2H6O), according to what is discussed. Sometimes, even the non-bonding valence shell electrons (with the two-dimensional approximate directions) are marked, e.g. for elemental carbon .'C'. Some chemists may also mark the respective orbitals, e.g. the hypothetical ethene−4 anion (\/C=C/\ −4) indicating the possibility of bond formation.
Strong chemical bonds
Strong chemical bonds are the intramolecular forces that hold atoms together in molecules. A strong chemical bond is formed from the transfer or sharing of electrons between atomic centers and relies on the electrostatic attraction between the protons in nuclei and the electrons in the orbitals.
The types of strong bond differ due to the difference in electronegativity of the constituent elements. Electronegativity is the tendency for an atom of a given chemical element to attract shared electrons when forming a chemical bond, where the higher the associated electronegativity then the more it attracts electrons. Electronegativity serves as a simple way to quantitatively estimate the bond energy, which characterizes a bond along the continuous scale from covalent to ionic bonding. A large difference in electronegativity leads to more polar (ionic) character in the bond.
Ionic bond
Ionic bonding is a type of electrostatic interaction between atoms that have a large electronegativity difference. There is no precise value that distinguishes ionic from covalent bonding, but an electronegativity difference of over 1.7 is likely to be ionic while a difference of less than 1.7 is likely to be covalent. Ionic bonding leads to separate positive and negative ions. Ionic charges are commonly between −3e to +3e. Ionic bonding commonly occurs in metal salts such as sodium chloride (table salt). A typical feature of ionic bonds is that the species form into ionic crystals, in which no ion is specifically paired with any single other ion in a specific directional bond. Rather, each species of ion is surrounded by ions of the opposite charge, and the spacing between it and each of the oppositely charged ions near it is the same for all surrounding atoms of the same type. It is thus no longer possible to associate an ion with any specific other single ionized atom near it. This is a situation unlike that in covalent crystals, where covalent bonds between specific atoms are still discernible from the shorter distances between them, as measured via such techniques as X-ray diffraction.
Ionic crystals may contain a mixture of covalent and ionic species, as for example salts of complex acids such as sodium cyanide, NaCN. X-ray diffraction shows that in NaCN, for example, the bonds between sodium cations (Na+) and the cyanide anions (CN−) are ionic, with no sodium ion associated with any particular cyanide. However, the bonds between the carbon (C) and nitrogen (N) atoms in cyanide are of the covalent type, so that each carbon is strongly bound to just one nitrogen, to which it is physically much closer than it is to other carbons or nitrogens in a sodium cyanide crystal.
When such crystals are melted into liquids, the ionic bonds are broken first because they are non-directional and allow the charged species to move freely. Similarly, when such salts dissolve into water, the ionic bonds are typically broken by the interaction with water but the covalent bonds continue to hold. For example, in solution, the cyanide ions, still bound together as single CN− ions, move independently through the solution, as do sodium ions, as Na+. In water, charged ions move apart because each of them is more strongly attracted to a number of water molecules than to the other ion. The attraction between ions and water molecules in such solutions is due to a type of weak dipole-dipole type chemical bond. In melted ionic compounds, the ions continue to be attracted to each other, but not in any ordered or crystalline way.
Covalent bond
Covalent bonding is a common type of bonding in which two or more atoms share valence electrons more or less equally. The simplest and most common type is a single bond in which two atoms share two electrons. Other types include the double bond, the triple bond, one- and three-electron bonds, the three-center two-electron bond and three-center four-electron bond.
In non-polar covalent bonds, the electronegativity difference between the bonded atoms is small, typically 0 to 0.3. Bonds within most organic compounds are described as covalent. The figure shows methane (CH4), in which each hydrogen forms a covalent bond with the carbon. See sigma bonds and pi bonds for LCAO descriptions of such bonding.
Molecules that are formed primarily from non-polar covalent bonds are often immiscible in water or other polar solvents, but much more soluble in non-polar solvents such as hexane.
A polar covalent bond is a covalent bond with a significant ionic character. This means that the two shared electrons are closer to one of the atoms than the other, creating an imbalance of charge. Such bonds occur between two atoms with moderately different electronegativities and give rise to dipole–dipole interactions. The electronegativity difference between the two atoms in these bonds is 0.3 to 1.7.
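As a rough illustration of these ranges, the following minimal sketch applies the approximate Pauling-scale cutoffs quoted above (0 to 0.3 non-polar covalent, 0.3 to 1.7 polar covalent, above 1.7 predominantly ionic). These cutoffs are rules of thumb rather than sharp physical boundaries, and the electronegativity values used in the example are standard Pauling values.

```python
def bond_character(chi_a: float, chi_b: float) -> str:
    """Classify a bond from the electronegativity difference of two atoms,
    using the approximate Pauling-scale cutoffs quoted above."""
    delta = abs(chi_a - chi_b)
    if delta <= 0.3:
        return "non-polar covalent"
    if delta <= 1.7:
        return "polar covalent"
    return "predominantly ionic"

# H (2.20) vs Cl (3.16): difference ~0.96 -> polar covalent (as in HCl).
# Na (0.93) vs Cl (3.16): difference ~2.23 -> predominantly ionic (as in NaCl).
print(bond_character(2.20, 3.16))
print(bond_character(0.93, 3.16))
```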
Single and multiple bonds
A single bond between two atoms corresponds to the sharing of one pair of electrons. The hydrogen (H) atom has one valence electron. Two hydrogen atoms can then form a molecule, held together by the shared pair of electrons. Each H atom now has the noble gas electron configuration of helium (He). The pair of shared electrons forms a single covalent bond. The electron density of these two bonding electrons in the region between the two atoms increases relative to the density of two non-interacting H atoms.
A double bond has two shared pairs of electrons, one in a sigma bond and one in a pi bond with electron density concentrated on two opposite sides of the internuclear axis. A triple bond consists of three shared electron pairs, forming one sigma and two pi bonds. An example is nitrogen. Quadruple and higher bonds are very rare and occur only between certain transition metal atoms.
Coordinate covalent bond (dipolar bond)
A coordinate covalent bond is a covalent bond in which the two shared bonding electrons are from the same one of the atoms involved in the bond. For example, boron trifluoride (BF3) and ammonia (NH3) form an adduct or coordination complex F3B←NH3 with a B–N bond in which a lone pair of electrons on N is shared with an empty atomic orbital on B. BF3 with an empty orbital is described as an electron pair acceptor or Lewis acid, while NH3 with a lone pair that can be shared is described as an electron-pair donor or Lewis base. The electrons are shared roughly equally between the atoms in contrast to ionic bonding. Such bonding is shown by an arrow pointing to the Lewis acid. (In the Figure, solid lines are bonds in the plane of the diagram, wedged bonds point towards the observer, and dashed bonds point away from the observer.)
Transition metal complexes are generally bound by coordinate covalent bonds. For example, the ion Ag+ reacts as a Lewis acid with two molecules of the Lewis base NH3 to form the complex ion Ag(NH3)2+, which has two Ag←N coordinate covalent bonds.
Metallic bonding
In metallic bonding, bonding electrons are delocalized over a lattice of atoms. By contrast, in ionic compounds, the locations of the binding electrons and their charges are static. The free movement or delocalization of bonding electrons leads to classical metallic properties such as luster (surface light reflectivity), electrical and thermal conductivity, ductility, and high tensile strength.
Intermolecular bonding
There are several types of weak bonds that can be formed between two or more molecules which are not covalently bound. Intermolecular forces cause molecules to attract or repel each other. Often, these forces influence physical characteristics (such as the melting point) of a substance.
Van der Waals forces are interactions between closed-shell molecules. They include both Coulombic interactions between partial charges in polar molecules, and Pauli repulsions between closed electron shells.
Keesom forces are the forces between the permanent dipoles of two polar molecules. London dispersion forces are the forces between induced dipoles of different molecules. There can also be an interaction between a permanent dipole in one molecule and an induced dipole in another molecule.
Hydrogen bonds of the form A--H•••B occur when A and B are two highly electronegative atoms (usually N, O or F) such that A forms a highly polar covalent bond with H so that H has a partial positive charge, and B has a lone pair of electrons which is attracted to this partial positive charge and forms a hydrogen bond. Hydrogen bonds are responsible for the high boiling points of water and ammonia with respect to their heavier analogues. In some cases a similar halogen bond can be formed by a halogen atom located between two electronegative atoms on different molecules.
At short distances, repulsive forces between atoms also become important.
Theories of chemical bonding
In the (unrealistic) limit of "pure" ionic bonding, electrons are perfectly localized on one of the two atoms in the bond. Such bonds can be understood by classical physics. The force between the atoms depends on isotropic continuum electrostatic potentials. The magnitude of the force is in simple proportion to the product of the two ionic charges according to Coulomb's law.
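For reference, this proportionality is the standard form of Coulomb's law (a textbook result rather than anything specific to this article):

F = k_e |q1 q2| / r^2

where q1 and q2 are the ionic charges, r is the distance between the ions, and k_e is the Coulomb constant; the attraction therefore falls off with the square of the separation.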
Covalent bonds are better understood by valence bond (VB) theory or molecular orbital (MO) theory. The properties of the atoms involved can be understood using concepts such as oxidation number, formal charge, and electronegativity. The electron density within a bond is not assigned to individual atoms, but is instead delocalized between atoms. In valence bond theory, bonding is conceptualized as being built up from electron pairs that are localized and shared by two atoms via the overlap of atomic orbitals. The concepts of orbital hybridization and resonance augment this basic notion of the electron pair bond. In molecular orbital theory, bonding is viewed as being delocalized and apportioned in orbitals that extend throughout the molecule and are adapted to its symmetry properties, typically by considering linear combinations of atomic orbitals (LCAO). Valence bond theory is more chemically intuitive by being spatially localized, allowing attention to be focused on the parts of the molecule undergoing chemical change. In contrast, molecular orbitals are more "natural" from a quantum mechanical point of view, with orbital energies being physically significant and directly linked to experimental ionization energies from photoelectron spectroscopy. Consequently, valence bond theory and molecular orbital theory are often viewed as competing but complementary frameworks that offer different insights into chemical systems. As approaches for electronic structure theory, both MO and VB methods can give approximations to any desired level of accuracy, at least in principle. However, at lower levels, the approximations differ, and one approach may be better suited for computations involving a particular system or property than the other.
Unlike the spherically symmetrical Coulombic forces in pure ionic bonds, covalent bonds are generally directed and anisotropic. These are often classified based on their symmetry with respect to a molecular plane as sigma bonds and pi bonds. In the general case, atoms form bonds that are intermediate between ionic and covalent, depending on the relative electronegativity of the atoms involved. Bonds of this type are known as polar covalent bonds.
| Physical sciences | Chemistry | null |
5999 | https://en.wikipedia.org/wiki/Climate | Climate | Climate is the long-term weather pattern in a region, typically averaged over 30 years. More rigorously, it is the mean and variability of meteorological variables over a time spanning from months to millions of years. Some of the meteorological variables that are commonly measured are temperature, humidity, atmospheric pressure, wind, and precipitation. In a broader sense, climate is the state of the components of the climate system, including the atmosphere, hydrosphere, cryosphere, lithosphere and biosphere and the interactions between them. The climate of a location is affected by its latitude, longitude, terrain, altitude, land use and nearby water bodies and their currents.
Climates can be classified according to the average and typical variables, most commonly temperature and precipitation. The most widely used classification scheme is the Köppen climate classification. The Thornthwaite system, in use since 1948, incorporates evapotranspiration along with temperature and precipitation information and is used in studying biological diversity and how climate change affects it. The major classifications in Thornthwaite's climate classification are microthermal, mesothermal, and megathermal. Finally, the Bergeron and Spatial Synoptic Classification systems focus on the origin of air masses that define the climate of a region.
Paleoclimatology is the study of ancient climates. Paleoclimatologists seek to explain climate variations for all parts of the Earth during any given geologic period, beginning with the time of the Earth's formation. Since very few direct observations of climate were available before the 19th century, paleoclimates are inferred from proxy variables. They include non-biotic evidence—such as sediments found in lake beds and ice cores—and biotic evidence—such as tree rings and coral. Climate models are mathematical models of past, present, and future climates. Climate change may occur over long and short timescales due to various factors. Recent warming is discussed in terms of global warming, which results in redistributions of biota. For example, as climate scientist Lesley Ann Hughes has written: "a 3 °C [5 °F] change in mean annual temperature corresponds to a shift in isotherms of approximately in latitude (in the temperate zone) or in elevation. Therefore, species are expected to move upwards in elevation or towards the poles in latitude in response to shifting climate zones."
Definition
Climate () is commonly defined as the weather averaged over a long period. The standard averaging period is 30 years, but other periods may be used depending on the purpose. Climate also includes statistics other than the average, such as the magnitudes of day-to-day or year-to-year variations. The Intergovernmental Panel on Climate Change (IPCC) 2001 glossary definition is as follows:
The World Meteorological Organization (WMO) describes "climate normals" as "reference points used by climatologists to compare current climatological trends to that of the past or what is considered typical. A climate normal is defined as the arithmetic average of a climate element (e.g. temperature) over a 30-year period. A 30-year period is used as it is long enough to filter out any interannual variation or anomalies such as El Niño–Southern Oscillation, but also short enough to be able to show longer climatic trends."
The WMO originated from the International Meteorological Organization which set up a technical commission for climatology in 1929. At its 1934 Wiesbaden meeting, the technical commission designated the thirty-year period from 1901 to 1930 as the reference time frame for climatological standard normals. In 1982, the WMO agreed to update climate normals, and these were subsequently completed on the basis of climate data from 1 January 1961 to 31 December 1990. The 1961–1990 climate normals serve as the baseline reference period. The next set of climate normals published by the WMO covers 1991 to 2020. Aside from the most commonly collected atmospheric variables (air temperature, pressure, precipitation and wind), other variables such as humidity, visibility, cloud amount, solar radiation, soil temperature, pan evaporation rate, days with thunder and days with hail are also collected to measure change in climate conditions.
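As a minimal illustration of the definition above (a sketch using synthetic, hypothetical station data rather than any official WMO dataset or procedure), a climate normal is simply the arithmetic mean of a climate element over the chosen 30-year reference period:

```python
from statistics import mean

def climate_normal(annual_values: dict[int, float], start: int, end: int) -> float:
    """Arithmetic mean of a climate element (e.g. annual mean temperature)
    over a reference period such as 1961-1990, per the definition quoted above.
    Raises KeyError if any year in the period is missing."""
    return mean(annual_values[year] for year in range(start, end + 1))

# Hypothetical annual mean temperatures (degrees C) for one station, 1961-1990.
temps = {year: 14.0 + 0.01 * (year - 1961) for year in range(1961, 1991)}
print(round(climate_normal(temps, 1961, 1990), 2))  # the 1961-1990 normal for this synthetic series
```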
The difference between climate and weather is usefully summarized by the popular phrase "Climate is what you expect, weather is what you get." Over historical time spans, there are a number of nearly constant variables that determine climate, including latitude, altitude, proportion of land to water, and proximity to oceans and mountains. All of these variables change only over periods of millions of years due to processes such as plate tectonics. Other climate determinants are more dynamic: the thermohaline circulation of the ocean leads to a 5 °C (9 °F) warming of the northern Atlantic Ocean compared to other ocean basins. Other ocean currents redistribute heat between land and water on a more regional scale. The density and type of vegetation coverage affects solar heat absorption, water retention, and rainfall on a regional level. Alterations in the quantity of atmospheric greenhouse gases (particularly carbon dioxide and methane) determine the amount of solar energy retained by the planet, leading to global warming or global cooling. The variables which determine climate are numerous and the interactions complex, but there is general agreement that the broad outlines are understood, at least insofar as the determinants of historical climate change are concerned.
Climate classification
Climate classifications are systems that categorize the world's climates. A climate classification may correlate closely with a biome classification, as climate is a major influence on life in a region. One of the most used is the Köppen climate classification scheme first developed in 1899.
There are several ways to classify climates into similar regimes. Originally, climes were defined in Ancient Greece to describe the weather depending upon a location's latitude. Modern climate classification methods can be broadly divided into genetic methods, which focus on the causes of climate, and empiric methods, which focus on the effects of climate. Examples of genetic classification include methods based on the relative frequency of different air mass types or locations within synoptic weather disturbances. Examples of empiric classifications include climate zones defined by plant hardiness, evapotranspiration, or more generally the Köppen climate classification which was originally designed to identify the climates associated with certain biomes. A common shortcoming of these classification schemes is that they produce distinct boundaries between the zones they define, rather than the gradual transition of climate properties more common in nature.
Record
Paleoclimatology
Paleoclimatology is the study of past climate over a great period of the Earth's history. It uses evidence with different time scales (from decades to millennia) from ice sheets, tree rings, sediments, pollen, coral, and rocks to determine the past state of the climate. It demonstrates periods of stability and periods of change and can indicate whether changes follow patterns such as regular cycles.
Modern
Details of the modern climate record are known through the taking of measurements from such weather instruments as thermometers, barometers, and anemometers during the past few centuries. The instruments used to study weather over the modern time scale, their observation frequency, their known error, their immediate environment, and their exposure have changed over the years, which must be considered when studying the climate of centuries past. Long-term modern climate records skew towards population centres and affluent countries. Since the 1960s, satellites have allowed records to be gathered on a global scale, including in areas with little to no human presence, such as the Arctic region and the oceans.
Climate variability
Climate variability is the term used to describe variations in the mean state and other characteristics of climate (such as the likelihood of extreme weather) "on all spatial and temporal scales beyond that of individual weather events." Some of the variability does not appear to be caused systematically and occurs at random times. Such variability is called random variability or noise. On the other hand, periodic variability occurs relatively regularly and in distinct modes of variability or climate patterns.
There are close correlations between Earth's climate oscillations and astronomical factors (barycenter changes, solar variation, cosmic ray flux, cloud albedo feedback, Milankovitch cycles), and modes of heat distribution between the ocean-atmosphere climate system. In some cases, current, historical and paleoclimatological natural oscillations may be masked by significant volcanic eruptions, impact events, irregularities in climate proxy data, positive feedback processes or anthropogenic emissions of substances such as greenhouse gases.
Over the years, the definitions of climate variability and the related term climate change have shifted. While the term climate change now implies change that is both long-term and of human causation, in the 1960s the term climate change was used for what we now describe as climate variability, that is, climatic inconsistencies and anomalies.
Climate change
Climate change is the variation in global or regional climates over time. It reflects changes in the variability or average state of the atmosphere over time scales ranging from decades to millions of years. These changes can be caused by processes internal to the Earth, external forces (e.g. variations in sunlight intensity) or, as has been the case in recent decades, human activities. Scientists have identified Earth's Energy Imbalance (EEI) as a fundamental metric of the status of global change.
In recent usage, especially in the context of environmental policy, the term "climate change" often refers only to changes in modern climate, including the rise in average surface temperature known as global warming. In some cases, the term is also used with a presumption of human causation, as in the United Nations Framework Convention on Climate Change (UNFCCC). The UNFCCC uses "climate variability" for non-human caused variations.
Earth has undergone periodic climate shifts in the past, including four major ice ages. These consist of glacial periods where conditions are colder than normal, separated by interglacial periods. The accumulation of snow and ice during a glacial period increases the surface albedo, reflecting more of the Sun's energy back into space and maintaining a lower atmospheric temperature. Increases in greenhouse gases, such as those released by volcanic activity, can increase the global temperature and produce an interglacial period. Suggested causes of ice age periods include the positions of the continents, variations in the Earth's orbit, changes in the solar output, and volcanism. However, these naturally caused changes in climate occur on a much slower time scale than the present rate of change, which is caused by the emission of greenhouse gases by human activities.
According to the EU's Copernicus Climate Change Service, the average global air temperature passed 1.5 °C of warming over the period from February 2023 to January 2024.
Climate models
Climate models use quantitative methods to simulate the interactions and transfer of radiative energy between the atmosphere, oceans, land surface and ice through a series of physics equations. They are used for a variety of purposes, from the study of the dynamics of the weather and climate system to projections of future climate. All climate models balance, or very nearly balance, incoming energy as short wave (including visible) electromagnetic radiation to the Earth with outgoing energy as long wave (infrared) electromagnetic radiation from the Earth. Any imbalance results in a change in the average temperature of the Earth.
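As a rough illustration of this balance, a minimal zero-dimensional energy-balance sketch can be written in a few lines of Python; the constants and the single "effective emissivity" standing in for the greenhouse effect are illustrative assumptions, not parameters of any model named in this article.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # solar constant, W m^-2
ALBEDO = 0.3             # fraction of incoming shortwave reflected to space

def equilibrium_temperature(emissivity):
    """Temperature at which absorbed shortwave equals emitted longwave."""
    absorbed = S0 * (1.0 - ALBEDO) / 4.0        # averaged over the whole sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25

# With emissivity 1 (no greenhouse effect) the balance gives about 255 K;
# an assumed effective emissivity near 0.61 brings it close to the observed ~288 K.
print(equilibrium_temperature(1.0))
print(equilibrium_temperature(0.612))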
Climate models are available at different resolutions, ranging from >100 km to 1 km. High resolutions in global climate models require significant computational resources, and so only a few global datasets exist. Global climate models can be dynamically or statistically downscaled to regional climate models to analyze impacts of climate change on a local scale. Examples are ICON or mechanistically downscaled data such as CHELSA (Climatologies at high resolution for the earth's land surface areas).
The most talked-about applications of these models in recent years have been their use to infer the consequences of increasing greenhouse gases in the atmosphere, primarily carbon dioxide (see greenhouse gas). These models predict an upward trend in the global mean surface temperature, with the most rapid increase in temperature being projected for the higher latitudes of the Northern Hemisphere.
Models can range from relatively simple to quite complex. Simple radiant heat transfer models treat the Earth as a single point and average outgoing energy. This can be expanded vertically (as in radiative-convective models), or horizontally. Finally, more complex (coupled) atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange.
| Physical sciences | Science and medicine | null |
6010 | https://en.wikipedia.org/wiki/Computer%20worm | Computer worm | A computer worm is a standalone malware computer program that replicates itself in order to spread to other computers. It often uses a computer network to spread itself, relying on security failures on the target computer to access it. It will use this machine as a host to scan and infect other computers. When these newly infected computers are under its control, the worm continues to scan for and infect further computers from them, and the cycle repeats. Computer worms use recursive methods to copy themselves without host programs, exploiting the advantages of exponential growth to control and infect more and more computers in a short time. Worms almost always cause at least some harm to the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer.
Many worms are designed only to spread, and do not attempt to change the systems they pass through. However, as the Morris worm and Mydoom showed, even these "payload-free" worms can cause major disruption by increasing network traffic and other unintended effects.
History
The term "worm" was first used in this sense in John Brunner's 1975 novel, The Shockwave Rider. In the novel, Nichlas Haflinger designs and sets off a data-gathering worm in an act of revenge against the powerful men who run a national electronic information web that induces mass conformity. "You have the biggest-ever worm loose in the net, and it automatically sabotages any attempt to monitor it. There's never been a worm with that tough a head or that long a tail!" "Then the answer dawned on him, and he almost laughed. Fluckner had resorted to one of the oldest tricks in the store and turned loose in the continental net a self-perpetuating tapeworm, probably headed by a denunciation group "borrowed" from a major corporation, which would shunt itself from one nexus to another every time his credit-code was punched into a keyboard. It could take days to kill a worm like that, and sometimes weeks."
The second-ever computer worm was devised as anti-virus software. Named Reaper, it was created by Ray Tomlinson to replicate itself across the ARPANET and delete the experimental Creeper program (the first computer worm, 1971).
On November 2, 1988, Robert Tappan Morris, a Cornell University computer science graduate student, unleashed what became known as the Morris worm, disrupting many computers then on the Internet, guessed at the time to be one tenth of all those connected. During the Morris appeal process, the U.S. Court of Appeals estimated the cost of removing the worm from each installation at between $200 and $53,000; this work prompted the formation of the CERT Coordination Center and Phage mailing list. Morris himself became the first person tried and convicted under the 1986 Computer Fraud and Abuse Act.
Conficker, a computer worm discovered in 2008 that primarily targeted Microsoft Windows operating systems, is a worm that employs three different spreading strategies: local probing, neighborhood probing, and global probing. This worm was considered a hybrid epidemic and affected millions of computers. The term "hybrid epidemic" is used because of the three separate methods it employed to spread, which was discovered through code analysis.
Features
Independence
Computer viruses generally require a host program. The virus writes its own code into the host program. When the program runs, the written virus program is executed first, causing infection and damage. A worm does not need a host program, as it is an independent program or code chunk. Therefore, it is not restricted by the host program, but can run independently and actively carry out attacks.
Exploit attacks
Because a worm is not limited by the host program, worms can take advantage of various operating system vulnerabilities to carry out active attacks. For example, the "Nimda" virus exploits vulnerabilities to attack.
Complexity
Some worms are combined with web page scripts, and are hidden in HTML pages using VBScript, ActiveX and other technologies. When a user accesses a webpage containing a virus, the virus automatically resides in memory and waits to be triggered. There are also some worms that are combined with backdoor programs or Trojan horses, such as "Code Red".
Contagiousness
Worms are more infectious than traditional viruses. They not only infect local computers, but also servers and clients on the network reachable from the local computer. Worms can easily spread through shared folders, e-mails, malicious web pages, and servers with a large number of vulnerabilities in the network.
Harm
Any code designed to do more than spread the worm is typically referred to as the "payload". Typical malicious payloads might delete files on a host system (e.g., the ExploreZip worm), encrypt files in a ransomware attack, or exfiltrate data such as confidential documents or passwords.
Some worms may install a backdoor. This allows the computer to be remotely controlled by the worm author as a "zombie". Networks of such machines are often referred to as botnets and are very commonly used for a range of malicious purposes, including sending spam or performing DoS attacks.
Some special worms attack industrial systems in a targeted manner. Stuxnet was primarily transmitted through LANs and infected thumb-drives, as its targets were never connected to untrusted networks, like the internet. This virus can destroy the core production control computer software used by chemical, power generation and power transmission companies in various countries around the world; in Stuxnet's case, Iran, Indonesia and India were hardest hit. It was used to "issue orders" to other equipment in the factory, and to hide those commands from being detected. Stuxnet used multiple vulnerabilities and four different zero-day exploits in Windows systems and Siemens SIMATIC WinCC systems to attack the embedded programmable logic controllers of industrial machines. Although these systems operate independently from the network, if the operator inserts a virus-infected drive into the system's USB interface, the virus will be able to gain control of the system without any other operational requirements or prompts.
Countermeasures
Worms spread by exploiting vulnerabilities in operating systems.
Vendors with security problems supply regular security updates (see "Patch Tuesday"), and if these are installed to a machine, then the majority of worms are unable to spread to it. If a vulnerability is disclosed before the security patch is released by the vendor, a zero-day attack is possible.
Users need to be wary of opening unexpected emails, and should not run attached files or programs, or visit web sites that are linked to such emails. However, as with the ILOVEYOU worm, and with the increased growth and efficiency of phishing attacks, it remains possible to trick the end-user into running malicious code.
Anti-virus and anti-spyware software are helpful, but must be kept up-to-date with new pattern files at least every few days. The use of a firewall is also recommended.
Users can minimize the threat posed by worms by keeping their computers' operating system and other software up to date, avoiding opening unrecognized or unexpected emails and running firewall and antivirus software.
Mitigation techniques include:
ACLs in routers and switches
Packet-filters
TCP Wrapper/ACL enabled network service daemons
EPP/EDR software
Nullroute
Infections can sometimes be detected by their behavior - typically scanning the Internet randomly, looking for vulnerable hosts to infect. In addition, machine learning techniques can be used to detect new worms, by analyzing the behavior of the suspected computer.
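One simple behavioral signature of this kind, flagging hosts whose outbound connections fan out to unusually many distinct destinations in a short time, can be sketched in Python; the window length and threshold below are illustrative assumptions, not values from any particular detection product.

from collections import defaultdict

WINDOW_SECONDS = 10            # length of each fixed observation window
DISTINCT_DEST_THRESHOLD = 50   # tune to the network's normal behavior

def scanning_hosts(events):
    """events: iterable of (timestamp_seconds, source_ip, dest_ip) tuples.
    Returns source addresses that contacted more than the threshold number
    of distinct destinations within any single window."""
    seen = defaultdict(set)    # (source, window index) -> set of destinations
    for ts, src, dst in events:
        seen[(src, int(ts) // WINDOW_SECONDS)].add(dst)
    return {src for (src, _), dests in seen.items()
            if len(dests) > DISTINCT_DEST_THRESHOLD}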
Worms with good intent
A helpful worm or anti-worm is a worm designed to do something that its author feels is helpful, though not necessarily with the permission of the executing computer's owner. Beginning with the first research into worms at Xerox PARC, there have been attempts to create useful worms. Those worms allowed John Shoch and Jon Hupp to test the Ethernet principles on their network of Xerox Alto computers. Similarly, the Nachi family of worms tried to download and install patches from Microsoft's website to fix vulnerabilities in the host system by exploiting those same vulnerabilities. In practice, although this may have made these systems more secure, it generated considerable network traffic, rebooted the machine in the course of patching it, and did its work without the consent of the computer's owner or user. Regardless of their payload or their writers' intentions, security experts regard all worms as malware. Another example of this approach is Roku patching a bug that allowed Roku OS to be rooted, via an update to its screensaver channels: the screensaver would attempt to connect to the telnet service and patch the device.
One study proposed the first computer worm that operates on the second layer of the OSI model (Data link Layer), utilizing topology information such as Content-addressable memory (CAM) tables and Spanning Tree information stored in switches to propagate and probe for vulnerable nodes until the enterprise network is covered.
Anti-worms have been used to combat the effects of the Code Red, Blaster, and Santy worms. Welchia is an example of a helpful worm. Utilizing the same deficiencies exploited by the Blaster worm, Welchia infected computers and automatically began downloading Microsoft security updates for Windows without the users' consent. Welchia automatically reboots the computers it infects after installing the updates. One of these updates was the patch that fixed the exploit.
Other examples of helpful worms are "Den_Zuko", "Cheeze", "CodeGreen", and "Millenium".
Art worms support artists in the performance of massive-scale ephemeral artworks. They turn the infected computers into nodes that contribute to the artwork.
| Technology | Computer security | null |
6011 | https://en.wikipedia.org/wiki/Chomsky%20hierarchy | Chomsky hierarchy | The Chomsky hierarchy in the fields of formal language theory, computer science, and linguistics, is a containment hierarchy of classes of formal grammars. A formal grammar describes how to form strings from a language's vocabulary (or alphabet) that are valid according to the language's syntax. The linguist Noam Chomsky theorized that four different classes of formal grammars existed that could generate increasingly complex languages. Each class can also generate all languages that can be generated by the classes below it (the classes are nested by set inclusion).
History
The general idea of a hierarchy of grammars was first described by Noam Chomsky in "Three models for the description of language" during the formalization of transformational-generative grammar (TGG). Marcel-Paul Schützenberger also played a role in the development of the theory of formal languages; the paper "The algebraic theory of context free languages" describes the modern hierarchy, including context-free grammars.
Independently, alongside linguists, mathematicians were developing models of computation (via automata). Parsing a sentence in a language is similar to computation, and the grammars described by Chomsky proved to both resemble and be equivalent in computational power to various machine models.
The hierarchy
The following table summarizes each of Chomsky's four types of grammars, the class of language it generates, the type of automaton that recognizes it, and the form its rules must have. The classes are defined by the constraints on the production rules.
Note that the set of grammars corresponding to recursive languages is not a member of this hierarchy; these would be properly between Type-0 and Type-1.
Every regular language is context-free, every context-free language is context-sensitive, every context-sensitive language is recursive and every recursive language is recursively enumerable. These are all proper inclusions, meaning that there exist recursively enumerable languages that are not context-sensitive, context-sensitive languages that are not context-free and context-free languages that are not regular.
Regular (Type-3) grammars
Type-3 grammars generate the regular languages. Such a grammar restricts its rules to a single nonterminal on the left-hand side and a right-hand side consisting of a single terminal, possibly followed by a single nonterminal, in which case the grammar is right regular. Alternatively, all the rules can have their right-hand sides consist of a single terminal, possibly preceded by a single nonterminal (left regular). These generate the same languages. However, if left-regular rules and right-regular rules are combined, the language need no longer be regular. The rule S → ε is also allowed here if the start symbol S does not appear on the right side of any rule. These languages are exactly all languages that can be decided by a finite-state automaton. Additionally, this family of formal languages can be obtained by regular expressions. Regular languages are commonly used to define search patterns and the lexical structure of programming languages.
For example, the regular language L = {a^n | n ≥ 1} is generated by the Type-3 grammar with the productions S → aS and S → a, where S is the start symbol and a is a terminal.
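A minimal Python sketch makes the connection to finite-state automata concrete: the two states below correspond to the start symbol S and to "at least one a has been read", and the function accepts exactly the strings of the example language {a^n | n ≥ 1} assumed above.

def accepts(word):
    state = "S"                # start state, mirroring the start symbol S
    for symbol in word:
        if symbol != "a":
            return False       # any other terminal is rejected
        state = "A"            # at least one 'a' has been read
    return state == "A"        # accept only non-empty strings of a's

assert accepts("a") and accepts("aaaa")
assert not accepts("") and not accepts("ab")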
Context-free (Type-2) grammars
Type-2 grammars generate the context-free languages. These are defined by rules of the form A → α, with A being a nonterminal and α being a string of terminals and/or nonterminals. These languages are exactly all languages that can be recognized by a non-deterministic pushdown automaton. Context-free languages—or rather their subset of deterministic context-free languages—are the theoretical basis for the phrase structure of most programming languages, though their syntax also includes context-sensitive name resolution due to declarations and scope. Often a subset of grammars is used to make parsing easier, such as by an LL parser.
For example, the context-free language L = {a^n b^n | n ≥ 1} is generated by the Type-2 grammar with the productions S → aSb and S → ab.
This language is context-free but not regular (by the pumping lemma for regular languages).
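To illustrate why a stack (the extra memory of a pushdown automaton) suffices where a finite automaton fails, the following Python sketch checks membership in {a^n b^n | n ≥ 1} by pushing one marker per leading a and popping one per b; the helper is illustrative only, not part of any cited formalism.

def accepts(word):
    stack, i = [], 0
    while i < len(word) and word[i] == "a":   # push one marker per 'a'
        stack.append("a")
        i += 1
    if not stack:                             # n >= 1 requires at least one 'a'
        return False
    while i < len(word) and word[i] == "b":   # pop one marker per 'b'
        if not stack:
            return False                      # more b's than a's
        stack.pop()
        i += 1
    return i == len(word) and not stack       # everything consumed, counts match

assert accepts("ab") and accepts("aaabbb")
assert not accepts("aab") and not accepts("ba") and not accepts("abab")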
Context-sensitive (Type-1) grammars
Type-1 grammars generate context-sensitive languages. These grammars have rules of the form αAβ → αγβ, with A a nonterminal and α, β and γ strings of terminals and/or nonterminals. The strings α and β may be empty, but γ must be nonempty. The rule S → ε is allowed if S does not appear on the right side of any rule. The languages described by these grammars are exactly all languages that can be recognized by a linear bounded automaton (a nondeterministic Turing machine whose tape is bounded by a constant times the length of the input.)
For example, the context-sensitive language L = {a^n b^n c^n | n ≥ 1} is generated by a Type-1 grammar; its productions are more involved than those in the previous examples.
This language is context-sensitive but not context-free (by the pumping lemma for context-free languages).
A grammar for this language, together with a sketch of a proof that it generates L = {a^n b^n c^n | n ≥ 1}, is given in the article on context-sensitive grammars.
Recursively enumerable (Type-0) grammars
Type-0 grammars include all formal grammars. There are no constraints on the production rules. They generate exactly all languages that can be recognized by a Turing machine; thus, any language that can be generated at all can be generated by a Type-0 grammar. These languages are also known as the recursively enumerable or Turing-recognizable languages. Note that this is different from the recursive languages, which can be decided by an always-halting Turing machine.
| Mathematics | Automata theory | null |
6014 | https://en.wikipedia.org/wiki/Cathode-ray%20tube | Cathode-ray tube | A cathode-ray tube (CRT) is a vacuum tube containing one or more electron guns, which emit electron beams that are manipulated to display images on a phosphorescent screen. The images may represent electrical waveforms on an oscilloscope, a frame of video on an analog television set (TV), digital raster graphics on a computer monitor, or other phenomena like radar targets. A CRT in a TV is commonly called a picture tube. CRTs have also been used as memory devices, in which case the screen is not intended to be visible to an observer. The term cathode ray was used to describe electron beams when they were first discovered, before it was understood that what was emitted from the cathode was a beam of electrons.
In CRT TVs and computer monitors, the entire front area of the tube is scanned repeatedly and systematically in a fixed pattern called a raster. In color devices, an image is produced by controlling the intensity of each of three electron beams, one for each additive primary color (red, green, and blue) with a video signal as a reference. In modern CRT monitors and TVs the beams are bent by magnetic deflection, using a deflection yoke. Electrostatic deflection is commonly used in oscilloscopes.
The tube is a glass envelope which is heavy, fragile, and long from front screen face to rear end. Its interior must be close to a vacuum to prevent the emitted electrons from colliding with air molecules and scattering before they hit the tube's face. Thus, the interior is evacuated to less than a millionth of atmospheric pressure. As such, handling a CRT carries the risk of violent implosion that can hurl glass at great velocity. The face is typically made of thick lead glass or special barium-strontium glass to be shatter-resistant and to block most X-ray emissions. This tube makes up most of the weight of CRT TVs and computer monitors.
Since the early 2010s, CRTs have been superseded by flat-panel display technologies such as LCD, plasma display, and OLED displays, which are cheaper to manufacture and run, as well as significantly lighter and thinner. Flat-panel displays can also be made in very large sizes, whereas the largest CRTs made were only about 43 inches.
A CRT works by electrically heating a tungsten coil which in turn heats a cathode in the rear of the CRT, causing it to emit electrons which are modulated and focused by electrodes. The electrons are steered by deflection coils or plates, and an anode accelerates them towards the phosphor-coated screen, which generates light when hit by the electrons.
History
Discoveries
Cathode rays were discovered by Julius Plücker and Johann Wilhelm Hittorf. Hittorf observed that some unknown rays were emitted from the cathode (negative electrode) which could cast shadows on the glowing wall of the tube, indicating the rays were travelling in straight lines. In 1890, Arthur Schuster demonstrated cathode rays could be deflected by electric fields, and William Crookes showed they could be deflected by magnetic fields. In 1897, J. J. Thomson succeeded in measuring the mass-to-charge ratio of cathode rays, showing that they consisted of negatively charged particles smaller than atoms, the first "subatomic particles", which had already been named electrons by Irish physicist George Johnstone Stoney in 1891.
The earliest version of the CRT was known as the "Braun tube", invented by the German physicist Ferdinand Braun in 1897. It was a cold-cathode diode, a modification of the Crookes tube with a phosphor-coated screen. Braun was the first to conceive the use of a CRT as a display device. The Braun tube became the foundation of 20th century TV.
In 1908, Alan Archibald Campbell-Swinton, fellow of the Royal Society (UK), published a letter in the scientific journal Nature, in which he described how "distant electric vision" could be achieved by using a cathode-ray tube (or "Braun" tube) as both a transmitting and receiving device. He expanded on his vision in a speech given in London in 1911 and reported in The Times and the Journal of the Röntgen Society.
The first cathode-ray tube to use a hot cathode was developed by John Bertrand Johnson (who gave his name to the term Johnson noise) and Harry Weiner Weinhart of Western Electric, and became a commercial product in 1922. The introduction of hot cathodes allowed for lower acceleration anode voltages and higher electron beam currents, since the anode now only accelerated the electrons emitted by the hot cathode, and no longer had to have a very high voltage to induce electron emission from the cold cathode.
Development
In 1926, Kenjiro Takayanagi demonstrated a CRT TV receiver with a mechanical video camera that received images with a 40-line resolution. By 1927, he improved the resolution to 100 lines, which was unrivaled until 1931. By 1928, he was the first to transmit human faces in half-tones on a CRT display.
In 1927, Philo Farnsworth created a TV prototype.
The CRT was named in 1929 by inventor Vladimir K. Zworykin. He was subsequently hired by RCA, which was granted a trademark for the term "Kinescope", RCA's term for a CRT, in 1932; it voluntarily released the term to the public domain in 1950.
In the 1930s, Allen B. DuMont made the first CRTs to last 1,000 hours of use, which was one of the factors that led to the widespread adoption of TV.
The first commercially made electronic TV sets with cathode-ray tubes were manufactured by Telefunken in Germany in 1934.
In 1947, the cathode-ray tube amusement device, the earliest known interactive electronic game as well as the first to incorporate a cathode-ray tube screen, was created.
From 1949 to the early 1960s, there was a shift from circular CRTs to rectangular CRTs, although the first rectangular CRTs were made in 1938 by Telefunken. While circular CRTs were the norm, European TV sets often blocked portions of the screen to make it appear somewhat rectangular while American sets often left the entire front of the CRT exposed or only blocked the upper and lower portions of the CRT.
In 1954, RCA produced some of the first color CRTs, the 15GP22 CRTs used in the CT-100, the first color TV set to be mass produced. The first rectangular color CRTs were also made in 1954. However, the first rectangular color CRTs to be offered to the public were made in 1963. One of the challenges that had to be solved to produce the rectangular color CRT was convergence at the corners of the CRT. In 1965, brighter rare earth phosphors began replacing dimmer and cadmium-containing red and green phosphors. Eventually blue phosphors were replaced as well.
The size of CRTs increased over time, from 20 inches in 1938, to 21 inches in 1955, 25 inches by 1974, 30 inches by 1980, 35 inches by 1985, and 43 inches by 1989. However, experimental 31 inch CRTs were made as far back as 1938.
In 1960, the Aiken tube was invented. It was a CRT in a flat-panel display format with a single electron gun. Deflection was electrostatic and magnetic, but due to patent problems, it was never put into production. It was also envisioned as a head-up display in aircraft. By the time patent issues were solved, RCA had already invested heavily in conventional CRTs.
1968 marked the release of the Sony Trinitron brand with the model KV-1310, which was based on Aperture Grille technology. It was acclaimed for its improved output brightness. The Trinitron screen was identifiable by its upright cylindrical shape, a result of its unique triple-cathode, single-gun construction.
In 1987, flat-screen CRTs were developed by Zenith for computer monitors, reducing reflections and helping increase image contrast and brightness. Such CRTs were expensive, which limited their use to computer monitors. Attempts were made to produce flat-screen CRTs using inexpensive and widely available float glass.
In 1990, the first CRT with HD resolution, the Sony KW-3600HD, was released to the market. It is considered to be "historical material" by Japan's national museum.
The Sony KWP-5500HD, an HD CRT projection TV, was released in 1992.
In the mid-1990s, some 160 million CRTs were made per year.
In the mid-2000s, Canon and Sony presented the surface-conduction electron-emitter display and field-emission displays, respectively. They both were flat-panel displays that had one (SED) or several (FED) electron emitters per subpixel in place of electron guns. The electron emitters were placed on a sheet of glass and the electrons were accelerated to a nearby sheet of glass with phosphors using an anode voltage. The electrons were not focused, making each subpixel essentially a flood beam CRT. They were never put into mass production as LCD technology was significantly cheaper, eliminating the market for such displays.
The last large-scale manufacturer of (in this case, recycled) CRTs, Videocon, ceased production in 2015. CRT TVs stopped being made around the same time.
In 2012, Samsung SDI and several other major companies were fined by the European Commission for price fixing of TV cathode-ray tubes.
The same occurred in 2015 in the US and in Canada in 2018.
Worldwide sales of CRT computer monitors peaked in 2000, at 90 million units, while those of CRT TVs peaked in 2005 at 130 million units.
Decline
Beginning in the late 1990s to the early 2000s, CRTs began to be replaced with LCDs, starting first with computer monitors smaller than 15 inches in size, largely because of their lower bulk. Among the first manufacturers to stop CRT production was Hitachi in 2001, followed by Sony in Japan in 2004. Flat-panel displays dropped in price and started significantly displacing cathode-ray tubes in the 2000s. LCD monitor sales began exceeding those of CRTs in 2003–2004 and LCD TV sales started exceeding those of CRTs in some markets in 2005. Samsung SDI stopped CRT production in 2012.
Despite being a mainstay of display technology for decades, CRT-based computer monitors and TVs are now obsolete. Demand for CRT screens dropped in the late 2000s. Despite efforts from Samsung and LG to make CRTs competitive with their LCD and plasma counterparts, offering slimmer and cheaper models to compete with similarly sized and more expensive LCDs, CRTs eventually became obsolete and were relegated to developing markets and vintage enthusiasts once LCDs fell in price, with their lower bulk, weight and ability to be wall mounted coming as advantages.
Some industries still use CRTs because it is too much effort, downtime, or cost to replace them, or there is no substitute available; a notable example is the airline industry. Planes such as the Boeing 747-400 and the Airbus A320 used CRT instruments in their glass cockpits instead of mechanical instruments. Airlines such as Lufthansa still use CRT technology, which also uses floppy disks for navigation updates. They are also used in some military equipment for similar reasons; at least one company still manufactures new CRTs for these markets.
A popular consumer usage of CRTs is for retrogaming. Some games are impossible to play without CRT display hardware. Light guns only work on CRTs because they depend on the progressive timing properties of CRTs. Another reason people use CRTs is the natural blending of these displays. Some games designed for CRT displays exploit this, which allows them to look more aesthetically pleasing on these displays.
Constructions
Body
The body of a CRT is usually made up of three parts: A screen/faceplate/panel, a cone/funnel, and a neck. The joined screen, funnel and neck are known as the bulb or envelope.
The neck is made from a glass tube while the funnel and screen are made by pouring and then pressing glass into a mold. The glass, known as CRT glass or TV glass, needs special properties to shield against x-rays while providing adequate light transmission in the screen or being very electrically insulating in the funnel and neck. The formulation that gives the glass its properties is also known as the melt. The glass is of very high quality, being almost contaminant and defect free. Most of the costs associated with glass production come from the energy used to melt the raw materials into glass. Glass furnaces for CRT glass production have several taps to allow molds to be replaced without stopping the furnace, to allow production of CRTs of several sizes. Only the glass used on the screen needs to have precise optical properties.
The optical properties of the glass used on the screen affect color reproduction and purity in color CRTs. Transmittance, or how transparent the glass is, may be adjusted to be more transparent to certain colors (wavelengths) of light. Transmittance is measured at the center of the screen with a 546 nm wavelength light, and a 10.16mm thick screen. Transmittance goes down with increasing thickness. Standard transmittances for Color CRT screens are 86%, 73%, 57%, 46%, 42% and 30%. Lower transmittances are used to improve image contrast but they put more stress on the electron gun, requiring more power on the electron gun for a higher electron beam power to light the phosphors more brightly to compensate for the reduced transmittance. The transmittance must be uniform across the screen to ensure color purity. The radius (curvature) of screens has increased (grown less curved) over time, from 30 to 68 inches, ultimately evolving into completely flat screens, reducing reflections. The thickness of both curved and flat screens gradually increases from the center outwards, and with it, transmittance is gradually reduced. This means that flat-screen CRTs may not be completely flat on the inside.
The glass used in CRTs arrives from the glass factory to the CRT factory as either separate screens and funnels with fused necks, for color CRTs, or as bulbs made up of a fused screen, funnel and neck. There were several glass formulations for different types of CRTs, that were classified using codes specific to each glass manufacturer. The compositions of the melts were also specific to each manufacturer. Those optimized for high color purity and contrast were doped with neodymium, while those for monochrome CRTs were tinted to differing levels, depending on the formulation used and had transmittances of 42% or 30%. Purity means ensuring that the correct colors are activated (for example, ensuring that red is displayed uniformly across the screen), while convergence ensures that images are not distorted. Convergence may be modified using a cross hatch pattern.
CRT glass used to be made by dedicated companies such as AGC Inc., O-I Glass, Samsung Corning Precision Materials, Corning Inc., and Nippon Electric Glass; others such as Videocon, Sony for the US market and Thomson made their own glass.
The funnel and the neck are made of leaded potash-soda glass or lead silicate glass formulation to shield against x-rays generated by high voltage electrons as they decelerate after striking a target, such as the phosphor screen or shadow mask of a color CRT. The velocity of the electrons depends on the anode voltage of the CRT; the higher the voltage, the higher the speed. The amount of x-rays emitted by a CRT can also be lowered by reducing the brightness of the image. Leaded glass is used because it is inexpensive, while also shielding heavily against x-rays, although some funnels may also contain barium. The screen is usually instead made out of a special lead-free silicate glass formulation with barium and strontium to shield against x-rays, as it doesn't brown, unlike glass containing lead. Another glass formulation uses 2–3% of lead on the screen. Alternatively, zirconium can also be used on the screen in combination with barium, instead of lead.
Monochrome CRTs may have a tinted barium-lead glass formulation in both the screen and funnel, with a potash-soda lead glass in the neck; the potash-soda and barium-lead formulations have different thermal expansion coefficients. The glass used in the neck must be an excellent electrical insulator to contain the voltages used in the electron optics of the electron gun, such as focusing lenses. The lead in the glass causes it to brown (darken) with use due to x-rays; usually, however, the CRT cathode wears out due to cathode poisoning before browning becomes apparent. The glass formulation determines the highest possible anode voltage and hence the maximum possible CRT screen size. For color, maximum voltages are often 24–32 kV, while for monochrome it is usually 21 or 24.5 kV, limiting the size of monochrome CRTs to 21 inches, or ~1 kV per inch. The voltage needed depends on the size and type of CRT. Since the formulations are different, they must be compatible with one another, having similar thermal expansion coefficients. The screen may also have an anti-glare or anti-reflective coating, or be ground to prevent reflections. CRTs may also have an anti-static coating.
The leaded glass in the funnels of CRTs may contain 21–25% of lead oxide (PbO), the neck may contain 30–40% of lead oxide, and the screen may contain 12% of barium oxide and 12% of strontium oxide. A typical CRT contains several kilograms of lead as lead oxide in the glass depending on its size; 12 inch CRTs contain 0.5 kg of lead in total while 32 inch CRTs contain up to 3 kg. Strontium oxide began being used in CRTs, its major application, in the 1970s. Before this, CRTs used lead on the faceplate.
Some early CRTs used a metal funnel insulated with polyethylene instead of glass with conductive material. Others had ceramic or blown Pyrex instead of pressed glass funnels. Early CRTs did not have a dedicated anode cap connection; the funnel was the anode connection, so it was live during operation.
The funnel is coated on the inside and outside with a conductive coating, making the funnel a capacitor, helping stabilize and filter the anode voltage of the CRT, and significantly reducing the amount of time needed to turn on a CRT. The stability provided by the coating solved problems inherent to early power supply designs, as they used vacuum tubes. Because the funnel is used as a capacitor, the glass used in the funnel must be an excellent electrical insulator (dielectric). The inner coating has a positive voltage (the anode voltage that can be several kV) while the outer coating is connected to ground. CRTs powered by more modern power supplies do not need to be connected to ground, due to the more robust design of modern power supplies. The value of the capacitor formed by the funnel is 5–10 nF, charged to the voltage the anode is normally supplied with. The capacitor formed by the funnel can also suffer from dielectric absorption, similarly to other types of capacitors. Because of this, CRTs have to be discharged before handling to prevent injury.
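A rough back-of-the-envelope calculation shows why this discharge step matters; the 30 kV figure below is an assumed value within the 24–32 kV anode-voltage range given earlier, and the capacitance is taken from the 5–10 nF range above.

C = 10e-9                # funnel capacitance in farads (upper end of 5-10 nF)
V = 30e3                 # assumed anode voltage in volts
charge = C * V           # roughly 3e-4 coulombs stored on the funnel
energy = 0.5 * C * V**2  # roughly 4.5 joules at full anode voltage
print(charge, energy)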
The depth of a CRT is related to its screen size. Usual deflection angles were 90° for computer monitor CRTs and small CRTs and 110° which was the standard in larger TV CRTs, with 120 or 125° being used in slim CRTs made since 2001–2005 in an attempt to compete with LCD TVs. Over time, deflection angles increased as they became practical, from 50° in 1938 to 110° in 1959, and 125° in the 2000s. 140° deflection CRTs were researched but never commercialized, as convergence problems were never resolved.
Size and weight
The size of a CRT can be measured by the screen's entire area (or face diagonal) or alternatively by only its viewable area (or diagonal) that is coated by phosphor and surrounded by black edges.
While the viewable area may be rectangular, the edges of the CRT may have a curvature (e.g. black stripe CRTs, first made by Toshiba in 1972) or the edges may be black and truly flat (e.g. Flatron CRTs), or the viewable area may follow the curvature of the edges of the CRT (with or without black edges or curved edges).
Small CRTs below 3 inches were made for handheld TVs such as the MTV-1 and viewfinders in camcorders. In these, there may be no black edges; the screens, however, are truly flat.
Most of the weight of a CRT comes from the thick glass screen, which comprises 65% of the total weight of a CRT and limits its practical size. The funnel and neck glass comprise the remaining 30% and 5% respectively. The glass in the funnel can vary in thickness, to join the thin neck with the thick screen. Chemically or thermally tempered glass may be used to reduce the weight of the CRT glass.
Anode
The outer conductive coating is connected to ground while the inner conductive coating is connected using the anode button/cap through a series of capacitors and diodes (a Cockcroft–Walton generator) to the high voltage flyback transformer; the inner coating is the anode of the CRT, which, together with an electrode in the electron gun, is also known as the final anode. The inner coating is connected to the electrode using springs. The electrode forms part of a bipotential lens. The capacitors and diodes serve as a voltage multiplier for the current delivered by the flyback.
For the inner funnel coating, monochrome CRTs use aluminum while color CRTs use aquadag; some CRTs may use iron oxide on the inside. On the outside, most CRTs (but not all) use aquadag. Aquadag is an electrically conductive graphite-based paint. In color CRTs, the aquadag is sprayed onto the interior of the funnel whereas historically aquadag was painted into the interior of monochrome CRTs.
The anode is used to accelerate the electrons towards the screen and also collects the secondary electrons that are emitted by the phosphor particles in the vacuum of the CRT.
The anode cap connection in modern CRTs must be able to handle up to 55–60kV depending on the size and brightness of the CRT. Higher voltages allow for larger CRTs, higher image brightness, or a tradeoff between the two. It consists of a metal clip that expands on the inside of an anode button that is embedded on the funnel glass of the CRT. The connection is insulated by a silicone suction cup, possibly also using silicone grease to prevent corona discharge.
The anode button must be specially shaped to establish a hermetic seal between the button and funnel. X-rays may leak through the anode button, although that may not be the case in newer CRTs starting from the late 1970s to early 1980s, thanks to a new button and clip design. The button may consist of a set of 3 nested cups, with the outermost cup being made of a nickel–chromium–iron alloy containing 40–49% of nickel and 3–6% of chromium to make the button easy to fuse to the funnel glass, with a first inner cup made of thick inexpensive iron to shield against x-rays, and with the second innermost cup also being made of iron or any other electrically conductive metal to connect to the clip. The cups must be heat resistant enough and have thermal expansion coefficients similar to that of the funnel glass to withstand being fused to the funnel glass. The inner side of the button is connected to the inner conductive coating of the CRT. The anode button may be attached to the funnel while it is being pressed into shape in a mold. Alternatively, the x-ray shielding may instead be built into the clip.
The flyback transformer is also known as an IHVT (Integrated High Voltage Transformer) if it includes a voltage multiplier. The flyback uses a ceramic or powdered iron core to enable efficient operation at high frequencies. The flyback contains one primary and many secondary windings that provide several different voltages. The main secondary winding supplies the voltage multiplier with voltage pulses to ultimately supply the CRT with the high anode voltage it uses, while the remaining windings supply the CRT's filament voltage, keying pulses, focus voltage and voltages derived from the scan raster. When the transformer is turned off, the flyback's magnetic field quickly collapses which induces high voltage in its windings. The speed at which the magnetic field collapses determines the voltage that is induced, so the voltage increases alongside its speed. A capacitor (Retrace Timing Capacitor) or series of capacitors (to provide redundancy) is used to slow the collapse of the magnetic field.
The design of the high voltage power supply in a product using a CRT has an influence on the amount of x-rays emitted by the CRT. The amount of emitted x-rays increases with both higher voltages and currents. If the product such as a TV set uses an unregulated high voltage power supply, meaning that anode and focus voltage go down with increasing electron current when displaying a bright image, the amount of emitted x-rays is at its highest when the CRT is displaying moderately bright images, since when displaying dark images the low beam current offsets the higher anode voltage, while when displaying bright images the lower anode voltage offsets the higher beam current. The high voltage regulator and rectifier vacuum tubes in some old CRT TV sets may also emit x-rays.
Electron gun
The electron gun emits the electrons that ultimately hit the phosphors on the screen of the CRT. The electron gun contains a heater, which heats a cathode, which generates electrons that, using grids, are focused and ultimately accelerated into the screen of the CRT. The acceleration occurs in conjunction with the inner aluminum or aquadag coating of the CRT. The electron gun is positioned so that it aims at the center of the screen. It is inside the neck of the CRT, and it is held together and mounted to the neck using glass beads or glass support rods, which are the glass strips on the electron gun. The electron gun is made separately and then placed inside the neck through a process called "winding", or sealing. The electron gun has a glass wafer that is fused to the neck of the CRT. The connections to the electron gun penetrate the glass wafer. Once the electron gun is inside the neck, its metal parts (grids) are arced between each other using high voltage to smooth any rough edges in a process called spot knocking, to prevent the rough edges in the grids from generating secondary electrons.
Construction and method of operation
The electron gun has an indirectly heated hot cathode that is heated by a tungsten filament heating element; the heater may draw 0.5–2 A of current depending on the CRT. The voltage applied to the heater can affect the life of the CRT. Heating the cathode energizes the electrons in it, aiding electron emission, while at the same time current is supplied to the cathode; typically anywhere from 140 mA at 1.5 V to 600 mA at 6.3 V. The cathode creates an electron cloud (emits electrons) whose electrons are extracted, accelerated and focused into an electron beam. Color CRTs have three cathodes: one for red, green and blue. The heater sits inside the cathode but does not touch it; the cathode has its own separate electrical connection. The cathode is a material coated onto a piece of nickel which provides the electrical connection and structural support; the heater sits inside this piece without touching it.
There are several short circuits that can occur in a CRT electron gun. One is a heater-to-cathode short, which causes the cathode to permanently emit electrons; this may produce an image with a bright red, green or blue tint with retrace lines, depending on the cathode(s) affected. Alternatively, the cathode may short to the control grid, possibly causing similar effects, or the control grid and screen grid (G2) can short, causing a very dark image or no image at all. The cathode may be surrounded by a shield to prevent sputtering.
The cathode is a layer of barium oxide which is coated on a piece of nickel for electrical and mechanical support. The barium oxide must be activated by heating to enable it to release electrons. Activation is necessary because barium oxide is not stable in air, so it is applied to the cathode as barium carbonate, which cannot emit electrons. Activation heats the barium carbonate to decompose it into barium oxide and carbon dioxide while forming a thin layer of metallic barium on the cathode. Activation is done while the vacuum is being formed. After activation, the oxide can become damaged by several common gases such as water vapor, carbon dioxide, and oxygen. Alternatively, barium strontium calcium carbonate may be used instead of barium carbonate, yielding barium, strontium and calcium oxides after activation. During operation, the barium oxide is heated to 800–1000 °C, at which point it starts shedding electrons.
Since it is a hot cathode, it is prone to cathode poisoning, which is the formation of a positive ion layer that prevents the cathode from emitting electrons, reducing image brightness significantly or completely and causing focus and intensity to be affected by the frequency of the video signal, preventing detailed images from being displayed by the CRT. The positive ions come from leftover air molecules inside the CRT or from the cathode itself, which react over time with the surface of the hot cathode. Reducing metals such as manganese, zirconium, magnesium, aluminum or titanium may be added to the piece of nickel to lengthen the life of the cathode, as during activation the reducing metals diffuse into the barium oxide, improving its lifespan, especially at high electron beam currents. In color CRTs, since there are three cathodes, one each for red, green and blue, one or more cathodes may be poisoned independently of the others, causing the partial or complete loss of one or more colors and tinting the image. CRTs can wear or burn out due to cathode poisoning, which is accelerated by increased cathode current (overdriving). The ion layer may also act as a capacitor in series with the cathode, inducing thermal lag. The cathode may instead be made of scandium oxide or incorporate it as a dopant, to delay cathode poisoning, extending the life of the cathode by up to 15%.
The amount of electrons generated by the cathodes is related to their surface area. A cathode with more surface area creates more electrons, in a larger electron cloud, which makes focusing the electron cloud into an electron beam more difficult. Normally, only a part of the cathode emits electrons unless the CRT displays images with parts that are at full image brightness; only the parts at full brightness cause all of the cathode to emit electrons. The area of the cathode that emits electrons grows from the center outwards as brightness increases, so cathode wear may be uneven. When only the center of the cathode is worn, the CRT may light brightly those parts of images that have full image brightness but not show darker parts of images at all; in such a case, the CRT displays a poor gamma characteristic.
A negative voltage is applied to the first (control) grid (G1) to converge the electrons from the hot cathode, creating an electron beam. G1 in practice is a Wehnelt cylinder. The brightness of the screen is not controlled by varying the anode voltage nor the electron beam current (they are never varied) despite them having an influence on image brightness; rather, image brightness is controlled by varying the difference in voltage between the cathode and the G1 control grid. The second (screen) grid of the gun (G2) then accelerates the electrons towards the screen using several hundred DC volts. Then a third grid (G3) electrostatically focuses the electron beam before it is deflected and later accelerated by the anode voltage onto the screen. Electrostatic focusing of the electron beam may be accomplished using an einzel lens energized at up to 600 volts. Before electrostatic focusing, focusing the electron beam required a large, heavy and complex mechanical focusing system placed outside the electron gun.
However, electrostatic focusing cannot be accomplished near the final anode of the CRT due to its high voltage in the dozens of Kilovolts, so a high voltage (≈600–8000 V) electrode, together with an electrode at the final anode voltage of the CRT, may be used for focusing instead. Such an arrangement is called a bipotential lens, which also offers higher performance than an einzel lens, or, focusing may be accomplished using a magnetic focusing coil together with a high anode voltage of dozens of kilovolts. However, magnetic focusing is expensive to implement, so it is rarely used in practice. Some CRTs may use two grids and lenses to focus the electron beam. The focus voltage is generated in the flyback using a subset of the flyback's high voltage winding in conjunction with a resistive voltage divider. The focus electrode is connected alongside the other connections that are in the neck of the CRT.
There is a voltage called the cutoff voltage, which creates black on the screen by causing the image produced by the electron beam to disappear; this voltage is applied to G1. In a color CRT with three guns, the guns have different cutoff voltages. Many CRTs share grids G1 and G2 across all three guns, increasing image brightness and simplifying adjustment since on such CRTs there is a single cutoff voltage for all three guns (since G1 is shared across all guns), but placing additional stress on the video amplifier used to feed video into the electron gun's cathodes, since the cutoff voltage becomes higher. Monochrome CRTs do not suffer from this problem. In monochrome CRTs video is fed to the gun by varying the voltage on the first control grid.
During retracing of the electron beam, the preamplifier that feeds the video amplifier is disabled and the video amplifier is biased to a voltage higher than the cutoff voltage to prevent retrace lines from showing, or G1 can have a large negative voltage applied to it to prevent electrons from getting out of the cathode. This is known as blanking. (see Vertical blanking interval and Horizontal blanking interval.) Incorrect biasing can lead to visible retrace lines on one or more colors, creating retrace lines that are tinted or white (for example, tinted red if the red color is affected, tinted magenta if the red and blue colors are affected, and white if all colors are affected). Alternatively, the amplifier may be driven by a video processor that also introduces an OSD (On Screen Display) into the video stream that is fed into the amplifier, using a fast blanking signal. TV sets and computer monitors that incorporate CRTs need a DC restoration circuit to provide a video signal to the CRT with a DC component, restoring the original brightness of different parts of the image.
The electron beam may be affected by the Earth's magnetic field, causing it to normally enter the focusing lens off-center; this can be corrected using astigmation controls. Astigmation controls are both magnetic and electronic (dynamic); magnetic does most of the work while electronic is used for fine adjustments. One of the ends of the electron gun has a glass disk, the edges of which are fused with the edge of the neck of the CRT, possibly using frit; the metal leads that connect the electron gun to the outside pass through the disk.
Some electron guns have a quadrupole lens with dynamic focus to alter the shape and adjust the focus of the electron beam, varying the focus voltage depending on the position of the electron beam to maintain image sharpness across the entire screen, especially at the corners. They may also have a bleeder resistor to derive voltages for the grids from the final anode voltage.
After the CRTs were manufactured, they were aged to allow cathode emission to stabilize.
The electron guns in color CRTs are driven by a video amplifier which takes a signal per color channel and amplifies it to 40–170 V per channel, to be fed into the electron gun's cathodes; each electron gun has its own channel (one per color) and all channels may be driven by the same amplifier, which internally has three separate channels. The amplifier's capabilities limit the resolution, refresh rate and contrast ratio of the CRT, as the amplifier needs to provide high bandwidth and voltage variations at the same time; higher resolutions and refresh rates need higher bandwidths (speed at which voltage can be varied and thus switching between black and white) and higher contrast ratios need higher voltage variations or amplitude for lower black and higher white levels. 30 MHz of bandwidth can usually provide 720p or 1080i resolution, while 20 MHz usually provides around 600 (horizontal, from top to bottom) lines of resolution, for example. The difference in voltage between the cathode and the control grid is what modulates the electron beam, modulating its current and thus creating shades of colors which create the image line by line and this can also affect the brightness of the image. The phosphors used in color CRTs produce different amounts of light for a given amount of energy, so to produce white on a color CRT, all three guns must output differing amounts of energy. The gun that outputs the most energy is the red gun since the red phosphor emits the least amount of light.
Gamma
CRTs have a pronounced triode characteristic, which results in significant gamma (a nonlinear relationship in an electron gun between applied video voltage and beam intensity).
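The gamma characteristic is commonly approximated by a power law relating light output to drive level. The sketch below assumes an exponent of about 2.5, a typical textbook figure rather than a value given in this article.

```python
# Minimal sketch of the CRT transfer (gamma) curve.
# The power-law model and the exponent of 2.5 are common approximations,
# not values stated in this article.

def crt_light_output(video_level, gamma=2.5):
    """Relative light output for a normalized video drive level in [0, 1]."""
    return video_level ** gamma

for v in (0.25, 0.5, 0.75, 1.0):
    print(f"drive {v:.2f} -> relative luminance {crt_light_output(v):.3f}")
```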
Deflection
There are two types of deflection: magnetic and electrostatic. Magnetic deflection is usually used in TVs and monitors, as it allows for higher deflection angles (and hence shallower CRTs) and higher deflection power (which allows for higher electron beam current and hence brighter images) while avoiding the need for deflection voltages of up to 2 kV. Oscilloscopes, on the other hand, often use electrostatic deflection, since the raw waveforms captured by the oscilloscope can be applied directly (after amplification) to the vertical electrostatic deflection plates inside the CRT.
Magnetic deflection
CRTs that use magnetic deflection may use a yoke that has two pairs of deflection coils, one pair for vertical and another for horizontal deflection. The yoke can be bonded (integral) or removable. Bonded yokes used glue or a plastic to attach the yoke to the area between the neck and the funnel of the CRT, while removable yokes are clamped. The yoke generates heat whose removal is essential, since the conductivity of glass increases with temperature and the glass must remain insulating for the CRT to stay usable as a capacitor. The temperature of the glass below the yoke is thus checked during the design of a new yoke. The yoke contains the deflection and convergence coils with a ferrite core to reduce loss of magnetic force, as well as the magnetized rings used to align or adjust the electron beams in color CRTs (the color purity and convergence rings, for example) and monochrome CRTs. The yoke may be connected using a connector; the order in which the deflection coils of the yoke are connected determines the orientation of the image displayed by the CRT. The deflection coils may be held in place using polyurethane glue.
The deflection coils are driven by sawtooth signals that may be delivered through VGA as horizontal and vertical sync signals. A CRT needs two deflection circuits: a horizontal and a vertical circuit, which are similar except that the horizontal circuit runs at a much higher frequency (the horizontal scan rate) of 15–240 kHz, depending on the refresh rate of the CRT and the number of horizontal lines to be drawn (the vertical resolution of the CRT). The higher frequency makes it more susceptible to interference, so an automatic frequency control (AFC) circuit may be used to lock the phase of the horizontal deflection signal to that of a sync signal, to prevent the image from becoming distorted diagonally. The vertical frequency varies according to the refresh rate of the CRT, so a CRT with a 60 Hz refresh rate has a vertical deflection circuit running at 60 Hz. The horizontal and vertical deflection signals may be generated using two circuits that work differently; the horizontal deflection signal may be generated using a voltage-controlled oscillator (VCO) while the vertical signal may be generated using a triggered relaxation oscillator. In many TVs, the frequencies at which the deflection coils run are in part determined by the inductance of the coils. CRTs had differing deflection angles; the higher the deflection angle, the shallower the CRT for a given screen size, but at the cost of more deflection power and lower optical performance.
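The horizontal scan rate follows roughly from the refresh rate multiplied by the total number of lines per frame. The sketch below illustrates this; the ~5% vertical blanking allowance is an assumed, illustrative figure.

```python
# Horizontal scan rate follows from refresh rate times total line count.
# The ~5% vertical blanking overhead below is an assumed, illustrative figure.

def horizontal_scan_rate_khz(visible_lines, refresh_hz, v_blank_fraction=0.05):
    total_lines = visible_lines / (1.0 - v_blank_fraction)
    return total_lines * refresh_hz / 1000.0

print(f"{horizontal_scan_rate_khz(480, 60):.1f} kHz")   # 480 visible lines, progressive, 60 Hz
print(f"{horizontal_scan_rate_khz(1024, 85):.1f} kHz")  # a high-resolution monitor mode
```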
Higher deflection power means more current is sent to the deflection coils to bend the electron beam at a higher angle, which in turn may generate more heat or require electronics that can handle the increased power. Heat is generated due to resistive and core losses. The deflection power is measured in mA per inch. The vertical deflection coils may require ~24 volts while the horizontal deflection coils require ~120 volts to operate.
The deflection coils are driven by deflection amplifiers. The horizontal deflection coils may also be driven in part by the horizontal output stage of a TV set. The stage contains a capacitor that is in series with the horizontal deflection coils that performs several functions, among them are: shaping the sawtooth deflection signal to match the curvature of the CRT and centering the image by preventing a DC bias from developing on the coil. At the beginning of retrace, the magnetic field of the coil collapses, causing the electron beam to return to the center of the screen, while at the same time the coil returns energy into capacitors, the energy of which is then used to force the electron beam to go to the left of the screen.
Due to the high frequency at which the horizontal deflection coils operate, the energy in the deflection coils must be recycled to reduce heat dissipation. Recycling is done by transferring the energy in the deflection coils' magnetic field to a set of capacitors. The voltage on the horizontal deflection coils is negative when the electron beam is on the left side of the screen and positive when the electron beam is on the right side of the screen. The energy required for deflection is dependent on the energy of the electrons. Higher energy (voltage and/or current) electron beams need more energy to be deflected, and are used to achieve higher image brightness.
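The energy exchange between the yoke inductance and the retrace capacitors can be sketched with the standard formulas for energy stored in an inductor (E = ½LI²) and a capacitor (E = ½CV²). The component values below are hypothetical and only illustrate the orders of magnitude involved.

```python
# Sketch of the energy exchange between the horizontal yoke inductance and the
# retrace capacitor. Component values are hypothetical, for illustration only.
import math

L_yoke = 1e-3      # 1 mH horizontal deflection coil (assumed)
I_peak = 5.0       # 5 A peak scan current (assumed)
C_retrace = 10e-9  # 10 nF retrace capacitor (assumed)

energy = 0.5 * L_yoke * I_peak**2              # joules stored in the coil's field
V_peak = math.sqrt(2.0 * energy / C_retrace)   # peak voltage if fully transferred
retrace_time = math.pi * math.sqrt(L_yoke * C_retrace)  # half-cycle of the LC ring

print(f"stored energy: {energy*1e3:.1f} mJ")
print(f"peak voltage : {V_peak:.0f} V across the retrace capacitor")
print(f"retrace time : {retrace_time*1e6:.1f} us")
```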
Electrostatic deflection
Electrostatic deflection is mostly used in oscilloscopes. Deflection is carried out by applying a voltage across two pairs of plates, one for horizontal and the other for vertical deflection. The electron beam is steered by varying the voltage difference across the plates in a pair; for example, applying a voltage to the upper plate of the vertical deflection pair, while keeping the voltage on the bottom plate at 0 volts, will deflect the electron beam towards the upper part of the screen; increasing the voltage on the upper plate while keeping the bottom plate at 0 will deflect the beam to a higher point on the screen (that is, at a higher deflection angle). The same applies to the horizontal deflection plates. Making the plates in a pair longer or bringing them closer together also increases the deflection angle.
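A first-order model of parallel-plate deflection makes the dependence on plate length, spacing, deflection voltage and accelerating voltage explicit. The geometry and voltages below are hypothetical; the small-angle formula is a standard approximation, not one quoted in this article.

```python
# First-order parallel-plate deflection model for an electrostatic CRT.
# Geometry and voltages are hypothetical illustrative values.

def screen_deflection_mm(v_deflect, v_accel, plate_len_mm, plate_gap_mm,
                         plate_to_screen_mm):
    """Approximate beam displacement at the screen (small-angle approximation)."""
    return (plate_len_mm * plate_to_screen_mm * v_deflect) / (2.0 * plate_gap_mm * v_accel)

# Example: 2 kV accelerated beam, 20 mm plates spaced 5 mm apart, screen 200 mm away.
print(f"{screen_deflection_mm(50, 2000, 20, 5, 200):.1f} mm at 50 V drive")
```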
Burn-in
Burn-in is when images are physically "burned" into the screen of the CRT; it occurs when a fixed image or logo is left on the screen for too long, degrading the phosphors through prolonged electron bombardment and causing the image to appear as a "ghost" or, in severe cases, to remain visible even when the CRT is off. To counter this, screensavers were used in computers to minimize burn-in. Burn-in is not exclusive to CRTs; it also happens to plasma displays and OLED displays.
Evacuation
The CRT is evacuated or exhausted to its partial vacuum in a ~375–475 °C oven in a process called baking or bake-out. The evacuation process also outgasses any materials inside the CRT, while decomposing others such as the polyvinyl alcohol used to apply the phosphors. The heating and cooling are done gradually to avoid inducing stress, stiffening and possibly cracking the glass; the oven heats the gases inside the CRT, increasing the speed of the gas molecules, which increases the chances of them being drawn out by the vacuum pump. The temperature of the CRT is kept below that of the oven, and the oven starts to cool just after the CRT reaches 400 °C; alternatively, the CRT could be kept at a temperature higher than 400 °C for up to 15–55 minutes. The CRT was heated during or after evacuation, and the heat may have been used simultaneously to melt the frit in the CRT, joining the screen and funnel. The pump used is a turbomolecular pump or a diffusion pump; formerly, mercury vacuum pumps were also used. After baking, the CRT is disconnected ("sealed" or "tipped off") from the vacuum pump. The getter is then fired using an RF (induction) coil. The getter is usually in the funnel or in the neck of the CRT. The getter material, which is often barium-based, catches any remaining gas particles as it evaporates due to heating induced by the RF coil (which may be combined with exothermic heating within the material); the vapor fills the CRT, trapping any gas molecules that it encounters, and condenses on the inside of the CRT, forming a layer that contains trapped gas molecules. Hydrogen may be present in the material to help distribute the barium vapor. The material is heated to temperatures above 1000 °C, causing it to evaporate. Partial loss of vacuum in a CRT can result in a hazy image, blue glowing in the neck of the CRT, flashovers, loss of cathode emission or focusing problems.
Rebuilding
CRTs used to be rebuilt, that is, repaired or refurbished. The rebuilding process included the disassembly of the CRT, the disassembly and repair or replacement of the electron gun(s), and the removal and redeposition of phosphors and aquadag, among other steps. Rebuilding was popular until the 1960s because CRTs were expensive and wore out quickly, making repair worthwhile. The last CRT rebuilder in the US closed in 2010, and the last in Europe, RACS, located in France, closed in 2013.
Reactivation
Also known as rejuvenation, the goal is to temporarily restore the brightness of a worn CRT. This is often done by carefully increasing the voltage on the cathode heater and the current and voltage on the control grids of the electron gun manually. Some rejuvenators can also fix heater-to-cathode shorts by running a capacitive discharge through the short.
Phosphors
Phosphors in CRTs emit secondary electrons because they sit inside the vacuum of the CRT and are struck by the electron beam. The secondary electrons are collected by the anode of the CRT; they need to be collected to prevent charge from building up on the screen, which would repel the electron beam and reduce image brightness.
The phosphors used in CRTs often contain rare-earth metals, replacing earlier, dimmer phosphors. Early red and green phosphors contained cadmium, and some black and white CRT phosphors also contained beryllium in the form of zinc beryllium silicate, although white phosphors containing cadmium, zinc and magnesium with silver, copper or manganese as dopants were also used. The rare-earth phosphors used in CRTs are more efficient (produce more light) than earlier phosphors. The phosphors adhere to the screen because of van der Waals and electrostatic forces. Phosphors composed of smaller particles adhere more strongly to the screen. The phosphors, together with the carbon used to prevent light bleeding (in color CRTs), can be easily removed by scratching.
Several dozen types of phosphors were available for CRTs. Phosphors were classified according to color, persistence, luminance rise and fall curves, color depending on anode voltage (for phosphors used in penetration CRTs), intended use, chemical composition, safety, sensitivity to burn-in, and secondary emission properties. Examples of rare-earth phosphors are yttrium oxide for red and yttrium silicide for blue in beam-index tubes, while an example of an earlier phosphor is copper cadmium sulfide for red.
SMPTE-C phosphors have properties defined by the SMPTE-C standard, which defines a color space of the same name. The standard prioritizes accurate color reproduction, which was made difficult by the different phosphors and color spaces used in the NTSC and PAL color systems. PAL TV sets have subjectively better color reproduction due to the use of saturated green phosphors, which have relatively long decay times that are tolerated in PAL since there is more time in PAL for phosphors to decay, due to its lower framerate. SMPTE-C phosphors were used in professional video monitors.
The phosphor coating on monochrome and color CRTs may have an aluminum coating on its rear side, used to reflect light forward, provide protection against ions to prevent ion burn by negative ions on the phosphor, manage heat generated by electrons colliding with the phosphor, prevent static build-up that could repel electrons from the screen, and form part of the anode, collecting the secondary electrons generated by the phosphors in the screen after being hit by the electron beam and providing the electrons with a return path. The electron beam passes through the aluminum coating before hitting the phosphors on the screen; the aluminum attenuates the electron beam voltage by about 1 kV. A film or lacquer may be applied to the phosphors to reduce the roughness of the surface they form, allowing the aluminum coating to have a uniform surface and preventing it from touching the glass of the screen. This is known as filming. The lacquer contains solvents that are later evaporated; the lacquer may be chemically roughened so that the aluminum coating is formed with holes that allow the solvents to escape.
Phosphor persistence
Various phosphors are available depending upon the needs of the measurement or display application. The brightness, color, and persistence of the illumination depends upon the type of phosphor used on the CRT screen. Phosphors are available with persistences ranging from less than one microsecond to several seconds. For visual observation of brief transient events, a long persistence phosphor may be desirable. For events which are fast and repetitive, or high frequency, a short-persistence phosphor is generally preferable. The phosphor persistence must be low enough to avoid smearing or ghosting artifacts at high refresh rates.
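Whether a phosphor's persistence is short enough for a given refresh rate can be checked against the frame time. The sketch below uses the simplified criterion that persistence longer than one frame causes smearing; the persistence values are hypothetical.

```python
# Quick check of phosphor persistence against frame time.
# The "smearing" criterion (persistence longer than one frame) is a
# simplified assumption for illustration; persistence values are hypothetical.

def smears(persistence_ms, refresh_hz):
    frame_time_ms = 1000.0 / refresh_hz
    return persistence_ms > frame_time_ms

for p in (0.05, 1.0, 20.0, 500.0):   # persistence in milliseconds
    print(f"{p:7.2f} ms persistence at 85 Hz -> smears: {smears(p, 85)}")
```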
Limitations and workarounds
Blooming
Variations in anode voltage can lead to variations in brightness in parts or all of the image, in addition to blooming, shrinkage or the image getting zoomed in or out. Lower voltages lead to blooming and zooming in, while higher voltages do the opposite. Some blooming is unavoidable, which can be seen as bright areas of an image that expand, distorting or pushing aside surrounding darker areas of the same image. Blooming occurs because bright areas have a higher electron beam current from the electron gun, making the beam wider and harder to focus. Poor voltage regulation causes focus and anode voltage to go down with increasing electron beam current.
Doming
Doming is a phenomenon found on some CRT TVs in which parts of the shadow mask become heated. In TVs that exhibit this behavior, it tends to occur in high-contrast scenes in which a largely dark scene contains one or more localized bright spots. As the electron beam hits the shadow mask in these areas, it heats the mask unevenly. The shadow mask warps due to the heat differences, which causes the electron beams to land on the wrong colored phosphors, so incorrect colors are displayed in the affected area. Thermal expansion causes the shadow mask to expand by around 100 microns.
During normal operation, the shadow mask is heated to around 80–90 °C. Bright areas of images heat the shadow mask more than dark areas, leading to uneven heating of the shadow mask and warping (blooming) due to thermal expansion caused by heating by increased electron beam current. The shadow mask is usually made of steel but it can be made of Invar (a low-thermal expansion Nickel-Iron alloy) as it withstands two to three times more current than conventional masks without noticeable warping, while making higher resolution CRTs easier to achieve. Coatings that dissipate heat may be applied on the shadow mask to limit blooming in a process called blackening.
Bimetal springs may be used in CRTs used in TVs to compensate for warping that occurs as the electron beam heats the shadow mask, causing thermal expansion. The shadow mask is installed to the screen using metal pieces or a rail or frame that is fused to the funnel or the screen glass respectively, holding the shadow mask in tension to minimize warping (if the mask is flat, used in flat-screen CRT computer monitors) and allowing for higher image brightness and contrast.
Aperture grille screens are brighter since they allow more electrons through, but they require support wires. They are also more resistant to warping. Color CRTs need higher anode voltages than monochrome CRTs to achieve the same brightness, since the shadow mask blocks most of the electron beam. Slot masks and especially aperture grilles do not block as many electrons, resulting in a brighter image for a given anode voltage, but aperture grille CRTs are heavier. Shadow masks block 80–85% of the electron beam, while aperture grilles allow more electrons to pass through.
High voltage
Image brightness is related to the anode voltage and to the CRT's size, so higher voltages are needed for both larger screens and higher image brightness. Image brightness is also controlled by the current of the electron beam. Higher anode voltages and electron beam currents also mean higher amounts of x-rays and heat generation, since the electrons have a higher speed and energy. Leaded glass and special barium-strontium glass are used to block most x-ray emissions.
Size
A practical limit on the size of a CRT is the weight of the thick glass needed to safely sustain its vacuum, since a CRT's exterior is exposed to the full atmospheric pressure, which adds up to a very large total force on, for instance, a 27-inch (400 in2) screen. The large 43-inch Sony PVM-4300, for example, was far heavier than typical 32-inch and 19-inch CRTs, which in turn are much heavier than flat panel TVs of comparable size.
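The force exerted by the atmosphere on the faceplate follows from pressure times area. The sketch below uses the 400 in2 figure from the text together with standard sea-level atmospheric pressure (about 14.7 psi), a general physical constant rather than a value quoted here.

```python
# Force of the atmosphere on a 27-inch (~400 square inch) CRT faceplate.
# Standard sea-level pressure (14.7 psi) is a general physical constant,
# not a figure quoted in this article.

atm_psi = 14.7    # pounds-force per square inch
area_in2 = 400    # faceplate area of a 27-inch screen (from the text)

force_lbf = atm_psi * area_in2
force_kn = force_lbf * 4.448222 / 1000.0

print(f"~{force_lbf:,.0f} lbf (~{force_kn:.0f} kN) pressing on the faceplate")
```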
Size is also limited by anode voltage, as it would require a higher dielectric strength to prevent arcing and the electrical losses and ozone generation it causes, without sacrificing image brightness.
Shadow masks also become more difficult to make with increasing resolution and size.
Limits imposed by deflection
At high deflection angles, resolutions and refresh rates (since higher resolutions and refresh rates require significantly higher frequencies to be applied to the horizontal deflection coils), the deflection yoke starts to produce large amounts of heat, due to the need to move the electron beam at a higher angle, which in turn requires exponentially larger amounts of power. As an example, to increase the deflection angle from 90 to 120°, power consumption of the yoke must also go up from 40 watts to 80 watts, and to increase it further from 120 to 150°, deflection power must again go up from 80 to 160 watts. This normally makes CRTs that go beyond certain deflection angles, resolutions and refresh rates impractical, since the coils would generate too much heat due to resistance caused by the skin effect, surface and eddy current losses, and/or possibly cause the glass underneath the coil to become conductive (as the electrical conductivity of glass increases with increasing temperature). Some deflection yokes are designed to dissipate the heat that comes from their operation. Higher deflection angles in color CRTs directly affect convergence at the corners of the screen, which requires additional compensation circuitry to handle electron beam power and shape, leading to higher costs and power consumption. Higher deflection angles allow a CRT of a given size to be slimmer; however, they also impose more stress on the CRT envelope, especially on the panel, the seal between the panel and funnel, and on the funnel itself. The funnel needs to be long enough to minimize stress, as a longer funnel can be better shaped to have lower stress.
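The figures quoted above (40 W at 90°, doubling for each further 30° of deflection) suggest a roughly exponential growth in yoke power. The sketch below simply extrapolates that pattern and is not a general design formula.

```python
# The figures in the text (40 W at 90 deg, doubling for each further 30 deg of
# deflection angle) suggest a roughly exponential cost; this sketch just
# extrapolates that pattern and is not a general design formula.

def yoke_power_w(deflection_deg, base_deg=90, base_w=40):
    return base_w * 2 ** ((deflection_deg - base_deg) / 30.0)

for angle in (90, 110, 120, 150):
    print(f"{angle:3d} deg -> ~{yoke_power_w(angle):.0f} W")
```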
Comparison with other technologies
LCD advantages over CRT: lower bulk, power consumption and heat generation; higher refresh rates (up to 360 Hz).
CRT advantages over LCD: better color reproduction, no motion blur, multisyncing available in many monitors, no input lag.
OLED advantages over CRT: lower bulk, similar color reproduction, higher contrast ratios, similar refresh rates (over 60 Hz, up to 120 Hz) except for computer monitors.
On CRTs, refresh rate depends on resolution, both of which are ultimately limited by the maximum horizontal scanning frequency of the CRT. Motion blur also depends on the decay time of the phosphors. Phosphors that decay too slowly for a given refresh rate may cause smearing or motion blur on the image. In practice, CRTs are limited to a refresh rate of 160 Hz. LCDs that can compete with OLED (Dual Layer, and mini-LED LCDs) are not available in high refresh rates, although quantum dot LCDs (QLEDs) are available in high refresh rates (up to 144 Hz) and are competitive in color reproduction with OLEDs.
CRT monitors can still outperform LCD and OLED monitors in input lag, as there is no signal processing between the CRT and the display connector of the monitor, since CRT monitors often use VGA which provides an analog signal that can be fed to a CRT directly. Video cards designed for use with CRTs may have a RAMDAC to generate the analog signals needed by the CRT. Also, CRT monitors are often capable of displaying sharp images at several resolutions, an ability known as multisyncing. Due to these reasons, CRTs are often preferred for playing video games made in the early 2000s and prior in spite of their bulk, weight and heat generation, with some pieces of technology requiring a CRT to function due to not being built with the functionality of modern displays in mind.
CRTs tend to be more durable than their flat panel counterparts, though specialised LCDs that have similar durability also exist.
Types
CRTs were produced in two major categories, picture tubes and display tubes. Picture tubes were used in TVs while display tubes were used in computer monitors. Display tubes were of higher resolution and when used in computer monitors sometimes had adjustable overscan, or sometimes underscan.
Picture tube CRTs have overscan, meaning the actual edges of the image are not shown; this is deliberate to allow for adjustment variations between CRT TVs, preventing the ragged edges (due to blooming) of the image from being shown on screen. The shadow mask may have grooves that reflect away the electrons that do not hit the screen due to overscan. Color picture tubes used in TVs were also known as CPTs. CRTs are also sometimes called Braun tubes.
Monochrome CRTs
If the CRT is a black and white (B&W or monochrome) CRT, there is a single electron gun in the neck and the funnel is coated on the inside with aluminum that has been applied by evaporation; the aluminum is evaporated in a vacuum and allowed to condense on the inside of the CRT. Aluminum eliminates the need for ion traps, necessary to prevent ion burn on the phosphor, while also reflecting light generated by the phosphor towards the screen, managing heat and absorbing electrons providing a return path for them; previously funnels were coated on the inside with aquadag, used because it can be applied like paint; the phosphors were left uncoated. Aluminum started being applied to CRTs in the 1950s, coating the inside of the CRT including the phosphors, which also increased image brightness since the aluminum reflected light (that would otherwise be lost inside the CRT) towards the outside of the CRT. In aluminized monochrome CRTs, Aquadag is used on the outside. There is a single aluminum coating covering the funnel and the screen.
The screen, funnel and neck are fused together into a single envelope, possibly using lead enamel seals. A hole is made in the funnel onto which the anode cap is installed, and the phosphor, aquadag and aluminum are applied afterwards. Previously, monochrome CRTs used ion traps that required magnets; the magnet was used to deflect the electrons away from the harder-to-deflect ions, letting the electrons through while letting the ions collide with a sheet of metal inside the electron gun. Ion burn results in premature wear of the phosphor. Since ions are harder to deflect than electrons, ion burn leaves a black dot in the center of the screen.
The interior aquadag or aluminum coating was the anode and served to accelerate the electrons towards the screen, collect them after hitting the screen while serving as a capacitor together with the outer aquadag coating. The screen has a single uniform phosphor coating and no shadow mask, technically having no resolution limit.
Monochrome CRTs may use ring magnets to adjust the centering of the electron beam and magnets around the deflection yoke to adjust the geometry of the image.
When a monochrome CRT is shut off, the image collapses into a small white dot in the center of the screen, produced by the electron beam as the set powers down; the dot sometimes takes a while to fade away.
Color CRTs
Color CRTs use three different phosphors which emit red, green, and blue light respectively. They are packed together in stripes (as in aperture grille designs) or clusters called "triads" (as in shadow mask CRTs).
Color CRTs have three electron guns, one for each primary color, (red, green and blue) arranged either in a straight line (in-line) or in an equilateral triangular configuration (the guns are usually constructed as a single unit). The triangular configuration is often called delta-gun, based on its relation to the shape of the Greek letter delta (Δ). The arrangement of the phosphors is the same as that of the electron guns. A grille or mask absorbs the electrons that would otherwise hit the wrong phosphor.
A shadow mask tube uses a metal plate with tiny holes, typically in a delta configuration, placed so that the electron beam only illuminates the correct phosphors on the face of the tube, blocking all other electrons. Shadow masks that use slots instead of holes are known as slot masks. The holes or slots are tapered so that the electrons that strike the inside of any hole will be reflected back, if they are not absorbed (e.g. due to local charge accumulation), instead of bouncing through the hole to strike a random (wrong) spot on the screen. Another type of color CRT (Trinitron) uses an aperture grille of tensioned vertical wires to achieve the same result. The shadow mask has a single hole for each triad and sits a fraction of an inch behind the screen.
Trinitron CRTs were different from other color CRTs in that they had a single electron gun with three cathodes, an aperture grille which lets more electrons through, increasing image brightness (since the aperture grille does not block as many electrons), and a vertically cylindrical screen, rather than a curved screen.
The three electron guns are in the neck (except for Trinitrons) and the red, green and blue phosphors on the screen may be separated by a black grid or matrix (called black stripe by Toshiba).
The funnel is coated with aquadag on both sides while the screen has a separate aluminum coating applied in a vacuum, deposited after the phosphor coating is applied, facing the electron gun. The aluminum coating protects the phosphor from ions, absorbs secondary electrons, providing them with a return path, preventing them from electrostatically charging the screen which would then repel electrons and reduce image brightness, reflects the light from the phosphors forwards and helps manage heat. It also serves as the anode of the CRT together with the inner aquadag coating. The inner coating is electrically connected to an electrode of the electron gun using springs, forming the final anode. The outer aquadag coating is connected to ground, possibly using a series of springs or a harness that makes contact with the aquadag.
Shadow mask
The shadow mask absorbs or reflects electrons that would otherwise strike the wrong phosphor dots, causing color purity issues (discoloration of images); in other words, when set up correctly, the shadow mask helps ensure color purity. When the electrons strike the shadow mask, they release their energy as heat and x-rays. If the electrons have too much energy due to an anode voltage that is too high for example, the shadow mask can warp due to the heat, which can also happen during the Lehr baking at ~435 °C of the frit seal between the faceplate and the funnel of the CRT.
Shadow masks were replaced in TVs by slot masks in the 1970s, since slot masks let more electrons through, increasing image brightness. Shadow masks may be connected electrically to the anode of the CRT. Trinitron used a single electron gun with three cathodes instead of three complete guns. CRT PC monitors usually use shadow masks, except for Sony's Trinitron, Mitsubishi's Diamondtron and NEC's Cromaclear; Trinitron and Diamondtron use aperture grilles while Cromaclear uses a slot mask. Some shadow mask CRTs have color phosphors that are smaller in diameter than the electron beams used to light them, with the intention being to cover the entire phosphor, increasing image brightness. Shadow masks may be pressed into a curved shape.
Screen manufacture
Early color CRTs did not have a black matrix, which was introduced by Zenith in 1969, and Panasonic in 1970. The black matrix eliminates light leaking from one phosphor to another since the black matrix isolates the phosphor dots from one another, so part of the electron beam touches the black matrix. This is also made necessary by warping of the shadow mask. Light bleeding may still occur due to stray electrons striking the wrong phosphor dots. At high resolutions and refresh rates, phosphors only receive a very small amount of energy, limiting image brightness.
Several methods were used to create the black matrix. One method coated the screen in photoresist such as dichromate-sensitized polyvinyl alcohol photoresist which was then dried and exposed; the unexposed areas were removed and the entire screen was coated in colloidal graphite to create a carbon film, and then hydrogen peroxide was used to remove the remaining photoresist alongside the carbon that was on top of it, creating holes that in turn created the black matrix. The photoresist had to be of the correct thickness to ensure sufficient adhesion to the screen, while the exposure step had to be controlled to avoid holes that were too small or large with ragged edges caused by light diffraction, ultimately limiting the maximum resolution of large color CRTs. The holes were then filled with phosphor using the method described above. Another method used phosphors suspended in an aromatic diazonium salt that adhered to the screen when exposed to light; the phosphors were applied, then exposed to cause them to adhere to the screen, repeating the process once for each color. Then carbon was applied to the remaining areas of the screen while exposing the entire screen to light to create the black matrix, and a fixing process using an aqueous polymer solution was applied to the screen to make the phosphors and black matrix resistant to water. Black chromium may be used instead of carbon in the black matrix. Other methods were also used.
The phosphors are applied using photolithography. The inner side of the screen is coated with phosphor particles suspended in PVA photoresist slurry, which is then dried using infrared light, exposed, and developed. The exposure is done using a "lighthouse" that uses an ultraviolet light source with a corrector lens to allow the CRT to achieve color purity. Removable shadow masks with spring-loaded clips are used as photomasks. The process is repeated with all colors. Usually the green phosphor is the first to be applied. After phosphor application, the screen is baked to eliminate any organic chemicals (such as the PVA that was used to deposit the phosphor) that may remain on the screen. Alternatively, the phosphors may be applied in a vacuum chamber by evaporating them and allowing them to condense on the screen, creating a very uniform coating. Early color CRTs had their phosphors deposited using silkscreen printing. Phosphors may have color filters over them (facing the viewer), contain pigment of the color emitted by the phosphor, or be encapsulated in color filters to improve color purity and reproduction while reducing glare. Such technology was sold by Toshiba under the Microfilter brand name. Poor exposure due to insufficient light leads to poor phosphor adhesion to the screen, which limits the maximum resolution of a CRT, as the smaller phosphor dots required for higher resolutions cannot receive as much light due to their smaller size.
After the screen is coated with phosphor and aluminum and the shadow mask installed onto it, the screen is bonded to the funnel using a glass frit that may contain 65–88% lead oxide by weight. The lead oxide is necessary for the glass frit to have a low melting temperature. Boron(III) oxide may also be present to stabilize the frit, with alumina powder as a filler to control the thermal expansion of the frit. The frit may be applied as a paste consisting of frit particles suspended in amyl acetate or in a polymer with an alkyl methacrylate monomer together with an organic solvent to dissolve the polymer and monomer. The CRT is then baked in an oven in what is called a Lehr bake, to cure the frit, sealing the funnel and screen together. The frit contains a large quantity of lead, causing color CRTs to contain more lead than their monochrome counterparts. Monochrome CRTs, on the other hand, do not require frit; the funnel can be fused directly to the screen by melting and joining the edges of the funnel and screen using gas flames. Frit is used in color CRTs to prevent deformation of the shadow mask and screen during the fusing process. The edges of the screen and of the funnel that mate with the screen are never melted. A primer may be applied on the edges of the funnel and screen before the frit paste is applied to improve adhesion. The Lehr bake consists of several successive steps that heat and then cool the CRT gradually until it reaches a temperature of 435–475 °C (other sources may state different temperatures, such as 440 °C). After the Lehr bake, the CRT is flushed with air or nitrogen to remove contaminants, the electron gun is inserted and sealed into the neck of the CRT, and a vacuum is formed in the CRT.
Convergence and purity in color CRTs
Due to limitations in the dimensional precision with which CRTs can be manufactured economically, it has not been practically possible to build color CRTs in which three electron beams could be aligned to hit phosphors of respective color in acceptable coordination, solely on the basis of the geometric configuration of the electron gun axes and gun aperture positions, shadow mask apertures, etc. The shadow mask ensures that one beam will only hit spots of certain colors of phosphors, but minute variations in physical alignment of the internal parts among individual CRTs will cause variations in the exact alignment of the beams through the shadow mask, allowing some electrons from, for example, the red beam to hit, say, blue phosphors, unless some individual compensation is made for the variance among individual tubes.
Color convergence and color purity are two aspects of this single problem. Firstly, for correct color rendering it is necessary that regardless of where the beams are deflected on the screen, all three hit the same spot (and nominally pass through the same hole or slot) on the shadow mask. This is called convergence. More specifically, the convergence at the center of the screen (with no deflection field applied by the yoke) is called static convergence, and the convergence over the rest of the screen area (especially at the edges and corners) is called dynamic convergence. The beams may converge at the center of the screen and yet stray from each other as they are deflected toward the edges; such a CRT would be said to have good static convergence but poor dynamic convergence. Secondly, each beam must only strike the phosphors of the color it is intended to strike and no others. This is called purity. Like convergence, there is static purity and dynamic purity, with the same meanings of "static" and "dynamic" as for convergence. Convergence and purity are distinct parameters; a CRT could have good purity but poor convergence, or vice versa. Poor convergence causes color "shadows" or "ghosts" along displayed edges and contours, as if the image on the screen were intaglio printed with poor registration. Poor purity causes objects on the screen to appear off-color while their edges remain sharp. Purity and convergence problems can occur at the same time, in the same or different areas of the screen, or over the whole screen, and either uniformly or to greater or lesser degrees over different parts of the screen.
The solution to the static convergence and purity problems is a set of color alignment ring magnets installed around the neck of the CRT. These movable weak permanent magnets are usually mounted on the back end of the deflection yoke assembly and are set at the factory to compensate for any static purity and convergence errors that are intrinsic to the unadjusted tube. Typically there are two or three pairs of two magnets in the form of rings made of plastic impregnated with a magnetic material, with their magnetic fields parallel to the planes of the magnets, which are perpendicular to the electron gun axes. Often, one pair of rings has 2 poles, another has 4, and the remaining ring has 6 poles. Each pair of magnetic rings forms a single effective magnet whose field vector can be fully and freely adjusted (in both direction and magnitude). By rotating a pair of magnets relative to each other, their relative field alignment can be varied, adjusting the effective field strength of the pair. (As they rotate relative to each other, each magnet's field can be considered to have two opposing components at right angles, and these four components [two each for two magnets] form two pairs, one pair reinforcing each other and the other pair opposing and canceling each other. Rotating away from alignment, the magnets' mutually reinforcing field components decrease as they are traded for increasing opposed, mutually cancelling components.) By rotating a pair of magnets together, preserving the relative angle between them, the direction of their collective magnetic field can be varied. Overall, adjusting all of the convergence/purity magnets allows a finely tuned slight electron beam deflection or lateral offset to be applied, which compensates for minor static convergence and purity errors intrinsic to the uncalibrated tube. Once set, these magnets are usually glued in place, but normally they can be freed and readjusted in the field (e.g. by a TV repair shop) if necessary.
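The behavior of a ring-magnet pair can be modeled as the vector sum of two equal-magnitude fields: rotating the rings relative to each other changes the combined magnitude, while rotating them together changes its direction. The sketch below uses an arbitrary unit field per ring and is a simplified illustration of the description above, not a calibration procedure.

```python
# Vector model of a convergence/purity ring-magnet pair: two equal fields whose
# relative rotation sets the combined magnitude and whose common rotation sets
# the direction. Field strength of 1.0 per ring is an arbitrary unit.
import math

def pair_field(common_angle_deg, relative_angle_deg, ring_strength=1.0):
    """Return (magnitude, direction_deg) of the summed field of two rings."""
    a1 = math.radians(common_angle_deg + relative_angle_deg / 2.0)
    a2 = math.radians(common_angle_deg - relative_angle_deg / 2.0)
    bx = ring_strength * (math.cos(a1) + math.cos(a2))
    by = ring_strength * (math.sin(a1) + math.sin(a2))
    return math.hypot(bx, by), math.degrees(math.atan2(by, bx))

print(pair_field(0, 0))     # aligned rings: maximum field
print(pair_field(0, 180))   # opposed rings: fields cancel
print(pair_field(45, 90))   # partially opposed, rotated 45 degrees
```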
On some CRTs, additional fixed adjustable magnets are added for dynamic convergence or dynamic purity at specific points on the screen, typically near the corners or edges. Further adjustment of dynamic convergence and purity typically cannot be done passively, but requires active compensation circuits, one to correct convergence horizontally and another to correct it vertically. In this case the deflection yoke contains convergence coils, a set of two per color, wound on the same core, to which the convergence signals are applied. That means 6 convergence coils in groups of 3, with 2 coils per group, with one coil for horizontal convergence correction and another for vertical convergence correction, with each group sharing a core. The groups are separated 120° from one another. Dynamic convergence is necessary because the front of the CRT and the shadow mask are not spherical, compensating for electron beam defocusing and astigmatism. The fact that the CRT screen is not spherical leads to geometry problems which may be corrected using a circuit. The signals used for convergence are parabolic waveforms derived from three signals coming from a vertical output circuit. The parabolic signal is fed into the convergence coils, while the other two are sawtooth signals that, when mixed with the parabolic signals, create the necessary signal for convergence. A resistor and diode are used to lock the convergence signal to the center of the screen to prevent it from being affected by the static convergence. The horizontal and vertical convergence circuits are similar. Each circuit has two resonators, one usually tuned to 15,625 Hz and the other to 31,250 Hz, which set the frequency of the signal sent to the convergence coils. Dynamic convergence may be accomplished using electrostatic quadrupole fields in the electron gun. Dynamic convergence means that the electron beam does not travel in a perfectly straight line between the deflection coils and the screen, since the convergence coils cause it to become curved to conform to the screen.
The convergence signal may instead be a sawtooth signal with a slight sine wave appearance, the sine wave part is created using a capacitor in series with each deflection coil. In this case, the convergence signal is used to drive the deflection coils. The sine wave part of the signal causes the electron beam to move more slowly near the edges of the screen. The capacitors used to create the convergence signal are known as the s-capacitors. This type of convergence is necessary due to the high deflection angles and flat screens of many CRT computer monitors. The value of the s-capacitors must be chosen based on the scan rate of the CRT, so multi-syncing monitors must have different sets of s-capacitors, one for each refresh rate.
Dynamic convergence may instead be accomplished in some CRTs using only the ring magnets, magnets glued to the CRT, and by varying the position of the deflection yoke, whose position may be maintained using set screws, a clamp and rubber wedges. 90° deflection angle CRTs may use "self-convergence" without dynamic convergence, which, together with the in-line triad arrangement, eliminates the need for separate convergence coils and related circuitry, reducing costs, complexity and CRT depth by 10 millimeters. Self-convergence works by means of "nonuniform" magnetic fields. Dynamic convergence is necessary in 110° deflection angle CRTs, and quadrupole windings on the deflection yoke at a certain frequency may also be used for dynamic convergence.
Dynamic color convergence and purity are one of the main reasons why until late in their history, CRTs were long-necked (deep) and had biaxially curved faces; these geometric design characteristics are necessary for intrinsic passive dynamic color convergence and purity. Only starting around the 1990s did sophisticated active dynamic convergence compensation circuits become available that made short-necked and flat-faced CRTs workable. These active compensation circuits use the deflection yoke to finely adjust beam deflection according to the beam target location. The same techniques (and major circuit components) also make possible the adjustment of display image rotation, skew, and other complex raster geometry parameters through electronics under user control.
Alternatively, the guns can be aligned with one another (converged) using convergence rings placed right outside the neck; with one ring per gun. The rings can have north and south poles. There can be 4 sets of rings, one to adjust RGB convergence, a second to adjust Red and Blue convergence, a third to adjust vertical raster shift, and a fourth to adjust purity. The vertical raster shift adjusts the straightness of the scan line. CRTs may also employ dynamic convergence circuits, which ensure correct convergence at the edges of the CRT. Permalloy magnets may also be used to correct the convergence at the edges. Convergence is carried out with the help of a crosshatch (grid) pattern. Other CRTs may instead use magnets that are pushed in and out instead of rings. In early color CRTs, the holes in the shadow mask became progressively smaller as they extended outwards from the center of the screen, to aid in convergence.
Magnetic shielding and degaussing
If the shadow mask or aperture grille becomes magnetized, its magnetic field alters the paths of the electron beams. This causes errors of "color purity" as the electrons no longer follow only their intended paths, and some will hit some phosphors of colors other than the one intended. For example, some electrons from the red beam may hit blue or green phosphors, imposing a magenta or yellow tint to parts of the image that are supposed to be pure red. (This effect is localized to a specific area of the screen if the magnetization is localized.) Therefore, it is important that the shadow mask or aperture grille not be magnetized. The earth's magnetic field may have an effect on the color purity of the CRT. Because of this, some CRTs have external magnetic shields over their funnels. The magnetic shield may be made of soft iron or mild steel and contain a degaussing coil. The magnetic shield and shadow mask may be permanently magnetized by the earth's magnetic field, adversely affecting color purity when the CRT is moved. This problem is solved with a built-in degaussing coil, found in many TVs and computer monitors. Degaussing may be automatic, occurring whenever the CRT is turned on. The magnetic shield may also be internal, being on the inside of the funnel of the CRT.
Color CRT displays in TV sets and computer monitors often have a built-in degaussing (demagnetizing) coil mounted around the perimeter of the CRT face. Upon power-up of the CRT display, the degaussing circuit produces a brief, alternating current through the coil which fades to zero over a few seconds, producing a decaying alternating magnetic field from the coil. This degaussing field is strong enough to remove shadow mask magnetization in most cases, maintaining color purity. In unusual cases of strong magnetization where the internal degaussing field is not sufficient, the shadow mask may be degaussed externally with a stronger portable degausser or demagnetizer. However, an excessively strong magnetic field, whether alternating or constant, may mechanically deform (bend) the shadow mask, causing a permanent color distortion on the display which looks very similar to a magnetization effect.
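The degaussing waveform can be pictured as a mains-frequency alternating field whose amplitude decays to near zero over a few seconds. The frequency, decay time constant and peak value in the sketch below are assumed, illustrative figures.

```python
# Sketch of a degaussing waveform: a mains-frequency alternating field whose
# amplitude decays to near zero over a few seconds. Frequency, time constant
# and peak value are assumed for illustration.
import math

def degauss_field(t, peak=1.0, mains_hz=50.0, decay_s=1.5):
    """Relative field strength t seconds after power-up."""
    return peak * math.exp(-t / decay_s) * math.sin(2.0 * math.pi * mains_hz * t)

for t in (0.005, 0.255, 1.005, 3.005, 6.005):
    print(f"t = {t:5.3f} s -> field {degauss_field(t):+.3f}")
```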
Resolution
Dot pitch defines the maximum resolution of the display, assuming delta-gun CRTs. In these, as the scanned resolution approaches the dot pitch resolution, moiré appears, as the detail being displayed is finer than what the shadow mask can render. Aperture grille monitors do not suffer from vertical moiré, however, because their phosphor stripes have no vertical detail. In smaller aperture-grille CRTs, these stripes maintain position by themselves, but larger ones require one or two crosswise (horizontal) support wires. The support wires block electrons, making them visible on the screen. In aperture grille CRTs, dot pitch is replaced by stripe pitch. Hitachi developed the Enhanced Dot Pitch (EDP) shadow mask, which uses oval holes instead of circular ones, with respective oval phosphor dots. Moiré is reduced in shadow mask CRTs by arranging the holes in the shadow mask in a honeycomb-like pattern.
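An upper bound on how many phosphor triads fit across the screen follows from dividing the screen width by the dot pitch. The screen width and pitch values in the sketch below are illustrative, not taken from this article.

```python
# Rough upper bound on addressable horizontal triads for a given dot pitch.
# Screen width and pitch values are illustrative assumptions.

def max_horizontal_triads(screen_width_mm, dot_pitch_mm):
    return int(screen_width_mm / dot_pitch_mm)

# A 17-inch (4:3) monitor is roughly 320 mm wide.
for pitch in (0.28, 0.25, 0.21):
    print(f"{pitch:.2f} mm pitch -> about {max_horizontal_triads(320, pitch)} triads across")
```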
Projection CRTs
Projection CRTs were used in CRT projectors and CRT rear-projection TVs, and are usually small (being 7–9 inches across); have a phosphor that generates either red, green or blue light, thus making them monochrome CRTs; and are similar in construction to other monochrome CRTs. Larger projection CRTs in general lasted longer, and were able to provide higher brightness levels and resolution, but were also more expensive. Projection CRTs have an unusually high anode voltage for their size (such as 27 or 25 kV for a 5 or 7-inch projection CRT respectively), and a specially made tungsten/barium cathode (instead of the pure barium oxide normally used) that consists of barium atoms embedded in 20% porous tungsten or barium and calcium aluminates or of barium, calcium and aluminum oxides coated on porous tungsten; the barium diffuses through the tungsten to emit electrons. The special cathode can deliver 2 mA of current instead of the 0.3mA of normal cathodes, which makes them bright enough to be used as light sources for projection. The high anode voltage and the specially made cathode increase the voltage and current, respectively, of the electron beam, which increases the light emitted by the phosphors, and also the amount of heat generated during operation; this means that projector CRTs need cooling. The screen is usually cooled using a container (the screen forms part of the container) with glycol; the glycol may itself be dyed, or colorless glycol may be used inside a container which may be colored (forming a lens known as a c-element). Colored lenses or glycol are used for improving color reproduction at the cost of brightness, and are only used on red and green CRTs. Each CRT has its own glycol, which has access to an air bubble to allow the glycol to shrink and expand as it cools and warms. Projector CRTs may have adjustment rings just like color CRTs to adjust astigmatism, which is flaring of the electron beam (stray light similar to shadows). They have three adjustment rings; one with two poles, one with four poles, and another with 6 poles. When correctly adjusted, the projector can display perfectly round dots without flaring. The screens used in projection CRTs were more transparent than usual, with 90% transmittance. The first projection CRTs were made in 1933.
Projector CRTs were available with electrostatic and electromagnetic focusing, the latter being more expensive. Electrostatic focusing used electronics to focus the electron beam, together with focusing magnets around the neck of the CRT for fine focusing adjustments. This type of focusing degraded over time. Electromagnetic focusing was introduced in the early 1990s and included an electromagnetic focusing coil in addition to the already existing focusing magnets. Electromagnetic focusing was much more stable over the lifetime of the CRT, retaining 95% of its sharpness by the end of life of the CRT.
Beam-index tube
Beam-index tubes, also known as Uniray, Apple CRT or Indextron, were an attempt in the 1950s by Philco to create a color CRT without a shadow mask, eliminating convergence and purity problems, and allowing for shallower CRTs with higher deflection angles. They also required a lower voltage power supply for the final anode since they did not use a shadow mask, which normally blocks around 80% of the electrons generated by the electron gun. The lack of a shadow mask also made them immune to the earth's magnetic field, made degaussing unnecessary and increased image brightness. A beam-index tube was constructed similarly to a monochrome CRT, with an aquadag outer coating, an aluminum inner coating, and a single electron gun, but with a screen with an alternating pattern of red, green, blue and UV (index) phosphor stripes (similar to a Trinitron) and a side-mounted photomultiplier tube or photodiode pointed towards the rear of the screen and mounted on the funnel of the CRT, to track the electron beam and activate the phosphors separately from one another using the same electron beam. Only the index phosphor stripe was used for tracking, and it was the only phosphor that was not covered by an aluminum layer. The design was shelved because of the precision required to produce it. It was revived by Sony in the 1980s as the Indextron, but its adoption was limited, at least in part due to the development of LCD displays. Beam-index CRTs also suffered from poor contrast ratios of only around 50:1, since some light emission by the phosphors was required at all times by the photodiodes to track the electron beam. The lack of a shadow mask allowed for single-CRT color projectors; normally CRT projectors use three CRTs, one for each color, since a lot of heat is generated due to the high anode voltage and beam current, making a shadow mask impractical and inefficient since it would warp under the heat produced (shadow masks absorb most of the electron beam, and, hence, most of the energy carried by the relativistic electrons); the three CRTs meant that an involved calibration and adjustment procedure had to be carried out during installation of the projector, and moving the projector would require it to be recalibrated. A single CRT eliminated the need for calibration, but brightness was decreased since the CRT screen had to be used for three colors instead of each color having its own CRT screen. A stripe pattern also imposes a horizontal resolution limit; in contrast, three-screen CRT projectors have no theoretical resolution limit, due to their single, uniform phosphor coatings.
Flat CRTs
Flat CRTs are those with a flat screen. Despite having a flat screen, they may not be completely flat, especially on the inside, instead having a greatly increased curvature. A notable exception is the LG Flatron (made by LG.Philips Displays, later LP Displays) which is truly flat on the outside and inside, but has a bonded glass pane on the screen with a tensioned rim band to provide implosion protection. Such completely flat CRTs were first introduced by Zenith in 1986, and used flat tensioned shadow masks, where the shadow mask is held under tension, providing increased resistance to blooming. LG's Flatron technology is based on this technology developed by Zenith, now a subsidiary of LG.
Flat CRTs have a number of challenges, like deflection. Vertical deflection boosters are required to increase the amount of current that is sent to the vertical deflection coils to compensate for the reduced curvature. The CRTs used in the Sinclair TV80, and in many Sony Watchmans were flat in that they were not deep and their front screens were flat, but their electron guns were put to a side of the screen. The TV80 used electrostatic deflection while the Watchman used magnetic deflection with a phosphor screen that was curved inwards. Similar CRTs were used in video door bells.
Radar CRTs
Radar CRTs such as the 7JP4 had a circular screen and scanned the beam from the center outwards. The deflection yoke rotated, causing the beam to rotate in a circular fashion. The screen often had two colors, often a bright short persistence color that only appeared as the beam scanned the display and a long persistence phosphor afterglow. When the beam strikes the phosphor, the phosphor brightly illuminates, and when the beam leaves, the dimmer long persistence afterglow would remain lit where the beam struck the phosphor, alongside the radar targets that were "written" by the beam, until the beam re-struck the phosphor.
Oscilloscope CRTs
In oscilloscope CRTs, electrostatic deflection is used, rather than the magnetic deflection commonly used with TV and other large CRTs. The beam is deflected horizontally by applying an electric field between a pair of plates to its left and right, and vertically by applying an electric field to plates above and below. TVs use magnetic rather than electrostatic deflection because the deflection plates obstruct the beam when the deflection angle is as large as is required for tubes that are relatively short for their size. Some oscilloscope CRTs incorporate post-deflection anodes (PDAs) that are spiral-shaped to ensure even anode potential across the CRT and operate at up to 15 kV. In PDA CRTs the electron beam is deflected before it is accelerated, improving sensitivity and legibility, especially when analyzing voltage pulses with short duty cycles.
Microchannel plate
When displaying fast one-shot events, the electron beam must deflect very quickly, with few electrons impinging on the screen, leading to a faint or invisible image on the display. Oscilloscope CRTs designed for very fast signals can give a brighter display by passing the electron beam through a micro-channel plate just before it reaches the screen. Through the phenomenon of secondary emission, this plate multiplies the number of electrons reaching the phosphor screen, giving a significant improvement in writing rate (brightness) and improved sensitivity and spot size as well.
Graticules
Most oscilloscopes have a graticule as part of the visual display, to facilitate measurements. The graticule may be permanently marked inside the face of the CRT, or it may be a transparent external plate made of glass or acrylic plastic. An internal graticule eliminates parallax error, but cannot be changed to accommodate different types of measurements. Oscilloscopes commonly provide a means for the graticule to be illuminated from the side, which improves its visibility.
Image storage tubes
These are found in analog phosphor storage oscilloscopes. These are distinct from digital storage oscilloscopes which rely on solid state digital memory to store the image.
Where a single brief event is monitored by an oscilloscope, such an event will be displayed by a conventional tube only while it actually occurs. The use of a long persistence phosphor may allow the image to be observed after the event, but only for a few seconds at best. This limitation can be overcome by the use of a direct view storage cathode-ray tube (storage tube). A storage tube will continue to display the event after it has occurred until such time as it is erased. A storage tube is similar to a conventional tube except that it is equipped with a metal grid coated with a dielectric layer located immediately behind the phosphor screen. An externally applied voltage to the mesh initially ensures that the whole mesh is at a constant potential. This mesh is constantly exposed to a low velocity electron beam from a 'flood gun' which operates independently of the main gun. This flood gun is not deflected like the main gun but constantly 'illuminates' the whole of the storage mesh. The initial charge on the storage mesh is such as to repel the electrons from the flood gun which are prevented from striking the phosphor screen.
When the main electron gun writes an image to the screen, the energy in the main beam is sufficient to create a 'potential relief' on the storage mesh. The areas where this relief is created no longer repel the electrons from the flood gun which now pass through the mesh and illuminate the phosphor screen. Consequently, the image that was briefly traced out by the main gun continues to be displayed after it has occurred. The image can be 'erased' by resupplying the external voltage to the mesh restoring its constant potential. The time for which the image can be displayed was limited because, in practice, the flood gun slowly neutralises the charge on the storage mesh. One way of allowing the image to be retained for longer is temporarily to turn off the flood gun. It is then possible for the image to be retained for several days. The majority of storage tubes allow for a lower voltage to be applied to the storage mesh which slowly restores the initial charge state. By varying this voltage a variable persistence is obtained. Turning off the flood gun and the voltage supply to the storage mesh allows such a tube to operate as a conventional oscilloscope tube.
Vector monitors
Vector monitors were used in early computer-aided design systems and in some late-1970s to mid-1980s arcade games such as Asteroids.
They draw graphics point-to-point, rather than scanning a raster. Either monochrome or color CRTs can be used in vector displays, and the essential principles of CRT design and operation are the same for either type of display; the main difference is in the beam deflection patterns and circuits.
Data storage tubes
The Williams tube or Williams-Kilburn tube was a cathode-ray tube used to electronically store binary data. It was used in computers of the 1940s as a random-access digital storage device. In contrast to other CRTs in this article, the Williams tube was not a display device, and in fact could not be viewed since a metal plate covered its screen.
Cat's eye
In some vacuum tube radio sets, a "Magic Eye" or "Tuning Eye" tube was provided to assist in tuning the receiver. Tuning would be adjusted until the width of a radial shadow was minimized. The tube was used instead of a more expensive electromechanical meter; such meters later came to be used on higher-end tuners, once transistor sets lacked the high voltage needed to drive a magic eye tube. The same type of device was used with tape recorders as a recording level meter, and for various other applications including electrical test equipment.
Charactrons
Some displays for early computers (those that needed to display more text than was practical using vectors, or that required high speed for photographic output) used Charactron CRTs. These incorporate a perforated metal character mask (stencil), which shapes a wide electron beam to form a character on the screen. The system selects a character on the mask using one set of deflection circuits, but that causes the extruded beam to be aimed off-axis, so a second set of deflection plates has to re-aim the beam so it is headed toward the center of the screen. A third set of plates places the character wherever required. The beam is unblanked (turned on) briefly to draw the character at that position. Graphics could be drawn by selecting the position on the mask corresponding to the code for a space (in practice, they were simply not drawn), which had a small round hole in the center; this effectively disabled the character mask, and the system reverted to regular vector behavior. Charactrons had exceptionally long necks, because of the need for three deflection systems.
Nimo
Nimo was the trademark of a family of small specialised CRTs manufactured by Industrial Electronic Engineers. These had 10 electron guns which produced electron beams in the form of digits in a manner similar to that of the charactron. The tubes were either simple single-digit displays or more complex 4- or 6-digit displays produced by means of a suitable magnetic deflection system. Having little of the complexity of a standard CRT, the tube required a relatively simple driving circuit, and as the image was projected on the glass face, it provided a much wider viewing angle than competitive types (e.g., nixie tubes). However, their need for several supply voltages, including a high voltage, made them uncommon.
Flood-beam CRT
Flood-beam CRTs are small tubes that are arranged as pixels for large video walls like Jumbotrons. The first screen using this technology (called Diamond Vision by Mitsubishi Electric) was introduced by Mitsubishi Electric for the 1980 Major League Baseball All-Star Game. It differs from a normal CRT in that the electron gun within does not produce a focused controllable beam. Instead, electrons are sprayed in a wide cone across the entire front of the phosphor screen, basically making each unit act as a single light bulb. Each one is coated with a red, green or blue phosphor, to make up the color sub-pixels. This technology has largely been replaced with light-emitting diode displays. Unfocused and undeflected CRTs were used as grid-controlled stroboscope lamps since 1958. Electron-stimulated luminescence (ESL) lamps, which use the same operating principle, were released in 2011.
Print-head CRT
CRTs with an unphosphored front glass but with fine wires embedded in it were used as electrostatic print heads in the 1960s. The wires would pass the electron beam current through the glass onto a sheet of paper where the desired content was therefore deposited as an electrical charge pattern. The paper was then passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image.
Zeus – thin CRT display
In the late 1990s and early 2000s Philips Research Laboratories experimented with a type of thin CRT known as the Zeus display, which contained CRT-like functionality in a flat-panel display. The cathode of this display was mounted under the front of the display, and the electrons from the cathode would be directed to the back of the display, where they would stay until extracted by electrodes near the front of the display and directed to the front of the display, which had phosphor dots. The devices were demonstrated but never marketed.
Slimmer CRT
Some CRT manufacturers, both LG.Philips Displays (later LP Displays) and Samsung SDI, innovated CRT technology by creating a slimmer tube. Slimmer CRT had the trade names Superslim, Ultraslim, Vixlim (by Samsung) and Cybertube and Cybertube+ (both by LG Philips displays). A flat CRT has a depth. The depth of Superslim was and Ultraslim was .
Health concerns
Ionizing radiation
CRTs can emit a small amount of X-ray radiation; this is a result of the electron beam's bombardment of the shadow mask/aperture grille and phosphors, which produces bremsstrahlung (braking radiation) as the high-energy electrons are decelerated. The amount of radiation escaping the front of the monitor is widely considered not to be harmful. Food and Drug Administration regulations strictly limit, for instance, TV receivers to 0.5 milliroentgens per hour at a specified distance from any external surface; as of 2007, most CRTs had emissions that fell well below this limit. Note that the roentgen is an outdated unit and does not account for dose absorption; the conversion rate is about 0.877 rem per roentgen. Assuming that the viewer absorbed the entire dose (which is unlikely) and watched TV for 2 hours a day, a 0.5 milliroentgen hourly dose would increase the viewer's yearly dose by about 320 millirem. For comparison, the average background radiation in the United States is 310 millirem a year. Negative effects of chronic radiation exposure are generally not noticeable until doses exceed 20,000 millirem.
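For readers who want to check the arithmetic behind that estimate, a minimal worked calculation, assuming 2 hours of viewing per day at the 0.5 mR/h limit, complete absorption of the dose, and the rounded 0.877 rem-per-roentgen conversion quoted above, runs roughly as follows:

```latex
0.5\ \tfrac{\mathrm{mR}}{\mathrm{h}} \times 2\ \tfrac{\mathrm{h}}{\mathrm{day}} \times 365\ \tfrac{\mathrm{day}}{\mathrm{yr}} \;\approx\; 365\ \tfrac{\mathrm{mR}}{\mathrm{yr}},
\qquad
365\ \tfrac{\mathrm{mR}}{\mathrm{yr}} \times 0.877\ \tfrac{\mathrm{rem}}{\mathrm{R}} \;\approx\; 320\ \tfrac{\mathrm{mrem}}{\mathrm{yr}}.
```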
The density of the x-rays that would be generated by a CRT is low because the raster scan of a typical CRT distributes the energy of the electron beam across the entire screen. Voltages above 15,000 volts are enough to generate "soft" x-rays. However, since CRTs may stay on for several hours at a time, the amount of x-rays generated by the CRT may become significant, hence the importance of using materials to shield against x-rays, such as the thick leaded glass and barium-strontium glass used in CRTs.
Concerns about x-rays emitted by CRTs began in 1967 when it was found that TV sets made by General Electric were emitting "X-radiation in excess of desirable levels". It was later found that TV sets from all manufacturers were also emitting radiation. This caused TV industry representatives to be brought before a U.S. congressional committee, which later proposed a federal radiation regulation bill, which became the 1968 Radiation Control for Health and Safety Act. It was recommended to TV set owners to always be at a distance of at least 6 feet from the screen of the TV set, and to avoid "prolonged exposure" at the sides, rear or underneath a TV set. It was discovered that most of the radiation was directed downwards. Owners were also told to not modify their set's internals to avoid exposure to radiation. Headlines about "radioactive" TV sets continued until the end of the 1960s. There once was a proposal by two New York congressmen that would have forced TV set manufacturers to "go into homes to test all of the nation's 15 million color sets and to install radiation devices in them". The FDA eventually began regulating radiation emissions from all electronic products in the US.
Toxicity
Older color and monochrome CRTs may have been manufactured with toxic substances, such as cadmium, in the phosphors. The rear glass tube of modern CRTs may be made from leaded glass, which represents an environmental hazard if disposed of improperly. Since 1970, glass in the front panel (the viewable portion of the CRT) used strontium oxide rather than lead, though the rear of the CRT was still produced from leaded glass. Monochrome CRTs typically do not contain enough leaded glass to fail EPA TCLP tests. While the TCLP process grinds the glass into fine particles in order to expose them to weak acids to test for leachate, intact CRT glass does not leach (the lead is vitrified, contained inside the glass itself, similar to leaded glass crystalware).
Flicker
At low refresh rates (60 Hz and below), the periodic scanning of the display may produce a flicker that some people perceive more easily than others, especially when viewed with peripheral vision. Flicker is commonly associated with CRTs, as most TVs run at 50 Hz (PAL) or 60 Hz (NTSC), although there are some 100 Hz PAL TVs that are flicker-free. Typically only low-end monitors run at such low frequencies; most computer monitors support at least 75 Hz, and high-end monitors are capable of 100 Hz or more to eliminate any perception of flicker. The 100 Hz PAL rate was often achieved using interleaved scanning, dividing the circuit and scan into two 50 Hz beams. Non-computer CRTs, or CRTs for sonar or radar, may have a long-persistence phosphor and are thus flicker-free. If the persistence is too long on a video display, moving images will be blurred.
High-frequency audible noise
50 Hz/60 Hz CRTs used for TV operate with horizontal scanning frequencies of 15,750 and 15,734.27 Hz (for NTSC systems) or 15,625 Hz (for PAL systems). These frequencies are at the upper range of human hearing and are inaudible to many people; however, some people (especially children) will perceive a high-pitched tone near an operating CRT TV. The sound is due to magnetostriction in the magnetic core and periodic movement of windings of the flyback transformer but the sound can also be created by movement of the deflection coils, yoke or ferrite beads.
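These line rates follow directly from the number of scan lines per frame multiplied by the frame rate of each system; a brief check of the arithmetic, using the standard broadcast parameters rather than anything specific to a particular set:

```latex
f_{H,\ \mathrm{NTSC\ (monochrome)}} = 525 \times 30\ \mathrm{Hz} = 15\,750\ \mathrm{Hz}
\qquad
f_{H,\ \mathrm{NTSC\ (color)}} = 525 \times \tfrac{30\,000}{1\,001}\ \mathrm{Hz} \approx 15\,734.27\ \mathrm{Hz}
\qquad
f_{H,\ \mathrm{PAL}} = 625 \times 25\ \mathrm{Hz} = 15\,625\ \mathrm{Hz}
```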
This problem does not occur on 100/120 Hz TVs and on non-CGA (Color Graphics Adapter) computer displays, because they use much higher horizontal scanning frequencies that produce sound which is inaudible to humans (22 kHz to over 100 kHz).
Implosion
If the glass wall is damaged, atmospheric pressure can implode the vacuum tube into dangerous fragments which accelerate inward and then spray at high speed in all directions. Although modern cathode-ray tubes used in TVs and computer displays have epoxy-bonded face-plates or other measures to prevent shattering of the envelope, CRTs must be handled carefully to avoid injury.
Implosion protection
Early CRTs had a glass plate over the screen that was bonded to it using glue, creating a laminated glass screen: initially the glue was polyvinyl acetate (PVA), while later versions such as the LG Flatron used a resin, perhaps a UV-curable resin. The PVA degrades over time creating a "cataract", a ring of degraded glue around the edges of the CRT that does not allow light from the screen to pass through. Later CRTs instead use a tensioned metal rim band mounted around the perimeter that also provides mounting points for the CRT to be mounted to a housing. In a 19-inch CRT, the tensile stress in the rim band is 70 kg/cm2.
Older CRTs were mounted to the TV set using a frame. The band is tensioned by heating it, then mounting it on the CRT; the band cools afterwards, shrinking in size and putting the glass under compression, which strengthens the glass and reduces the necessary thickness (and hence weight) of the glass. This makes the band an integral component that should never be removed from an intact CRT that still has a vacuum; attempting to remove it may cause the CRT to implode.
The rim band prevents the CRT from imploding should the screen be broken. The rim band may be glued to the perimeter of the CRT using epoxy, preventing cracks from spreading beyond the screen and into the funnel.
Alternatively, the compression caused by the rim band may be used to cause any cracks in the screen to propagate laterally at a high speed so that they reach the funnel and fully penetrate it before they fully penetrate the screen. This is possible because the funnel has walls that are thinner than the screen. Fully penetrating the funnel first allows air to enter the CRT from a short distance behind the screen, preventing an implosion by ensuring that the screen is fully penetrated by the cracks and breaks only after air has already entered the CRT.
Electric shock
To accelerate the electrons from the cathode to the screen with enough energy to achieve sufficient image brightness, a very high voltage (EHT or extra-high tension) is required, from a few thousand volts for a small oscilloscope CRT to tens of thousands for a larger screen color TV. This is many times greater than household power supply voltage. Even after the power supply is turned off, some associated capacitors and the CRT itself may retain a charge for some time; that charge may then dissipate suddenly through any path to ground, such as an inattentive person handling a capacitor discharge lead. An average monochrome CRT may use 1–1.5 kV of anode voltage per inch.
Security concerns
Under some circumstances, the signal radiated from the electron guns, scanning circuitry, and associated wiring of a CRT can be captured remotely and used to reconstruct what is shown on the CRT using a process called Van Eck phreaking. Special TEMPEST shielding can mitigate this effect. Such radiation of a potentially exploitable signal, however, occurs also with other display technologies and with electronics in general.
Recycling
Due to the toxins contained in CRT monitors, the United States Environmental Protection Agency created rules (in October 2001) stating that CRTs must be brought to special e-waste recycling facilities. In November 2002, the EPA began fining companies that disposed of CRTs through landfills or incineration. Regulatory agencies, local and statewide, monitor the disposal of CRTs and other computer equipment.
As electronic waste, CRTs are considered one of the hardest types to recycle. CRTs have a relatively high concentration of lead and phosphors, both of which are necessary for the display. There are several companies in the United States that charge a small fee to collect CRTs, then subsidize their labor by selling the harvested copper, wire, and printed circuit boards. The United States Environmental Protection Agency (EPA) includes discarded CRT monitors in its category of "hazardous household waste" but considers CRTs that have been set aside for testing to be commodities if they are not discarded, speculatively accumulated, or left unprotected from weather and other damage.
Various states participate in the recycling of CRTs, each with its own reporting requirements for collectors and recycling facilities. For example, in California the recycling of CRTs is governed by CalRecycle, the California Department of Resources Recycling and Recovery, through its Payment System. Recycling facilities that accept CRT devices from the business and residential sectors must obtain contact information, such as an address and phone number, to ensure the CRTs come from a California source, in order to participate in the CRT Recycling Payment System.
In Europe, disposal of CRT TVs and monitors is covered by the WEEE Directive.
Multiple methods have been proposed for the recycling of CRT glass. The methods involve thermal, mechanical and chemical processes. All proposed methods remove the lead oxide content from the glass. Some companies operated furnaces to separate the lead from the glass. A coalition called the Recytube project was once formed by several European companies to devise a method to recycle CRTs. The phosphors used in CRTs often contain rare earth metals. A CRT contains about 7 grams of phosphor.
The funnel can be separated from the screen of the CRT using laser cutting, diamond saws or wires or using a resistively heated nichrome wire.
Leaded CRT glass was sold to be remelted into other CRTs, or broken down and used in road construction, in tiles, concrete and cement bricks, and fiberglass insulation, or used as flux in metal smelting.
A considerable portion of CRT glass is landfilled, where it can pollute the surrounding environment. It is more common for CRT glass to be disposed of than being recycled.
| Technology | Media and communication | null |
6015 | https://en.wikipedia.org/wiki/Crystal | Crystal | A crystal or crystalline solid is a solid material whose constituents (such as atoms, molecules, or ions) are arranged in a highly ordered microscopic structure, forming a crystal lattice that extends in all directions. In addition, macroscopic single crystals are usually identifiable by their geometrical shape, consisting of flat faces with specific, characteristic orientations. The scientific study of crystals and crystal formation is known as crystallography. The process of crystal formation via mechanisms of crystal growth is called crystallization or solidification.
The word crystal derives from the Ancient Greek word κρύσταλλος (krustallos), meaning both "ice" and "rock crystal", from κρύος (kruos), "icy cold, frost".
Examples of large crystals include snowflakes, diamonds, and table salt. Most inorganic solids are not crystals but polycrystals, i.e. many microscopic crystals fused together into a single solid. Polycrystals include most metals, rocks, ceramics, and ice. A third category of solids is amorphous solids, where the atoms have no periodic structure whatsoever. Examples of amorphous solids include glass, wax, and many plastics.
Despite the name, lead crystal, crystal glass, and related products are not crystals, but rather types of glass, i.e. amorphous solids.
Crystals, or crystalline solids, are often used in pseudoscientific practices such as crystal therapy, and, along with gemstones, are sometimes associated with spellwork in Wiccan beliefs and related religious movements.
Crystal structure (microscopic)
The scientific definition of a "crystal" is based on the microscopic arrangement of atoms inside it, called the crystal structure. A crystal is a solid where the atoms form a periodic arrangement. (Quasicrystals are an exception, see below).
Not all solids are crystals. For example, when liquid water starts freezing, the phase change begins with small ice crystals that grow until they fuse, forming a polycrystalline structure. In the final block of ice, each of the small crystals (called "crystallites" or "grains") is a true crystal with a periodic arrangement of atoms, but the whole polycrystal does not have a periodic arrangement of atoms, because the periodic pattern is broken at the grain boundaries. Most macroscopic inorganic solids are polycrystalline, including almost all metals, ceramics, ice, rocks, etc. Solids that are neither crystalline nor polycrystalline, such as glass, are called amorphous solids, also called glassy, vitreous, or noncrystalline. These have no periodic order, even microscopically. There are distinct differences between crystalline solids and amorphous solids: most notably, the process of forming a glass does not release the latent heat of fusion, but forming a crystal does.
A crystal structure (an arrangement of atoms in a crystal) is characterized by its unit cell, a small imaginary box containing one or more atoms in a specific spatial arrangement. The unit cells are stacked in three-dimensional space to form the crystal.
The symmetry of a crystal is constrained by the requirement that the unit cells stack perfectly with no gaps. There are 219 possible crystal symmetries (230 is commonly cited, but this treats chiral equivalents as separate entities), called crystallographic space groups. These are grouped into 7 crystal systems, such as cubic crystal system (where the crystals may form cubes or rectangular boxes, such as halite shown at right) or hexagonal crystal system (where the crystals may form hexagons, such as ordinary water ice).
Crystal faces, shapes and crystallographic forms
Crystals are commonly recognized, macroscopically, by their shape, consisting of flat faces with sharp angles. These shape characteristics are not necessary for a crystal—a crystal is scientifically defined by its microscopic atomic arrangement, not its macroscopic shape—but the characteristic macroscopic shape is often present and easy to see.
Euhedral crystals are those that have obvious, well-formed flat faces. Anhedral crystals do not, usually because the crystal is one grain in a polycrystalline solid.
The flat faces (also called facets) of a euhedral crystal are oriented in a specific way relative to the underlying atomic arrangement of the crystal: they are planes of relatively low Miller index. This occurs because some surface orientations are more stable than others (lower surface energy). As a crystal grows, new atoms attach easily to the rougher and less stable parts of the surface, but less easily to the flat, stable surfaces. Therefore, the flat surfaces tend to grow larger and smoother, until the whole crystal surface consists of these plane surfaces. (See diagram on right.)
One of the oldest techniques in the science of crystallography consists of measuring the three-dimensional orientations of the faces of a crystal, and using them to infer the underlying crystal symmetry.
A crystal's crystallographic forms are sets of possible faces of the crystal that are related by one of the symmetries of the crystal. For example, crystals of galena often take the shape of cubes, and the six faces of the cube belong to a crystallographic form that displays one of the symmetries of the isometric crystal system. Galena also sometimes crystallizes as octahedrons, and the eight faces of the octahedron belong to another crystallographic form reflecting a different symmetry of the isometric system. A crystallographic form is described by placing the Miller indices of one of its faces within curly brackets. For example, the octahedral form is written as {111}, and the other faces in the form are implied by the symmetry of the crystal.
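To make the curly-bracket notation concrete, the short sketch below expands a cubic form into its individual faces by applying all permutations and sign changes allowed by isometric symmetry; this is a simplified illustration (a full treatment would apply the actual space-group operators), and the function name is our own invention:

```python
from itertools import permutations, product

def cubic_form_faces(h, k, l):
    """Enumerate the distinct faces of a cubic (isometric) form {hkl}.

    In the isometric system, every permutation of (h, k, l) combined with
    every choice of signs is related by a symmetry of the crystal, so all
    of these faces belong to the same crystallographic form.
    """
    faces = set()
    for perm in permutations((h, k, l)):
        for signs in product((1, -1), repeat=3):
            faces.add(tuple(s * m for s, m in zip(signs, perm)))
    return sorted(faces)

print(len(cubic_form_faces(1, 1, 1)))  # 8 faces of the octahedral form {111}
print(len(cubic_form_faces(1, 0, 0)))  # 6 faces of the cube form {100}
```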
Forms may be closed, meaning that the form can completely enclose a volume of space, or open, meaning that it cannot. The cubic and octahedral forms are examples of closed forms. All the forms of the isometric system are closed, while all the forms of the monoclinic and triclinic crystal systems are open. A crystal's faces may all belong to the same closed form, or they may be a combination of multiple open or closed forms.
A crystal's habit is its visible external shape. This is determined by the crystal structure (which restricts the possible facet orientations), the specific crystal chemistry and bonding (which may favor some facet types over others), and the conditions under which the crystal formed.
Occurrence in nature
Rocks
By volume and weight, the largest concentrations of crystals in the Earth are part of its solid bedrock. Crystals found in rocks typically range in size from a fraction of a millimetre to several centimetres across, although exceptionally large crystals are occasionally found. The world's largest known naturally occurring crystal is a crystal of beryl from Malakialina, Madagascar.
Some crystals have formed by magmatic and metamorphic processes, giving origin to large masses of crystalline rock. The vast majority of igneous rocks are formed from molten magma and the degree of crystallization depends primarily on the conditions under which they solidified. Such rocks as granite, which have cooled very slowly and under great pressures, have completely crystallized; but many kinds of lava were poured out at the surface and cooled very rapidly, and in this latter group a small amount of amorphous or glassy matter is common. Other crystalline rocks, the metamorphic rocks such as marbles, mica-schists and quartzites, are recrystallized. This means that they were at first fragmental rocks like limestone, shale and sandstone and have never been in a molten condition nor entirely in solution, but the high temperature and pressure conditions of metamorphism have acted on them by erasing their original structures and inducing recrystallization in the solid state.
Other rock crystals have formed out of precipitation from fluids, commonly water, to form druses or quartz veins. Evaporites such as halite, gypsum and some limestones have been deposited from aqueous solution, mostly owing to evaporation in arid climates.
Ice
Water-based ice in the form of snow, sea ice, and glaciers is a common crystalline/polycrystalline material on Earth and other planets. A single snowflake is a single crystal or a collection of crystals, while an ice cube is a polycrystal. Ice crystals may form from cooling liquid water below its freezing point, such as ice cubes or a frozen lake. Frost, snowflakes, or small ice crystals suspended in the air (ice fog) more often grow from a supersaturated gaseous-solution of water vapor and air, when the temperature of the air drops below its dew point, without passing through a liquid state. Another unusual property of water is that it expands rather than contracts when it crystallizes.
Organigenic crystals
Many living organisms are able to produce crystals grown from an aqueous solution, for example calcite and aragonite in the case of most molluscs or hydroxylapatite in the case of bones and teeth in vertebrates.
Polymorphism and allotropy
The same group of atoms can often solidify in many different ways. Polymorphism is the ability of a solid to exist in more than one crystal form. For example, water ice is ordinarily found in the hexagonal form Ice Ih, but can also exist as the cubic Ice Ic, the rhombohedral ice II, and many other forms. The different polymorphs are usually called different phases.
In addition, the same atoms may be able to form noncrystalline phases. For example, water can also form amorphous ice, while SiO2 can form both fused silica (an amorphous glass) and quartz (a crystal). Likewise, if a substance can form crystals, it can also form polycrystals.
For pure chemical elements, polymorphism is known as allotropy. For example, diamond and graphite are two crystalline forms of carbon, while amorphous carbon is a noncrystalline form. Polymorphs, despite having the same atoms, may have very different properties. For example, diamond is the hardest substance known, while graphite is so soft that it is used as a lubricant. Chocolate can form six different types of crystals, but only one has the suitable hardness and melting point for candy bars and confections. Polymorphism in steel is responsible for its ability to be heat treated, giving it a wide range of properties.
Polyamorphism is a similar phenomenon where the same atoms can exist in more than one amorphous solid form.
Crystallization
Crystallization is the process of forming a crystalline structure from a fluid or from materials dissolved in a fluid. (More rarely, crystals may be deposited directly from gas; see: epitaxy and frost.)
Crystallization is a complex and extensively-studied field, because depending on the conditions, a single fluid can solidify into many different possible forms. It can form a single crystal, perhaps with various possible phases, stoichiometries, impurities, defects, and habits. Or, it can form a polycrystal, with various possibilities for the size, arrangement, orientation, and phase of its grains. The final form of the solid is determined by the conditions under which the fluid is being solidified, such as the chemistry of the fluid, the ambient pressure, the temperature, and the speed with which all these parameters are changing.
Specific industrial techniques to produce large single crystals (called boules) include the Czochralski process and the Bridgman technique. Other less exotic methods of crystallization may be used, depending on the physical properties of the substance, including hydrothermal synthesis, sublimation, or simply solvent-based crystallization.
Large single crystals can be created by geological processes. For example, selenite crystals in excess of 10 m are found in the Cave of the Crystals in Naica, Mexico. For more details on geological crystal formation, see above.
Crystals can also be formed by biological processes, see above. Conversely, some organisms have special techniques to prevent crystallization from occurring, such as antifreeze proteins.
Defects, impurities, and twinning
An ideal crystal has every atom in a perfect, exactly repeating pattern. However, in reality, most crystalline materials have a variety of crystallographic defects: places where the crystal's pattern is interrupted. The types and structures of these defects may have a profound effect on the properties of the materials.
A few examples of crystallographic defects include vacancy defects (an empty space where an atom should fit), interstitial defects (an extra atom squeezed in where it does not fit), and dislocations (see figure at right). Dislocations are especially important in materials science, because they help determine the mechanical strength of materials.
Another common type of crystallographic defect is an impurity, meaning that the "wrong" type of atom is present in a crystal. For example, a perfect crystal of diamond would only contain carbon atoms, but a real crystal might perhaps contain a few boron atoms as well. These boron impurities change the diamond's color to slightly blue. Likewise, the only difference between ruby and sapphire is the type of impurities present in a corundum crystal.
In semiconductors, a special type of impurity, called a dopant, drastically changes the crystal's electrical properties. Semiconductor devices, such as transistors, are made possible largely by putting different semiconductor dopants into different places, in specific patterns.
Twinning is a phenomenon somewhere between a crystallographic defect and a grain boundary. Like a grain boundary, a twin boundary has different crystal orientations on its two sides. But unlike a grain boundary, the orientations are not random, but related in a specific, mirror-image way.
Mosaicity is a spread of crystal plane orientations. A mosaic crystal consists of smaller crystalline units that are somewhat misaligned with respect to each other.
Chemical bonds
In general, solids can be held together by various types of chemical bonds, such as metallic bonds, ionic bonds, covalent bonds, van der Waals bonds, and others. None of these are necessarily crystalline or non-crystalline. However, there are some general trends as follows:
Metals crystallize rapidly and are almost always polycrystalline, though there are exceptions like amorphous metal and single-crystal metals. The latter are grown synthetically; for example, fighter-jet turbines are typically made by first growing a single crystal of titanium alloy, increasing its strength and melting point over polycrystalline titanium. A small piece of metal may naturally form into a single crystal, such as Type 2 telluric iron, but larger pieces generally do not unless extremely slow cooling occurs. For example, iron meteorites are often composed of a single crystal, or of many large crystals that may be several meters in size, due to very slow cooling in the vacuum of space. The slow cooling may allow the precipitation of a separate phase within the crystal lattice, which forms at specific angles determined by the lattice, called Widmanstätten patterns.
Ionic compounds typically form when a metal reacts with a non-metal, such as sodium with chlorine. These often form substances called salts, such as sodium chloride (table salt) or potassium nitrate (saltpeter), with crystals that are often brittle and cleave relatively easily. Ionic materials are usually crystalline or polycrystalline. In practice, large salt crystals can be created by solidification of a molten fluid, or by crystallization out of a solution. Some ionic compounds can be very hard, such as oxides like aluminium oxide found in many gemstones such as ruby and synthetic sapphire.
Covalently bonded solids (sometimes called covalent network solids) are typically formed from one or more non-metals, such as carbon or silicon and oxygen, and are often very hard, rigid, and brittle. These are also very common, notable examples being diamond and quartz respectively.
Weak van der Waals forces also help hold together certain crystals, such as crystalline molecular solids, as well as the interlayer bonding in graphite. Substances such as fats, lipids and wax form molecular bonds because the large molecules do not pack as tightly as atomic bonds. This leads to crystals that are much softer and more easily pulled apart or broken. Common examples include chocolates, candles, or viruses. Water ice and dry ice are examples of other materials with molecular bonding. Polymer materials generally will form crystalline regions, but the lengths of the molecules usually prevent complete crystallization—and sometimes polymers are completely amorphous.
Quasicrystals
A quasicrystal consists of arrays of atoms that are ordered but not strictly periodic. They have many attributes in common with ordinary crystals, such as displaying a discrete pattern in x-ray diffraction, and the ability to form shapes with smooth, flat faces.
Quasicrystals are most famous for their ability to show five-fold symmetry, which is impossible for an ordinary periodic crystal (see crystallographic restriction theorem).
The International Union of Crystallography has redefined the term "crystal" to include both ordinary periodic crystals and quasicrystals ("any solid having an essentially discrete diffraction diagram").
Quasicrystals, first discovered in 1982, are quite rare in practice. Only about 100 solids are known to form quasicrystals, compared to about 400,000 periodic crystals known in 2004. The 2011 Nobel Prize in Chemistry was awarded to Dan Shechtman for the discovery of quasicrystals.
Special properties from anisotropy
Crystals can have certain special electrical, optical, and mechanical properties that glass and polycrystals normally cannot. These properties are related to the anisotropy of the crystal, i.e. the lack of rotational symmetry in its atomic arrangement. One such property is the piezoelectric effect, where a voltage across the crystal can shrink or stretch it. Another is birefringence, where a double image appears when looking through a crystal. Moreover, various properties of a crystal, including electrical conductivity, electrical permittivity, and Young's modulus, may be different in different directions in a crystal. For example, graphite crystals consist of a stack of sheets, and although each individual sheet is mechanically very strong, the sheets are rather loosely bound to each other. Therefore, the mechanical strength of the material is quite different depending on the direction of stress.
Not all crystals have all of these properties. Conversely, these properties are not quite exclusive to crystals. They can appear in glasses or polycrystals that have been made anisotropic by working or stress—for example, stress-induced birefringence.
Crystallography
Crystallography is the science of measuring the crystal structure (in other words, the atomic arrangement) of a crystal. One widely used crystallography technique is X-ray diffraction. Large numbers of known crystal structures are stored in crystallographic databases.
Image gallery
| Physical sciences | Crystallography | null |
6016 | https://en.wikipedia.org/wiki/Cytosine | Cytosine | Cytosine (symbol C or Cyt) is one of the four nucleotide bases found in DNA and RNA, along with adenine, guanine, and thymine (uracil in RNA). It is a pyrimidine derivative, with a heterocyclic aromatic ring and two substituents attached (an amine group at position 4 and a keto group at position 2). The nucleoside of cytosine is cytidine. In Watson–Crick base pairing, it forms three hydrogen bonds with guanine.
History
Cytosine was discovered and named by Albrecht Kossel and Albert Neumann in 1894 when it was hydrolyzed from calf thymus tissues. A structure was proposed in 1903, and was synthesized (and thus confirmed) in the laboratory in the same year.
In 1998, cytosine was used in an early demonstration of quantum information processing when Oxford University researchers implemented the Deutsch–Jozsa algorithm on a two qubit nuclear magnetic resonance quantum computer (NMRQC).
In March 2015, NASA scientists reported the formation of cytosine, along with uracil and thymine, from pyrimidine under space-like laboratory conditions, which is of interest because pyrimidine has been found in meteorites although its origin is unknown.
Chemical reactions
Cytosine can be found as part of DNA, as part of RNA, or as a part of a nucleotide. As cytidine triphosphate (CTP), it can act as a co-factor to enzymes, and can transfer a phosphate to convert adenosine diphosphate (ADP) to adenosine triphosphate (ATP).
In DNA and RNA, cytosine is paired with guanine. However, it is inherently unstable, and can change into uracil (spontaneous deamination). This can lead to a point mutation if not repaired by the DNA repair enzymes such as uracil glycosylase, which cleaves a uracil in DNA.
Cytosine can also be methylated into 5-methylcytosine by an enzyme called DNA methyltransferase or be methylated and hydroxylated to make 5-hydroxymethylcytosine. The difference in rates of deamination of cytosine and 5-methylcytosine (to uracil and thymine) forms the basis of bisulfite sequencing.
Biological function
When found third in a codon of RNA, cytosine is synonymous with uracil, as they are interchangeable as the third base.
When cytosine is found as the second base in a codon, the third base is always interchangeable. For example, UCU, UCC, UCA and UCG all code for serine, regardless of the third base.
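A small, self-contained illustration of this degeneracy, using just a handful of entries from the standard genetic code chosen for the example (this is deliberately not a complete codon table):

```python
# A few entries from the standard genetic code, enough to show that codons
# differing only in a C/U third base (and, for serine, in any third base)
# encode the same amino acid.
codon_table = {
    "UCU": "Ser", "UCC": "Ser", "UCA": "Ser", "UCG": "Ser",  # third base irrelevant
    "UUU": "Phe", "UUC": "Phe",                              # C and U interchangeable
    "CAU": "His", "CAC": "His",
}

assert codon_table["UUC"] == codon_table["UUU"]
assert codon_table["CAC"] == codon_table["CAU"]
assert len({codon_table[c] for c in ("UCU", "UCC", "UCA", "UCG")}) == 1
print("third-position C/U substitutions are synonymous in these examples")
```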
Active enzymatic deamination of cytosine or 5-methylcytosine by the APOBEC family of cytosine deaminases could have both beneficial and detrimental implications on various cellular processes as well as on organismal evolution. The implications of deamination on 5-hydroxymethylcytosine, on the other hand, remains less understood.
Theoretical aspects
Until October 2021, cytosine had not been found in meteorites, which suggested the first strands of RNA and DNA had to look elsewhere to obtain this building block. Cytosine likely formed within some meteorite parent bodies, but did not persist within these bodies due to an effective deamination reaction into uracil.
In October 2021, cytosine was announced as having been found in meteorites by researchers in a joint Japan/NASA project that used novel methods of detection which avoided damaging nucleotides as they were extracted from meteorites.
| Biology and health sciences | Nucleic acids | Biology |
6019 | https://en.wikipedia.org/wiki/Computational%20chemistry | Computational chemistry | Computational chemistry is a branch of chemistry that uses computer simulations to assist in solving chemical problems. It uses methods of theoretical chemistry incorporated into computer programs to calculate the structures and properties of molecules, groups of molecules, and solids. The importance of this subject stems from the fact that, with the exception of some relatively recent findings related to the hydrogen molecular ion (dihydrogen cation), achieving an accurate quantum mechanical depiction of chemical systems analytically, or in a closed form, is not feasible. The complexity inherent in the many-body problem exacerbates the challenge of providing detailed descriptions of quantum mechanical systems. While computational results normally complement information obtained by chemical experiments, they can occasionally predict unobserved chemical phenomena.
Overview
Computational chemistry differs from theoretical chemistry, which involves a mathematical description of chemistry. However, computational chemistry involves the usage of computer programs and additional mathematical skills in order to accurately model various chemical problems. In theoretical chemistry, chemists, physicists, and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions.
Historically, computational chemistry has had two different aspects:
Computational studies, used to find a starting point for a laboratory synthesis or to assist in understanding experimental data, such as the position and source of spectroscopic peaks.
Computational studies, used to predict the possibility of so far entirely unknown molecules or to explore reaction mechanisms not readily studied via experiments.
These aspects, along with computational chemistry's purpose, have resulted in a whole host of algorithms.
History
Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927, using valence bond theory. The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 Introduction to Quantum Mechanics – with Applications to Chemistry, Eyring, Walter and Kimball's 1944 Quantum Chemistry, Heitler's 1945 Elementary Wave Mechanics – with Applications to Quantum Chemistry, and later Coulson's 1952 textbook Valence, each of which served as primary references for chemists in the decades to follow.
With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were performed. Theoretical chemists became extensive users of the early digital computers. One significant advancement was marked by Clemens C. J. Roothaan's 1951 paper in the Reviews of Modern Physics. This paper focused largely on the "LCAO MO" approach (Linear Combination of Atomic Orbitals Molecular Orbitals). For many years, it was the second-most cited paper in that journal. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe. The first ab initio Hartree–Fock method calculations on diatomic molecules were performed in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960. The first polyatomic calculations using Gaussian orbitals were performed in the late 1950s. The first configuration interaction calculations were performed in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers. By 1971, when a bibliography of ab initio calculations was published, the largest molecules included were naphthalene and azulene. Abstracts of many earlier developments in ab initio theory have been published by Schaefer.
In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method to determine electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) of molecules, ranging in complexity from butadiene and benzene to ovalene, were generated on computers at Berkeley and Oxford. These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO.
In the early 1970s, efficient ab initio computer programs such as ATMOL, Gaussian, IBMOL, and POLYATOM began to be used to speed ab initio calculations of molecular orbitals. Of these four programs, only Gaussian, now vastly expanded, is still in use, but many other programs are now in use. At the same time, the methods of molecular mechanics, such as the MM2 force field, were developed, primarily by Norman Allinger.
One of the first mentions of the term computational chemistry can be found in the 1970 book Computers and Their Role in the Physical Sciences by Sidney Fernbach and Abraham Haskell Taub, where they state "It seems, therefore, that 'computational chemistry' can finally be more and more of a reality." During the 1970s, widely different methods began to be seen as part of a new emerging discipline of computational chemistry. The Journal of Computational Chemistry was first published in 1980.
Computational chemistry has featured in several Nobel Prize awards, most notably in 1998 and 2013. Walter Kohn, "for his development of the density-functional theory", and John Pople, "for his development of computational methods in quantum chemistry", received the 1998 Nobel Prize in Chemistry. Martin Karplus, Michael Levitt and Arieh Warshel received the 2013 Nobel Prize in Chemistry for "the development of multiscale models for complex chemical systems".
Applications
There are several fields within computational chemistry.
The prediction of the molecular structure of molecules by the use of the simulation of forces, or more accurate quantum chemical methods, to find stationary points on the energy surface as the position of the nuclei is varied.
Storing and searching for data on chemical entities (see chemical databases).
Identifying correlations between chemical structures and properties (see quantitative structure–property relationship (QSPR) and quantitative structure–activity relationship (QSAR)).
Computational approaches to help in the efficient synthesis of compounds.
Computational approaches to design molecules that interact in specific ways with other molecules (e.g. drug design and catalysis).
These fields can give rise to several applications as shown below.
Catalysis
Computational chemistry is a tool for analyzing catalytic systems without doing experiments. Modern electronic structure theory and density functional theory have allowed researchers to discover and understand catalysts. Computational studies apply theoretical chemistry to catalysis research. Density functional theory methods calculate the energies and orbitals of molecules to give models of those structures. Using these methods, researchers can predict values like activation energy, site reactivity and other thermodynamic properties.
Data that is difficult to obtain experimentally can be found using computational methods to model the mechanisms of catalytic cycles. Skilled computational chemists provide predictions that are close to experimental data with proper considerations of methods and basis sets. With good computational data, researchers can predict how catalysts can be improved to lower the cost and increase the efficiency of these reactions.
Drug development
Computational chemistry is used in drug development to model potentially useful drug molecules and help companies save time and cost in drug development. The drug discovery process involves analyzing data, finding ways to improve current molecules, finding synthetic routes, and testing those molecules. Computational chemistry helps with this process by predicting which experiments would be best to do, without conducting the other experiments. Computational methods can also find values that are difficult to obtain experimentally, such as the pKa values of compounds. Methods like density functional theory can be used to model drug molecules and find their properties, like their HOMO and LUMO energies and molecular orbitals. Computational chemists also help companies with developing informatics, infrastructure and designs of drugs.
Aside from drug synthesis, drug carriers are also researched by computational chemists for nanomaterials. This allows researchers to simulate environments to test the effectiveness and stability of drug carriers. Understanding how water interacts with these nanomaterials ensures stability of the material in human bodies. These computational simulations help researchers optimize the material and find the best way to structure these nanomaterials before making them.
Computational chemistry databases
Databases are useful for both computational and non-computational chemists in research and in verifying the validity of computational methods. Empirical data is used to analyze the error of computational methods against experimental data. Empirical data helps researchers validate their methods and basis sets, giving greater confidence in their results. Computational chemistry databases are also used in testing software or hardware for computational chemistry.
Databases can also use purely calculated data. Purely calculated data uses calculated values over experimental values for databases. Purely calculated data avoids the need to adjust for differing experimental conditions, such as zero-point energy. These calculations can also avoid experimental errors for difficult-to-test molecules. Though purely calculated data is often not perfect, identifying issues is often easier for calculated data than for experimental data.
Databases also give public access to information for researchers to use. They contain data that other researchers have found and uploaded to these databases so that anyone can search for them. Researchers use these databases to find information on molecules of interest and learn what can be done with those molecules. Some publicly available chemistry databases include the following.
BindingDB: Contains experimental information about protein-small molecule interactions.
RCSB: Stores publicly available 3D models of macromolecules (proteins, nucleic acids) and small molecules (drugs, inhibitors)
ChEMBL: Contains data from research on drug development such as assay results.
DrugBank: Data about mechanisms of drugs can be found here.
Methods
Ab initio method
The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations – being derived directly from theory, with no inclusion of experimental data – are called ab initio methods. A theoretical approximation is rigorously defined on first principles and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods must be used, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made).
Ab initio methods need to define a level of theory (the method) and a basis set. A basis set consists of functions centered on the molecule's atoms. These sets are then used to describe molecular orbitals via the linear combination of atomic orbitals (LCAO) molecular orbital method ansatz.
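Written out, the LCAO ansatz expands each molecular orbital in the chosen basis functions; the notation below is the conventional generic one (basis functions χ, coefficients c) rather than anything tied to a specific program:

```latex
\psi_i(\mathbf{r}) \;=\; \sum_{\mu=1}^{N_{\mathrm{basis}}} c_{\mu i}\,\chi_\mu(\mathbf{r})
```

Here ψ_i is the i-th molecular orbital, χ_μ are the atom-centered basis functions, and the coefficients c_{μi} are determined by the chosen level of theory.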
A common type of ab initio electronic structure calculation is the Hartree–Fock method (HF), an extension of molecular orbital theory, where electron-electron repulsions in the molecule are not specifically taken into account; only the electrons' average effect is included in the calculation. As the basis set size increases, the energy and wave function tend towards a limit called the Hartree–Fock limit.
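As a concrete sketch of what such a calculation looks like in practice, the snippet below uses the open-source PySCF package (one of many possible choices; the article does not prescribe any particular program) to run a restricted Hartree–Fock calculation on H2 in a minimal basis. The geometry and basis set are arbitrary example choices:

```python
# Minimal restricted Hartree-Fock calculation with PySCF (pip install pyscf).
from pyscf import gto, scf

# H2 near its equilibrium bond length (coordinates in angstroms), STO-3G basis.
mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")

# Run the self-consistent field procedure; kernel() returns the total
# Hartree-Fock energy in hartrees (atomic units).
mf = scf.RHF(mol)
energy = mf.kernel()
print("RHF total energy (hartree):", energy)

# Using a larger basis set, e.g. basis="cc-pvdz" or "cc-pvtz", moves the result
# toward the Hartree-Fock limit mentioned above, at higher computational cost.
```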
Many types of calculations begin with a Hartree–Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. These types of calculations are termed post-Hartree–Fock methods. By continually improving these methods, scientists can get increasingly closer to perfectly predicting the behavior of atomic and molecular systems under the framework of quantum mechanics, as defined by the Schrödinger equation. To obtain exact agreement with the experiment, it is necessary to include specific terms, some of which are far more important for heavy atoms than lighter ones.
In most cases, the Hartree–Fock wave function occupies a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is inadequate, and several configurations must be used.
The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without full knowledge of the complete surface.
Computational thermochemistry
A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol. To reach that accuracy in an economic way, it is necessary to use a series of post-Hartree–Fock methods and combine the results. These methods are called quantum chemistry composite methods.
Chemical dynamics
After the electronic and nuclear variables are separated (within the Born–Oppenheimer representation), the wave packet corresponding to the nuclear degrees of freedom is propagated via the time evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms.
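In symbols, propagating the wave packet with the time evolution operator is just the formal solution of the time-dependent Schrödinger equation; the notation here is generic, with Ĥ standing for the relevant (time-independent) molecular Hamiltonian:

```latex
i\hbar\,\frac{\partial}{\partial t}\,\Psi(t) \;=\; \hat{H}\,\Psi(t)
\qquad\Longrightarrow\qquad
\Psi(t) \;=\; \hat{U}(t)\,\Psi(0) \;=\; e^{-i\hat{H}t/\hbar}\,\Psi(0)
```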
The most popular methods for propagating the wave packet associated with the molecular geometry are:
the Chebyshev (real) polynomial,
the multi-configuration time-dependent Hartree method (MCTDH),
the semiclassical method
and the split operator technique explained below.
Split operator technique
How a computational method solves quantum equations impacts the accuracy and efficiency of the method. The split operator technique is one such method for solving differential equations. In computational chemistry, the split operator technique reduces the computational cost of simulating chemical systems. Computational cost refers to how much time it takes for computers to calculate these chemical systems, which can be days for more complex systems. Quantum systems are difficult and time-consuming for humans to solve. Split operator methods help computers calculate these systems quickly by solving the sub-problems in a quantum differential equation: the differential equation is separated into two simpler equations (or more, when there are more than two operators). Once solved, the split equations are combined into one equation again to give an easily calculable solution.
This method is used in many fields that require solving differential equations, such as biology. However, the technique comes with a splitting error: when the exact solution of the full equation is replaced by the product of the solutions of its separately solved parts, the result is not exact, only an approximation. Approximating the propagator by a single ordered product of the sub-propagators in this way is known as first-order splitting.
There are ways to reduce this error, which include taking an average of the two possible orderings of the split equations, exp(hA)exp(hB) and exp(hB)exp(hA).
Another way to increase accuracy is to use higher-order splitting. Usually, second-order splitting is the highest order used in practice, because still higher-order schemes require many more sub-steps per time step; the extra computation is rarely worth the cost, so such schemes see little use despite their higher formal accuracy.
Computational chemists spend much time making systems calculated with the split operator technique more accurate while minimizing the computational cost; balancing the two remains a major challenge for chemists trying to simulate molecules or chemical environments.
Density functional methods
Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. Some methods combine the density functional exchange functional with the Hartree–Fock exchange term and are termed hybrid functional methods.
Semi-empirical methods
Semi-empirical quantum chemistry methods are based on the Hartree–Fock formalism, but make many approximations and obtain some parameters from empirical data. They were very important in computational chemistry from the 1960s to the 1990s, especially for treating large molecules where the full Hartree–Fock method without the approximations was too costly. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods.
Primitive semi-empirical methods were designed even earlier, in which the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the extended Hückel method proposed by Roald Hoffmann. Sometimes, Hückel methods are referred to as "completely empirical" because they do not derive from a Hamiltonian. Yet, the terms "empirical methods" or "empirical force fields" are usually used to describe molecular mechanics.
Molecular mechanics
In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use a single classical expression for the energy of a compound, for instance, the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations.
The database of compounds used for parameterization (the resulting set of parameters and functions is called the force field) is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance proteins, can be expected to be relevant only when describing other molecules of the same class. These methods can be applied to proteins and other large biological molecules, and allow studies of the approach and interaction (docking) of potential drug molecules.
Molecular dynamics
Molecular dynamics (MD) uses either quantum mechanics, molecular mechanics or a mixture of both to calculate forces, which are then used to solve Newton's laws of motion to examine the time-dependent behavior of systems. The result of a molecular dynamics simulation is a trajectory that describes how the positions and velocities of the particles vary with time. The phase point of a system, described by the positions and momenta of all its particles at a previous point in time, determines the next phase point in time by integrating Newton's laws of motion.
Monte Carlo
Monte Carlo (MC) generates configurations of a system by making random changes to the positions of its particles, together with their orientations and conformations where appropriate. It is a random sampling method that makes use of so-called importance sampling. Importance sampling methods preferentially generate low-energy states, which enables properties to be calculated accurately. The potential energy of each configuration of the system can be calculated, together with the values of other properties, from the positions of the atoms.
Quantum mechanics/molecular mechanics (QM/MM)
QM/MM is a hybrid method that attempts to combine the accuracy of quantum mechanics with the speed of molecular mechanics. It is useful for simulating very large molecules such as enzymes.
Quantum computational chemistry
Quantum computational chemistry aims to exploit quantum computing to simulate chemical systems, distinguishing itself from the QM/MM (Quantum Mechanics/Molecular Mechanics) approach. While QM/MM uses a hybrid approach, combining quantum mechanics for a portion of the system with classical mechanics for the remainder, quantum computational chemistry exclusively uses quantum computing methods to represent and process information, such as Hamiltonian operators.
Conventional computational chemistry methods often struggle with the complex quantum mechanical equations, particularly due to the exponential growth of a quantum system's wave function. Quantum computational chemistry addresses these challenges using quantum computing methods, such as qubitization and quantum phase estimation, which are believed to offer scalable solutions.
Qubitization involves adapting the Hamiltonian operator for more efficient processing on quantum computers, enhancing the simulation's efficiency. Quantum phase estimation, on the other hand, assists in accurately determining energy eigenstates, which are critical for understanding the quantum system's behavior.
While these techniques have advanced the field of computational chemistry, especially in the simulation of chemical systems, their practical application is currently limited mainly to smaller systems due to technological constraints. Nevertheless, these developments may lead to significant progress towards achieving more precise and resource-efficient quantum chemistry simulations.
Computational costs in chemistry algorithms
The computational cost and algorithmic complexity in chemistry are used to help understand and predict chemical phenomena. They help determine which algorithms/computational methods to use when solving chemical problems. This section focuses on the scaling of computational complexity with molecule size and details the algorithms commonly used in both domains.
In quantum chemistry, particularly, the complexity can grow exponentially with the number of electrons involved in the system. This exponential growth is a significant barrier to simulating large or complex systems accurately.
Advanced algorithms in both fields strive to balance accuracy with computational efficiency. For instance, in MD, methods like Verlet integration or Beeman's algorithm are employed for their computational efficiency. In quantum chemistry, hybrid methods combining different computational approaches (like QM/MM) are increasingly used to tackle large biomolecular systems.
Algorithmic complexity examples
The following list illustrates the impact of computational complexity on algorithms used in chemical computations. It is important to note that while this list provides key examples, it is not comprehensive and serves as a guide to understanding how computational demands influence the selection of specific computational methods in chemistry.
Molecular dynamics
Algorithm
Solves Newton's equations of motion for atoms and molecules.
Complexity
The standard pairwise interaction calculation in MD leads to an O(N^2) complexity for N particles, because each particle interacts with every other particle, resulting in N(N - 1)/2 interactions. Advanced algorithms, such as the Ewald summation or the fast multipole method, reduce this to O(N log N) or even O(N) by grouping distant particles and treating them as a single entity or using clever mathematical approximations.
Quantum mechanics/molecular mechanics (QM/MM)
Algorithm
Combines quantum mechanical calculations for a small region with molecular mechanics for the larger environment.
Complexity
The complexity of QM/MM methods depends on both the size of the quantum region and the method used for quantum calculations. For example, if a Hartree–Fock method is used for the quantum part, the complexity can be approximated as O(M^4), where M is the number of basis functions in the quantum region. This complexity arises from the need to solve a set of coupled equations iteratively until self-consistency is achieved.
Hartree-Fock method
Algorithm
Finds a single Fock state that minimizes the energy.
Complexity
NP-hard or NP-complete, as demonstrated by embedding instances of the Ising model into Hartree–Fock calculations; this theoretical result comes from reducing known hard problems to the Hartree–Fock formalism. In practice, the Hartree–Fock method involves solving the Roothaan–Hall equations, which scales as O(N^3) to O(N^4) depending on implementation, with N being the number of basis functions. The computational cost mainly comes from evaluating and transforming the two-electron integrals.
Density functional theory
Algorithm
Investigates the electronic structure or nuclear structure of many-body systems such as atoms, molecules, and the condensed phases.
Complexity
Traditional implementations of DFT typically scale as O(N^3), mainly due to the need to diagonalize the Kohn–Sham matrix. The diagonalization step, which finds the eigenvalues and eigenvectors of the matrix, contributes most to this scaling. Recent advances in DFT aim to reduce this complexity through various approximations and algorithmic improvements.
Standard CCSD and CCSD(T) method
Algorithm
CCSD and CCSD(T) methods are advanced electronic structure techniques involving single, double, and in the case of CCSD(T), perturbative triple excitations for calculating electronic correlation effects.
Complexity
CCSD
Scales as O(N^6), where N is the number of basis functions. This intense computational demand arises from the inclusion of single and double excitations in the electron correlation calculation.
CCSD(T)
With the addition of perturbative triples, the complexity increases to O(N^7). This elevated complexity restricts practical usage to smaller systems, typically up to 20–25 atoms in conventional implementations.
Linear-scaling CCSD(T) method
Algorithm
An adaptation of the standard CCSD(T) method using local natural orbitals (NOs) to significantly reduce the computational burden and enable application to larger systems.
Complexity
Achieves linear scaling with the system size, a major improvement over the seventh-power scaling of conventional CCSD(T). This advancement allows for practical applications to molecules of up to 100 atoms with reasonable basis sets, marking a significant step forward in computational chemistry's capability to handle larger systems with high accuracy.
Proving the complexity classes for algorithms involves a combination of mathematical proof and computational experiments. For example, in the case of the Hartree-Fock method, the proof of NP-hardness is a theoretical result derived from complexity theory, specifically through reductions from known NP-hard problems.
For other methods like MD or DFT, the computational complexity is often empirically observed and supported by algorithm analysis. In these cases, the proof of correctness is less about formal mathematical proofs and more about consistently observing the computational behavior across various systems and implementations.
Accuracy
Computational chemistry is not an exact description of real-life chemistry, as the mathematical and physical models of nature can only provide an approximation. However, the majority of chemical phenomena can be described to a certain degree in a qualitative or approximate quantitative computational scheme.
Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost.
Accuracy can always be improved with greater computational cost. Significant errors can present themselves in ab initio models comprising many electrons, due to the computational cost of fully relativistic methods. This complicates the study of molecules containing heavy atoms, such as transition metals, and their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of small molecules that contain up to about 40 electrons with errors for energies of less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometers and bond angles within 0.5 degrees. The treatment of larger molecules that contain a few dozen atoms is computationally tractable by more approximate methods such as density functional theory (DFT).
There is some dispute within the field whether or not the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods that use what is called molecular mechanics (MM). In QM/MM methods, small parts of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM).
Software packages
Many self-sufficient computational chemistry software packages exist. Some include many methods covering a wide range, while others concentrate on a very specific range or even on one method. Details of most of them can be found in:
Biomolecular modelling programs: proteins, nucleic acids.
Molecular mechanics programs.
Quantum chemistry and solid-state physics software supporting several methods.
Molecular design software.
Semi-empirical programs.
Valence bond programs.
Specialized journals on computational chemistry
Annual Reports in Computational Chemistry
Computational and Theoretical Chemistry
Computational and Theoretical Polymer Science
Computers & Chemical Engineering
Journal of Chemical Information and Modeling
Journal of Chemical Software
Journal of Chemical Theory and Computation
Journal of Cheminformatics
Journal of Computational Chemistry
Journal of Computer Aided Chemistry
Journal of Computer Chemistry Japan
Journal of Computer-aided Molecular Design
Journal of Theoretical and Computational Chemistry
Molecular Informatics
Theoretical Chemistry Accounts
External links
NIST Computational Chemistry Comparison and Benchmark DataBase – Contains a database of thousands of computational and experimental results for hundreds of systems
American Chemical Society Division of Computers in Chemistry – resources for grants, awards, contacts and meetings.
CSTB report Mathematical Research in Materials Science: Opportunities and Perspectives – CSTB Report
3.320 Atomistic Computer Modeling of Materials (SMA 5107) Free MIT Course
Chem 4021/8021 Computational Chemistry Free University of Minnesota Course
Technology Roadmap for Computational Chemistry
Applications of molecular and materials modelling.
Impact of Advances in Computing and Communications Technologies on Chemical Science and Technology CSTB Report
MD and Computational Chemistry applications on GPUs
Susi Lehtola, Antti J. Karttunen:"Free and open source software for computational chemistry education", First published: 23 March 2022, https://doi.org/10.1002/wcms.1610 (Open Access)
CCL.NET: Computational Chemistry List, Ltd.
C (programming language)
C (pronounced like the letter c) is a general-purpose programming language. It was created in the 1970s by Dennis Ritchie and remains very widely used and influential. By design, C's features cleanly reflect the capabilities of the targeted CPUs. It has found lasting use in operating systems code (especially in kernels), device drivers, and protocol stacks, but its use in application software has been decreasing. C is commonly used on computer architectures that range from the largest supercomputers to the smallest microcontrollers and embedded systems.
A successor to the programming language B, C was originally developed at Bell Labs by Ritchie between 1972 and 1973 to construct utilities running on Unix. It was applied to re-implementing the kernel of the Unix operating system. During the 1980s, C gradually gained popularity. It has become one of the most widely used programming languages, with C compilers available for practically all modern computer architectures and operating systems. The book The C Programming Language, co-authored by the original language designer, served for many years as the de facto standard for the language. C has been standardized since 1989 by the American National Standards Institute (ANSI) and, subsequently, jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
C is an imperative procedural language, supporting structured programming, lexical variable scope, and recursion, with a static type system. It was designed to be compiled to provide low-level access to memory and language constructs that map efficiently to machine instructions, all with minimal runtime support. Despite its low-level capabilities, the language was designed to encourage cross-platform programming. A standards-compliant C program written with portability in mind can be compiled for a wide variety of computer platforms and operating systems with few changes to its source code.
Since 2000, C has consistently ranked among the top four languages in the TIOBE index, a measure of the popularity of programming languages.
Overview
C is an imperative, procedural language in the ALGOL tradition. It has a static type system. In C, all executable code is contained within subroutines (also called "functions", though not in the sense of functional programming). Function parameters are passed by value, although arrays are passed as pointers, i.e. the address of the first item in the array. Pass-by-reference is simulated in C by explicitly passing pointers to the thing being referenced.
C program source text is free-form code. Semicolons terminate statements, while curly braces are used to group statements into blocks.
The C language also exhibits the following characteristics (several of which are illustrated in the short example program after this list):
The language has a small, fixed number of keywords, including a full set of control flow primitives: if/else, for, do/while, while, and switch. User-defined names are not distinguished from keywords by any kind of sigil.
It has a large number of arithmetic, bitwise, and logic operators: +, +=, ++, &, ||, etc.
More than one assignment may be performed in a single statement.
Functions:
Function return values can be ignored, when not needed.
Function and data pointers permit ad hoc run-time polymorphism.
Functions may not be defined within the lexical scope of other functions.
Variables may be defined within a function, with block scope.
A function may call itself, so recursion is supported.
Data typing is static, but weakly enforced; all data has a type, but implicit conversions are possible.
User-defined (typedef) and compound types are possible.
Heterogeneous aggregate data types (struct) allow related data elements to be accessed and assigned as a unit. The contents of whole structs cannot be compared using a single built-in operator (the elements must be compared individually).
Union is a structure with overlapping members; it allows multiple data types to share the same memory location.
Array indexing is a secondary notation, defined in terms of pointer arithmetic. Whole arrays cannot be assigned or compared using a single built-in operator. There is no "array" keyword in use or definition; instead, square brackets indicate arrays syntactically, for example month[11].
Enumerated types are possible with the enum keyword. They are freely interconvertible with integers.
Strings are not a distinct data type, but are conventionally implemented as null-terminated character arrays.
Low-level access to computer memory is possible by converting machine addresses to pointers.
Procedures (subroutines not returning values) are a special case of function, with an empty return type void.
Memory can be allocated to a program with calls to library routines.
A preprocessor performs macro definition, source code file inclusion, and conditional compilation.
There is a basic form of modularity: files can be compiled separately and linked together, with control over which functions and data objects are visible to other files via static and extern attributes.
Complex functionality such as I/O, string manipulation, and mathematical functions are consistently delegated to library routines.
The generated code after compilation has relatively straightforward needs on the underlying platform, which makes it suitable for creating operating systems and for use in embedded systems.
While C does not include certain features found in other languages (such as object orientation and garbage collection), these can be implemented or emulated, often through the use of external libraries (e.g., the GLib Object System or the Boehm garbage collector).
Relations to other languages
Many later languages have borrowed directly or indirectly from C, including C++, C#, Unix's C shell, D, Go, Java, JavaScript (including transpilers), Julia, Limbo, LPC, Objective-C, Perl, PHP, Python, Ruby, Rust, Swift, Verilog and SystemVerilog (hardware description languages). These languages have drawn many of their control structures and other basic features from C. Most of them also express highly similar syntax to C, and they tend to combine the recognizable expression and statement syntax of C with underlying type systems, data models, and semantics that can be radically different.
History
Early developments
The origin of C is closely tied to the development of the Unix operating system, originally implemented in assembly language on a PDP-7 by Dennis Ritchie and Ken Thompson, incorporating several ideas from colleagues. Eventually, they decided to port the operating system to a PDP-11. The original PDP-11 version of Unix was also developed in assembly language.
B
Thompson wanted a programming language for developing utilities for the new platform. He first tried writing a Fortran compiler, but he soon gave up the idea and instead created a cut-down version of the recently developed systems programming language called BCPL. The official description of BCPL was not available at the time, and Thompson modified the syntax to be less 'wordy' and similar to a simplified ALGOL known as SMALGOL. He called the result B, describing it as "BCPL semantics with a lot of SMALGOL syntax". Like BCPL, B had a bootstrapping compiler to facilitate porting to new machines. Ultimately, few utilities were written in B because it was too slow and could not take advantage of PDP-11 features such as byte addressability.
New B and first C release
In 1971 Ritchie started to improve B, to use the features of the more-powerful PDP-11. A significant addition was a character data type. He called this New B (NB). Thompson started to use NB to write the Unix kernel, and his requirements shaped the direction of the language development. Through to 1972, richer types were added to the NB language: NB had arrays of int and char. Pointers, the ability to generate pointers to other types, arrays of all types, and types to be returned from functions were all also added. Arrays within expressions became pointers. A new compiler was written, and the language was renamed C.
The C compiler and some utilities made with it were included in Version 2 Unix, which is also known as Research Unix.
Structures and Unix kernel re-write
At Version 4 Unix, released in November 1973, the Unix kernel was extensively re-implemented in C. By this time, the C language had acquired some powerful features such as struct types.
The preprocessor was introduced around 1973 at the urging of Alan Snyder and also in recognition of the usefulness of the file-inclusion mechanisms available in BCPL and PL/I. Its original version provided only included files and simple string replacements: #include and #define of parameterless macros. Soon after that, it was extended, mostly by Mike Lesk and then by John Reiser, to incorporate macros with arguments and conditional compilation.
Unix was one of the first operating system kernels implemented in a language other than assembly. Earlier instances include the Multics system (which was written in PL/I) and Master Control Program (MCP) for the Burroughs B5000 (which was written in ALGOL) in 1961. In around 1977, Ritchie and Stephen C. Johnson made further changes to the language to facilitate portability of the Unix operating system. Johnson's Portable C Compiler served as the basis for several implementations of C on new platforms.
K&R C
In 1978 Brian Kernighan and Dennis Ritchie published the first edition of The C Programming Language. Known as K&R from the initials of its authors, the book served for many years as an informal specification of the language. The version of C that it describes is commonly referred to as "K&R C". As this was released in 1978, it is now also referred to as C78. The second edition of the book covers the later ANSI C standard, described below.
K&R introduced several language features:
Standard I/O library
long int data type
unsigned int data type
Compound assignment operators of the form =op (such as =-) were changed to the form op= (that is, -=) to remove the semantic ambiguity created by constructs such as i=-10, which had been interpreted as i =- 10 (decrement i by 10) instead of the possibly intended i = -10 (let i be −10).
Even after the publication of the 1989 ANSI standard, for many years K&R C was still considered the "lowest common denominator" to which C programmers restricted themselves when maximum portability was desired, since many older compilers were still in use, and because carefully written K&R C code can be legal Standard C as well.
In early versions of C, only functions that return types other than int must be declared if used before the function definition; functions used without prior declaration were presumed to return type int.
For example:
long some_function(); /* This is a function declaration, so the compiler can know the name and return type of this function. */
/* int */ other_function(); /* Another function declaration. Because this is an early version of C, there is an implicit 'int' type here. A comment shows where the explicit 'int' type specifier would be required in later versions. */
/* int */ calling_function() /* This is a function definition, including the body of the code following in the { curly brackets }. Because no return type is specified, the function implicitly returns an 'int' in this early version of C. */
{
    long test1;
    register /* int */ test2; /* Again, note that 'int' is not required here. The 'int' type specifier */
                              /* in the comment would be required in later versions of C. */
                              /* The 'register' keyword indicates to the compiler that this variable should */
                              /* ideally be stored in a register as opposed to within the stack frame. */
    test1 = some_function();
    if (test1 > 1)
        test2 = 0;
    else
        test2 = other_function();
    return test2;
}
The int type specifiers which are commented out could be omitted in K&R C, but are required in later standards.
Since K&R function declarations did not include any information about function arguments, function parameter type checks were not performed, although some compilers would issue a warning message if a local function was called with the wrong number of arguments, or if different calls to an external function used different numbers or types of arguments. Separate tools such as Unix's lint utility were developed that (among other things) could check for consistency of function use across multiple source files.
In the years following the publication of K&R C, several features were added to the language, supported by compilers from AT&T (in particular PCC) and some other vendors. These included:
void functions (i.e., functions with no return value)
functions returning struct or union types (previously only a single pointer, integer or float could be returned)
assignment for struct data types
enumerated types (previously, preprocessor definitions for integer fixed values were used, e.g. #define GREEN 3)
The large number of extensions and lack of agreement on a standard library, together with the language popularity and the fact that not even the Unix compilers precisely implemented the K&R specification, led to the necessity of standardization.
ANSI C and ISO C
During the late 1970s and 1980s, versions of C were implemented for a wide variety of mainframe computers, minicomputers, and microcomputers, including the IBM PC, as its popularity began to increase significantly.
In 1983 the American National Standards Institute (ANSI) formed a committee, X3J11, to establish a standard specification of C. X3J11 based the C standard on the Unix implementation; however, the non-portable portion of the Unix C library was handed off to the IEEE working group 1003 to become the basis for the 1988 POSIX standard. In 1989, the C standard was ratified as ANSI X3.159-1989 "Programming Language C". This version of the language is often referred to as ANSI C, Standard C, or sometimes C89.
In 1990 the ANSI C standard (with formatting changes) was adopted by the International Organization for Standardization (ISO) as ISO/IEC 9899:1990, which is sometimes called C90. Therefore, the terms "C89" and "C90" refer to the same programming language.
ANSI, like other national standards bodies, no longer develops the C standard independently, but defers to the international C standard, maintained by the working group ISO/IEC JTC1/SC22/WG14. National adoption of an update to the international standard typically occurs within a year of ISO publication.
One of the aims of the C standardization process was to produce a superset of K&R C, incorporating many of the subsequently introduced unofficial features. The standards committee also included several additional features such as function prototypes (borrowed from C++), void pointers, support for international character sets and locales, and preprocessor enhancements. Although the syntax for parameter declarations was augmented to include the style used in C++, the K&R interface continued to be permitted, for compatibility with existing source code.
C89 is supported by current C compilers, and most modern C code is based on it. Any program written only in Standard C and without any hardware-dependent assumptions will run correctly on any platform with a conforming C implementation, within its resource limits. Without such precautions, programs may compile only on a certain platform or with a particular compiler, due, for example, to the use of non-standard libraries, such as GUI libraries, or to a reliance on compiler- or platform-specific attributes such as the exact size of data types and byte endianness.
In cases where code must be compilable by either standard-conforming or K&R C-based compilers, the __STDC__ macro can be used to split the code into Standard and K&R sections to prevent the use on a K&R C-based compiler of features available only in Standard C.
After the ANSI/ISO standardization process, the C language specification remained relatively static for several years. In 1995, Normative Amendment 1 to the 1990 C standard (ISO/IEC 9899/AMD1:1995, known informally as C95) was published, to correct some details and to add more extensive support for international character sets.
C99
The C standard was further revised in the late 1990s, leading to the publication of ISO/IEC 9899:1999 in 1999, which is commonly referred to as "C99". It has since been amended three times by Technical Corrigenda.
C99 introduced several new features, including inline functions, several new data types (including long long int and a complex type to represent complex numbers), variable-length arrays and flexible array members, improved support for IEEE 754 floating point, support for variadic macros (macros of variable arity), and support for one-line comments beginning with //, as in BCPL or C++. Many of these had already been implemented as extensions in several C compilers.
C99 is for the most part backward compatible with C90, but is stricter in some ways; in particular, a declaration that lacks a type specifier no longer has int implicitly assumed. A standard macro __STDC_VERSION__ is defined with value 199901L to indicate that C99 support is available. GCC, Solaris Studio, and other C compilers now support many or all of the new features of C99. The C compiler in Microsoft Visual C++, however, implements the C89 standard and those parts of C99 that are required for compatibility with C++11.
In addition, the C99 standard requires support for identifiers using Unicode in the form of escaped characters (universal character names written as \uXXXX or \UXXXXXXXX) and suggests support for raw Unicode names.
C11
Work began in 2007 on another revision of the C standard, informally called "C1X" until its official publication of ISO/IEC 9899:2011 on December 8, 2011. The C standards committee adopted guidelines to limit the adoption of new features that had not been tested by existing implementations.
The C11 standard adds numerous new features to C and the library, including type generic macros, anonymous structures, improved Unicode support, atomic operations, multi-threading, and bounds-checked functions. It also makes some portions of the existing C99 library optional, and improves compatibility with C++. The standard macro __STDC_VERSION__ is defined as 201112L to indicate that C11 support is available.
C17
C17 is an informal name for ISO/IEC 9899:2018, a standard for the C programming language published in June 2018. It introduces no new language features, only technical corrections, and clarifications to defects in C11. The standard macro __STDC_VERSION__ is defined as 201710L to indicate that C17 support is available.
C23
C23 is an informal name for the current major C language standard revision. It was informally known as "C2X" through most of its development. C23 was published in October 2024 as ISO/IEC 9899:2024. The standard macro __STDC_VERSION__ is defined as 202311L to indicate that C23 support is available.
C2Y
C2Y is an informal name for the next major C language standard revision, after C23 (C2X), that is hoped to be released later in the 2020s decade, hence the '2' in "C2Y". An early working draft of C2Y was released in February 2024 as N3220 by the working group ISO/IEC JTC1/SC22/WG14.
Embedded C
Historically, embedded C programming has required non-standard extensions to the C language to support exotic features such as fixed-point arithmetic, multiple distinct memory banks, and basic I/O operations.
In 2008, the C Standards Committee published a technical report extending the C language to address these issues by providing a common standard for all implementations to adhere to. It includes a number of features not available in normal C, such as fixed-point arithmetic, named address spaces, and basic I/O hardware addressing.
Syntax
C has a formal grammar specified by the C standard. Line endings are generally not significant in C; however, line boundaries do have significance during the preprocessing phase. Comments may appear either between the delimiters /* and */, or (since C99) following // until the end of the line. Comments delimited by /* and */ do not nest, and these sequences of characters are not interpreted as comment delimiters if they appear inside string or character literals.
C source files contain declarations and function definitions. Function definitions, in turn, contain declarations and statements. Declarations either define new types using keywords such as struct, union, and enum, or assign types to and perhaps reserve storage for new variables, usually by writing the type followed by the variable name. Keywords such as char and int specify built-in types. Sections of code are enclosed in braces ({ and }, sometimes called "curly brackets") to limit the scope of declarations and to act as a single statement for control structures.
As an imperative language, C uses statements to specify actions. The most common statement is an expression statement, consisting of an expression to be evaluated, followed by a semicolon; as a side effect of the evaluation, functions may be called and variables assigned new values. To modify the normal sequential execution of statements, C provides several control-flow statements identified by reserved keywords. Structured programming is supported by if ... [else] conditional execution and by do ... while, while, and for iterative execution (looping). The for statement has separate initialization, testing, and reinitialization expressions, any or all of which can be omitted. break and continue can be used within the loop. Break is used to leave the innermost enclosing loop statement and continue is used to skip to its reinitialization. There is also a non-structured goto statement which branches directly to the designated label within the function. switch selects a case to be executed based on the value of an integer expression. Unlike in many other languages, control flow will fall through to the next case unless terminated by a break.
Expressions can use a variety of built-in operators and may contain function calls. The order in which arguments to functions and operands to most operators are evaluated is unspecified. The evaluations may even be interleaved. However, all side effects (including storage to variables) will occur before the next "sequence point"; sequence points include the end of each expression statement, and the entry to and return from each function call. Sequence points also occur during evaluation of expressions containing certain operators (&&, ||, ?: and the comma operator). This permits a high degree of object code optimization by the compiler, but requires C programmers to take more care to obtain reliable results than is needed for other programming languages.
Kernighan and Ritchie say in the Introduction of The C Programming Language: "C, like any other language, has its blemishes. Some of the operators have the wrong precedence; some parts of the syntax could be better." The C standard did not attempt to correct many of these blemishes, because of the impact of such changes on already existing software.
Character set
The basic C source character set includes the following characters:
Lowercase and uppercase letters of the ISO basic Latin alphabet: a–z, A–Z
Decimal digits: 0–9
Graphic characters: ! " # % & ' ( ) * + , - . / : ; < = > ? [ \ ] ^ _ { | } ~
Whitespace characters: space, horizontal tab, vertical tab, form feed, newline
The newline character indicates the end of a text line; it need not correspond to an actual single character, although for convenience C treats it as such.
Additional multi-byte encoded characters may be used in string literals, but they are not entirely portable. Since C11, multi-national Unicode characters can be embedded portably within C source text by using \uXXXX or \UXXXXXXXX encoding (where X denotes a hexadecimal character), although this feature is not yet widely implemented.
The basic C execution character set contains the same characters, along with representations for alert, backspace, and carriage return. Run-time support for extended character sets has increased with each revision of the C standard.
Reserved words
The following reserved words are case sensitive.
C89 has 32 reserved words, also known as 'keywords', which cannot be used for any purposes other than those for which they are predefined:
auto
break
case
char
const
continue
default
do
double
else
enum
extern
float
for
goto
if
int
long
register
return
short
signed
sizeof
static
struct
switch
typedef
union
unsigned
void
volatile
while
C99 added five more reserved words: (‡ indicates an alternative spelling alias for a C23 keyword)
inline
restrict
_Bool ‡
_Complex
_Imaginary
C11 added seven more reserved words: (‡ indicates an alternative spelling alias for a C23 keyword)
_Alignas ‡
_Alignof ‡
_Atomic
_Generic
_Noreturn
_Static_assert ‡
_Thread_local ‡
C23 reserved fifteen more words:
alignas
alignof
bool
constexpr
false
nullptr
static_assert
thread_local
true
typeof
typeof_unqual
_BitInt
_Decimal32
_Decimal64
_Decimal128
Most of the recently reserved words begin with an underscore followed by a capital letter, because identifiers of that form were previously reserved by the C standard for use only by implementations. Since existing program source code should not have been using these identifiers, it would not be affected when C implementations started supporting these extensions to the programming language. Some standard headers do define more convenient synonyms for underscored identifiers. Some of those words were added as keywords with their conventional spelling in C23 and the corresponding macros were removed.
Prior to C89, entry was reserved as a keyword. In the second edition of their book The C Programming Language, which describes what became known as C89, Kernighan and Ritchie wrote, "The ... [keyword] entry, formerly reserved but never used, is no longer reserved." and "The stillborn entry keyword is withdrawn."
Operators
C supports a rich set of operators, which are symbols used within an expression to specify the manipulations to be performed while evaluating that expression. C has operators for:
arithmetic: +, -, *, /, %
assignment: =
augmented assignment: +=, -=, *=, /=, %=, &=, |=, ^=, <<=, >>=
bitwise logic: ~, &, |, ^
bitwise shifts: <<, >>
Boolean logic: !, &&, ||
conditional evaluation: ? :
equality testing: ==, !=
calling functions: ( )
increment and decrement: ++, --
member selection: ., ->
object size: sizeof
type: typeof, typeof_unqual since C23
order relations: <, <=, >, >=
reference and dereference: &, *, [ ]
sequencing: ,
subexpression grouping: ( )
type conversion: (typename)
C uses the operator = (used in mathematics to express equality) to indicate assignment, following the precedent of Fortran and PL/I, but unlike ALGOL and its derivatives. C uses the operator == to test for equality. The similarity between the operators for assignment and equality may result in the accidental use of one in place of the other, and in many cases the mistake does not produce an error message (although some compilers produce warnings). For example, the conditional expression if (a == b + 1) might mistakenly be written as if (a = b + 1), which will be evaluated as true unless the value of a is 0 after the assignment.
The C operator precedence is not always intuitive. For example, the operator == binds more tightly than (is executed prior to) the operators & (bitwise AND) and | (bitwise OR) in expressions such as x & 1 == 0, which must be written as (x & 1) == 0 if that is the coder's intent.
"Hello, world" example
The "hello, world" example that appeared in the first edition of K&R has become the model for an introductory program in most programming textbooks. The program prints "hello, world" to the standard output, which is usually a terminal or screen display.
The original version was:
main()
{
printf("hello, world\n");
}
A standard-conforming "hello, world" program is:
#include <stdio.h>
int main(void)
{
printf("hello, world\n");
}
The first line of the program contains a preprocessing directive, indicated by #include. This causes the compiler to replace that line of code with the entire text of the stdio.h header file, which contains declarations for standard input and output functions such as printf and scanf. The angle brackets surrounding stdio.h indicate that the header file can be located using a search strategy that prefers headers provided with the compiler to other headers having the same name (as opposed to double quotes which typically include local or project-specific header files).
The second line indicates that a function named main is being defined. The main function serves a special purpose in C programs; the run-time environment calls the main function to begin program execution. The type specifier int indicates that the value returned to the invoker (in this case the run-time environment) as a result of evaluating the main function, is an integer. The keyword void as a parameter list indicates that the main function takes no arguments.
The opening curly brace indicates the beginning of the code that defines the main function.
The next line of the program is a statement that calls (i.e. diverts execution to) a function named printf, which in this case is supplied from a system library. In this call, the printf function is passed (i.e. provided with) a single argument, which is the address of the first character in the string literal "hello, world\n". The string literal is an unnamed array set up automatically by the compiler, with elements of type char and a final null character (ASCII value 0) marking the end of the array (to allow printf to determine the length of the string). The null character can also be written as the escape sequence \0. The \n is a standard escape sequence that C translates to a newline character, which, on output, signifies the end of the current line. The return value of the printf function is of type int, but it is silently discarded since it is not used. (A more careful program might test the return value to check that the printf function succeeded.) The semicolon ; terminates the statement.
The closing curly brace indicates the end of the code for the main function. According to the C99 specification and newer, the main function (unlike any other function) will implicitly return a value of 0 upon reaching the } that terminates the function. The return value of 0 is interpreted by the run-time system as an exit code indicating successful execution of the function.
Data types
The type system in C is static and weakly typed, which makes it similar to the type system of ALGOL descendants such as Pascal. There are built-in types for integers of various sizes, both signed and unsigned, floating-point numbers, and enumerated types (enum). Integer type char is often used for single-byte characters. C99 added a Boolean data type. There are also derived types including arrays, pointers, records (struct), and unions (union).
C is often used in low-level systems programming where escapes from the type system may be necessary. The compiler attempts to ensure type correctness of most expressions, but the programmer can override the checks in various ways, either by using a type cast to explicitly convert a value from one type to another, or by using pointers or unions to reinterpret the underlying bits of a data object in some other way.
Some find C's declaration syntax unintuitive, particularly for function pointers. (Ritchie's idea was to declare identifiers in contexts resembling their use: "declaration reflects use".)
C's usual arithmetic conversions allow for efficient code to be generated, but can sometimes produce unexpected results. For example, a comparison of signed and unsigned integers of equal width requires a conversion of the signed value to unsigned. This can generate unexpected results if the signed value is negative.
Pointers
C supports the use of pointers, a type of reference that records the address or location of an object or function in memory. Pointers can be dereferenced to access data stored at the address pointed to, or to invoke a pointed-to function. Pointers can be manipulated using assignment or pointer arithmetic. The run-time representation of a pointer value is typically a raw memory address (perhaps augmented by an offset-within-word field), but since a pointer's type includes the type of the thing pointed to, expressions including pointers can be type-checked at compile time. Pointer arithmetic is automatically scaled by the size of the pointed-to data type.
Pointers are used for many purposes in C. Text strings are commonly manipulated using pointers into arrays of characters. Dynamic memory allocation is performed using pointers; the result of a malloc is usually cast to the data type of the data to be stored. Many data types, such as trees, are commonly implemented as dynamically allocated struct objects linked together using pointers. Pointers to other pointers are often used in multi-dimensional arrays and arrays of struct objects. Pointers to functions (function pointers) are useful for passing functions as arguments to higher-order functions (such as qsort or bsearch), in dispatch tables, or as callbacks to event handlers.
A null pointer value explicitly points to no valid location. Dereferencing a null pointer value is undefined, often resulting in a segmentation fault. Null pointer values are useful for indicating special cases such as no "next" pointer in the final node of a linked list, or as an error indication from functions returning pointers. In appropriate contexts in source code, such as for assigning to a pointer variable, a null pointer constant can be written as 0, with or without explicit casting to a pointer type, as the NULL macro defined by several standard headers or, since C23 with the constant nullptr. In conditional contexts, null pointer values evaluate to false, while all other pointer values evaluate to true.
Void pointers (void *) point to objects of unspecified type, and can therefore be used as "generic" data pointers. Since the size and type of the pointed-to object is not known, void pointers cannot be dereferenced, nor is pointer arithmetic on them allowed, although they can easily be (and in many contexts implicitly are) converted to and from any other object pointer type.
Careless use of pointers is potentially dangerous. Because they are typically unchecked, a pointer variable can be made to point to any arbitrary location, which can cause undesirable effects. Although properly used pointers point to safe places, they can be made to point to unsafe places by using invalid pointer arithmetic; the objects they point to may continue to be used after deallocation (dangling pointers); they may be used without having been initialized (wild pointers); or they may be directly assigned an unsafe value using a cast, union, or through another corrupt pointer. In general, C is permissive in allowing manipulation of and conversion between pointer types, although compilers typically provide options for various levels of checking. Some other programming languages address these problems by using more restrictive reference types.
Arrays
Array types in C are traditionally of a fixed, static size specified at compile time. The more recent C99 standard also allows a form of variable-length arrays. However, it is also possible to allocate a block of memory (of arbitrary size) at run-time, using the standard library's malloc function, and treat it as an array.
Since arrays are always accessed (in effect) via pointers, array accesses are typically not checked against the underlying array size, although some compilers may provide bounds checking as an option. Array bounds violations are therefore possible and can lead to various repercussions, including illegal memory accesses, corruption of data, buffer overruns, and run-time exceptions.
C does not have a special provision for declaring multi-dimensional arrays, but rather relies on recursion within the type system to declare arrays of arrays, which effectively accomplishes the same thing. The index values of the resulting "multi-dimensional array" can be thought of as increasing in row-major order. Multi-dimensional arrays are commonly used in numerical algorithms (mainly from applied linear algebra) to store matrices. The structure of the C array is well suited to this particular task. However, in early versions of C the bounds of the array must be known fixed values or else explicitly passed to any subroutine that requires them, and dynamically sized arrays of arrays cannot be accessed using double indexing. (A workaround for this was to allocate the array with an additional "row vector" of pointers to the columns.) C99 introduced "variable-length arrays" which address this issue.
The following example using modern C (C99 or later) shows allocation of a two-dimensional array on the heap and the use of multi-dimensional array indexing for accesses (which can use bounds-checking on many C compilers):
int func(int N, int M)
{
    float (*p)[N][M] = malloc(sizeof *p);
    if (p == 0)
        return -1;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            (*p)[i][j] = i + j;
    print_array(N, M, p);
    free(p);
    return 1;
}
And here is a similar implementation using C99's Auto VLA feature:
int func(int N, int M)
{
    // Caution: checks should be made to ensure N*M*sizeof(float) does NOT exceed limitations for auto VLAs and is within available size of stack.
    float p[N][M]; // auto VLA is held on the stack, and sized when the function is invoked
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            p[i][j] = i + j;
    print_array(N, M, p);
    // no need to free(p) since it will disappear when the function exits, along with the rest of the stack frame
    return 1;
}
Array–pointer interchangeability
The subscript notation x[i] (where x designates a pointer) is syntactic sugar for *(x+i). Taking advantage of the compiler's knowledge of the pointer type, the address that x + i points to is not the base address (pointed to by x) incremented by i bytes, but rather is defined to be the base address incremented by i multiplied by the size of an element that x points to. Thus, x[i] designates the (i + 1)th element of the array.
Furthermore, in most expression contexts (a notable exception is as operand of sizeof), an expression of array type is automatically converted to a pointer to the array's first element. This implies that an array is never copied as a whole when named as an argument to a function, but rather only the address of its first element is passed. Therefore, although function calls in C use pass-by-value semantics, arrays are in effect passed by reference.
The total size of an array x can be determined by applying sizeof to an expression of array type. The size of an element can be determined by applying the operator sizeof to any dereferenced element of an array A, as in n = sizeof A[0]. Thus, the number of elements in a declared array A can be determined as sizeof A / sizeof A[0]. Note that if only a pointer to the first element is available, as is often the case in C code because of the automatic conversion described above, the information about the full type of the array and its length is lost.
Memory management
One of the most important functions of a programming language is to provide facilities for managing memory and the objects that are stored in memory. C provides three principal ways to allocate memory for objects:
Static memory allocation: space for the object is provided in the binary at compile-time; these objects have an extent (or lifetime) as long as the binary which contains them is loaded into memory.
Automatic memory allocation: temporary objects can be stored on the stack, and this space is automatically freed and reusable after the block in which they are declared is exited.
Dynamic memory allocation: blocks of memory of arbitrary size can be requested at run-time using library functions such as malloc from a region of memory called the heap; these blocks persist until subsequently freed for reuse by calling the library function realloc or free.
These three approaches are appropriate in different situations and have various trade-offs. For example, static memory allocation has little allocation overhead, automatic allocation may involve slightly more overhead, and dynamic memory allocation can potentially have a great deal of overhead for both allocation and deallocation. The persistent nature of static objects is useful for maintaining state information across function calls, automatic allocation is easy to use but stack space is typically much more limited and transient than either static memory or heap space, and dynamic memory allocation allows convenient allocation of objects whose size is known only at run-time. Most C programs make extensive use of all three.
Where possible, automatic or static allocation is usually simplest because the storage is managed by the compiler, freeing the programmer of the potentially error-prone chore of manually allocating and releasing storage. However, many data structures can change in size at runtime, and since static allocations (and automatic allocations before C99) must have a fixed size at compile-time, there are many situations in which dynamic allocation is necessary. Prior to the C99 standard, variable-sized arrays were a common example of this. (See the article on C dynamic memory allocation for an example of dynamically allocated arrays.) Unlike automatic allocation, which can fail at run time with uncontrolled consequences, the dynamic allocation functions return an indication (in the form of a null pointer value) when the required storage cannot be allocated. (Static allocation that is too large is usually detected by the linker or loader, before the program can even begin execution.)
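A minimal sketch of dynamic allocation with the null-pointer check described above; the element count here stands in for a size known only at run time.
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 1000;                      /* size known only at run time */
    double *v = malloc(n * sizeof *v);    /* request a block from the heap */
    if (v == NULL) {                      /* allocation failure is reported, not fatal */
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    for (size_t i = 0; i < n; i++)
        v[i] = (double)i;
    free(v);                              /* return the block for reuse */
    return 0;
}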
Unless otherwise specified, static objects contain zero or null pointer values upon program startup. Automatically and dynamically allocated objects are initialized only if an initial value is explicitly specified; otherwise they initially have indeterminate values (typically, whatever bit pattern happens to be present in the storage, which might not even represent a valid value for that type). If the program attempts to access an uninitialized value, the results are undefined. Many modern compilers try to detect and warn about this problem, but both false positives and false negatives can occur.
Heap memory allocation has to be synchronized with its actual usage in any program to be reused as much as possible. For example, if the only pointer to a heap memory allocation goes out of scope or has its value overwritten before it is deallocated explicitly, then that memory cannot be recovered for later reuse and is essentially lost to the program, a phenomenon known as a memory leak. Conversely, it is possible for memory to be freed, but is referenced subsequently, leading to unpredictable results. Typically, the failure symptoms appear in a portion of the program unrelated to the code that causes the error, making it difficult to diagnose the failure. Such issues are ameliorated in languages with automatic garbage collection.
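The two failure modes described above can be made concrete. The sketch below is deliberately incorrect code, shown only to illustrate a leak and a use after free; the function names are illustrative.
#include <stdlib.h>
#include <string.h>

void leak(void)
{
    char *buf = malloc(64);
    buf = malloc(128);   /* the first block's only pointer is overwritten: memory leak */
    free(buf);           /* only the second block is ever freed */
}

int use_after_free(void)
{
    char *s = malloc(16);
    strcpy(s, "hello"); /* note: a robust program would check s for NULL first */
    free(s);            /* the block is returned to the allocator... */
    return s[0];        /* ...but referenced afterwards: undefined behavior */
}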
Libraries
The C programming language uses libraries as its primary method of extension. In C, a library is a set of functions contained within a single "archive" file. Each library typically has a header file, which contains the prototypes of the functions contained within the library that may be used by a program, and declarations of special data types and macro symbols used with these functions. For a program to use a library, it must include the library's header file, and the library must be linked with the program, which in many cases requires compiler flags (e.g., -lm, shorthand for "link the math library").
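For instance, a program calling sqrt includes the math header for the prototype and, on many Unix-like systems, is linked with -lm; a minimal sketch:
/* build on many Unix-like systems with: cc prog.c -lm */
#include <math.h>    /* declares sqrt() and other math functions */
#include <stdio.h>

int main(void)
{
    double h = sqrt(3.0 * 3.0 + 4.0 * 4.0);  /* sqrt is defined in the math library */
    printf("hypotenuse: %f\n", h);
    return 0;
}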
The most common C library is the C standard library, which is specified by the ISO and ANSI C standards and comes with every C implementation (implementations which target limited environments such as embedded systems may provide only a subset of the standard library). This library supports stream input and output, memory allocation, mathematics, character strings, and time values. Several separate standard headers (for example, stdio.h) specify the interfaces for these and other standard library facilities.
Another common set of C library functions are those used by applications specifically targeted for Unix and Unix-like systems, especially functions which provide an interface to the kernel. These functions are detailed in various standards such as POSIX and the Single UNIX Specification.
Since many programs have been written in C, there are a wide variety of other libraries available. Libraries are often written in C because C compilers generate efficient object code; programmers then create interfaces to the library so that the routines can be used from higher-level languages like Java, Perl, and Python.
File handling and streams
File input and output (I/O) is not part of the C language itself but instead is handled by libraries (such as the C standard library) and their associated header files (e.g. stdio.h). File handling is generally implemented through high-level I/O which works through streams. A stream is from this perspective a data flow that is independent of devices, while a file is a concrete device. The high-level I/O is done through the association of a stream to a file. In the C standard library, a buffer (a memory area or queue) is temporarily used to store data before it is sent to the final destination. This reduces the time spent waiting for slower devices, for example a hard drive or solid-state drive. Low-level I/O functions are not part of the standard C library but are generally part of "bare metal" programming (programming that is independent of any operating system such as most embedded programming). With few exceptions, implementations include low-level I/O.
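A minimal sketch of stream-based output using the standard library; the file name is arbitrary.
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("example.txt", "w");   /* associate a stream with a file */
    if (f == NULL) {
        perror("fopen");                   /* report why the stream could not be opened */
        return 1;
    }
    fprintf(f, "hello, stream\n");         /* data is buffered before reaching the device */
    fclose(f);                             /* flush the buffer and release the stream */
    return 0;
}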
Language tools
A number of tools have been developed to help C programmers find and fix statements with undefined behavior or possibly erroneous expressions, with greater rigor than that provided by the compiler.
Automated source code checking and auditing tools exist, such as Lint. A common practice is to use Lint to detect questionable code when a program is first written. Once a program passes Lint, it is then compiled using the C compiler. Also, many compilers can optionally warn about syntactically valid constructs that are likely to actually be errors. MISRA C is a proprietary set of guidelines to avoid such questionable code, developed for embedded systems.
There are also compilers, libraries, and operating system level mechanisms for performing actions that are not a standard part of C, such as bounds checking for arrays, detection of buffer overflow, serialization, dynamic memory tracking, and automatic garbage collection.
Memory management checking tools like Purify or Valgrind and linking with libraries containing special versions of the memory allocation functions can help uncover runtime errors in memory usage.
Uses
Rationale for use in systems programming
C is widely used for systems programming in implementing operating systems and embedded system applications. This is for several reasons:
The C language permits platform hardware and memory to be accessed with pointers and type punning, so system-specific features (e.g. Control/Status Registers, I/O registers) can be configured and used with code written in C (see the sketch after this list) – it allows fullest control of the platform it is running on.
The code generated after compilation does not demand many system features, and can be invoked from some boot code in a straightforward manner – it is simple to execute.
The C language statements and expressions typically map well on to sequences of instructions for the target processor, and consequently there is a low run-time demand on system resources – it is fast to execute.
With its rich set of operators, the C language can use many of the features of target CPUs. Where a particular CPU has more esoteric instructions, a language variant can be constructed with perhaps intrinsic functions to exploit those instructions – it can use practically all the target CPU's features.
The language makes it easy to overlay structures onto blocks of binary data, allowing the data to be comprehended, navigated and modified – it can write data structures, even file systems.
The language supports a rich set of operators, including bit manipulation, for integer arithmetic and logic, and perhaps different sizes of floating point numbers – it can process appropriately-structured data effectively.
C is a fairly small language, with only a handful of statements, and without too many features that generate extensive target code – it is comprehensible.
C has direct control over memory allocation and deallocation, which gives reasonable efficiency and predictable timing to memory-handling operations, without any concerns for sporadic stop-the-world garbage collection events – it has predictable performance.
C permits the use and implementation of different memory allocation schemes, including a typical malloc and free; a more sophisticated mechanism with arenas; or a version for an OS kernel that may suit DMA, use within interrupt handlers, or integration with the virtual memory system.
Depending on the linker and environment, C code can also call libraries written in assembly language, and may be called from assembly language – it interoperates well with other lower-level code.
C and its calling conventions and linker structures are commonly used in conjunction with other high-level languages, with calls both to C and from C supported – it interoperates well with other high-level code.
C has a very mature and broad ecosystem, including libraries, frameworks, open source compilers, debuggers and utilities, and is the de facto standard. It is likely the drivers already exist in C, or that there is a similar CPU architecture as a back-end of a C compiler, so there is reduced incentive to choose another language.
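As a sketch of the hardware-access and structure-overlay points above, the fragment below maps a structure onto a memory-mapped peripheral. Every detail of the register layout and the base address is hypothetical; real values come from a platform's documentation, and such code is inherently non-portable.
#include <stdint.h>

struct uart {                     /* hypothetical register layout */
    volatile uint32_t status;     /* read-only status register */
    volatile uint32_t data;       /* transmit/receive register */
};

#define UART0 ((struct uart *)0x40001000u)  /* hypothetical base address */

void uart_putc(char c)
{
    while ((UART0->status & 1u) == 0)  /* busy-wait until ready (bit 0, assumed) */
        ;
    UART0->data = (uint32_t)c;
}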
Used for computationally-intensive libraries
C enables programmers to create efficient implementations of algorithms and data structures, because the layer of abstraction from hardware is thin and its overhead is low, an important criterion for computationally intensive programs. For example, the GNU Multiple Precision Arithmetic Library, the GNU Scientific Library, Mathematica, and MATLAB are completely or partially written in C. Many languages support calling C library functions; for example, the Python-based library NumPy uses C for its high-performance and hardware-interacting aspects.
C as an intermediate language
C is sometimes used as an intermediate language by implementations of other languages. This approach may be used for portability or convenience; by using C as an intermediate language, additional machine-specific code generators are not necessary. C has some features, such as line-number preprocessor directives and optional superfluous commas at the end of initializer lists, that support compilation of generated code. However, some of C's shortcomings have prompted the development of other C-based languages specifically designed for use as intermediate languages, such as C--. Also, contemporary major compilers GCC and LLVM both feature an intermediate representation that is not C, and those compilers support front ends for many languages including C.
Other languages written in C
A consequence of C's wide availability and efficiency is that compilers, libraries and interpreters of other programming languages are often implemented in C. For example, the reference implementations of Python, Perl, Ruby, and PHP are written in C.
Once used for web development
Historically, C was sometimes used for web development using the Common Gateway Interface (CGI) as a "gateway" for information between the web application, the server, and the browser. C may have been chosen over interpreted languages because of its speed, stability, and near-universal availability. It is no longer common practice for web development to be done in C, and many other web development languages are popular. Applications where C-based web development continues include the HTTP configuration pages on routers, IoT devices and similar, although even here some projects have parts in higher-level languages e.g. the use of Lua within OpenWRT.
Web servers
The two most popular web servers, Apache HTTP Server and Nginx, are both written in C. These web servers interact with the operating system, listen on TCP ports for HTTP requests, and then serve up static web content, or invoke handlers in other languages to 'render' content, such as PHP, which is itself primarily written in C. C's close-to-the-metal approach allows for the construction of these high-performance software systems.
End-user applications
C has also been widely used to implement end-user applications. However, such applications can also be written in newer, higher-level languages.
Limitations
While C has been popular, influential and hugely successful, it has drawbacks, including:
The standard dynamic memory handling with malloc and free is error prone. Improper use can lead to memory leaks and dangling pointers.
The use of pointers and the direct manipulation of memory means corruption of memory is possible, perhaps due to programmer error, or insufficient checking of bad data.
There is some type checking, but it does not apply to areas like variadic functions, and the type checking can be trivially or inadvertently circumvented. It is weakly typed.
Since the code generated by the compiler contains few checks itself, there is a burden on the programmer to consider all possible outcomes: to protect against buffer overruns, out-of-bounds array accesses, stack overflows and memory exhaustion, and to consider race conditions, thread isolation, etc.
The use of pointers and the run-time manipulation of these means there may be two ways to access the same data (aliasing), which is not determinable at compile time. This means that some optimisations that may be available to other languages are not possible in C; Fortran, whose procedure arguments may not alias in this way, is considered faster.
Some of the standard library functions, e.g. scanf, can lead to buffer overruns.
There is limited standardisation in support for low-level variants in generated code, for example: different function calling conventions and ABI; different structure packing conventions; different byte ordering within larger integers (including endianness). In many language implementations, some of these options may be handled with the preprocessor directive #pragma, and some with additional keywords, e.g. the __cdecl calling convention. The directive and options are not consistently supported.
String handling using the standard library is code-intensive, with explicit memory management required.
The language does not directly support object orientation, introspection, run-time expression evaluation, generics, etc.
There are few guards against inappropriate use of language features, which may lead to unmaintainable code. In particular, the C preprocessor can hide troubling effects such as double evaluation and worse. This facility for tricky code has been celebrated with competitions such as the International Obfuscated C Code Contest and the Underhanded C Contest.
C lacks standard support for exception handling and only offers return codes for error checking. The setjmp and longjmp standard library functions have been used to implement a try-catch mechanism via macros, as in the sketch below.
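A minimal sketch of such a macro-based mechanism; the TRY, CATCH and THROW names are illustrative, not a standard facility, and a robust version would need nesting and volatile-qualified locals.
#include <setjmp.h>
#include <stdio.h>

static jmp_buf handler;

#define TRY      if (setjmp(handler) == 0)
#define CATCH    else
#define THROW(n) longjmp(handler, (n))

static void may_fail(int bad)
{
    if (bad)
        THROW(1);   /* unwind back to the most recent setjmp */
}

int main(void)
{
    TRY {
        may_fail(1);
        puts("not reached");
    } CATCH {
        puts("caught error");   /* control arrives here via longjmp */
    }
    return 0;
}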
For some purposes, restricted styles of C have been adopted, e.g. MISRA C or CERT C, in an attempt to reduce the opportunity for bugs. Databases such as CWE attempt to count the ways C etc. has vulnerabilities, along with recommendations for mitigation.
There are tools that can mitigate against some of the drawbacks. Contemporary C compilers include checks which may generate warnings to help identify many potential bugs.
Related languages
C has both directly and indirectly influenced many later languages such as C++ and Java. The most pervasive influence has been syntactical; all of the languages mentioned combine the statement and (more or less recognizably) expression syntax of C with type systems, data models or large-scale program structures that differ from those of C, sometimes radically.
Several C or near-C interpreters exist, including Ch and CINT, which can also be used for scripting.
When object-oriented programming languages became popular, C++ and Objective-C were two different extensions of C that provided object-oriented capabilities. Both languages were originally implemented as source-to-source compilers; source code was translated into C, and then compiled with a C compiler.
The C++ programming language (originally named "C with Classes") was devised by Bjarne Stroustrup as an approach to providing object-oriented functionality with a C-like syntax. C++ adds greater typing strength, scoping, and other tools useful in object-oriented programming, and permits generic programming via templates. Nearly a superset of C, C++ now supports most of C, with a few exceptions.
Objective-C was originally a very "thin" layer on top of C, and remains a strict superset of C that permits object-oriented programming using a hybrid dynamic/static typing paradigm. Objective-C derives its syntax from both C and Smalltalk: syntax that involves preprocessing, expressions, function declarations, and function calls is inherited from C, while the syntax for object-oriented features was originally taken from Smalltalk.
In addition to C++ and Objective-C, Ch, Cilk, and Unified Parallel C are nearly supersets of C.
| Technology | Programming | null |
6034 | https://en.wikipedia.org/wiki/Cahn%E2%80%93Ingold%E2%80%93Prelog%20priority%20rules | Cahn–Ingold–Prelog priority rules | In organic chemistry, the Cahn–Ingold–Prelog (CIP) sequence rules (also the CIP priority convention; named after Robert Sidney Cahn, Christopher Kelk Ingold, and Vladimir Prelog) are a standard process to completely and unequivocally name a stereoisomer of a molecule. The purpose of the CIP system is to assign an R or S descriptor to each stereocenter and an E or Z descriptor to each double bond so that the configuration of the entire molecule can be specified uniquely by including the descriptors in its systematic name. A molecule may contain any number of stereocenters and any number of double bonds, and each usually gives rise to two possible isomers. A molecule with n stereocenters will usually have 2^n stereoisomers, and 2^(n−1) diastereomers, each having an associated pair of enantiomers. The CIP sequence rules contribute to the precise naming of every stereoisomer of every organic molecule with all atoms of ligancy of fewer than 4 (but including ligancy of 6 as well, this term referring to the "number of neighboring atoms" bonded to a center).
The key article setting out the CIP sequence rules was published in 1966, and was followed by further refinements, before it was incorporated into the rules of the International Union of Pure and Applied Chemistry (IUPAC), the official body that defines organic nomenclature, in 1974. The rules have since been revised, most recently in 2013, as part of the IUPAC book Nomenclature of Organic Chemistry. The IUPAC presentation of the rules constitutes the official, formal standard for their use, and it notes that "the method has been developed to cover all compounds with ligancy up to 4... and… [extended to the case of] ligancy 6… [as well as] for all configurations and conformations of such compounds." Nevertheless, though the IUPAC documentation presents a thorough introduction, it includes the caution that "it is essential to study the original papers, especially the 1966 paper, before using the sequence rule for other than fairly simple cases."
A recent paper argues for changes to some of the rules (sequence rules 1b and 2) to address certain molecules for which the correct descriptors were unclear. However, a different problem remains: in rare cases, two different stereoisomers of the same molecule can have the same CIP descriptors, so the CIP system may not be able to unambiguously name a stereoisomer, and other systems may be preferable.
Steps for naming
The steps for naming molecules using the CIP system are often presented as:
Identification of stereocenters and double bonds;
Assignment of priorities to the groups attached to each stereocenter or double-bonded atom; and
Assignment of R/S and E/Z descriptors.
Assignment of priorities
R/S and E/Z descriptors are assigned by using a system for ranking priority of the groups attached to each stereocenter. This procedure, often known as the sequence rules, is the heart of the CIP system. The overview in this section omits some rules that are needed only in rare cases; a schematic sketch of the tie-breaking comparison appears after the steps below.
Compare the atomic number (Z) of the atoms directly attached to the stereocenter; the group having the atom of higher atomic number Z receives higher priority (i.e. number 1).
If there is a tie, the atoms at distance 2 from the stereocenter have to be considered: a list is made for each group of further atoms bonded to the one directly attached to the stereocenter. Each list is arranged in order of decreasing atomic number Z. Then the lists are compared atom by atom; at the earliest difference, the group containing the atom of higher atomic number Z receives higher priority.
If there is still a tie, each atom in each of the two lists is replaced with a sublist of the other atoms bonded to it (at distance 3 from the stereocenter), the sublists are arranged in decreasing order of atomic number Z, and the entire structure is again compared atom by atom. This process is repeated recursively, each time with atoms one bond farther from the stereocenter, until the tie is broken.
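The recursive tie-breaking just described can be pictured as comparing sorted lists of atomic numbers. The C sketch below is a deliberate simplification under stated assumptions: each substituent branch is a tree whose attached atoms are pre-sorted by descending atomic number; phantom atoms, isotopes and the later sequence rules are ignored; and the comparison recurses depth-first, whereas the full rules explore the hierarchical digraph sphere by sphere. The Branch type and cip_compare function are illustrative names, not an established library.
/* A simplified sketch of CIP tie-breaking; see the caveats above. */
typedef struct Branch {
    int z;                 /* atomic number of this atom */
    int n;                 /* number of atoms attached farther from the stereocenter */
    struct Branch **sub;   /* those atoms, pre-sorted by descending z */
} Branch;

/* Returns >0 if a outranks b, <0 if b outranks a, 0 if still tied. */
int cip_compare(const Branch *a, const Branch *b)
{
    if (a->z != b->z)                /* higher atomic number wins immediately */
        return a->z - b->z;

    int min = a->n < b->n ? a->n : b->n;
    for (int i = 0; i < min; i++)    /* compare attached-atom lists entry by entry */
        if (a->sub[i]->z != b->sub[i]->z)
            return a->sub[i]->z - b->sub[i]->z;

    if (a->n != b->n)                /* more attached atoms outranks fewer */
        return a->n - b->n;

    for (int i = 0; i < min; i++) {  /* still tied: look one bond farther out */
        int c = cip_compare(a->sub[i], b->sub[i]);
        if (c != 0)
            return c;
    }
    return 0;
}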
Isotopes
If two groups differ only in isotopes, then the larger atomic mass is used to set the priority.
Double and triple bonds
If an atom, A, is double-bonded to another atom, then atom A should be treated as though it is "connected to the same atom twice". An atom that is double-bonded has a higher priority than an atom that is single-bonded. When ranking double-bonded priority groups, one is allowed to visit the same atom twice while tracing the arc.
When B is replaced with a list of attached atoms, A itself, but not its "phantom", is excluded in accordance with the general principle of not doubling back along a bond that has just been followed. A triple bond is handled the same way except that A and B are each connected to two phantom atoms of the other.
Geometrical isomers
If two substituents on an atom are geometric isomers of each other, the Z-isomer has higher priority than the E-isomer. A stereoisomer that contains two higher priority groups on the same face of the double bond (cis) is classified as "Z." The stereoisomer with two higher priority groups on opposite sides of a carbon-carbon double bond (trans) is classified as "E."
Cyclic molecules
To handle a molecule containing one or more cycles, one must first expand it into a tree (called a hierarchical digraph) by traversing bonds in all possible paths starting at the stereocenter. When the traversal encounters an atom through which the current path has already passed, a phantom atom is generated in order to keep the tree finite. A single atom of the original molecule may appear in many places (some as phantoms, some not) in the tree.
Assigning descriptors
Stereocenters: R/S
A chiral sp3-hybridized stereocenter has four different substituents. All four substituents are assigned priorities based on their atomic numbers. After the substituents of a stereocenter have been assigned their priorities, the molecule is oriented in space so that the group with the lowest priority is pointed away from the observer. If the substituents are numbered from 1 (highest priority) to 4 (lowest priority), then the sense of rotation of a curve passing through 1, 2 and 3 distinguishes the stereoisomers. In a configurational isomer, the lowest-priority group (most often hydrogen) is positioned behind the plane, on the hatched bond going away from the reader. The highest-priority group will have an arc drawn connecting it to the rest of the groups, finishing at the group of third priority. An arc drawn clockwise gives the rectus (R) assignment; an arc drawn counterclockwise gives the sinister (S) assignment. The names are derived from the Latin for 'right' and 'left', respectively. When naming an organic isomer, the abbreviation for either the rectus or sinister assignment is placed in front of the name in parentheses. For example, 3-methyl-1-pentene with a rectus assignment is formatted as (R)-3-methyl-1-pentene.
A practical method of determining whether an enantiomer is R or S is by using the right-hand rule: one wraps the molecule with the fingers in the direction 1 → 2 → 3. If the thumb points in the direction of the fourth substituent, the enantiomer is R; otherwise, it is S.
It is possible in rare cases that two substituents on an atom differ only in their absolute configuration (R or S). If the relative priorities of these substituents need to be established, R takes priority over S. When this happens, the descriptor of the stereocenter is a lowercase letter (r or s) instead of the uppercase letter normally used.
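For illustration only (this computation is not part of the CIP rules), the sense of rotation can be obtained from 3-D coordinates: with the four substituent positions ordered by decreasing priority, the sign of the scalar triple product of the vectors running from the lowest-priority substituent to the other three tells whether 1 → 2 → 3 turns clockwise when viewed with priority 4 pointing away. A minimal C sketch follows; Vec3 and descriptor are illustrative names, and the sign convention should be validated against a structure of known configuration.
typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }

/* scalar triple product a . (b x c) */
static double triple(Vec3 a, Vec3 b, Vec3 c)
{
    return a.x * (b.y * c.z - b.z * c.y)
         - a.y * (b.x * c.z - b.z * c.x)
         + a.z * (b.x * c.y - b.y * c.x);
}

/* p[0..3] hold substituent positions in decreasing CIP priority */
char descriptor(const Vec3 p[4])
{
    Vec3 u = sub(p[0], p[3]);   /* lowest-priority substituent is the reference */
    Vec3 v = sub(p[1], p[3]);
    Vec3 w = sub(p[2], p[3]);
    /* a negative triple product corresponds to clockwise 1 -> 2 -> 3 as seen
       with substituent 4 pointing away from the viewer (assumed convention) */
    return triple(u, v, w) < 0 ? 'R' : 'S';
}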
Double bonds: E/Z
For double-bonded molecules, the Cahn–Ingold–Prelog priority rules (CIP rules) are followed to determine the priority of the substituents of the double bond. If both of the high-priority groups are on the same side of the double bond (cis configuration), then the stereoisomer is assigned the configuration Z (zusammen, the German word meaning "together"). If the high-priority groups are on opposite sides of the double bond (trans configuration), then the stereoisomer is assigned the configuration E (entgegen, the German word meaning "opposed").
Coordination compounds
In some cases where stereogenic centers are formed, the configuration must be specified. Without the presence of a non-covalent interaction, a compound is achiral. Some professionals have proposed a new rule to account for this. This rule states that "non-covalent interactions have a fictitious number between 0 and 1" when assigning priority. Compounds in which this occurs are referred to as coordination compounds.
Spiro compounds
Some spiro compounds, for example the SDP ligands ((R)- and (S)-7,7'-bis(diphenylphosphaneyl)-2,2',3,3'-tetrahydro-1,1'-spirobi[indene]), represent chiral, C2-symmetrical molecules where the rings lie approximately at right angles to each other and each molecule cannot be superposed on its mirror image. The spiro carbon, C, is a stereogenic centre, and priority can be assigned a>a′>b>b′, in which one ring (both give the same answer) contains atoms a and b adjacent to the spiro carbon, and the other contains a′ and b′. The configuration at C may then be assigned as for any other stereocentre.
Examples
The following are examples of application of the nomenclature.
R/S assignments for several compounds
The hypothetical molecule bromochlorofluoroiodomethane shown in its (R)-configuration would be a very simple chiral compound. The priorities are assigned based on atomic number (Z): iodine (Z = 53) > bromine (Z = 35) > chlorine (Z = 17) > fluorine (Z = 9). Allowing fluorine (lowest priority, number 4) to point away from the viewer, the rotation is clockwise, hence the R assignment.
In the assignment of L-serine, highest priority (i.e. number 1) is given to the nitrogen atom (Z = 7) in the amino group (NH2). Both the hydroxymethyl group (CH2OH) and the carboxylic acid group (COOH) have carbon atoms (Z = 6), but priority is given to the latter because the carbon atom in the COOH group is connected to a second oxygen (Z = 8), whereas in the CH2OH group the carbon is connected to a hydrogen atom (Z = 1). Lowest priority (i.e. number 4) is given to the hydrogen atom, and as this atom points away from the viewer, the counterclockwise decrease in priority over the three remaining substituents completes the assignment as S.
The stereocenter in (S)-carvone is connected to one hydrogen atom (not shown, priority 4) and three carbon atoms. The isopropenyl group has priority 1 (carbon atoms only), and for the two remaining carbon atoms, priority is decided with the carbon atoms two bonds removed from the stereocenter, one part of the keto group (O, O, C, priority number 2) and one part of an alkene (C, C, H, priority number 3). The resulting counterclockwise rotation results in S.
Describing multiple centers
If a compound has more than one chiral stereocenter, each center is denoted by either R or S. For example, ephedrine exists in (1R,2S) and (1S,2R) stereoisomers, which are distinct mirror-image forms of each other, making them enantiomers. This compound also exists as the two enantiomers written (1R,2R) and (1S,2S), which are named pseudoephedrine rather than ephedrine. All four of these isomers are named 2-methylamino-1-phenyl-1-propanol in systematic nomenclature. However, ephedrine and pseudoephedrine are diastereomers, or stereoisomers that are not enantiomers because they are not related as mirror-image copies. Pseudoephedrine and ephedrine are given different names because, as diastereomers, they have different chemical properties, even for racemic mixtures of each.
More generally, for any pair of enantiomers, all of the descriptors are opposite: (R,R) and (S,S) are enantiomers, as are (R,S) and (S,R). Diastereomers have at least one descriptor in common; for example (R,S) and (R,R) are diastereomers, as are (S,R) and (S,S). This holds true also for compounds having more than two stereocenters: if two stereoisomers have at least one descriptor in common, they are diastereomers. If all the descriptors are opposite, they are enantiomers.
A meso compound is an achiral molecule, despite having two or more stereogenic centers. A meso compound is superposable on its mirror image, and it therefore reduces the number of stereoisomers predicted by the 2^n rule. This occurs because rotation around the central carbon–carbon bond gives the molecule a conformation with an internal plane of symmetry. One example is meso-tartaric acid, in which the (R,S) form is the same as the (S,R) form. In meso compounds the R and S stereocenters occur in symmetrically positioned pairs.
Relative configuration
The relative configuration of two stereoisomers may be denoted by the descriptors R and S with an asterisk (*). (R*,R*) means two centers having identical configurations, (R,R) or (S,S); (R*,S*) means two centers having opposite configurations, (R,S) or (S,R). To begin, the lowest-numbered (according to IUPAC systematic numbering) stereogenic center is given the R* descriptor.
To designate two anomers the relative stereodescriptors alpha (α) and beta (β) are used. In the α anomer the anomeric carbon atom and the reference atom do have opposite configurations (R,S) or (S,R), whereas in the β anomer they are the same (R,R) or (S,S).
Faces
Stereochemistry also plays a role in assigning faces to trigonal molecules such as ketones. A nucleophile in a nucleophilic addition can approach the carbonyl group from two opposite sides or faces. When an achiral nucleophile attacks acetone, both faces are identical and there is only one reaction product. When the nucleophile attacks butanone, the faces are not identical (enantiotopic) and a racemic product results. When the nucleophile is a chiral molecule, diastereoisomers are formed. When one face of a molecule is shielded by substituents or geometric constraints compared to the other face, the faces are called diastereotopic. The same rules that determine the stereochemistry of a stereocenter (R or S) also apply when assigning the face of a molecular group. The faces are then called the Re-face and Si-face. In the example displayed on the right, the compound acetophenone is viewed from the Re-face. Hydride addition, as in a reduction process, from this side will form the (S)-enantiomer and attack from the opposite Si-face will give the (R)-enantiomer. However, one should note that adding a chemical group to the prochiral center from the Re-face will not always lead to an (S)-stereocenter, as the priority of the chemical group has to be taken into account. That is, the absolute stereochemistry of the product is determined on its own and not by considering which face it was attacked from. In the above-mentioned example, if chloride (Z = 17) were added to the prochiral center from the Re-face, this would result in an (R)-enantiomer.
| Physical sciences | Stereochemistry | Chemistry |
6038 | https://en.wikipedia.org/wiki/Chemical%20engineering | Chemical engineering | Chemical engineering is an engineering field which deals with the study of the operation and design of chemical plants as well as methods of improving production. Chemical engineers develop economical commercial processes to convert raw materials into useful products. Chemical engineering uses principles of chemistry, physics, mathematics, biology, and economics to efficiently use, produce, design, transport and transform energy and materials. The work of chemical engineers can range from the utilization of nanotechnology and nanomaterials in the laboratory to large-scale industrial processes that convert chemicals, raw materials, living cells, microorganisms, and energy into useful forms and products. Chemical engineers are involved in many aspects of plant design and operation, including safety and hazard assessments, process design and analysis, modeling, control engineering, chemical reaction engineering, nuclear engineering, biological engineering, construction specification, and operating instructions.
Chemical engineers typically hold a degree in Chemical Engineering or Process Engineering. Practicing engineers may have professional certification and be accredited members of a professional body. Such bodies include the Institution of Chemical Engineers (IChemE) or the American Institute of Chemical Engineers (AIChE). A degree in chemical engineering is directly linked with all of the other engineering disciplines, to various extents.
Etymology
A 1996 article cites James F. Donnelly for mentioning an 1839 reference to chemical engineering in relation to the production of sulfuric acid. In the same paper, however, George E. Davis, an English consultant, was credited with having coined the term. Davis also tried to found a Society of Chemical Engineering, but instead, it was named the Society of Chemical Industry (1881), with Davis as its first secretary. The History of Science in the United States: An Encyclopedia puts the use of the term around 1890. "Chemical engineering", describing the use of mechanical equipment in the chemical industry, became common vocabulary in England after 1850. By 1910, the profession, "chemical engineer," was already in common use in Britain and the United States.
History
New concepts and innovations
In the 1940s, it became clear that unit operations alone were insufficient in developing chemical reactors. While the predominance of unit operations in chemical engineering courses in Britain and the United States continued until the 1960s, transport phenomena started to receive greater focus. Along with other novel concepts, such as process systems engineering (PSE), a "second paradigm" was defined. Transport phenomena gave an analytical approach to chemical engineering while PSE focused on its synthetic elements, such as those of a control system and process design. Developments in chemical engineering before and after World War II were mainly incited by the petrochemical industry; however, advances in other fields were made as well. Advancements in biochemical engineering in the 1940s, for example, found application in the pharmaceutical industry, and allowed for the mass production of various antibiotics, including penicillin and streptomycin. Meanwhile, progress in polymer science in the 1950s paved the way for the "age of plastics".
Safety and hazard developments
Concerns regarding large-scale chemical manufacturing facilities' safety and environmental impact were also raised during this period. Silent Spring, published in 1962, alerted its readers to the harmful effects of DDT, a potent insecticide. The 1974 Flixborough disaster in the United Kingdom resulted in 28 deaths, as well as damage to a chemical plant and three nearby villages. The 1984 Bhopal disaster in India resulted in almost 4,000 deaths. These and other incidents affected the reputation of the trade as industrial safety and environmental protection were given more focus. In response, the IChemE required safety to be part of every degree course that it accredited after 1982. By the 1970s, legislation and monitoring agencies were instituted in various countries, such as France, Germany, and the United States. In time, the systematic application of safety principles to chemical and other process plants began to be considered a specific discipline, known as process safety.
Recent progress
Advancements in computer science found applications for designing and managing plants, simplifying calculations and drawings that previously had to be done manually. The completion of the Human Genome Project is also seen as a major development, not only advancing chemical engineering but genetic engineering and genomics as well. Chemical engineering principles were used to produce DNA sequences in large quantities.
Concepts
Chemical engineering involves the application of several principles. Key concepts are presented below.
Plant design and construction
Chemical engineering design concerns the creation of plans, specifications, and economic analyses for pilot plants, new plants, or plant modifications. Design engineers often work in a consulting role, designing plants to meet clients' needs. Design is limited by several factors, including funding, government regulations, and safety standards. These constraints dictate a plant's choice of process, materials, and equipment.
Plant construction is coordinated by project engineers and project managers, depending on the size of the investment. A chemical engineer may do the job of project engineer full-time or part of the time, which requires additional training and job skills, or act as a consultant to the project group. In the USA the education of chemical engineering graduates from Baccalaureate programs accredited by ABET does not usually stress project engineering education, which can be obtained by specialized training, as electives, or from graduate programs. Project engineering jobs are some of the largest employers for chemical engineers.
Process design and analysis
A unit operation is a physical step in an individual chemical engineering process. Unit operations (such as crystallization, filtration, drying and evaporation) are used to prepare reactants, purify and separate products, recycle unspent reactants, and control energy transfer in reactors. On the other hand, a unit process is the chemical equivalent of a unit operation. Along with unit operations, unit processes constitute a process operation. Unit processes (such as nitration, hydrogenation, and oxidation) involve the conversion of materials by biochemical, thermochemical and other means. Chemical engineers responsible for these are called process engineers.
Process design requires the definition of equipment types and sizes as well as how they are connected and the materials of construction. Details are often printed on a Process Flow Diagram which is used to control the capacity and reliability of a new or existing chemical factory.
Education for chemical engineers in the first college degree, over 3 or 4 years of study, stresses the principles and practices of process design. The same skills are used in existing chemical plants to evaluate the efficiency and make recommendations for improvements.
Transport phenomena
Modeling and analysis of transport phenomena is essential for many industrial applications. Transport phenomena involve fluid dynamics, heat transfer and mass transfer, which are governed mainly by momentum transfer, energy transfer and transport of chemical species, respectively. Models often involve separate considerations for macroscopic, microscopic and molecular level phenomena. Modeling of transport phenomena, therefore, requires an understanding of applied mathematics.
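A common way to summarize this unity is the generic balance below for a transported quantity φ (a momentum component, enthalpy, or a species concentration), with density ρ, velocity field u, diffusion coefficient Γ, and source term S; this is a standard textbook form, shown here schematically rather than as any specific model:
\[
\frac{\partial(\rho\varphi)}{\partial t}
+ \nabla \cdot (\rho \mathbf{u}\,\varphi)
= \nabla \cdot (\Gamma \nabla \varphi) + S_{\varphi}
\]
The four terms represent accumulation, convection, diffusion, and generation, respectively; specializing φ recovers the momentum, heat, and species transport equations.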
Applications and practice
Chemical engineers develop economic ways of using materials and energy. Chemical engineers use chemistry and engineering to turn raw materials into usable products, such as medicine, petrochemicals, and plastics, in a large-scale industrial setting. They are also involved in waste management and research. Both applied and research facets could make extensive use of computers.
Chemical engineers may be involved in industry or university research where they are tasked with designing and performing experiments, by scaling up theoretical chemical reactions, to create better and safer methods for production, pollution control, and resource conservation. They may be involved in designing and constructing plants as a project engineer. Chemical engineers serving as project engineers use their knowledge in selecting optimal production methods and plant equipment to minimize costs and maximize safety and profitability. After plant construction, chemical engineering project managers may be involved in equipment upgrades, troubleshooting, and daily operations in either full-time or consulting roles.
| Technology | Disciplines | null |
6042 | https://en.wikipedia.org/wiki/Compact%20space | Compact space | In mathematics, specifically general topology, compactness is a property that seeks to generalize the notion of a closed and bounded subset of Euclidean space. The idea is that a compact space has no "punctures" or "missing endpoints", i.e., it includes all limiting values of points. For example, the open interval (0,1) would not be compact because it excludes the limiting values of 0 and 1, whereas the closed interval [0,1] would be compact. Similarly, the space of rational numbers is not compact, because it has infinitely many "punctures" corresponding to the irrational numbers, and the space of real numbers is not compact either, because it excludes the two limiting values −∞ and +∞. However, the extended real number line would be compact, since it contains both infinities. There are many ways to make this heuristic notion precise. These ways usually agree in a metric space, but may not be equivalent in other topological spaces.
One such generalization is that a topological space is sequentially compact if every infinite sequence of points sampled from the space has an infinite subsequence that converges to some point of the space. The Bolzano–Weierstrass theorem states that a subset of Euclidean space is compact in this sequential sense if and only if it is closed and bounded. Thus, if one chooses an infinite number of points in the closed unit interval [0, 1], some of those points will get arbitrarily close to some real number in that space.
For instance, some of the numbers in the sequence accumulate to 0 (while others accumulate to 1).
Since neither 0 nor 1 is a member of the open unit interval (0, 1), those same sets of points would not accumulate to any point of it, so the open unit interval is not compact. Although subsets (subspaces) of Euclidean space can be compact, the entire space itself is not compact, since it is not bounded. For example, considering ℝ (the real number line), the sequence of points 0, 1, 2, 3, ... has no subsequence that converges to any real number.
Compactness was formally introduced by Maurice Fréchet in 1906 to generalize the Bolzano–Weierstrass theorem from spaces of geometrical points to spaces of functions. The Arzelà–Ascoli theorem and the Peano existence theorem exemplify applications of this notion of compactness to classical analysis. Following its initial introduction, various equivalent notions of compactness, including sequential compactness and limit point compactness, were developed in general metric spaces. In general topological spaces, however, these notions of compactness are not necessarily equivalent. The most useful notion — and the standard definition of the unqualified term compactness — is phrased in terms of the existence of finite families of open sets that "cover" the space, in the sense that each point of the space lies in some set contained in the family. This more subtle notion, introduced by Pavel Alexandrov and Pavel Urysohn in 1929, exhibits compact spaces as generalizations of finite sets. In spaces that are compact in this sense, it is often possible to patch together information that holds locally – that is, in a neighborhood of each point – into corresponding statements that hold throughout the space, and many theorems are of this character.
The term compact set is sometimes used as a synonym for compact space, but also often refers to a compact subspace of a topological space.
Historical development
In the 19th century, several disparate mathematical properties were understood that would later be seen as consequences of compactness. On the one hand, Bernard Bolzano (1817) had been aware that any bounded sequence of points (in the line or plane, for instance) has a subsequence that must eventually get arbitrarily close to some other point, called a limit point.
Bolzano's proof relied on the method of bisection: the sequence was placed into an interval that was then divided into two equal parts, and a part containing infinitely many terms of the sequence was selected.
The process could then be repeated by dividing the resulting smaller interval into smaller and smaller parts – until it closes down on the desired limit point. The full significance of Bolzano's theorem, and its method of proof, would not emerge until almost 50 years later when it was rediscovered by Karl Weierstrass.
In the 1880s, it became clear that results similar to the Bolzano–Weierstrass theorem could be formulated for spaces of functions rather than just numbers or geometrical points.
The idea of regarding functions as themselves points of a generalized space dates back to the investigations of Giulio Ascoli and Cesare Arzelà.
The culmination of their investigations, the Arzelà–Ascoli theorem, was a generalization of the Bolzano–Weierstrass theorem to families of continuous functions, the precise conclusion of which was that it was possible to extract a uniformly convergent sequence of functions from a suitable family of functions. The uniform limit of this sequence then played precisely the same role as Bolzano's "limit point". Towards the beginning of the twentieth century, results similar to that of Arzelà and Ascoli began to accumulate in the area of integral equations, as investigated by David Hilbert and Erhard Schmidt.
For a certain class of Green's functions coming from solutions of integral equations, Schmidt had shown that a property analogous to the Arzelà–Ascoli theorem held in the sense of mean convergence – or convergence in what would later be dubbed a Hilbert space. This ultimately led to the notion of a compact operator as an offshoot of the general notion of a compact space.
It was Maurice Fréchet who, in 1906, had distilled the essence of the Bolzano–Weierstrass property and coined the term compactness to refer to this general phenomenon (he used the term already in his 1904 paper which led to the famous 1906 thesis).
However, a different notion of compactness altogether had also slowly emerged at the end of the 19th century from the study of the continuum, which was seen as fundamental for the rigorous formulation of analysis.
In 1870, Eduard Heine showed that a continuous function defined on a closed and bounded interval was in fact uniformly continuous. In the course of the proof, he made use of a lemma that from any countable cover of the interval by smaller open intervals, it was possible to select a finite number of these that also covered it.
The significance of this lemma was recognized by Émile Borel (1895), and it was generalized to arbitrary collections of intervals by Pierre Cousin (1895) and Henri Lebesgue (1904). The Heine–Borel theorem, as the result is now known, is another special property possessed by closed and bounded sets of real numbers.
This property was significant because it allowed for the passage from local information about a set (such as the continuity of a function) to global information about the set (such as the uniform continuity of a function). This sentiment was expressed by Lebesgue, who also exploited it in the development of the integral now bearing his name. Ultimately, the Russian school of point-set topology, under the direction of Pavel Alexandrov and Pavel Urysohn, formulated Heine–Borel compactness in a way that could be applied to the modern notion of a topological space. They showed that the earlier version of compactness due to Fréchet, now called (relative) sequential compactness, under appropriate conditions followed from the version of compactness that was formulated in terms of the existence of finite subcovers. It was this notion of compactness that became the dominant one, because it was not only a stronger property, but it could be formulated in a more general setting with a minimum of additional technical machinery, as it relied only on the structure of the open sets in a space.
Basic examples
Any finite space is compact; a finite subcover can be obtained by selecting, for each point, an open set containing it. A nontrivial example of a compact space is the (closed) unit interval of real numbers. If one chooses an infinite number of distinct points in the unit interval, then there must be some accumulation point among these points in that interval. For instance, the odd-numbered terms of the sequence get arbitrarily close to 0, while the even-numbered ones get arbitrarily close to 1. The given example sequence shows the importance of including the boundary points of the interval, since the limit points must be in the space itself — an open (or half-open) interval of the real numbers is not compact. It is also crucial that the interval be bounded, since in the interval [0, ∞), one could choose the sequence of points 0, 1, 2, 3, ..., of which no sub-sequence ultimately gets arbitrarily close to any given real number.
In two dimensions, closed disks are compact since for any infinite number of points sampled from a disk, some subset of those points must get arbitrarily close either to a point within the disc, or to a point on the boundary. However, an open disk is not compact, because a sequence of points can tend to the boundary – without getting arbitrarily close to any point in the interior. Likewise, spheres are compact, but a sphere missing a point is not since a sequence of points can still tend to the missing point, thereby not getting arbitrarily close to any point within the space. Lines and planes are not compact, since one can take a set of equally-spaced points in any given direction without approaching any point.
Definitions
Various definitions of compactness may apply, depending on the level of generality.
A subset of Euclidean space in particular is called compact if it is closed and bounded. This implies, by the Bolzano–Weierstrass theorem, that any infinite sequence from the set has a subsequence that converges to a point in the set. Various equivalent notions of compactness, such as sequential compactness and limit point compactness, can be developed in general metric spaces.
In contrast, the different notions of compactness are not equivalent in general topological spaces, and the most useful notion of compactness – originally called bicompactness – is defined using covers consisting of open sets (see Open cover definition below).
That this form of compactness holds for closed and bounded subsets of Euclidean space is known as the Heine–Borel theorem. Compactness, when defined in this manner, often allows one to take information that is known locally – in a neighbourhood of each point of the space – and to extend it to information that holds globally throughout the space. An example of this phenomenon is Dirichlet's theorem, to which it was originally applied by Heine, that a continuous function on a compact interval is uniformly continuous; here, continuity is a local property of the function, and uniform continuity the corresponding global property.
Open cover definition
Formally, a topological space X is called compact if every open cover of X has a finite subcover. That is, X is compact if for every collection C of open subsets of X such that X = ⋃ {S : S ∈ C},
there is a finite subcollection F ⊆ C such that X = ⋃ {S : S ∈ F}.
Some branches of mathematics such as algebraic geometry, typically influenced by the French school of Bourbaki, use the term quasi-compact for the general notion, and reserve the term compact for topological spaces that are both Hausdorff and quasi-compact. A compact set is sometimes referred to as a compactum, plural compacta.
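As a concrete worked instance of this definition, the open interval (0, 1) is not compact: the open sets (1/n, 1) for n = 2, 3, 4, ... cover it, but no finite subfamily does, since any finite subfamily covers only an interval of the form (1/N, 1). In LaTeX form:
\[
(0,1) = \bigcup_{n=2}^{\infty} \left( \tfrac{1}{n},\, 1 \right),
\qquad\text{but}\qquad
\bigcup_{n=2}^{N} \left( \tfrac{1}{n},\, 1 \right) = \left( \tfrac{1}{N},\, 1 \right) \neq (0,1)
\quad\text{for every finite } N \ge 2 .
\]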
Compactness of subsets
A subset K of a topological space X is said to be compact if it is compact as a subspace (in the subspace topology). That is, K is compact if for every arbitrary collection C of open subsets of X such that K ⊆ ⋃ {S : S ∈ C},
there is a finite subcollection F ⊆ C such that K ⊆ ⋃ {S : S ∈ F}.
Because compactness is a topological property, the compactness of a subset depends only on the subspace topology induced on it. It follows that, if K ⊆ Z ⊆ Y, with the subset K equipped with the subspace topology, then K is compact in Z if and only if K is compact in Y.
Characterization
If X is a topological space then the following are equivalent:
X is compact; i.e., every open cover of X has a finite subcover.
X has a sub-base such that every cover of the space, by members of the sub-base, has a finite subcover (Alexander's sub-base theorem).
X is Lindelöf and countably compact.
Any collection of closed subsets of X with the finite intersection property has nonempty intersection.
Every net on X has a convergent subnet (see the article on nets for a proof).
Every filter on X has a convergent refinement.
Every net on X has a cluster point.
Every filter on X has a cluster point.
Every ultrafilter on X converges to at least one point.
Every infinite subset of X has a complete accumulation point.
For every topological space Y, the projection X × Y → Y is a closed mapping (see proper map).
Every open cover of X linearly ordered by subset inclusion contains X.
Bourbaki defines a compact space (quasi-compact space) as a topological space where each filter has a cluster point (i.e., 8. in the above).
Euclidean space
For any subset A of Euclidean space, A is compact if and only if it is closed and bounded; this is the Heine–Borel theorem.
As a Euclidean space is a metric space, the conditions in the next subsection also apply to all of its subsets. Of all of the equivalent conditions, it is in practice easiest to verify that a subset is closed and bounded, for example, for a closed interval or closed -ball.
Metric spaces
For any metric space (X, d), the following are equivalent (assuming countable choice):
(X, d) is compact.
(X, d) is complete and totally bounded (this is also equivalent to compactness for uniform spaces).
(X, d) is sequentially compact; that is, every sequence in X has a convergent subsequence whose limit is in X (this is also equivalent to compactness for first-countable uniform spaces).
(X, d) is limit point compact (also called weakly countably compact); that is, every infinite subset of X has at least one limit point in X.
(X, d) is countably compact; that is, every countable open cover of X has a finite subcover.
(X, d) is an image of a continuous function from the Cantor set.
Every decreasing nested sequence of nonempty closed subsets S1 ⊇ S2 ⊇ ... in X has a nonempty intersection.
Every increasing nested sequence of proper open subsets U1 ⊆ U2 ⊆ ... in X fails to cover X.
A compact metric space (X, d) also satisfies the following properties:
Lebesgue's number lemma: For every open cover of X, there exists a number δ > 0 such that every subset of X of diameter < δ is contained in some member of the cover.
(X, d) is second-countable, separable and Lindelöf – these three conditions are equivalent for metric spaces. The converse is not true; e.g., a countable discrete space satisfies these three conditions, but is not compact.
X is closed and bounded (as a subset of any metric space whose restricted metric is d). The converse may fail for a non-Euclidean space; e.g. the real line equipped with the discrete metric is closed and bounded but not compact, as the collection of all singletons of the space is an open cover which admits no finite subcover. It is complete but not totally bounded.
Ordered spaces
For an ordered space X (i.e. a totally ordered set equipped with the order topology), the following are equivalent:
X is compact.
Every subset of X has a supremum (i.e. a least upper bound) in X.
Every subset of X has an infimum (i.e. a greatest lower bound) in X.
Every nonempty closed subset of X has a maximum and a minimum element.
An ordered space satisfying (any one of) these conditions is called a complete lattice.
In addition, the following are equivalent for all ordered spaces X, and (assuming countable choice) are true whenever X is compact. (The converse in general fails if X is not also metrizable.):
Every sequence in X has a subsequence that converges in X.
Every monotone increasing sequence in X converges to a unique limit in X.
Every monotone decreasing sequence in X converges to a unique limit in X.
Every decreasing nested sequence of nonempty closed subsets S1 ⊇ S2 ⊇ ... in X has a nonempty intersection.
Every increasing nested sequence of proper open subsets U1 ⊆ U2 ⊆ ... in X fails to cover X.
Characterization by continuous functions
Let X be a topological space and C(X) the ring of real continuous functions on X.
For each p ∈ X, the evaluation map ev_p : C(X) → ℝ given by ev_p(f) = f(p) is a ring homomorphism.
The kernel of ev_p is a maximal ideal, since the residue field C(X)/ker(ev_p) is the field of real numbers, by the first isomorphism theorem. A topological space X is pseudocompact if and only if every maximal ideal in C(X) has residue field the real numbers. For completely regular spaces, this is equivalent to every maximal ideal being the kernel of an evaluation homomorphism. There are pseudocompact spaces that are not compact, though.
In general, for non-pseudocompact spaces there are always maximal ideals m in C(X) such that the residue field C(X)/m is a (non-Archimedean) hyperreal field. The framework of non-standard analysis allows for the following alternative characterization of compactness: a topological space X is compact if and only if every point x of the natural extension *X is infinitely close to a point x0 of X (more precisely, x is contained in the monad of x0).
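As a concrete instance (the choices X = [0, 1] and p = 1/2 are made only for illustration), the kernel of the evaluation homomorphism ev_{1/2} is

\ker(\mathrm{ev}_{1/2}) = \{ f \in C([0, 1]) : f(1/2) = 0 \},

a maximal ideal, and the induced map C([0, 1])/\ker(\mathrm{ev}_{1/2}) \to \mathbb{R}, sending f + \ker(\mathrm{ev}_{1/2}) to f(1/2), is an isomorphism onto the field of real numbers, exhibiting the residue field described above.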
Hyperreal definition
A space X is compact if its hyperreal extension *X (constructed, for example, by the ultrapower construction) has the property that every point of *X is infinitely close to some point of X. For example, an open real interval X = (0, 1) is not compact because its hyperreal extension *(0, 1) contains infinitesimals, which are infinitely close to 0, which is not a point of X.
Sufficient conditions
A closed subset of a compact space is compact.
A finite union of compact sets is compact.
A continuous image of a compact space is compact.
The intersection of any non-empty collection of compact subsets of a Hausdorff space is compact (and closed);
If X is not Hausdorff then the intersection of two compact subsets may fail to be compact (see footnote for example).
The product of any collection of compact spaces is compact. (This is Tychonoff's theorem, which is equivalent to the axiom of choice.)
In a metrizable space, a subset is compact if and only if it is sequentially compact (assuming countable choice)
A finite set endowed with any topology is compact.
Properties of compact spaces
A compact subset of a Hausdorff space is closed.
If X is not Hausdorff then a compact subset of X may fail to be a closed subset of X (see footnote for example).
If X is not Hausdorff then the closure of a compact set may fail to be compact (see footnote for example).
In any topological vector space (TVS), a compact subset is complete. However, every non-Hausdorff TVS contains compact (and thus complete) subsets that are not closed.
If A and B are disjoint compact subsets of a Hausdorff space X, then there exist disjoint open sets U and V in X such that A ⊆ U and B ⊆ V.
A continuous bijection from a compact space into a Hausdorff space is a homeomorphism.
A compact Hausdorff space is normal and regular.
If a space X is compact and Hausdorff, then no finer topology on X is compact and no coarser topology on X is Hausdorff.
If a subset of a metric space (X, d) is compact then it is d-bounded.
Functions and compact spaces
Since a continuous image of a compact space is compact, the extreme value theorem holds for such spaces: a continuous real-valued function on a nonempty compact space is bounded above and attains its supremum.
(Slightly more generally, this is true for an upper semicontinuous function.) As a sort of converse to the above statements, the pre-image of a compact space under a proper map is compact.
Compactifications
Every topological space X is an open dense subspace of a compact space having at most one point more than X, by the Alexandroff one-point compactification.
By the same construction, every locally compact Hausdorff space X is an open dense subspace of a compact Hausdorff space having at most one point more than X.
Ordered compact spaces
A nonempty compact subset of the real numbers has a greatest element and a least element.
Let X be a simply ordered set endowed with the order topology.
Then X is compact if and only if X is a complete lattice (i.e. all subsets have suprema and infima).
Examples
Any finite topological space, including the empty set, is compact. More generally, any space with a finite topology (only finitely many open sets) is compact; this includes in particular the trivial topology.
Any space carrying the cofinite topology is compact.
Any locally compact Hausdorff space can be turned into a compact space by adding a single point to it, by means of Alexandroff one-point compactification. The one-point compactification of ℝ is homeomorphic to the circle S¹; the one-point compactification of ℝ² is homeomorphic to the sphere S². Using the one-point compactification, one can also easily construct compact spaces which are not Hausdorff, by starting with a non-Hausdorff space.
The right order topology or left order topology on any bounded totally ordered set is compact. In particular, Sierpiński space is compact.
No discrete space with an infinite number of points is compact. The collection of all singletons of the space is an open cover which admits no finite subcover. Finite discrete spaces are compact.
In ℝ carrying the lower limit topology, no uncountable set is compact.
In the cocountable topology on an uncountable set, no infinite set is compact. Like the previous example, the space as a whole is not locally compact but is still Lindelöf.
The closed unit interval [0, 1] is compact. This follows from the Heine–Borel theorem. The open interval (0, 1) is not compact: the open cover (1/n, 1 − 1/n) for n = 3, 4, ... does not have a finite subcover. Similarly, the set of rational numbers in the closed interval [0, 1] is not compact: the sets of rational numbers in the intervals [0, 1/π − 1/n] and [1/π + 1/n, 1] cover all the rationals in [0, 1] for n = 4, 5, ..., but this cover does not have a finite subcover. Here, the sets are open in the subspace topology even though they are not open as subsets of ℝ.
The set ℝ of all real numbers is not compact as there is a cover of open intervals that does not have a finite subcover. For example, the intervals (n − 1, n + 1), where n takes all integer values, cover ℝ but there is no finite subcover.
On the other hand, the extended real number line carrying the analogous topology is compact; note that the cover described above would never reach the points at infinity and thus would not cover the extended real line. In fact, the extended real number line is homeomorphic to [−1, 1]: each infinity is mapped to the corresponding unit, and each real number x is mapped to x/(1 + |x|). Since homeomorphisms preserve covers, the Heine–Borel property can be inferred.
For every natural number n, the n-sphere is compact. Again from the Heine–Borel theorem, the closed unit ball of any finite-dimensional normed vector space is compact. This is not true for infinite dimensions; in fact, a normed vector space is finite-dimensional if and only if its closed unit ball is compact (a concrete illustration is given after these examples).
On the other hand, the closed unit ball of the dual of a normed space is compact for the weak-* topology. (Alaoglu's theorem)
The Cantor set is compact. In fact, every compact metric space is a continuous image of the Cantor set.
Consider the set K of all functions f : ℝ → [0, 1] from the real number line to the closed unit interval, and define a topology on K so that a sequence {f_n} in K converges towards f ∈ K if and only if {f_n(x)} converges towards f(x) for all real numbers x. There is only one such topology; it is called the topology of pointwise convergence or the product topology. Then K is a compact topological space; this follows from the Tychonoff theorem.
A subset of the Banach space of real-valued continuous functions on a compact Hausdorff space is relatively compact if and only if it is equicontinuous and pointwise bounded (Arzelà–Ascoli theorem).
Consider the set K of all functions f : [0, 1] → [0, 1] satisfying the Lipschitz condition |f(x) − f(y)| ≤ |x − y| for all x, y ∈ [0, 1]. Consider on K the metric induced by the uniform distance d(f, g) = sup |f(t) − g(t)|. Then by the Arzelà–Ascoli theorem the space K is compact.
The spectrum of any bounded linear operator on a Banach space is a nonempty compact subset of the complex numbers ℂ. Conversely, any compact subset of ℂ arises in this manner, as the spectrum of some bounded linear operator. For instance, a diagonal operator on the Hilbert space ℓ² may have any compact nonempty subset of ℂ as spectrum.
The space of Borel probability measures on a compact Hausdorff space is compact for the vague topology, by the Alaoglu theorem.
A collection of probability measures on the Borel sets of Euclidean space is called tight if, for any positive epsilon, there exists a compact subset containing all but at most epsilon of the mass of each of the measures. Helly's theorem then asserts that a collection of probability measures is relatively compact for the vague topology if and only if it is tight.
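To make the finite-dimensionality statement above concrete (the space \ell^2 is picked purely as an example), consider the standard basis vectors e_n of the Hilbert space \ell^2. They all lie in the closed unit ball, yet

\lVert e_n - e_m \rVert = \sqrt{2} \quad \text{for } n \ne m,

so the sequence (e_n) has no convergent subsequence and the closed unit ball of \ell^2 is not compact.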
Algebraic examples
Topological groups such as an orthogonal group are compact, while groups such as a general linear group are not.
Since the p-adic integers are homeomorphic to the Cantor set, they form a compact set.
Any global field K is a discrete additive subgroup of its adele ring, and the quotient space is compact. This was used in John Tate's thesis to allow harmonic analysis to be used in number theory.
The spectrum of any commutative ring with the Zariski topology (that is, the set of all prime ideals) is compact, but never Hausdorff (except in trivial cases). In algebraic geometry, such topological spaces are examples of quasi-compact schemes, "quasi" referring to the non-Hausdorff nature of the topology.
The spectrum of a Boolean algebra is compact, a fact which is part of the Stone representation theorem. Stone spaces, compact totally disconnected Hausdorff spaces, form the abstract framework in which these spectra are studied. Such spaces are also useful in the study of profinite groups.
The structure space of a commutative unital Banach algebra is a compact Hausdorff space.
The Hilbert cube is compact, again a consequence of Tychonoff's theorem.
A profinite group (e.g. Galois group) is compact.
| Mathematics | Geometry | null |
6056 | https://en.wikipedia.org/wiki/Continental%20drift | Continental drift | Continental drift is a highly supported scientific theory, originating in the early 20th century, that Earth's continents move or drift relative to each other over geologic time. The theory of continental drift has since been validated and incorporated into the science of plate tectonics, which studies the movement of the continents as they ride on plates of the Earth's lithosphere.
The speculation that continents might have "drifted" was first put forward by Abraham Ortelius in 1596. A pioneer of the modern view of mobilism was the Austrian geologist Otto Ampferer. The concept was independently and more fully developed by Alfred Wegener in his 1915 publication, "The Origin of Continents and Oceans". However, at that time his hypothesis was rejected by many for lack of any motive mechanism. In 1931, the English geologist Arthur Holmes proposed mantle convection for that mechanism.
History
Early history
Abraham Ortelius (1596), Theodor Christoph Lilienthal (1756), Alexander von Humboldt (1801 and 1845), Antonio Snider-Pellegrini, and others had noted earlier that the shapes of continents on opposite sides of the Atlantic Ocean (most notably, Africa and South America) seem to fit together. W. J. Kious described Ortelius's thoughts in this way:
In 1889, Alfred Russel Wallace remarked, "It was formerly a very general belief, even amongst geologists, that the great features of the earth's surface, no less than the smaller ones, were subject to continual mutations, and that during the course of known geological time the continents and great oceans had, again and again, changed places with each other." He quotes Charles Lyell as saying, "Continents, therefore, although permanent for whole geological epochs, shift their positions entirely in the course of ages." and claims that the first to throw doubt on this was James Dwight Dana in 1849.
In his Manual of Geology (1863), Dana wrote, "The continents and oceans had their general outline or form defined in earliest time. This has been proved with regard to North America from the position and distribution of the first beds of the Lower Silurian, – those of the Potsdam epoch. The facts indicate that the continent of North America had its surface near tide-level, part above and part below it (p.196); and this will probably be proved to be the condition in Primordial time of the other continents also. And, if the outlines of the continents were marked out, it follows that the outlines of the oceans were no less so". Dana was enormously influential in America—his Manual of Mineralogy is still in print in revised form—and the theory became known as the Permanence theory.
This appeared to be confirmed by the exploration of the deep sea beds conducted by the Challenger expedition, 1872–1876, which showed that contrary to expectation, land debris brought down by rivers to the ocean is deposited comparatively close to the shore on what is now known as the continental shelf. This suggested that the oceans were a permanent feature of the Earth's surface, rather than them having "changed places" with the continents.
Eduard Suess had proposed a supercontinent Gondwana in 1885 and the Tethys Ocean in 1893, assuming a land-bridge between the present continents submerged in the form of a geosyncline, and John Perry had written an 1895 paper proposing that the Earth's interior was fluid, and disagreeing with Lord Kelvin on the age of the Earth.
Wegener and his predecessors
Apart from the earlier speculations mentioned above, the idea that the American continents had once formed a single landmass with Eurasia and Africa was postulated by several scientists before Alfred Wegener's 1912 paper. Although Wegener's theory was formed independently and was more complete than those of his predecessors, Wegener later credited a number of past authors with similar ideas: Franklin Coxworthy (between 1848 and 1890), Roberto Mantovani (between 1889 and 1909), William Henry Pickering (1907) and Frank Bursley Taylor (1908).
The similarity of southern continent geological formations had led Roberto Mantovani to conjecture in 1889 and 1909 that all the continents had once been joined into a supercontinent; Wegener noted the similarity of Mantovani's and his own maps of the former positions of the southern continents. In Mantovani's conjecture, this continent broke due to volcanic activity caused by thermal expansion, and the new continents drifted away from each other because of further expansion of the rip-zones, where the oceans now lie. This led Mantovani to propose a now-discredited Expanding Earth theory.
Continental drift without expansion was proposed by Frank Bursley Taylor, who suggested in 1908 (published in 1910) that the continents were moved into their present positions by a process of "continental creep", later proposing a mechanism of increased tidal forces during the Cretaceous dragging the crust towards the equator. He was the first to realize that one of the effects of continental motion would be the formation of mountains, attributing the formation of the Himalayas to the collision between the Indian subcontinent with Asia. Wegener said that of all those theories, Taylor's had the most similarities to his own. For a time in the mid-20th century, the theory of continental drift was referred to as the "Taylor-Wegener hypothesis".
Alfred Wegener first presented his hypothesis to the German Geological Society on 6 January 1912. He proposed that the continents had once formed a single landmass, called Pangaea, before breaking apart and drifting to their present locations.
Wegener was the first to use the phrase "continental drift" (1912, 1915) and to publish the hypothesis that the continents had somehow "drifted" apart. Although he presented much evidence for continental drift, he was unable to provide a convincing explanation for the physical processes which might have caused this drift. He suggested that the continents had been pulled apart by the centrifugal pseudoforce of the Earth's rotation or by a small component of astronomical precession, but calculations showed that the force was not sufficient. The hypothesis was also studied by Paul Sophus Epstein in 1920 and found to be implausible.
Rejection of Wegener's theory, 1910s–1950s
Although it retained a minority of scientific proponents over the decades and is now accepted, the theory of continental drift was largely rejected for many years, with evidence in its favor considered insufficient. One problem was that a plausible driving force was missing. A second problem was that Wegener's estimate of the speed of continental motion was implausibly high; the currently accepted rate for the separation of the Americas from Europe and Africa is only a few centimetres per year. Furthermore, Wegener was treated less seriously because he was not a geologist. Even today, the details of the forces propelling the plates are poorly understood.
The English geologist Arthur Holmes championed the theory of continental drift at a time when it was deeply unfashionable. He proposed in 1931 that the Earth's mantle contained convection cells which dissipated heat produced by radioactive decay and moved the crust at the surface. His Principles of Physical Geology, ending with a chapter on continental drift, was published in 1944.
Geological maps of the time showed huge land bridges spanning the Atlantic and Indian oceans to account for the similarities of fauna and flora and the divisions of the Asian continent in the Permian period, but failing to account for glaciation in India, Australia and South Africa.
The fixists
Hans Stille and Leopold Kober opposed the idea of continental drift and worked on a "fixist" geosyncline model with Earth contraction playing a key role in the formation of orogens. Other geologists who opposed continental drift were Bailey Willis, Charles Schuchert, Rollin Chamberlin, Walther Bucher and Walther Penck. In 1939 an international geological conference was held in Frankfurt. This conference came to be dominated by the fixists, especially as those geologists specializing in tectonics were all fixists except Willem van der Gracht. Criticism of continental drift and mobilism was abundant at the conference not only from tectonicists but also from sedimentological (Nölke), paleontological (Nölke), mechanical (Lehmann) and oceanographic (Troll, Wüst) perspectives. Hans Cloos, the organizer of the conference, was also a fixist who together with Troll held the view that excepting the Pacific Ocean continents were not radically different from oceans in their behaviour. The mobilist theory of Émile Argand for the Alpine orogeny was criticized by Kurt Leuchs. The few drifters and mobilists at the conference appealed to biogeography (Kirsch, Wittmann), paleoclimatology (Wegener, K), paleontology (Gerth) and geodetic measurements (Wegener, K). F. Bernauer correctly equated Reykjanes in south-west Iceland with the Mid-Atlantic Ridge, arguing with this that the floor of the Atlantic Ocean was undergoing extension just like Reykjanes. Bernauer thought this extension had drifted the continents only apart, the approximate width of the volcanic zone in Iceland.
David Attenborough, who attended university in the second half of the 1940s, recounted an incident illustrating its lack of acceptance then: "I once asked one of my lecturers why he was not talking to us about continental drift and I was told, sneeringly, that if I could prove there was a force that could move continents, then he might think about it. The idea was moonshine, I was informed."
As late as 1953—just five years before Carey introduced the theory of plate tectonics—the theory of continental drift was rejected by the physicist Scheidegger on the following grounds.
First, it had been shown that floating masses on a rotating geoid would collect at the equator, and stay there. This would explain one, but only one, mountain building episode between any pair of continents; it failed to account for earlier orogenic episodes.
Second, masses floating freely in a fluid substratum, like icebergs in the ocean, should be in isostatic equilibrium (in which the forces of gravity and buoyancy are in balance). But gravitational measurements showed that many areas are not in isostatic equilibrium.
Third, there was the problem of why some parts of the Earth's surface (crust) should have solidified while other parts were still fluid. Various attempts to explain this foundered on other difficulties.
Road to acceptance
From the 1930s to the late 1950s, works by Vening-Meinesz, Holmes, Umbgrove, and numerous others outlined concepts that were close or nearly identical to modern plate tectonics theory. In particular, the English geologist Arthur Holmes proposed in 1920 that plate junctions might lie beneath the sea, and in 1928 that convection currents within the mantle might be the driving force. Holmes's views were particularly influential: in his bestselling textbook, Principles of Physical Geology, he included a chapter on continental drift, proposing that Earth's mantle contained convection cells which dissipated radioactive heat and moved the crust at the surface. Holmes's proposal resolved the phase disequilibrium objection (the underlying fluid was kept from solidifying by radioactive heating from the core). However, scientific communication in the 1930s and 1940s was inhibited by World War II, and the theory still required work to avoid foundering on the orogeny and isostasy objections. Worse, the most viable forms of the theory predicted the existence of convection cell boundaries reaching deep into the Earth, that had yet to be observed.
In 1947, a team of scientists led by Maurice Ewing confirmed the existence of a rise in the central Atlantic Ocean, and found that the floor of the seabed beneath the sediments was chemically and physically different from continental crust. As oceanographers continued to chart the bathymetry of the ocean basins, a system of mid-oceanic ridges was detected. An important conclusion was that along this system, new ocean floor was being created, which led to the concept of the "Great Global Rift".
Meanwhile, scientists began recognizing odd magnetic variations across the ocean floor using devices developed during World War II to detect submarines. Over the next decade, it became increasingly clear that the magnetization patterns were not anomalies, as had been originally supposed. In a series of papers published between 1959 and 1963, Heezen, Dietz, Hess, Mason, Vine, Matthews, and Morley collectively realized that the magnetization of the ocean floor formed extensive, zebra-like patterns: one stripe would exhibit normal polarity and the adjoining stripes reversed polarity. The best explanation was the "conveyor belt" or Vine–Matthews–Morley hypothesis. New magma from deep within the Earth rises easily through these weak zones and eventually erupts along the crest of the ridges to create new oceanic crust. The new crust is magnetized by the Earth's magnetic field, which undergoes occasional reversals. Formation of new crust then displaces the magnetized crust apart, akin to a conveyor belt – hence the name.
Without workable alternatives to explain the stripes, geophysicists were forced to conclude that Holmes had been right: ocean rifts were sites of perpetual orogeny at the boundaries of convection cells. By 1967, barely two decades after discovery of the mid-oceanic rifts, and a decade after discovery of the striping, plate tectonics had become axiomatic to modern geophysics.
In addition, Marie Tharp, in collaboration with Bruce Heezen, who was initially sceptical of Tharp's observations that her maps confirmed continental drift theory, provided essential corroboration, using her skills in cartography and seismographic data, to confirm the theory.
Modern evidence
Geophysicist Jack Oliver is credited with providing seismologic evidence supporting plate tectonics which encompassed and superseded continental drift with the article "Seismology and the New Global Tectonics", published in 1968, using data collected from seismologic stations, including those he set up in the South Pacific. The modern theory of plate tectonics, refining Wegener, explains that there are two kinds of crust of different composition: continental crust and oceanic crust, both floating above a much deeper "plastic" mantle. Continental crust is inherently lighter. Oceanic crust is created at spreading centers, and this, along with subduction, drives the system of plates in a chaotic manner, resulting in continuous orogeny and areas of isostatic imbalance.
Evidence for the movement of continents on tectonic plates is now extensive. Similar plant and animal fossils are found around the shores of different continents, suggesting that they were once joined. The fossils of Mesosaurus, a freshwater reptile rather like a small crocodile, found both in Brazil and South Africa, are one example; another is the discovery of fossils of the land reptile Lystrosaurus in rocks of the same age at locations in Africa, India, and Antarctica. There is also living evidence, with the same animals being found on two continents. Some earthworm families (such as Ocnerodrilidae, Acanthodrilidae, Octochaetidae) are found in South America and Africa.
The complementary arrangement of the facing sides of South America and Africa is an obvious and temporary coincidence. In millions of years, slab pull, ridge-push, and other forces of tectonophysics will further separate and rotate those two continents. It was that temporary feature that inspired Wegener to study what he defined as continental drift although he did not live to see his hypothesis generally accepted.
The widespread distribution of Permo-Carboniferous glacial sediments in South America, Africa, Madagascar, Arabia, India, Antarctica and Australia was one of the major pieces of evidence for the theory of continental drift. The continuity of glaciers, inferred from oriented glacial striations and deposits called tillites, suggested the existence of the supercontinent of Gondwana, which became a central element of the concept of continental drift. Striations indicated glacial flow away from the equator and toward the poles, based on continents' current positions and orientations, and supported the idea that the southern continents had previously been in dramatically different locations that were contiguous with one another.
GPS evidence
In measuring continental drift with GPS relative to other locations whose positions were also measured by GPS, a GPS device located in Maui, Hawaii, moved about 48 cm latitudinally and about 84 cm longitudinally over a period of 14 years.
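Treating the two displacements as roughly perpendicular components gives an illustrative net motion of

\sqrt{48^2 + 84^2} \approx 97 \text{ cm over 14 years}, \quad \text{i.e. roughly } 7 \text{ cm per year},

consistent with the few-centimetres-per-year rates typical of plate motion.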
| Physical sciences | Tectonics | Earth science |
6058 | https://en.wikipedia.org/wiki/Collagen | Collagen | Collagen () is the main structural protein in the extracellular matrix of the connective tissues of many animals. It is the most abundant protein in mammals, making up 25% to 35% of protein content. Amino acids are bound together to form a triple helix of elongated fibril known as a collagen helix. It is mostly found in cartilage, bones, tendons, ligaments, and skin. Vitamin C is vital for collagen synthesis, while Vitamin E improves its production.
Depending on the degree of mineralization, collagen tissues may be rigid (bone) or compliant (tendon) or have a gradient from rigid to compliant (cartilage). Collagen is also abundant in corneas, blood vessels, the gut, intervertebral discs, and the dentin in teeth. In muscle tissue, it serves as a major component of the endomysium. Collagen constitutes 1% to 2% of muscle tissue and 6% by weight of skeletal muscle. The fibroblast is the most common cell creating collagen in animals. Gelatin, which is used in food and industry, is collagen that was irreversibly hydrolyzed using heat, basic solutions, or weak acids.
Etymology
The name collagen comes from the Greek κόλλα (kólla), meaning "glue", and suffix -γέν, -gen, denoting "producing".
Types
As of 2011, 28 types of human collagen have been identified, described, and classified according to their structure. This diversity shows collagen's diverse functionality. All of the types contain at least one triple helix. Over 90% of the collagen in humans is type I & III collagen.
Fibrillar (type I, II, III, V, XI)
Non-fibrillar
FACIT (fibril-associated collagens with interrupted triple helices) (types IX, XII, XIV, XIX, XXI)
Short-chain (types VIII, X)
Basement membrane (type IV)
Multiplexin (multiple triple helix domains with interruptions) (types XV, XVIII)
MACIT (membrane-associated collagens with interrupted triple helices) (types XIII, XVII)
Microfibril-forming (type VI)
Anchoring fibrils (type VII)
The five most common types are:
Type I: skin, tendon, vasculature, organs, bone (main component of the organic part of bone)
Type II: cartilage (main collagenous component of cartilage)
Type III: reticulate (main component of reticular fibers), commonly found alongside type I
Type IV: forms basal lamina, the epithelium-secreted layer of the basement membrane
Type V: cell surfaces, hair, and placenta
In humans
Cardiac
The collagenous cardiac skeleton which includes the four heart valve rings, is histologically, elastically and uniquely bound to cardiac muscle. The cardiac skeleton also includes the separating septa of the heart chambers – the interventricular septum and the atrioventricular septum. Collagen contribution to the measure of cardiac performance summarily represents a continuous torsional force opposed to the fluid mechanics of blood pressure emitted from the heart. The collagenous structure that divides the upper chambers of the heart from the lower chambers is an impermeable membrane that excludes both blood and electrical impulses through typical physiological means. With support from collagen, atrial fibrillation never deteriorates to ventricular fibrillation. Collagen is layered in variable densities with smooth muscle mass. The mass, distribution, age, and density of collagen all contribute to the compliance required to move blood back and forth. Individual cardiac valvular leaflets are folded into shape by specialized collagen under variable pressure. Gradual calcium deposition within collagen occurs as a natural function of aging. Calcified points within collagen matrices show contrast in a moving display of blood and muscle, enabling methods of cardiac imaging technology to arrive at ratios essentially stating blood in (cardiac input) and blood out (cardiac output). Pathology of the collagen underpinning of the heart is understood within the category of connective tissue disease.
Bone grafts
As the skeleton forms the structure of the body, it is vital that it maintains its strength, even after breaks and injuries. Collagen is used in bone grafting because its triple-helix structure makes it a very strong molecule. It is ideal for use in bones, as it does not compromise the structural integrity of the skeleton. The triple helical structure prevents collagen from being broken down by enzymes, it enables adhesiveness of cells and it is important for the proper assembly of the extracellular matrix.
Tissue regeneration
Collagen scaffolds are used in tissue regeneration, whether in sponges, thin sheets, gels, or fibers. Collagen has favorable properties for tissue regeneration, such as pore structure, permeability, hydrophilicity, and stability in vivo. Collagen scaffolds also support deposition of cells, such as osteoblasts and fibroblasts, and once inserted, facilitate growth to proceed normally.
Reconstructive surgery
Collagens are widely used in the construction of artificial skin substitutes used for managing severe burns and wounds. These collagens may be derived from cow, horse, pig, or even human sources; and are sometimes used in combination with silicones, glycosaminoglycans, fibroblasts, growth factors and other substances.
Wound healing
Collagen is one of the body's key natural resources and a component of skin tissue that can benefit all stages of wound healing. When collagen is made available to the wound bed, closure can occur. This avoids wound deterioration and procedures such as amputation.
Collagen is used as a natural wound dressing because it has properties that artificial wound dressings do not have. It resists bacteria, which is vitally important in wound dressing. As a burn dressing, collagen helps it heal fast by helping granulation tissue to grow over the burn.
Throughout the four phases of wound healing, collagen performs the following functions:
Guiding: collagen fibers guide fibroblasts because they migrate along a connective tissue matrix.
Chemotaxis: collagen fibers have a large surface area which attracts fibrogenic cells which help healing.
Nucleation: in the presence of certain neutral salt molecules, collagen can act as a nucleating agent causing formation of fibrillar structures.
Hemostasis: Blood platelets interact with the collagen to make a hemostatic plug.
Use in basic research
Collagen is used in laboratory studies for cell culture, studying cell behavior and cellular interactions with the extracellular environment. Collagen is also widely used as a bioink for 3D bioprinting and biofabrication of 3D tissue models.
Biology
The collagen protein is composed of a triple helix, which generally consists of two identical chains (α1) and an additional chain that differs slightly in its chemical composition (α2). The amino acid composition of collagen is atypical for proteins, particularly with respect to its high hydroxyproline content. The most common motifs in collagen's amino acid sequence are glycine-proline-X and glycine-X-hydroxyproline, where X is any amino acid other than glycine, proline or hydroxyproline.
The table below lists average amino acid composition for fish and mammal skin.
Synthesis
First, a three-dimensional stranded structure is assembled, mostly composed of the amino acids glycine and proline. This is the collagen precursor procollagen. Then, procollagen is modified by the addition of hydroxyl groups to the amino acids proline and lysine. This step is important for later glycosylation and the formation of collagen's triple helix structure. Because the hydroxylase enzymes performing these reactions require vitamin C as a cofactor, a long-term deficiency in this vitamin results in impaired collagen synthesis and scurvy. These hydroxylation reactions are catalyzed by the enzymes prolyl 4-hydroxylase and lysyl hydroxylase. The reaction consumes one ascorbate molecule per hydroxylation. Collagen synthesis occurs inside and outside cells.
The most common form of collagen is fibrillar collagen. Another common form is meshwork collagen, which is often involved in the formation of filtration systems. All types of collagen are triple helices, but differ in the make-up of their alpha peptides created in step 2. Below we discuss the formation of fibrillar collagen.
Transcription of mRNA: Synthesis begins with turning on genes associated with the formation of a particular alpha peptide (typically alpha 1, 2 or 3). About 44 genes are associated with collagen formation, each coding for a specific mRNA sequence, and are typically named with the "COL" prefix.
Pre-pro-peptide formation: The created mRNA exits the cell nucleus into the cytoplasm. There, it links with the ribosomal subunits and is translated into a peptide. The peptide goes into the endoplasmic reticulum for post-translational processing. It is directed there by a signal recognition particle on the endoplasmic reticulum, which recognizes the peptide's signal sequence (the early part of the sequence). The processed product is a pre-pro-peptide called preprocollagen.
Pro-collagen formation: Three modifications of the pre-pro-peptide form the alpha peptide:
The signal peptide on the N-terminal is removed. The molecule is now called propeptide.
Lysines and prolines are hydroxylated by the enzymes 'prolyl hydroxylase' and 'lysyl hydroxylase', producing hydroxyproline and hydroxylysine. This helps in cross-linking the alpha peptides. This enzymatic step requires vitamin C as a cofactor. In scurvy, the lack of hydroxylation of prolines and lysines causes a looser triple helix (which is formed by three alpha peptides).
Glycosylation occurs by adding either glucose or galactose monomers onto the hydroxyl groups that were placed onto lysines, but not on prolines.
Three of the hydroxylated and glycosylated propeptides twist into a triple helix (except for its ends), forming procollagen. It is packaged into a transfer vesicle destined for the Golgi apparatus.
Modification and secretion: In the Golgi apparatus, the procollagen goes through one last post-translational modification, adding oligosaccharides (not monosaccharides as in step 3). Then it is packaged into a secretory vesicle to be secreted from the cell.
Tropocollagen formation: Outside the cell, membrane-bound enzymes called collagen peptidases remove the unwound ends of the molecule, producing tropocollagen. Defects in this step produce various collagenopathies called Ehlers–Danlos syndrome. This step is absent when synthesizing type III, a type of fibrillar collagen.
Collagen fibril formation: Lysyl oxidase, a copper-dependent enzyme, acts on lysines and hydroxylysines, producing aldehyde groups, which eventually form covalent bonds between tropocollagen molecules. This polymer of tropocollagen is called a collagen fibril.
Amino acids
Collagen has an unusual amino acid composition and sequence:
Glycine is found at almost every third residue.
Proline makes up about 17% of collagen.
Collagen contains two unusual derivative amino acids not directly inserted during translation. These amino acids are found at specific locations relative to glycine and are modified post-translationally by different enzymes, both of which require vitamin C as a cofactor.
Hydroxyproline derived from proline
Hydroxylysine derived from lysine – depending on the type of collagen, varying numbers of hydroxylysines are glycosylated (mostly having disaccharides attached).
Cortisol stimulates degradation of (skin) collagen into amino acids.
Collagen I formation
Most collagen forms in a similar manner, but the following process is typical for type I:
Inside the cell
Two types of alpha chains – alpha-1 and alpha 2, are formed during translation on ribosomes along the rough endoplasmic reticulum (RER). These peptide chains known as preprocollagen, have registration peptides on each end and a signal peptide.
Polypeptide chains are released into the lumen of the RER.
Signal peptides are cleaved inside the RER and the chains are now known as pro-alpha chains.
Hydroxylation of lysine and proline amino acids occurs inside the lumen. This process is dependent on and consumes ascorbic acid (vitamin C) as a cofactor.
Glycosylation of specific hydroxylysine residues occurs.
Triple alpha helical structure is formed inside the endoplasmic reticulum from two alpha-1 chains and one alpha-2 chain.
Procollagen is shipped to the Golgi apparatus, where it is packaged and secreted into extracellular space by exocytosis.
Outside the cell
Registration peptides are cleaved and tropocollagen is formed by procollagen peptidase.
Multiple tropocollagen molecules form collagen fibrils, via covalent cross-linking (aldol reaction) by lysyl oxidase which links hydroxylysine and lysine residues. Multiple collagen fibrils form into collagen fibers.
Collagen may be attached to cell membranes via several types of protein, including fibronectin, laminin, fibulin and integrin.
Molecular structure
A single collagen molecule, tropocollagen, is used to make up larger collagen aggregates, such as fibrils. It is approximately 300 nm long and 1.5 nm in diameter, and it is made up of three polypeptide strands (called alpha peptides, see step 2), each of which has the conformation of a left-handed helix – this should not be confused with the right-handed alpha helix. These three left-handed helices are twisted together into a right-handed triple helix or "super helix", a cooperative quaternary structure stabilized by many hydrogen bonds. With type I collagen and possibly all fibrillar collagens, if not all collagens, each triple-helix associates into a right-handed super-super-coil referred to as the collagen microfibril. Each microfibril is interdigitated with its neighboring microfibrils to a degree that might suggest they are individually unstable, although within collagen fibrils, they are so well ordered as to be crystalline.
A distinctive feature of collagen is the regular arrangement of amino acids in each of the three chains of these collagen subunits. The sequence often follows the pattern Gly-Pro-X or Gly-X-Hyp, where X may be any of various other amino acid residues. Proline or hydroxyproline constitute about 1/6 of the total sequence. With glycine accounting for the 1/3 of the sequence, this means approximately half of the collagen sequence is not glycine, proline or hydroxyproline, a fact often missed due to the distraction of the unusual GX1X2 character of collagen alpha-peptides. The high glycine content of collagen is important with respect to stabilization of the collagen helix as this allows the very close association of the collagen fibers within the molecule, facilitating hydrogen bonding and the formation of intermolecular cross-links. This kind of regular repetition and high glycine content is found in only a few other fibrous proteins, such as silk fibroin.
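The figure of approximately one half follows directly from the stated proportions:

1 - \tfrac{1}{3} - \tfrac{1}{6} = \tfrac{1}{2},

that is, with glycine filling about one third of the positions and proline or hydroxyproline about one sixth, roughly half of the residues remain for the variable X positions.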
Collagen is not only a structural protein. Due to its key role in the determination of cell phenotype, cell adhesion, tissue regulation, and infrastructure, many sections of its non-proline-rich regions have cell or matrix association/regulation roles. The relatively high content of proline and hydroxyproline rings, with their geometrically constrained carboxyl and (secondary) amino groups, along with the rich abundance of glycine, accounts for the tendency of the individual polypeptide strands to form left-handed helices spontaneously, without any intrachain hydrogen bonding.
Because glycine is the smallest amino acid with no side chain, it plays a unique role in fibrous structural proteins. In collagen, Gly is required at every third position because the assembly of the triple helix puts this residue at the interior (axis) of the helix, where there is no space for a larger side group than glycine's single hydrogen atom. For the same reason, the rings of the Pro and Hyp must point outward. These two amino acids help stabilize the triple helix – Hyp even more so than Pro; a lower concentration of them is required in animals such as fish, whose body temperatures are lower than most warm-blooded animals. Lower proline and hydroxyproline contents are characteristic of cold-water, but not warm-water fish; the latter tend to have similar proline and hydroxyproline contents to mammals. The lower proline and hydroxyproline contents of cold-water fish and other poikilotherm animals leads to their collagen having a lower thermal stability than mammalian collagen. This lower thermal stability means that gelatin derived from fish collagen is not suitable for many food and industrial applications.
The tropocollagen subunits spontaneously self-assemble, with regularly staggered ends, into even larger arrays in the extracellular spaces of tissues. Additional assembly of fibrils is guided by fibroblasts, which deposit fully formed fibrils from fibripositors. In the fibrillar collagens, molecules are staggered to adjacent molecules by about 67 nm (a unit that is referred to as 'D' and changes depending upon the hydration state of the aggregate). In each D-period repeat of the microfibril, there is a part containing five molecules in cross-section, called the "overlap", and a part containing only four molecules, called the "gap". These overlap and gap regions are retained as microfibrils assemble into fibrils, and are thus viewable using electron microscopy. The triple helical tropocollagens in the microfibrils are arranged in a quasihexagonal packing pattern.
There is some covalent crosslinking within the triple helices and a variable amount of covalent crosslinking between tropocollagen helices forming well-organized aggregates (such as fibrils). Larger fibrillar bundles are formed with the aid of several different classes of proteins (including different collagen types), glycoproteins, and proteoglycans to form the different types of mature tissues from alternate combinations of the same key players. Collagen's insolubility was a barrier to the study of monomeric collagen until it was found that tropocollagen from young animals can be extracted because it is not yet fully crosslinked. However, advances in microscopy techniques (i.e. electron microscopy (EM) and atomic force microscopy (AFM)) and X-ray diffraction have enabled researchers to obtain increasingly detailed images of collagen structure in situ. These later advances are particularly important to better understanding the way in which collagen structure affects cell–cell and cell–matrix communication and how tissues are constructed in growth and repair and changed in development and disease. For example, using AFM–based nanoindentation it has been shown that a single collagen fibril is a heterogeneous material along its axial direction with significantly different mechanical properties in its gap and overlap regions, correlating with its different molecular organizations in these two regions.
Collagen fibrils/aggregates are arranged in different combinations and concentrations in various tissues to provide varying tissue properties. In bone, entire collagen triple helices lie in a parallel, staggered array. 40 nm gaps between the ends of the tropocollagen subunits (approximately equal to the gap region) probably serve as nucleation sites for the deposition of long, hard, fine crystals of the mineral component, which is hydroxylapatite, approximately Ca10(OH)2(PO4)6. Type I collagen gives bone its tensile strength.
Associated disorders
Collagen-related diseases most commonly arise from genetic defects or nutritional deficiencies that affect the biosynthesis, assembly, posttranslational modification, secretion, or other processes involved in normal collagen production.
In addition to the above-mentioned disorders, excessive deposition of collagen occurs in scleroderma.
Diseases
One thousand mutations have been identified in 12 out of more than 20 types of collagen. These mutations can lead to various diseases at the tissue level.
Osteogenesis imperfecta – Caused by a mutation in type 1 collagen, dominant autosomal disorder, results in weak bones and irregular connective tissue, some cases can be mild while others can be lethal. Mild cases have lowered levels of collagen type 1 while severe cases have structural defects in collagen.
Chondrodysplasias – Skeletal disorder believed to be caused by a mutation in type 2 collagen, further research is being conducted to confirm this.
Ehlers–Danlos syndrome – Thirteen different types of this disorder, which lead to deformities in connective tissue, are known. Some of the rarer types can be lethal, leading to the rupture of arteries. Each syndrome is caused by a different mutation. For example, the vascular type (vEDS) of this disorder is caused by a mutation in collagen type 3.
Alport syndrome – Can be passed on genetically, usually as X-linked dominant, but also as both an autosomal dominant and autosomal recessive disorder, those with the condition have problems with their kidneys and eyes, loss of hearing can also develop during the childhood or adolescent years.
Knobloch syndrome – Caused by a mutation in the COL18A1 gene that codes for the production of collagen XVIII. Patients present with protrusion of the brain tissue and degeneration of the retina; an individual who has family members with the disorder is at an increased risk of developing it themselves since there is a hereditary link.
Animal harvesting
When not synthesized, collagen can be harvested from animal skin. This has led to deforestation as has occurred in Paraguay where large collagen producers buy large amounts of cattle hides from regions that have been clear-cut for cattle grazing.
Characteristics
Collagen is one of the long, fibrous structural proteins whose functions are quite different from those of globular proteins, such as enzymes. Tough bundles of collagen called collagen fibers are a major component of the extracellular matrix that supports most tissues and gives cells structure from the outside, but collagen is also found inside certain cells. Collagen has great tensile strength, and is the main component of fascia, cartilage, ligaments, tendons, bone and skin. Along with elastin and soft keratin, it is responsible for skin strength and elasticity, and its degradation leads to wrinkles that accompany aging. It strengthens blood vessels and plays a role in tissue development. It is present in the cornea and lens of the eye in crystalline form. It may be one of the most abundant proteins in the fossil record, given that it appears to fossilize frequently, even in bones from the Mesozoic and Paleozoic.
Mechanical properties
Collagen is a complex hierarchical material with mechanical properties that vary significantly across different scales.
On the molecular scale, atomistic and coarse-grained modeling simulations, as well as numerous experimental methods, have led to several estimates of the Young's modulus of collagen at the molecular level. Only above a certain strain rate is there a strong relationship between elastic modulus and strain rate, possibly due to the large number of atoms in a collagen molecule. The length of the molecule is also important, where longer molecules have lower tensile strengths than shorter ones due to short molecules having a large proportion of hydrogen bonds being broken and reformed.
On the fibrillar scale, collagen has a lower modulus compared to the molecular scale, and varies depending on geometry, scale of observation, deformation state, and hydration level. By increasing the crosslink density from zero to 3 per molecule, the maximum stress the fibril can support increases from 0.5 GPa to 6 GPa.
Limited tests have been done on the tensile strength of the collagen fiber, but generally it has been shown to have a lower Young's modulus compared to fibrils.
When studying the mechanical properties of collagen, tendon is often chosen as the ideal material because it is close to a pure and aligned collagen structure. However, at the macro, tissue scale, the vast number of structures that collagen fibers and fibrils can be arranged into results in highly variable properties. For example, tendon has primarily parallel fibers, whereas skin consists of a net of wavy fibers, resulting in a much higher strength and lower ductility in tendon compared to skin. The mechanical properties of collagen at multiple hierarchical levels is given.
Collagen is known to be a viscoelastic solid. When the collagen fiber is modeled as two Kelvin–Voigt models in series, each consisting of a spring and a dashpot in parallel, the strain in the fiber can be expressed in terms of the fibrillar strain εD and the total strain εT, with α, β, and γ as defined material properties.
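For orientation on the model referred to above (a minimal sketch of a single element, not the full fibre equation), one Kelvin–Voigt element, a spring of modulus E in parallel with a dashpot of viscosity η, obeys

\sigma(t) = E\,\varepsilon(t) + \eta\,\frac{d\varepsilon(t)}{dt},

and when two such elements are placed in series they carry the same stress while their strains add, so that the total strain εT is the sum of the two element strains, of which the fibrillar strain εD is one contribution.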
Uses
Collagen has a wide variety of applications, from food to medical. In the medical industry, it is used in cosmetic surgery and burn surgery. In the food sector, one use example is in casings for sausages.
If collagen is subject to sufficient denaturation, such as by heating, the three tropocollagen strands separate partially or completely into globular domains, containing a different secondary structure to the normal collagen polyproline II (PPII) of random coils. This process describes the formation of gelatin, which is used in many foods, including flavored gelatin desserts. Besides food, gelatin has been used in pharmaceutical, cosmetic, and photography industries. It is also used as a dietary supplement, and has been advertised as a potential remedy against the ageing process.
From the Greek for glue, kolla, the word collagen means "glue producer" and refers to the early process of boiling the skin and sinews of horses and other animals to obtain glue. Collagen adhesive was used by Egyptians about 4,000 years ago, and Native Americans used it in bows about 1,500 years ago. The oldest glue in the world, carbon-dated as more than 8,000 years old, was found to be collagen – used as a protective lining on rope baskets and embroidered fabrics, to hold utensils together, and in crisscross decorations on human skulls. Collagen normally converts to gelatin, but survived due to dry conditions. Animal glues are thermoplastic, softening again upon reheating, so they are still used in making musical instruments such as fine violins and guitars, which may have to be reopened for repairs – an application incompatible with tough, synthetic plastic adhesives, which are permanent. Animal sinews and skins, including leather, have been used to make useful articles for millennia.
Gelatin-resorcinol-formaldehyde glue (and with formaldehyde replaced by less-toxic pentanedial and ethanedial) has been used to repair experimental incisions in rabbit lungs.
Cosmetics
Bovine collagen is widely used in dermal fillers for aesthetic correction of wrinkles and skin aging. Collagen creams are also widely sold even though collagen cannot penetrate the skin because its fibers are too large. Collagen is a vital protein in skin, hair, nails, and other tissues. Its production decreases with age and factors like sun damage and smoking. Collagen supplements, derived from sources like fish and cattle, are marketed to improve skin, hair, and nails. Studies show some skin benefits, but these supplements often contain other beneficial ingredients, making it unclear if collagen alone is effective. There is minimal evidence supporting collagen's benefits for hair and nails. Overall, the effectiveness of oral collagen supplements is not well-proven, and focusing on a healthy lifestyle and proven skincare methods like sun protection is recommended.
History
The molecular and packing structures of collagen eluded scientists over decades of research. The first evidence that it possesses a regular structure at the molecular level was presented in the mid-1930s. Research then concentrated on the conformation of the collagen monomer, producing several competing models, although correctly dealing with the conformation of each individual peptide chain. The triple-helical "Madras" model, proposed by G. N. Ramachandran in 1955, provided an accurate model of quaternary structure in collagen. This model was supported by further studies of higher resolution in the late 20th century.
The packing structure of collagen has not been defined to the same degree outside of the fibrillar collagen types, although it has been long known to be hexagonal. As with its monomeric structure, several conflicting models propose either that the packing arrangement of collagen molecules is 'sheet-like', or is microfibrillar. The microfibrillar structure of collagen fibrils in tendon, cornea and cartilage was imaged directly by electron microscopy in the late 20th century and early 21st century. The microfibrillar structure of rat tail tendon was modeled as being closest to the observed structure, although it oversimplified the topological progression of neighboring collagen molecules, and so did not predict the correct conformation of the discontinuous D-periodic pentameric arrangement termed microfibril.
| Biology and health sciences | Proteins | Biology |
6061 | https://en.wikipedia.org/wiki/CNO%20cycle | CNO cycle | The CNO cycle (for carbon–nitrogen–oxygen; sometimes called Bethe–Weizsäcker cycle after Hans Albrecht Bethe and Carl Friedrich von Weizsäcker) is one of the two known sets of fusion reactions by which stars convert hydrogen to helium, the other being the proton–proton chain reaction (p–p cycle), which is more efficient at the Sun's core temperature. The CNO cycle is hypothesized to be dominant in stars that are more than 1.3 times as massive as the Sun.
Unlike the proton-proton reaction, which consumes all its constituents, the CNO cycle is a catalytic cycle. In the CNO cycle, four protons fuse, using carbon, nitrogen, and oxygen isotopes as catalysts, each of which is consumed at one step of the CNO cycle, but re-generated in a later step. The end product is one alpha particle (a stable helium nucleus), two positrons, and two electron neutrinos.
There are various alternative paths and catalysts involved in the CNO cycles, but all these cycles have the same net result:
4 ¹H + 2 e⁻
→ ⁴He + 2 e⁺ + 2 e⁻ + 2 νₑ
→ ⁴He + 2 νₑ + 26.73 MeV (after positron annihilation)
The positrons will almost instantly annihilate with electrons, releasing energy in the form of gamma rays. The neutrinos escape from the star, carrying away some energy. A single catalytic nucleus cycles through carbon, nitrogen, and oxygen isotopes in a repeating series of transformations.
The proton–proton chain is more prominent in stars the mass of the Sun or less. This difference stems from the different temperature dependencies of the two reactions; the pp-chain reaction starts at temperatures around 4×10⁶ K (4 megakelvin), making it the dominant energy source in smaller stars. A self-maintaining CNO chain starts at approximately 15×10⁶ K, but its energy output rises much more rapidly with increasing temperature, so that it becomes the dominant source of energy at approximately 17×10⁶ K.
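The crossover between the two burning modes can be illustrated with rough power-law fits to the energy-generation rates near solar-core conditions, ε_pp ∝ T⁴ and ε_CNO ∝ T¹⁷. These exponents are commonly quoted approximations, and the normalization of both rates to the same value at 17 MK below is an assumption made purely for illustration, not a fitted result:

```python
# Toy comparison of pp-chain vs. CNO energy generation rates.
# Assumed power-law exponents (~4 for pp, ~17 for CNO near 15 MK)
# and an assumed crossover at 17 MK are illustrative only.
T_CROSS = 17.0  # MK, temperature where the two rates are taken to be equal

def eps_pp(T):
    """Relative pp-chain rate, normalized to 1 at T_CROSS (T in MK)."""
    return (T / T_CROSS) ** 4

def eps_cno(T):
    """Relative CNO-cycle rate, normalized to 1 at T_CROSS (T in MK)."""
    return (T / T_CROSS) ** 17

for T in (4, 10, 15.7, 17, 20, 25):
    ratio = eps_cno(T) / eps_pp(T)
    label = "CNO" if ratio > 1 else ("pp" if ratio < 1 else "equal")
    print(f"T = {T:5.1f} MK  CNO/pp = {ratio:10.3g}  -> {label} dominates")
```

Because the CNO exponent is so much larger, a star only slightly hotter than the Sun's core flips from pp-dominated to CNO-dominated burning.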
The Sun has a core temperature of around 15.7×10⁶ K, and only about 1.7% of the ⁴He nuclei produced in the Sun are born in the CNO cycle.
The CNO-I process was independently proposed by Carl von Weizsäcker and Hans Bethe in the late 1930s.
The first reports of the experimental detection of the neutrinos produced by the CNO cycle in the Sun were published in 2020 by the BOREXINO collaboration. This was also the first experimental confirmation that the Sun had a CNO cycle, that the proposed magnitude of the cycle was accurate, and that von Weizsäcker and Bethe were correct.
Cold CNO cycles
Under typical conditions found in stars, catalytic hydrogen burning by the CNO cycles is limited by proton captures. Specifically, the timescale for beta decay of the radioactive nuclei produced is faster than the timescale for fusion. Because of the long timescales involved, the cold CNO cycles convert hydrogen to helium slowly, allowing them to power stars in quiescent equilibrium for many years.
CNO-I
The first proposed catalytic cycle for the conversion of hydrogen into helium was initially called the carbon–nitrogen cycle (CN-cycle), also referred to as the Bethe–Weizsäcker cycle in honor of the independent work of Carl Friedrich von Weizsäcker in 1937–38 and Hans Bethe. Bethe's 1939 papers on the CN-cycle drew on three earlier papers written in collaboration with Robert Bacher and Milton Stanley Livingston, which came to be known informally as "Bethe's Bible". It was considered the standard work on nuclear physics for many years and was a significant factor in his being awarded the 1967 Nobel Prize in Physics. Bethe's original calculations suggested the CN-cycle was the Sun's primary source of energy. This conclusion arose from a belief, now known to be mistaken, that the abundance of nitrogen in the Sun is approximately 10%; it is actually less than half a percent. The CN-cycle, so named because it contains no stable isotope of oxygen, involves the following cycle of transformations:
¹²C → ¹³N → ¹³C → ¹⁴N → ¹⁵O → ¹⁵N → ¹²C
This cycle is now understood as being the first part of a larger process, the CNO-cycle, and the main reactions in this part of the cycle (CNO-I) are:
{| border="0"
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| || || ||→ || ||+ || ||+ || ||+ || ||(half-life of 9.965 minutes)
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| || || ||→ || ||+ || ||+ || ||+ ||||(half-life of 122.24 seconds)
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|}
where the carbon-12 nucleus used in the first reaction is regenerated in the last reaction. After the two emitted positrons annihilate with two ambient electrons, producing an additional 2.04 MeV, the total energy released in one cycle is 26.73 MeV; some texts erroneously include the positron annihilation energy in the beta-decay Q-value and then neglect the equal amount of energy released by annihilation, which can lead to confusion. All values are calculated with reference to the Atomic Mass Evaluation 2003.
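The 26.73 MeV figure follows directly from the mass difference between four hydrogen atoms and one helium-4 atom; using atomic (rather than nuclear) masses accounts for the electrons and the annihilation energy automatically. A minimal check, with standard atomic masses quoted to six decimal places as an assumption:

```python
# Energy released per CNO cycle from the atomic mass defect.
# Atomic masses in unified atomic mass units (standard tabulated values,
# quoted here from memory -- treat the exact digits as assumptions).
M_H1 = 1.007825    # atomic mass of hydrogen-1 (u)
M_HE4 = 4.002602   # atomic mass of helium-4 (u)
U_TO_MEV = 931.494 # energy equivalent of 1 u, in MeV

delta_m = 4 * M_H1 - M_HE4      # mass defect in u
q_total = delta_m * U_TO_MEV    # total energy release in MeV
print(f"mass defect = {delta_m:.6f} u")
print(f"Q = {q_total:.2f} MeV") # ~26.73 MeV, matching the text
```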
The limiting (slowest) reaction in the CNO-I cycle is the proton capture on ¹⁴N. In 2006 its rate was experimentally measured down to stellar energies, revising the calculated age of globular clusters by around 1 billion years.
The neutrinos emitted in beta decay will have a spectrum of energies, because although momentum is conserved, it can be shared in any way between the positron and the neutrino: either may be emitted at rest with the other taking away the full energy, or anything in between, so long as all the energy from the Q-value is used. The total momentum received by the positron and the neutrino is not great enough to cause a significant recoil of the much heavier daughter nucleus, so its contribution to the kinetic energy of the products can be neglected at the precision of the values given here. Thus the neutrino emitted during the decay of nitrogen-13 can have an energy from zero up to 1.20 MeV, and the neutrino emitted during the decay of oxygen-15 can have an energy from zero up to 1.73 MeV. On average, about 1.7 MeV of the total energy output is taken away by neutrinos for each loop of the cycle, leaving about 25 MeV available for producing luminosity.
CNO-II
In a minor branch of the above reaction, occurring in the Sun's core 0.04% of the time, the final reaction involving ¹⁵N shown above does not produce carbon-12 and an alpha particle, but instead produces oxygen-16 and a photon, and continues
¹⁵N → ¹⁶O → ¹⁷F → ¹⁷O → ¹⁴N → ¹⁵O → ¹⁵N
In detail:
:{| border="0"
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| || || ||→ || ||+ || ||+ || ||+ ||||(half-life of 64.49 seconds)
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| || || ||→ || ||+ || ||+ || ||+ ||||(half-life of 122.24 seconds)
|}
Like the carbon, nitrogen, and oxygen involved in the main branch, the fluorine produced in the minor branch is merely an intermediate product; at steady state, it does not accumulate in the star.
CNO-III
This subdominant branch is significant only for massive stars. The reactions are started when one of the reactions in CNO-II results in fluorine-18 and a photon instead of nitrogen-14 and an alpha particle, and continues
¹⁷O → ¹⁸F → ¹⁸O → ¹⁵N → ¹⁶O → ¹⁷F → ¹⁷O
In detail:
{| border="0"
|- style="height:2em;"
| || + || || → || || + || || || || + ||
|- style="height:2em;"
| || || || → || || + || || + || || + || || (half-life of )
|- style="height:2em;"
| || + || || → || || + || || || || + ||
|- style="height:2em;"
| || + || || → || || + || || || || + ||
|- style="height:2em;"
| || + || || → || || + || || || || + ||
|- style="height:2em;"
| || || || → || || + || || + || || + || || (half-life of )
|}
CNO-IV
Like CNO-III, this branch is also significant only in massive stars. The reactions are started when one of the reactions in CNO-III results in fluorine-19 and a photon instead of nitrogen-15 and an alpha particle, and continues
¹⁸O → ¹⁹F → ¹⁶O → ¹⁷F → ¹⁷O → ¹⁸F → ¹⁸O
In detail:
{| border="0"
| || + || ||→ || || + || || || || + ||
|-
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| || || ||→ || ||+ || ||+ || ||+ ||||(half-life of 64.49 seconds)
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| || || ||→ || ||+ || ||+ || ||+ ||||(half-life of 109.771 minutes)
|- style="height:2em;"
|}
In some instances ¹⁹F can combine with a helium nucleus to start a sodium-neon cycle.
Hot CNO cycles
Under conditions of higher temperature and pressure, such as those found in novae and X-ray bursts, the rate of proton captures exceeds the rate of beta-decay, pushing the burning to the proton drip line. The essential idea is that a radioactive species will capture a proton before it can beta decay, opening new nuclear burning pathways that are otherwise inaccessible. Because of the higher temperatures involved, these catalytic cycles are typically referred to as the hot CNO cycles; because the timescales are limited by beta decays instead of proton captures, they are also called the beta-limited CNO cycles.
HCNO-I
The difference between the CNO-I cycle and the HCNO-I cycle is that ¹³N captures a proton instead of decaying, leading to the total sequence
¹²C → ¹³N → ¹⁴O → ¹⁴N → ¹⁵O → ¹⁵N → ¹²C
In detail:
{| border="0"
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| || || ||→ || ||+ || ||+ || ||+ ||||(half-life of 70.641 seconds)
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| || || ||→ || ||+ || ||+ || ||+ ||||(half-life of 122.24 seconds)
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|}
HCNO-II
The notable difference between the CNO-II cycle and the HCNO-II cycle is that ¹⁷F captures a proton instead of decaying, producing ¹⁸Ne, leading to the total sequence
¹⁵N → ¹⁶O → ¹⁷F → ¹⁸Ne → ¹⁸F → ¹⁵O → ¹⁵N
In detail:
{| border="0"
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| || || ||→ || ||+ || ||+ || ||+ ||||(half-life of 1.672 seconds)
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| || || ||→ || ||+ || ||+ || ||+ ||||(half-life of 122.24 seconds)
|}
HCNO-III
An alternative to the HCNO-II cycle is that ¹⁸F captures a proton, moving towards higher mass, and uses the same helium production mechanism as the CNO-IV cycle, as
¹⁸F → ¹⁹Ne → ¹⁹F → ¹⁶O → ¹⁷F → ¹⁸Ne → ¹⁸F
In detail:
{| border="0"
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| || || ||→ || ||+ || ||+ || ||+ ||||(half-life of 17.22 seconds)
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| ||+ || ||→ || ||+ || || || ||+ ||
|- style="height:2em;"
| || || ||→ || ||+ || ||+ || ||+ ||||(half-life of 1.672 seconds)
|}
Use in astronomy
While the total number of "catalytic" nuclei is conserved in the cycle, in stellar evolution the relative proportions of the nuclei are altered. When the cycle is run to equilibrium, the ratio of carbon-12 to carbon-13 nuclei is driven to 3.5, and nitrogen-14 becomes the most numerous nucleus, regardless of initial composition. During a star's evolution, convective mixing episodes move material within which the CNO cycle has operated from the star's interior to the surface, altering the observed composition of the star. Red giant stars are observed to have lower carbon-12/carbon-13 and carbon-12/nitrogen-14 ratios than main sequence stars, which is considered convincing evidence for the operation of the CNO cycle.
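The drift toward nitrogen-14 can be seen in a toy steady-state model: in a closed catalytic loop, the flow through every link is equal at equilibrium, so each species' abundance is inversely proportional to the rate at which it is destroyed, and the species with the slowest destruction reaction (¹⁴N, via its slow proton capture) accumulates. The rate constants below are made-up illustrative numbers, chosen only so that ¹⁴N destruction is slowest and the ¹²C/¹³C destruction-rate ratio is 3.5:

```python
# Toy steady-state abundances in a closed catalytic cycle.
# In equilibrium the flow through every link is equal, so the
# abundance of each species is proportional to 1/(its destruction rate).
# Rate constants are ILLUSTRATIVE, not measured values; the only
# physical input is that the 14N + p capture is by far the slowest.
rates = {
    "12C": 1.0,   # destruction rate of 12C (proton capture), arbitrary units
    "13C": 3.5,   # 3.5x faster than 12C destruction -> 12C/13C ratio of 3.5
    "14N": 0.01,  # slowest step: 14N proton capture (the bottleneck)
    "15N": 50.0,  # 15N is destroyed almost immediately
}

total = sum(1.0 / r for r in rates.values())
for species, r in rates.items():
    frac = (1.0 / r) / total
    print(f"{species}: equilibrium fraction = {frac:.3f}")
# 14N dominates, and 12C/13C -> rates['13C']/rates['12C'] = 3.5
```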
| Physical sciences | Stellar astronomy | Astronomy |
6085 | https://en.wikipedia.org/wiki/Cauchy%20sequence | Cauchy sequence | In mathematics, a Cauchy sequence is a sequence whose elements become arbitrarily close to each other as the sequence progresses. More precisely, given any small positive distance, all excluding a finite number of elements of the sequence are less than that given distance from each other. Cauchy sequences are named after Augustin-Louis Cauchy; they may occasionally be known as fundamental sequences.
It is not sufficient for each term to become arbitrarily close to the preceding term. For instance, in the sequence of square roots of natural numbers,
a_n = √n,
the consecutive terms become arbitrarily close to each other – their differences
a_{n+1} − a_n = √(n+1) − √n = 1/(√(n+1) + √n)
tend to zero as the index n grows. However, with growing values of n, the terms a_n become arbitrarily large. So, for any index n and distance d, there exists an index m big enough such that a_m − a_n > d. As a result, no matter how far one goes, the remaining terms of the sequence never get close to each other; hence the sequence is not Cauchy.
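A short numerical sketch of this distinction: consecutive differences of √n shrink, yet widely separated terms stay far apart (the particular index values below are arbitrary choices):

```python
import math

a = lambda n: math.sqrt(n)  # the sequence a_n = sqrt(n)

# Consecutive differences tend to zero...
for n in (10, 1000, 100000):
    print(f"a({n + 1}) - a({n}) = {a(n + 1) - a(n):.6f}")

# ...but the sequence is not Cauchy: terms far apart stay far apart,
# since sqrt(4n) - sqrt(n) = sqrt(n), which grows without bound.
for n in (10, 1000, 100000):
    m = 4 * n
    print(f"a({m}) - a({n}) = {a(m) - a(n):.3f}")
```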
The utility of Cauchy sequences lies in the fact that in a complete metric space (one where all such sequences are known to converge to a limit), the criterion for convergence depends only on the terms of the sequence itself, as opposed to the definition of convergence, which uses the limit value as well as the terms. This is often exploited in algorithms, both theoretical and applied, where an iterative process can be shown relatively easily to produce a Cauchy sequence, consisting of the iterates, thus fulfilling a logical condition, such as termination.
Generalizations of Cauchy sequences in more abstract uniform spaces exist in the form of Cauchy filters and Cauchy nets.
In real numbers
A sequence
x₁, x₂, x₃, ...
of real numbers is called a Cauchy sequence if for every positive real number ε there is a positive integer N such that for all natural numbers m, n > N,
|x_m − x_n| < ε,
where the vertical bars denote the absolute value. In a similar way one can define Cauchy sequences of rational or complex numbers. Cauchy formulated such a condition by requiring x_m − x_n to be infinitesimal for every pair of infinite m, n.
For any real number r, the sequence of truncated decimal expansions of r forms a Cauchy sequence. For example, when r = π, this sequence is (3, 3.1, 3.14, 3.141, ...). The mth and nth terms differ by at most 10^(1−m) when m < n, and as m grows this becomes smaller than any fixed positive number ε.
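The 10^(1−m) bound is easy to check numerically; here is a minimal sketch using truncations of π (the choice of constant and of the indices is arbitrary):

```python
import math

def x(m):
    """m-th term: pi truncated to m - 1 decimal places, so x(1) = 3."""
    factor = 10 ** (m - 1)
    return math.floor(math.pi * factor) / factor

print([x(m) for m in range(1, 8)])  # 3, 3.1, 3.14, 3.141, ...

# |x_m - x_n| <= 10^(1-m) for m < n: later terms only refine later digits.
m, n = 4, 8
print(abs(x(n) - x(m)), "<=", 10.0 ** (1 - m))
```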
Modulus of Cauchy convergence
If (x₁, x₂, x₃, ...) is a sequence in the set X, then a modulus of Cauchy convergence for the sequence is a function α from the set of natural numbers to itself, such that for all natural numbers k and all natural numbers m, n > α(k), |x_m − x_n| < 1/k.
Any sequence with a modulus of Cauchy convergence is a Cauchy sequence. The existence of a modulus for a Cauchy sequence follows from the well-ordering property of the natural numbers (let α(k) be the smallest possible N in the definition of Cauchy sequence, taking ε to be 1/k). The existence of a modulus also follows from the principle of countable choice. Regular Cauchy sequences are sequences with a given modulus of Cauchy convergence (usually α(k) = k or α(k) = 2^k). Any Cauchy sequence with a modulus of Cauchy convergence is equivalent to a regular Cauchy sequence; this can be proven without using any form of the axiom of choice.
Moduli of Cauchy convergence are used by constructive mathematicians who do not wish to use any form of choice. Using a modulus of Cauchy convergence can simplify both definitions and theorems in constructive analysis. Regular Cauchy sequences were used by Errett Bishop and by Douglas Bridges in constructive mathematics textbooks.
In a metric space
Since the definition of a Cauchy sequence only involves metric concepts, it is straightforward to generalize it to any metric space X.
To do so, the absolute value |x_m − x_n| is replaced by the distance d(x_m, x_n) (where d denotes a metric) between x_m and x_n.
Formally, given a metric space (X, d), a sequence of elements of X
x₁, x₂, x₃, ...
is Cauchy if for every positive real number ε there is a positive integer N such that for all positive integers m, n > N, the distance d(x_m, x_n) < ε.
Roughly speaking, the terms of the sequence are getting closer and closer together in a way that suggests that the sequence ought to have a limit in X.
Nonetheless, such a limit does not always exist within X: the property of a space that every Cauchy sequence converges in the space is called completeness, and is detailed below.
Completeness
A metric space (X, d) in which every Cauchy sequence converges to an element of X is called complete.
Examples
The real numbers are complete under the metric induced by the usual absolute value, and one of the standard constructions of the real numbers involves Cauchy sequences of rational numbers. In this construction, each equivalence class of Cauchy sequences of rational numbers with a certain tail behavior—that is, each class of sequences that get arbitrarily close to one another— is a real number.
A rather different type of example is afforded by a metric space X which has the discrete metric (where any two distinct points are at distance 1 from each other). Any Cauchy sequence of elements of X must be constant beyond some fixed point, and converges to the eventually repeating term.
Non-example: rational numbers
The rational numbers are not complete (for the usual distance):
There are sequences of rationals that converge (in ℝ) to irrational numbers; these are Cauchy sequences having no limit in ℚ. In fact, if a real number x is irrational, then the sequence (x_n), whose n-th term is the truncation to n decimal places of the decimal expansion of x, gives a Cauchy sequence of rational numbers with irrational limit x. Irrational numbers certainly exist in ℝ, for example:
The sequence defined by x₁ = 1, x_{n+1} = (x_n + 2/x_n)/2 consists of rational numbers (1, 3/2, 17/12, ...), which is clear from the definition; however, it converges to the irrational square root of 2 – see the Babylonian method of computing square roots, and the numerical sketch after this list.
The sequence x_n = F_n / F_{n−1} of ratios of consecutive Fibonacci numbers which, if it converges at all, converges to a limit φ satisfying φ² = φ + 1, and no rational number has this property. If one considers this as a sequence of real numbers, however, it converges to the real number φ = (1 + √5)/2, the golden ratio, which is irrational.
The values of the exponential, sine and cosine functions, exp(x), sin(x), cos(x), are known to be irrational for any rational value of x ≠ 0, but each can be defined as the limit of a rational Cauchy sequence, using, for instance, the Maclaurin series.
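As promised above, a minimal sketch of the Babylonian iteration for √2: Python's Fraction type makes the rationality of every term explicit, and the shrinking gaps show the sequence is Cauchy in ℚ even though its limit lies outside ℚ:

```python
from fractions import Fraction

x = Fraction(1)             # x1 = 1, a rational number
terms = [x]
for _ in range(5):
    x = (x + 2 / x) / 2     # Babylonian step: stays rational
    terms.append(x)

for t in terms:
    print(t, "=", float(t))

# Consecutive terms get arbitrarily close (Cauchy in the rationals),
# but the limit, sqrt(2), is irrational -- it lies outside Q.
print("gap:", float(abs(terms[-1] - terms[-2])))
```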
Non-example: open interval
The open interval X = (0, 2) in the set of real numbers with the ordinary distance is not a complete space: there is a sequence x_n = 1/n in it, which is Cauchy (for any arbitrarily small distance bound ε, all terms x_n with index n > 1/ε fit in the interval (0, ε)); however, it does not converge in X – its 'limit', the number 0, does not belong to the space X.
Other properties
Every convergent sequence (with limit s, say) is a Cauchy sequence, since, given any real number ε > 0, beyond some fixed point every term of the sequence is within distance ε/2 of s, so any two terms of the sequence are within distance ε of each other.
In any metric space, a Cauchy sequence (x_n) is bounded (since for some N, all terms of the sequence from the N-th onwards are within distance 1 of each other, and if M is the largest distance between x_N and any term up to the N-th, then no term of the sequence has distance greater than M + 1 from x_N).
In any metric space, a Cauchy sequence which has a convergent subsequence with limit s is itself convergent (with the same limit), since, given any real number r > 0, beyond some fixed point in the original sequence, every term of the subsequence is within distance r/2 of s, and any two terms of the original sequence are within distance r/2 of each other, so every term of the original sequence is within distance r of s.
These last two properties, together with the Bolzano–Weierstrass theorem, yield one standard proof of the completeness of the real numbers, closely related to both the Bolzano–Weierstrass theorem and the Heine–Borel theorem. Every Cauchy sequence of real numbers is bounded, hence by Bolzano–Weierstrass has a convergent subsequence, hence is itself convergent. This proof of the completeness of the real numbers implicitly makes use of the least upper bound axiom. The alternative approach, mentioned above, of the real numbers as the completion of the rational numbers, makes the completeness of the real numbers tautological.
One of the standard illustrations of the advantage of being able to work with Cauchy sequences and make use of completeness is provided by consideration of the summation of an infinite series of real numbers Σ x_n
(or, more generally, of elements of any complete normed linear space, or Banach space). Such a series
is considered to be convergent if and only if the sequence of partial sums (s_m) is convergent, where s_m = x₁ + x₂ + ... + x_m. It is a routine matter to determine whether the sequence of partial sums is Cauchy or not, since for positive integers p > q, s_p − s_q = x_{q+1} + x_{q+2} + ... + x_p.
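A quick numerical illustration of this criterion contrasts Σ 1/n² (whose tails s_p − s_q shrink toward zero) with the harmonic series (whose tails between q and 2q never drop below roughly ln 2); the index values are arbitrary:

```python
def tail(term, q, p):
    """s_p - s_q = sum of term(n) for n from q+1 to p (1-based indices)."""
    return sum(term(n) for n in range(q + 1, p + 1))

for q in (10, 100, 1000):
    p = 2 * q
    print(f"q={q:5d}: 1/n^2 tail = {tail(lambda n: 1 / n**2, q, p):.6f}  "
          f"harmonic tail = {tail(lambda n: 1 / n, q, p):.6f}")

# The 1/n^2 tails shrink toward 0 (partial sums are Cauchy -> convergent);
# the harmonic tails stay near ln(2) ~ 0.693, so that series diverges.
```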
If f : M → N is a uniformly continuous map between the metric spaces M and N and (x_n) is a Cauchy sequence in M, then (f(x_n)) is a Cauchy sequence in N. If (x_n) and (y_n) are two Cauchy sequences in the rational, real or complex numbers, then the sum (x_n + y_n) and the product (x_n y_n) are also Cauchy sequences.
Generalizations
In topological vector spaces
There is also a concept of Cauchy sequence for a topological vector space X: pick a local base B for X about 0; then (x_k) is a Cauchy sequence if for each member V ∈ B, there is some number N such that whenever n, m > N, x_n − x_m
is an element of V. If the topology of X is compatible with a translation-invariant metric d, the two definitions agree.
In topological groups
Since the topological vector space definition of Cauchy sequence requires only that there be a continuous "subtraction" operation, it can just as well be stated in the context of a topological group: a sequence (x_k) in a topological group G is a Cauchy sequence if for every open neighbourhood U of the identity in G there exists some number N such that whenever m, n > N it follows that x_n x_m⁻¹ ∈ U. As above, it is sufficient to check this for the neighbourhoods in any local base of the identity in G.
As in the construction of the completion of a metric space, one can furthermore define the binary relation on Cauchy sequences in G that (x_n) and (y_n) are equivalent if for every open neighbourhood U of the identity in G there exists some number N such that whenever m, n > N it follows that x_n y_m⁻¹ ∈ U. This relation is an equivalence relation: it is reflexive since the sequences are Cauchy sequences. It is symmetric since y_n x_m⁻¹ = (x_m y_n⁻¹)⁻¹ ∈ U⁻¹, which by continuity of the inverse is another open neighbourhood of the identity. It is transitive since x_n z_l⁻¹ = x_n y_m⁻¹ · y_m z_l⁻¹ ∈ U′U″, where U′ and U″ are open neighbourhoods of the identity such that U′U″ ⊆ U; such pairs exist by the continuity of the group operation.
In groups
There is also a concept of Cauchy sequence in a group G:
Let H = (H_r) be a decreasing sequence of normal subgroups of G of finite index.
Then a sequence (x_n) in G is said to be Cauchy (with respect to H) if and only if for any r there is N such that for all m, n > N, x_n x_m⁻¹ ∈ H_r.
Technically, this is the same thing as a topological group Cauchy sequence for a particular choice of topology on G, namely that for which H is a local base.
The set C of such Cauchy sequences forms a group (for the componentwise product), and the set C₀ of null sequences (sequences (x_n) such that for every r, x_n ∈ H_r for all sufficiently large n) is a normal subgroup of C. The factor group C/C₀ is called the completion of G with respect to H.
One can then show that this completion is isomorphic to the inverse limit of the sequence (G/H_r).
An example of this construction familiar in number theory and algebraic geometry is the construction of the p-adic completion of the integers with respect to a prime p. In this case, G is the integers under addition, and H_r is the additive subgroup consisting of integer multiples of p^r.
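As an illustration, a square root of 2 exists in the 7-adic completion even though none exists in the integers. The sketch below lifts a solution of x² ≡ 2 step by step (a simple form of Hensel lifting, named here as an assumption about method, not about this article's sources), producing a sequence whose terms eventually agree modulo every power of 7 – exactly the group-theoretic Cauchy condition with H_r = 7^r ℤ:

```python
# Build a 7-adic square root of 2 as a Cauchy sequence of integers:
# x_r satisfies x_r^2 = 2 (mod 7^r) and x_{r+1} = x_r (mod 7^r),
# so the sequence is Cauchy with respect to H_r = 7^r * Z.
p = 7
x = 3                      # 3^2 = 9 = 2 (mod 7), the starting solution
terms = [x]
for r in range(1, 6):
    mod = p ** (r + 1)
    # Lift: among the p candidates x + k*p^r, pick the one that works
    # (one always exists here since 2x is invertible mod 7).
    x = next(c for c in range(x, mod, p ** r) if (c * c - 2) % mod == 0)
    terms.append(x)

print(terms)               # successive approximations to sqrt(2) in Z_7
for r, t in enumerate(terms, start=1):
    assert (t * t - 2) % (p ** r) == 0
```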
If H is a cofinal sequence (that is, any normal subgroup of finite index contains some H_r), then this completion is canonical in the sense that it is isomorphic to the inverse limit of G/H over all normal subgroups H of finite index. For further details, see Ch. I.10 in Lang's "Algebra".
In a hyperreal continuum
A real sequence (x_n) has a natural hyperreal extension, defined for hypernatural values H of the index n in addition to the usual natural n. The sequence is Cauchy if and only if for every infinite H and K, the values x_H and x_K are infinitely close, or adequal; that is,
st(x_H − x_K) = 0,
where "st" is the standard part function.
Cauchy completion of categories
A notion of Cauchy completion of a category has also been introduced. Applied to ℚ (the category whose objects are rational numbers, with a morphism from x to y if and only if x ≤ y), this Cauchy completion yields ℝ (again interpreted as a category using its natural ordering).
| Mathematics | Sequences and series | null |
6088 | https://en.wikipedia.org/wiki/Common%20Era | Common Era | Common Era (CE) and Before the Common Era (BCE) are year notations for the Gregorian calendar (and its predecessor, the Julian calendar), the world's most widely used calendar era. Common Era and Before the Common Era are alternatives to the original Anno Domini (AD) and Before Christ (BC) notations used for the same calendar era. The two notation systems are numerically equivalent: a year such as 2025 is written "2025 CE" in one notation and "AD 2025" in the other, and "400 BCE" and "400 BC" are the same year.
The expression can be traced back to 1615, when it first appears in a book by Johannes Kepler as the Latin annus aerae nostrae vulgaris ("year of our common era"), and to 1635 in English as "Vulgar Era". The term "Common Era" can be found in English as early as 1708, and became more widely used in the mid-19th century by Jewish religious scholars. Since the late 20th century, BCE and CE have become popular in academic and scientific publications on the grounds that BCE and CE are religiously neutral terms. They have been promoted as more sensitive to non-Christians by not referring to Jesus, the central figure of Christianity, especially via the religious terms "Christ" and Dominus ("Lord") used by the other abbreviations. Nevertheless, its epoch remains the same as that used for the Anno Domini era.
History
Origins
The idea of numbering years beginning from the date that he believed to be the date of birth of Jesus was conceived around the year 525 by the Christian monk Dionysius Exiguus. He did this to replace the then-dominant Era of Martyrs system, because he did not wish to continue the memory of a tyrant who persecuted Christians. He numbered years from an initial reference date ("epoch"), an event he referred to as the Incarnation of Jesus. Dionysius labeled the column of the table in which he introduced the new era as "Anni Domini Nostri Jesu Christi" (the years of our Lord Jesus Christ).
This way of numbering years became more widespread in Europe with its use by Bede in England in 731. Bede also introduced the practice of dating years before what he supposed was the year of birth of Jesus, without a year zero. In 1422, Portugal became the last Western European country to switch to the system begun by Dionysius.
Vulgar Era
The term "Common Era" is traced back in English to its appearance as "Vulgar Era" to distinguish years of the Anno Domini era, which was in popular use, from dates of the regnal year (the year of the reign of a sovereign) typically used in national law.
(The word 'vulgar' originally meant 'of the ordinary people', with no derogatory associations.)
The first use of the Latin term may be that in a 1615 book by Johannes Kepler. Kepler uses it again in a 1616 table of ephemerides, and again in 1617. A 1635 English edition of that book has a title page in English that may be the earliest-found use of "Vulgar Era" in English. A 1701 book edited by John Le Clerc includes the phrase "Before Christ according to the Vulgar Æra, 6".
The Merriam Webster Dictionary gives 1716 as the date of first use of the term "vulgar era" (which it defines as Christian era).
The first published use of "Christian Era" may be the Latin phrase on the title page of a 1584 theology book. In 1649, the Latin phrase appeared in the title of an English almanac. A 1652 ephemeris may be the first instance found so far of the English use of "Christian Era".
The English phrase "Common Era" appears at least as early as 1708,
and in a 1715 book on astronomy it is used interchangeably with "Christian Era" and "Vulgar Era". A 1759 history book uses common æra in a generic sense, to refer to "the common era of the Jews". The first use of the phrase "before the common era" may be that in a 1770 work that also uses common era and vulgar era as synonyms, in a translation of a book originally written in German. The 1797 edition of the Encyclopædia Britannica uses the terms vulgar era and common era synonymously.
In 1835, in his book Living Oracles, Alexander Campbell, wrote: "The vulgar Era, or Anno Domini; the fourth year of Jesus Christ, the first of which was but eight days",
and also refers to the common era as a synonym for vulgar era with "the fact that our Lord was born on the 4th year before the vulgar era, called Anno Domini, thus making (for example) the 42d year from his birth to correspond with the 38th of the common era". The Catholic Encyclopedia (1909) in at least one article reports all three terms (Christian, Vulgar, Common Era) being commonly understood by the early 20th century.
The phrase "common era", in lower case, also appeared in the 19th century in a "generic" sense, not necessarily to refer to the Christian Era, but to any system of dates in common use throughout a civilization. Thus, "the common era of the Jews", "the common era of the Mahometans", "common era of the world", "the common era of the foundation of Rome".
When it did refer to the Christian Era, it was sometimes qualified, e.g., "common era of the Incarnation", "common era of the Nativity", or "common era of the birth of Christ".
An adapted translation of Common Era into Latin as Era Vulgaris was adopted in the 20th century by some followers of Aleister Crowley, and thus the abbreviation "e.v." or "EV" may sometimes be seen as a replacement for AD.
History of the use of the CE/BCE abbreviation
Although Jews have their own Hebrew calendar, they often use the Gregorian calendar without the AD prefix. As early as 1825, the abbreviation VE (for Vulgar Era) was in use among Jews to denote years in the Western calendar. Common Era notation has also been in use for Hebrew lessons for more than a century. Jews have also used the term Current Era.
Contemporary usage
Some academics in the fields of theology, education, archaeology and history have adopted CE and BCE notation despite some disagreement. A study conducted in 2014 found that the BCE/CE notation is not growing at the expense of BC and AD notation in the scholarly literature, and that both notations are used in a relatively stable fashion.
Australia
In 2011, media reports suggested that the BC/AD notation in Australian school textbooks would be replaced by BCE/CE notation. The change drew opposition from some politicians and church leaders. Weeks after the story broke, the Australian Curriculum, Assessment and Reporting Authority denied the rumours and stated that the BC/AD notation would remain, with CE and BCE as an optional suggested learning activity.
Canada
In 2013, the Canadian Museum of Civilization (now the Canadian Museum of History) in Gatineau (opposite Ottawa), which had previously switched to BCE/CE, decided to change back to BC/AD in material intended for the public while retaining BCE/CE in academic content.
Nepal
The notation is in particularly common use in Nepal in order to disambiguate dates from the local calendar, Bikram or Vikram Sambat. Disambiguation is needed because the era of the local calendar is quite close to the Common Era.
United Kingdom
In 2002, an advisory panel for the religious education syllabus for England and Wales recommended introducing BCE/CE dates to schools, and by 2018 some local education authorities were using them.
In 2018, the National Trust said it would continue to use BC/AD as its house style. English Heritage explains its era policy thus: "It might seem strange to use a Christian calendar system when referring to British prehistory, but the BC/AD labels are widely used and understood." Some parts of the BBC use BCE/CE, but some presenters have said they will not. As of October 2019, the BBC News style guide has entries for AD and BC, but not for CE or BCE. The style guide for The Guardian says, under the entry for CE/BCE: "some people prefer CE (common era, current era, or Christian era) and BCE (before common era, etc.) to AD and BC, which, however, remain our style".
United States
In the United States, the use of the BCE/CE notation in textbooks was reported in 2005 to be growing. Some publications have transitioned to using it exclusively. For example, the 2007 World Almanac was the first edition to switch to BCE/CE, ending a period of 138 years in which the traditional BC/AD dating notation was used. BCE/CE is used by the College Board in its history tests, and by the Norton Anthology of English Literature. Others have taken a different approach. The US-based History Channel uses BCE/CE notation in articles on non-Christian religious topics such as Jerusalem and Judaism. The 2006 style guide for the Episcopal Diocese of Maryland's Church News says that BCE and CE should be used.
In June 2006, in the United States, the Kentucky State School Board reversed its decision to use BCE and CE in the state's new Program of Studies, leaving education of students about these concepts a matter of local discretion.
Rationales
Support
The use of CE in Jewish scholarship was historically motivated by the desire to avoid the implicit "Our Lord" in the abbreviation AD. Although other aspects of dating systems are based in Christian origins, AD is a direct reference to Jesus as Lord. Proponents of the Common Era notation assert that the use of BCE/CE shows sensitivity to those who use the same year numbering system as the one that originated with and is currently used by Christians, but who are not themselves Christian. Former United Nations Secretary-General Kofi Annan has also argued in favor of the notation.
Adena K. Berkowitz, in her application to argue before the United States Supreme Court, opted to use BCE and CE because, "Given the multicultural society that we live in, the traditional Jewish designations, B.C.E. and C.E., cast a wider net of inclusion." In the World History Encyclopedia, Joshua J. Mark wrote "Non-Christian scholars, especially, embraced [CE and BCE] because they could now communicate more easily with the Christian community. Jewish, Islamic, Hindu and Buddhist scholars could retain their [own] calendar but refer to events using the Gregorian Calendar as BCE and CE without compromising their own beliefs about the divinity of Jesus of Nazareth." In History Today, Michael Ostling wrote: "BC/AD Dating: In the year of whose Lord? The continuing use of AD and BC is not only factually wrong but also offensive to many who are not Christians."
Opposition
Critics note that there is no difference in the epoch of the two systems, which was chosen to be close to the date of birth of Jesus. Since the year numbers are the same, BCE and CE dates should be equally offensive to other religions as BC and AD. Roman Catholic priest and writer on interfaith issues Raimon Panikkar argued that the BCE/CE usage is the less inclusive option, since it still uses the Christian calendar's numbering and forces it on other nations. In 1993, the English-language expert Kenneth G. Wilson speculated in his style guide about a slippery-slope scenario: "if we do end by casting aside the AD/BC convention, almost certainly some will argue that we ought to cast aside as well the conventional numbering system [that is, the method of numbering years] itself, given its Christian basis."
Some Christians are offended by the removal of the reference to Jesus, including the Southern Baptist Convention.
Conventions in style guides
The abbreviation BCE, just as with BC, always follows the year number. Unlike AD, which still often precedes the year number, CE always follows the year number (if context requires that it be written at all). Thus, a given year is written identically in both notations (or, if further clarity is needed, with CE following or AD preceding the year number), and the year that Socrates died is represented as 399 BCE (the same year that is represented by 399 BC in the BC/AD notation). The abbreviations are sometimes written with small capital letters, or with periods (e.g., "B.C.E." or "C.E."). The US-based Society of Biblical Literature style guide for academic texts on religion prefers BCE/CE to BC/AD.
Similar conventions in other languages
In Germany, Jews in Berlin seem to have already been using words translating to "(before the) common era" in the 18th century, while others like Moses Mendelssohn opposed this usage as it would hinder the integration of Jews into German society. The formulation seems to have persisted among German Jews in the 19th century in forms meaning "before the common chronology". In 1938, the use of this convention in Nazi Germany was also prescribed by the National Socialist Teachers League. However, it was soon discovered that many German Jews had been using the convention ever since the 18th century, and Time magazine found it ironic to see "Aryans following Jewish example nearly 200 years later".
In Spanish, common forms used for "BC" are a. C. and a. de C. (for antes de Cristo, "before Christ"), with variations in punctuation and sometimes the use of J.C. (Jesucristo) instead of C. The Real Academia Española also acknowledges the use of a. n. e. (antes de nuestra era) and d. n. e. (después de nuestra era). In scholarly writing, AEC is the equivalent of the English "BCE", antes de la era común or "Before the Common Era".
In Welsh, OC can be expanded to equivalents of both AD (Oed Crist) and CE (Oes Gyffredin); for dates before the Common Era, CC (traditionally, Cyn Crist) is used exclusively, as the equivalent of "before the Common Era" would abbreviate to a mild obscenity.
In Russian, since the October Revolution (1917), до н. э. (до нашей эры, lit. "before our era") and н. э. (нашей эры, lit. "of our era") are used almost universally. Within Christian churches, до/после Рождества Христова (i.e. before/after the birth of Christ, equivalent to BC/AD) remains in use.
In Polish, "p.n.e." (, lit. before our era) and "n.e." (, lit. of our era) are commonly used in historical and scientific literature. (before Christ) and (after Christ) see sporadic usage, mostly in religious publications.
In China, upon the foundation of the Republic of China, the government in Nanking adopted the Republic of China calendar, with 1912 designated as year 1, but used the Western calendar for international purposes. The translated term was 西元 (xīyuán, "Western Era"), which is still used in Taiwan in formal documents. In 1949, the People's Republic of China adopted 公元 (gōngyuán, "Common Era") for both internal and external affairs in mainland China. This notation was extended to Hong Kong in 1997 and Macau in 1999 (de facto extended in 1966) through Annex III of the Hong Kong Basic Law and Macau Basic Law, thus eliminating the ROC calendar in these areas. BCE is translated into Chinese as 公元前 (gōngyuánqián, "Before the Common Era").
In Czech, the "n. l." ( which translates as of our year count) and "př. n. l." or "před n. l." ( meaning before our year count) is used, always after the year number. The direct translation of AD (, abbreviated as L. P.) or BC (, abbreviated as př. Kr.) is seen as archaic.
In Croatian, the common forms used for BC and AD are pr. Kr. (prije Krista, "before Christ") and p. Kr. (poslije Krista, "after Christ"). The abbreviations pr. n. e. (prije nove ere, "before the new era") and n. e. (nove ere, "(of the) new era") have also recently been introduced.
In Danish, "f.v.t." (, before our time reckoning) and "e.v.t." (, after our time reckoning) are used as BCE/CE are in English. Also commonly used are "f.Kr." (, before Christ) and "e.Kr." (, after Christ), which are both placed after the year number in contrast with BC/AD in English.
In Macedonian, the terms "п.н.е." (пред нашата ера "before our era") and "н.е." (наша ера "our era") are used in every aspect.
In Estonian, "e.m.a." (, before our time reckoning) and "m.a.j." (, according to our time reckoning) are used as BCE and CE, respectively. Also in use are terms "eKr" (, before Christ) and "pKr" (, after Christ). In all cases, the abbreviation is written after the year number.
In Finnish, "eaa." (, before time reckoning) and "jaa." (, after the start of time reckoning) are used as BCE and CE, respectively. Also (decreasingly) in use are terms "eKr", (, before Christ) and "jKr". (, after Christ). In all cases, the abbreviation is written after the year number.
| Technology | Calendars | null |
6099 | https://en.wikipedia.org/wiki/Carboxylic%20acid | Carboxylic acid | In organic chemistry, a carboxylic acid is an organic acid that contains a carboxyl group (−COOH) attached to an R-group. The general formula of a carboxylic acid is often written as R−COOH or R−CO2H, sometimes as R−C(O)OH, with R referring to an organyl group (e.g., alkyl, alkenyl, aryl), or hydrogen, or other groups. Carboxylic acids occur widely. Important examples include the amino acids and fatty acids. Deprotonation of a carboxylic acid gives a carboxylate anion.
Examples and nomenclature
Carboxylic acids are commonly identified by their trivial names. They often have the suffix -ic acid. IUPAC-recommended names also exist; in this system, carboxylic acids have an -oic acid suffix. For example, butyric acid () is butanoic acid by IUPAC guidelines. For nomenclature of complex molecules containing a carboxylic acid, the carboxyl can be considered position one of the parent chain even if there are other substituents, such as 3-chloropropanoic acid. Alternately, it can be named as a "carboxy" or "carboxylic acid" substituent on another parent structure, such as 2-carboxyfuran.
The carboxylate anion (R−COO⁻ or R−CO2⁻) of a carboxylic acid is usually named with the suffix -ate, in keeping with the general pattern of -ic acid and -ate for a conjugate acid and its conjugate base, respectively. For example, the conjugate base of acetic acid is acetate.
Carbonic acid, which occurs in bicarbonate buffer systems in nature, is not generally classed as one of the carboxylic acids, even though it has a moiety that looks like a COOH group.
Physical properties
Solubility
Carboxylic acids are polar. Because they are both hydrogen-bond acceptors (the carbonyl −C=O) and hydrogen-bond donors (the hydroxyl −OH), they also participate in hydrogen bonding. Together, the hydroxyl and carbonyl group form the functional group carboxyl. Carboxylic acids usually exist as dimers in nonpolar media due to their tendency to "self-associate". Smaller carboxylic acids (1 to 5 carbons) are soluble in water, whereas bigger carboxylic acids have limited solubility due to the increasing hydrophobic nature of the alkyl chain. These longer-chain acids tend to be soluble in less-polar solvents such as ethers and alcohols. Aqueous sodium hydroxide and carboxylic acids, even hydrophobic ones, react to yield water-soluble sodium salts. For example, enanthic acid has a low solubility in water (0.2 g/L), but its sodium salt is very soluble in water.
Boiling points
Carboxylic acids tend to have higher boiling points than water, because of their greater surface areas and their tendency to form stabilized dimers through hydrogen bonds. For boiling to occur, either the dimer bonds must be broken or the entire dimer arrangement must be vaporized, increasing the enthalpy of vaporization requirements significantly.
Acidity
Carboxylic acids are Brønsted–Lowry acids because they are proton (H+) donors. They are the most common type of organic acid.
Carboxylic acids are typically weak acids, meaning that they only partially dissociate into cations and anions in neutral aqueous solution. For example, at room temperature in a 1-molar solution of acetic acid, only about 0.4% of the acid molecules are dissociated (the acid dissociation constant is about 1.8 × 10⁻⁵). Electron-withdrawing substituents, such as the −CF3 group, give stronger acids (the pKa of acetic acid is 4.76 whereas trifluoroacetic acid, with a trifluoromethyl substituent, has a pKa of 0.23). Electron-donating substituents give weaker acids (the pKa of formic acid is 3.75 whereas acetic acid, with a methyl substituent, has a pKa of 4.76).
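That dissociation fraction follows from the equilibrium expression Ka = x²/(C − x) for a monoprotic weak acid, where x is the concentration of dissociated acid. A minimal sketch; the Ka value of 1.8 × 10⁻⁵ for acetic acid is a standard textbook figure taken as an assumption:

```python
import math

def dissociation_fraction(Ka, C):
    """Fraction of a weak monoprotic acid HA dissociated at concentration C.
    Solves the equilibrium Ka = x^2 / (C - x) exactly for x = [H+]."""
    x = (-Ka + math.sqrt(Ka**2 + 4 * Ka * C)) / 2  # positive root of quadratic
    return x / C

Ka_acetic = 1.8e-5  # textbook value for acetic acid (pKa ~ 4.74)
for C in (1.0, 0.1, 0.01):
    f = dissociation_fraction(Ka_acetic, C)
    print(f"C = {C:5.2f} M: {100 * f:.2f}% dissociated, "
          f"pH = {-math.log10(f * C):.2f}")
```

At 1 M this gives roughly 0.4% dissociation, and the fraction rises as the solution is diluted, as expected for a weak acid.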
Deprotonation of carboxylic acids gives carboxylate anions; these are resonance stabilized, because the negative charge is delocalized over the two oxygen atoms, increasing the stability of the anion. Each of the carbon–oxygen bonds in the carboxylate anion has partial double-bond character. The carbonyl carbon's partial positive charge is also weakened by the −½ negative charges on the two oxygen atoms.
Odour
Carboxylic acids often have strong sour odours. Esters of carboxylic acids tend to have fruity, pleasant odours, and many are used in perfume.
Characterization
Carboxylic acids are readily identified as such by infrared spectroscopy. They exhibit a sharp band associated with vibration of the C=O carbonyl bond (νC=O) between 1680 and 1725 cm⁻¹. A characteristic νO–H band appears as a broad peak in the 2500 to 3000 cm⁻¹ region. By ¹H NMR spectrometry, the hydroxyl hydrogen appears in the 10–13 ppm region, although it is often either broadened or not observed owing to exchange with traces of water.
Occurrence and applications
Many carboxylic acids are produced industrially on a large scale. They are also frequently found in nature. Esters of fatty acids are the main components of lipids and polyamides of aminocarboxylic acids are the main components of proteins.
Carboxylic acids are used in the production of polymers, pharmaceuticals, solvents, and food additives. Industrially important carboxylic acids include acetic acid (component of vinegar, precursor to solvents and coatings), acrylic and methacrylic acids (precursors to polymers, adhesives), adipic acid (polymers), citric acid (a flavor and preservative in food and beverages), ethylenediaminetetraacetic acid (chelating agent), fatty acids (coatings), maleic acid (polymers), propionic acid (food preservative), terephthalic acid (polymers). Important carboxylate salts are soaps.
Synthesis
Industrial routes
In general, industrial routes to carboxylic acids differ from those used on a smaller scale because they require specialized equipment.
Carbonylation of alcohols as illustrated by the Cativa process for the production of acetic acid. Formic acid is prepared by a different carbonylation pathway, also starting from methanol.
Oxidation of aldehydes with air using cobalt and manganese catalysts. The required aldehydes are readily obtained from alkenes by hydroformylation.
Oxidation of hydrocarbons using air. For simple alkanes, this method is inexpensive but not selective enough to be useful. Allylic and benzylic compounds undergo more selective oxidations. Alkyl groups on a benzene ring are oxidized to the carboxylic acid, regardless of its chain length. Benzoic acid from toluene, terephthalic acid from para-xylene, and phthalic acid from ortho-xylene are illustrative large-scale conversions. Acrylic acid is generated from propene.
Oxidation of ethene using silicotungstic acid catalyst.
Base-catalyzed dehydrogenation of alcohols.
Carbonylation coupled to the addition of water. This method is effective and versatile for alkenes that generate secondary and tertiary carbocations, e.g. isobutylene to pivalic acid. In the Koch reaction, the addition of water and carbon monoxide to alkenes or alkynes is catalyzed by strong acids. Hydrocarboxylations involve the simultaneous addition of water and CO. Such reactions are sometimes called "Reppe chemistry."
Hydrolysis of triglycerides obtained from plant or animal oils. These methods of synthesizing some long-chain carboxylic acids are related to soap making.
Fermentation of ethanol. This method is used in the production of vinegar.
The Kolbe–Schmitt reaction provides a route to salicylic acid, precursor to aspirin.
Laboratory methods
Preparative methods for small scale reactions for research or for production of fine chemicals often employ expensive consumable reagents.
Oxidation of primary alcohols or aldehydes with strong oxidants such as potassium dichromate, Jones reagent, potassium permanganate, or sodium chlorite. The method is more suitable for laboratory conditions than the industrial use of air, which is "greener" because it yields less inorganic side products such as chromium or manganese oxides.
Oxidative cleavage of olefins by ozonolysis, potassium permanganate, or potassium dichromate.
Hydrolysis of nitriles, esters, or amides, usually with acid- or base-catalysis.
Carbonation of a Grignard reagent or an organolithium reagent:
RLi + CO2 → RCO2Li
RCO2Li + HCl → RCO2H + LiCl
Halogenation followed by hydrolysis of methyl ketones in the haloform reaction
Base-catalyzed cleavage of non-enolizable ketones, especially aryl ketones:
RC(O)Ar + H2O → RCO2H + ArH
Less-common reactions
Many reactions produce carboxylic acids but are used only in specific cases or are mainly of academic interest.
Disproportionation of an aldehyde in the Cannizzaro reaction
Rearrangement of diketones in the benzilic acid rearrangement
Involving the generation of benzoic acids are the von Richter reaction from nitrobenzenes and the Kolbe–Schmitt reaction from phenols.
Reactions
Acid-base reactions
Carboxylic acids react with bases to form carboxylate salts, in which the hydrogen of the hydroxyl (–OH) group is replaced with a metal cation. For example, acetic acid found in vinegar reacts with sodium bicarbonate (baking soda) to form sodium acetate, carbon dioxide, and water:
CH3COOH + NaHCO3 → CH3COONa + CO2 + H2O
Conversion to esters, amides, anhydrides
Widely practiced reactions convert carboxylic acids into esters, amides, carboxylate salts, acid chlorides, and alcohols.
Their conversion to esters is widely used, e.g. in the production of polyesters. Likewise, carboxylic acids are converted into amides, but this conversion typically does not occur by direct reaction of the carboxylic acid and the amine. Instead esters are typical precursors to amides. The conversion of amino acids into peptides is a significant biochemical process that requires ATP.
Converting a carboxylic acid to an amide is possible, but not straightforward. Instead of acting as a nucleophile, an amine will react as a base in the presence of a carboxylic acid to give the ammonium carboxylate salt. Heating the salt to above 100 °C will drive off water and lead to the formation of the amide. This method of synthesizing amides is industrially important, and has laboratory applications as well. In the presence of a strong acid catalyst, carboxylic acids can condense to form acid anhydrides. The condensation produces water, however, which can hydrolyze the anhydride back to the starting carboxylic acids. Thus, the formation of the anhydride via condensation is an equilibrium process.
Under acid-catalyzed conditions, carboxylic acids will react with alcohols to form esters via the Fischer esterification reaction, which is also an equilibrium process. Alternatively, diazomethane can be used to convert an acid to an ester. While esterification reactions with diazomethane often give quantitative yields, diazomethane is only useful for forming methyl esters.
Reduction
Like esters, most carboxylic acids can be reduced to alcohols by hydrogenation, or using hydride transferring agents such as lithium aluminium hydride. Strong alkyl transferring agents, such as organolithium compounds but not Grignard reagents, will reduce carboxylic acids to ketones along with transfer of the alkyl group.
The Vilsmeier reagent (N,N-dimethyl(chloromethylene)ammonium chloride) is a highly chemoselective agent for carboxylic acid reduction. It selectively activates the carboxylic acid to give the carboxymethyleneammonium salt, which can be reduced by a mild reductant like lithium tris(t-butoxy)aluminum hydride to afford an aldehyde in a one-pot procedure. This procedure is known to tolerate reactive carbonyl functionalities such as ketones as well as moderately reactive ester, olefin, nitrile, and halide moieties.
Conversion to acyl halides
The hydroxyl group on carboxylic acids may be replaced with a chlorine atom using thionyl chloride to give acyl chlorides. In nature, carboxylic acids are converted to thioesters. Thionyl chloride can be used to convert carboxylic acids to their corresponding acyl chlorides. First, carboxylic acid 1 attacks thionyl chloride, and chloride ion leaves. The resulting oxonium ion 2 is activated towards nucleophilic attack and has a good leaving group, setting it apart from a normal carboxylic acid. In the next step, 2 is attacked by chloride ion to give tetrahedral intermediate 3, a chlorosulfite. The tetrahedral intermediate collapses with the loss of sulfur dioxide and chloride ion, giving protonated acyl chloride 4. Chloride ion can remove the proton on the carbonyl group, giving the acyl chloride 5 with a loss of HCl.
Phosphorus(III) chloride (PCl3) and phosphorus(V) chloride (PCl5) will also convert carboxylic acids to acid chlorides, by a similar mechanism. One equivalent of PCl3 can react with three equivalents of acid, producing one equivalent of H3PO3 (phosphorous acid) in addition to the desired acid chloride. PCl5 reacts with carboxylic acids in a 1:1 ratio, producing phosphorus(V) oxychloride (POCl3) and hydrogen chloride (HCl) as byproducts.
Reactions with carbanion equivalents
Carboxylic acids react with Grignard reagents and organolithiums to form ketones. The first equivalent of nucleophile acts as a base and deprotonates the acid. A second equivalent will attack the carbonyl group to create a geminal alkoxide dianion, which is protonated upon workup to give the hydrate of a ketone. Because most ketone hydrates are unstable relative to their corresponding ketones, the equilibrium between the two is shifted heavily in favor of the ketone. For example, the equilibrium constant for the formation of acetone hydrate from acetone is only 0.002. The carboxyl group is among the most acidic functional groups found in common organic compounds.
Specialized reactions
As with all carbonyl compounds, the protons on the α-carbon are labile due to keto–enol tautomerization. Thus, the α-carbon is easily halogenated in the Hell–Volhard–Zelinsky halogenation.
The Schmidt reaction converts carboxylic acids to amines.
Carboxylic acids are decarboxylated in the Hunsdiecker reaction.
The Dakin–West reaction converts an amino acid to the corresponding amino ketone.
In the Barbier–Wieland degradation, a carboxylic acid on an aliphatic chain having a simple methylene bridge at the alpha position can have the chain shortened by one carbon. The inverse procedure is the Arndt–Eistert synthesis, where an acid is converted into acyl halide, which is then reacted with diazomethane to give one additional methylene in the aliphatic chain.
Many acids undergo oxidative decarboxylation. Enzymes that catalyze these reactions are known as carboxylases (EC 6.4.1) and decarboxylases (EC 4.1.1).
Carboxylic acids are reduced to aldehydes via the ester and DIBAL, via the acid chloride in the Rosenmund reduction and via the thioester in the Fukuyama reduction.
In ketonic decarboxylation carboxylic acids are converted to ketones.
Organolithium reagents (>2 equiv) react with carboxylic acids to give a dilithium 1,1-diolate, a stable tetrahedral intermediate which decomposes to give a ketone upon acidic workup.
The Kolbe electrolysis is an electrolytic, decarboxylative dimerization reaction. It gets rid of the carboxyl groups of two acid molecules, and joins the remaining fragments together.
Carboxyl radical
The carboxyl radical, •COOH, only exists briefly. The acid dissociation constant of •COOH has been measured using electron paramagnetic resonance spectroscopy. The radical tends to dimerise to form oxalic acid.
| Physical sciences | Carbon–oxygen bond | null |
6102 | https://en.wikipedia.org/wiki/Cyan | Cyan | Cyan is the color between blue and green on the visible spectrum of light. It is evoked by light with a predominant wavelength between 500 and 520 nm, between the wavelengths of green and blue.
In the subtractive color system, or CMYK color model, which can be overlaid to produce all colors in paint and color printing, cyan is one of the primary colors, along with magenta and yellow. In the additive color system, or RGB color model, used to create all the colors on a computer or television display, cyan is made by mixing equal amounts of green and blue light. Cyan is the complement of red; it can be made by removing red from white, and mixing red light and cyan light at the right intensity will make white light. Cyan is commonly seen in the sky on a bright, sunny day.
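The complement relationship is simple to express numerically: in an 8-bit RGB model (the usual web convention, used here as an assumption about representation), each subtractive CMY channel is 255 minus the corresponding additive RGB channel, and removing red from white leaves cyan. A minimal sketch:

```python
def rgb_to_cmy(r, g, b):
    """Convert 8-bit RGB to CMY: each subtractive channel is 255 - additive."""
    return (255 - r, 255 - g, 255 - b)

def complement(r, g, b):
    """The additive complement: the color that sums with (r, g, b) to white."""
    return (255 - r, 255 - g, 255 - b)

white = (255, 255, 255)
red = (255, 0, 0)
cyan = complement(*red)           # removing red from white leaves cyan
print(cyan)                       # (0, 255, 255), i.e. #00FFFF
print(rgb_to_cmy(*cyan))          # (255, 0, 0): pure cyan ink = full C, no M/Y
print(tuple(a + b for a, b in zip(red, cyan)) == white)  # red + cyan = white
```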
Shades and variations
Different shades of cyan can vary in terms of hue, chroma (also known as saturation, intensity, or colorfulness), or lightness (or value, tone, or brightness), or any combination of these characteristics. Differences in value can also be referred to as tints and shades, with a tint being a cyan mixed with white, and a shade being mixed with black.
Color nomenclature is subjective. Many shades of cyan with a bluish hue are called blue. Similarly, those with a greenish hue are referred to as green. A cyan with a dark shade is commonly known as teal. A teal blue shade leans toward the blue end of the spectrum. Variations of teal with a greener tint are commonly referred to as teal green.
Turquoise, reminiscent of the stone with the same name, is a shade in the green spectrum of cyan hues. Celeste is a lightly tinted cyan that represents the color of a clear sky. Other colors in the cyan color range are electric blue, aquamarine, and others described as blue-green.
History
Cyan boasts a rich and diverse history, holding cultural significance for millennia. In ancient civilizations, turquoise, valued for its aesthetic appeal, served as a highly regarded precious gem. Turquoise comes in a variety of shades from green to blue, but cyan hues are particularly prevalent. Approximately 3,700 years ago, an intricately crafted dragon-shaped treasure made from over 2,000 pieces of turquoise and jade was created. This artifact is widely recognized as the oldest Chinese dragon totem by many Chinese scholars.
Turquoise jewelry also held significant importance among the Aztecs, who often featured this precious gemstone in vibrant frescoes for both symbolic and decorative purposes. The Aztecs revered turquoise, associating its color with the heavens and sacredness. Additionally, ancient Egyptians interpreted cyan hues as representing faith and truth, while Tibetans viewed them as a symbol of infinity.
After earlier uses in various contexts, cyan hues found increased use in diverse cultures due to their appealing aesthetic qualities in religious structures and art pieces. For example, the prominent dome of the Goharshad Mosque in Iran, built in 1418, showcases this trend. Additionally, Jacopo da Pontormo's use of a teal shade for Mary's robe in the 1528 painting Carmignano Visitation demonstrates the allure of these hues. During the 16th century, speakers of the English language began using the term turquoise to describe the cyan color of objects that resembled the color of the stone.
In the 1870s, the French sculptor Frédéric Bartholdi began the construction of what would later become the Statue of Liberty. Over time, exposure to the elements caused the copper structure to develop its distinctive patina, now recognized as iconic cyan. Following this, there was a significant advancement in the use of cyan during the late 19th and early 20th centuries.
Impressionist artists, such as Claude Monet in his renowned Water Lilies, effectively incorporated cyan hues into their works. Deviating from traditional interpretations of local color under neutral lighting conditions, the focus of artists was on accurately depicting perceived color and the influence of light on altering object hues. Specifically, daylight plays a significant role in shifting the perceived color of objects toward cyan hues. In 1917, the color term teal was introduced to describe deeper shades of cyan.
In the late 19th century, while traditional nomenclature of red, yellow, and blue persisted, the printing industry initiated a shift towards utilizing magenta and cyan inks for red and blue hues, respectively. This transition aimed to establish a more versatile color gamut achievable with only three primary colors. In 1949, a document in the printing industry stated: “The four-color set comprises Yellow, Red (magenta), Blue (cyan), Black”. This practice of labeling magenta, yellow, and cyan as red, yellow, and blue persisted until 1961. As the hues evolved, the printing industry maintained the use of the traditional terms red, yellow, and blue. Consequently, pinpointing the exact date of origin for CMYK, in which cyan serves as a primary color, proves challenging.
In August 1991, the HP Deskwriter 500C became the first Deskwriter to offer color printing as an option. It used interchangeable black and color (cyan, magenta, and yellow) inkjet print cartridges. With the inclusion of cyan ink in printers, the term "cyan" has become widely recognized in both home and office settings. According to TUP/Technology User Profile 2020, approximately 70% of online American adults regularly use a home printer.
Etymology and terminology
Its name is derived from the Ancient Greek word kyanos (κύανος), meaning "dark blue enamel, Lapis lazuli". It was formerly known as "cyan blue" or cyan-blue, and its first recorded use as a color name in English was in 1879. Further origins of the color name can be traced back to a dye produced from the cornflower (Centaurea cyanus).
In most languages, 'cyan' is not a basic color term and it phenomenologically appears as a greenish vibrant hue of blue to most English speakers. Other English terms for this "borderline" hue region include blue green, aqua, turquoise, teal, and grue.
On the web and in printing
Web colors cyan and aqua
The web color cyan shown at right is a secondary color in the RGB color model, which uses combinations of red, green and blue light to create all the colors on computer and television displays. In X11 colors, this color is called both cyan and aqua. In the HTML color list, this same color is called aqua.
The web colors are more vivid than the cyan used in the CMYK color system, and the web colors cannot be accurately reproduced on a printed page. To reproduce the web color cyan in inks, it is necessary to add some white ink to the printer's cyan, so when it is reproduced in printing, it is not a primary subtractive color. It is called aqua (a name in use since 1598) because it is a color commonly associated with water, such as the appearance of the water at a tropical beach.
Process cyan
Cyan is also one of the common inks used in four-color printing, along with magenta, yellow, and black; this set of colors is referred to as CMYK. In printing, the cyan ink is sometimes known as printer's cyan, process cyan, or process blue.
While both the additive secondary and the subtractive primary are called cyan, they can be substantially different from one another. Cyan printing ink is typically more saturated than the RGB secondary cyan, depending on what RGB color space and ink are considered. That is, process cyan is usually outside the RGB gamut, and there is no fixed conversion from CMYK primaries to RGB. Different formulations are used for printer's ink, so there can be variations in the printed color that is pure cyan ink. This is because real-world subtractive (unlike additive) color mixing does not consistently produce the same result when mixing apparently identical colors, since the specific frequencies filtered out to produce that color affect how it interacts with other colors. Phthalocyanine blue is one such commonly used pigment. A typical formulation of process cyan is shown in the color box on the right.
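Because there is no fixed conversion, any formula is only an approximation. The sketch below shows the common uncalibrated ("naive") CMYK-to-RGB preview formula, which idealizes the inks and ignores device color profiles:

# Naive CMYK -> RGB preview conversion (idealized inks, no ICC profile).
# Real process inks require device-specific profiles, so this is only an
# illustration of the arithmetic, not an accurate reproduction.
def cmyk_to_rgb(c, m, y, k):
    r = 255 * (1 - c) * (1 - k)
    g = 255 * (1 - m) * (1 - k)
    b = 255 * (1 - y) * (1 - k)
    return round(r), round(g), round(b)

print(cmyk_to_rgb(1.0, 0.0, 0.0, 0.0))  # (0, 255, 255): idealized process cyan maps to RGB cyan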
In science and nature
Color of water
Pure water is nearly colorless. However, it does absorb slightly more red light than blue, giving significant volumes of water a bluish tint; increased scattering of blue light due to fine particles in the water shifts the blue color toward green, for a typically cyan net color.
Cyan and cyanide
Cyanide derives its name from Prussian blue, a blue pigment containing the cyanide ion.
Bacteria
Cyanobacteria (sometimes called blue-green algae) are an important link in the food chain.
Astronomy
The planet Uranus is colored cyan because of the abundance of methane in its atmosphere. Methane absorbs red light and reflects the blue-green light which allows observers to see it as cyan.
Energy
Natural gas (methane), used by many for home cooking on gas stoves, has a cyan colored flame when burned with a mixture of air.
Photography and film
Cyanotype, or blueprint, a monochrome photographic printing process that predates the use of the word cyan as a color, yields a deep cyan-blue colored print based on the Prussian blue pigment.
In Cinecolor, a bi-pack color process, the photographer would load a standard camera with two films, one orthochromatic, dyed red, and a panchromatic strip behind it. Color light would expose the cyan record on the ortho stock, which also acted as a filter, exposing only red light to the panchromatic film stock.
Medicine
Cyanosis is an abnormal blueness of the skin, usually a sign of poor oxygen intake; patients are typically described as being "cyanotic".
Cyanopsia is a color vision defect where vision is tinted blue. This can be a drug-induced side effect or experienced after cataract removal.
Gallery
| Physical sciences | Colors | Physics |
6111 | https://en.wikipedia.org/wiki/Chemical%20vapor%20deposition | Chemical vapor deposition | Chemical vapor deposition (CVD) is a vacuum deposition method used to produce high-quality, and high-performance, solid materials. The process is often used in the semiconductor industry to produce thin films.
In typical CVD, the wafer (substrate) is exposed to one or more volatile precursors, which react and/or decompose on the substrate surface to produce the desired deposit. Frequently, volatile by-products are also produced, which are removed by gas flow through the reaction chamber.
Microfabrication processes widely use CVD to deposit materials in various forms, including: monocrystalline, polycrystalline, amorphous, and epitaxial. These materials include: silicon (dioxide, carbide, nitride, oxynitride), carbon (fiber, nanofibers, nanotubes, diamond and graphene), fluorocarbons, filaments, tungsten, titanium nitride and various high-κ dielectrics.
The term chemical vapour deposition was coined in 1960 by John M. Blocher, Jr. who intended to differentiate chemical from physical vapour deposition (PVD).
Types
CVD is practiced in a variety of formats. These processes generally differ in the means by which chemical reactions are initiated.
Classified by operating conditions:
Atmospheric pressure CVD (APCVD) – CVD at atmospheric pressure.
Low-pressure CVD (LPCVD) – CVD at sub-atmospheric pressures. Many journal articles and commercial tools use the term reduced pressure CVD (RPCVD) especially for single wafer tools in place of LPCVD which dominates for multi-wafer furnace tube tools. Reduced pressures tend to reduce unwanted gas-phase reactions and improve film uniformity across the wafer.
Ultrahigh vacuum CVD (UHVCVD) – CVD at very low pressure, typically below 10^-6 Pa (≈ 10^-8 torr). Note that in other fields, a lower division between high and ultra-high vacuum is common, often 10^-7 Pa.
Sub-atmospheric CVD (SACVD) – CVD at sub-atmospheric pressures. Uses tetraethyl orthosilicate (TEOS) and ozone to fill high aspect ratio Si structures with silicon dioxide (SiO2).
Most modern CVD is either LPCVD or UHVCVD.
Classified by physical characteristics of vapor:
Aerosol assisted CVD (AACVD) – CVD in which the precursors are transported to the substrate by means of a liquid/gas aerosol, which can be generated ultrasonically. This technique is suitable for use with non-volatile precursors.
Direct liquid injection CVD (DLICVD) – CVD in which the precursors are in liquid form (liquid or solid dissolved in a convenient solvent). Liquid solutions are injected in a vaporization chamber towards injectors (typically car injectors). The precursor vapors are then transported to the substrate as in classical CVD. This technique is suitable for use on liquid or solid precursors. High growth rates can be reached using this technique.
Classified by type of substrate heating:
Hot wall CVD – CVD in which the chamber is heated by an external power source and the substrate is heated by radiation from the heated chamber walls.
Cold wall CVD – CVD in which only the substrate is directly heated either by induction or by passing current through the substrate itself or a heater in contact with the substrate. The chamber walls are at room temperature.
Plasma methods (see also Plasma processing):
Microwave plasma-assisted CVD (MPCVD)
Plasma-enhanced CVD (PECVD) – CVD that utilizes plasma to enhance chemical reaction rates of the precursors. PECVD processing allows deposition at lower temperatures, which is often critical in the manufacture of semiconductors. The lower temperatures also allow for the deposition of organic coatings, such as plasma polymers, that have been used for nanoparticle surface functionalization.
Remote plasma-enhanced CVD (RPECVD) – Similar to PECVD except that the wafer substrate is not directly in the plasma discharge region. Removing the wafer from the plasma region allows processing temperatures down to room temperature.
Low-energy plasma-enhanced chemical vapor deposition (LEPECVD) - CVD employing a high density, low energy plasma to obtain epitaxial deposition of semiconductor materials at high rates and low temperatures.
Atomic-layer CVD (ALCVD) – Deposits successive layers of different substances to produce layered, crystalline films. See Atomic layer epitaxy.
Combustion chemical vapor deposition (CCVD) – Combustion Chemical Vapor Deposition or flame pyrolysis is an open-atmosphere, flame-based technique for depositing high-quality thin films and nanomaterials.
Hot filament CVD (HFCVD) – also known as catalytic CVD (Cat-CVD) or more commonly, initiated CVD, this process uses a hot filament to chemically decompose the source gases. The filament temperature and substrate temperature thus are independently controlled, allowing colder temperatures for better absorption rates at the substrate and higher temperatures necessary for decomposition of precursors to free radicals at the filament.
Hybrid physical-chemical vapor deposition (HPCVD) – This process involves both chemical decomposition of precursor gas and vaporization of a solid source.
Metalorganic chemical vapor deposition (MOCVD) – This CVD process is based on metalorganic precursors.
Rapid thermal CVD (RTCVD) – This CVD process uses heating lamps or other methods to rapidly heat the wafer substrate. Heating only the substrate rather than the gas or chamber walls helps reduce unwanted gas-phase reactions that can lead to particle formation.
Vapor-phase epitaxy (VPE)
Photo-initiated CVD (PICVD) – This process uses UV light to stimulate chemical reactions. It is similar to plasma processing, given that plasmas are strong emitters of UV radiation. Under certain conditions, PICVD can be operated at or near atmospheric pressure.
Laser chemical vapor deposition (LCVD) - This CVD process uses lasers to heat spots or lines on a substrate in semiconductor applications. In MEMS and in fiber production the lasers are used rapidly to break down the precursor gas—process temperature can exceed 2000 °C—to build up a solid structure in much the same way as laser sintering based 3-D printers build up solids from powders.
Uses
CVD is commonly used to deposit conformal films and augment substrate surfaces in ways that more traditional surface modification techniques are not capable of. CVD is extremely useful in the process of atomic layer deposition for depositing extremely thin layers of material. A variety of applications for such films exist. Gallium arsenide is used in some integrated circuits (ICs) and photovoltaic devices. Amorphous polysilicon is used in photovoltaic devices. Certain carbides and nitrides confer wear-resistance. Polymerization by CVD, perhaps the most versatile of all applications, allows for super-thin coatings which possess some very desirable qualities, such as lubricity, hydrophobicity and weather-resistance to name a few. The CVD of metal-organic frameworks, a class of crystalline nanoporous materials, has recently been demonstrated. Recently scaled up as an integrated cleanroom process depositing large-area substrates, the applications for these films are anticipated in gas sensing and low-κ dielectrics. CVD techniques are advantageous for membrane coatings as well, such as those in desalination or water treatment, as these coatings can be sufficiently uniform (conformal) and thin that they do not clog membrane pores.
Commercially important materials prepared by CVD
Polysilicon
Polycrystalline silicon is deposited from trichlorosilane (SiHCl3) or silane (SiH4), using the following reactions:
SiHCl3 → Si + Cl2 + HCl
SiH4 → Si + 2 H2
This reaction is usually performed in LPCVD systems, with either pure silane feedstock, or a solution of silane with 70–80% nitrogen. Temperatures between 600 and 650 °C and pressures between 25 and 150 Pa yield a growth rate between 10 and 20 nm per minute. An alternative process uses a hydrogen-based solution. The hydrogen reduces the growth rate, but the temperature is raised to 850 or even 1050 °C to compensate. Polysilicon may be grown directly with doping, if gases such as phosphine, arsine or diborane are added to the CVD chamber. Diborane increases the growth rate, but arsine and phosphine decrease it.
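As a rough worked example of these growth rates (the 500 nm target thickness is chosen only for illustration):

# Time needed to deposit a 500 nm polysilicon film at the growth rates quoted above.
target_thickness_nm = 500
for rate_nm_per_min in (10, 20):
    minutes = target_thickness_nm / rate_nm_per_min
    print(f"{rate_nm_per_min} nm/min -> {minutes:.0f} min")
# At 10 nm/min the deposition takes 50 minutes; at 20 nm/min it takes 25 minutes.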
Silicon dioxide
Silicon dioxide (usually called simply "oxide" in the semiconductor industry) may be deposited by several different processes. Common source gases include silane and oxygen, dichlorosilane (SiCl2H2) and nitrous oxide (N2O), or tetraethylorthosilicate (TEOS; Si(OC2H5)4). The reactions are as follows:
SiH4 + O2 → SiO2 + 2 H2
SiCl2H2 + 2 N2O → SiO2 + 2 N2 + 2 HCl
Si(OC2H5)4 → SiO2 + byproducts
The choice of source gas depends on the thermal stability of the substrate; for instance, aluminium is sensitive to high temperature. Silane deposits between 300 and 500 °C, dichlorosilane at around 900 °C, and TEOS between 650 and 750 °C, resulting in a layer of low-temperature oxide (LTO). However, silane produces a lower-quality oxide than the other methods (lower dielectric strength, for instance), and it deposits nonconformally. Any of these reactions may be used in LPCVD, but the silane reaction is also done in APCVD. CVD oxide invariably has lower quality than thermal oxide, but thermal oxidation can only be used in the earliest stages of IC manufacturing.
Oxide may also be grown with impurities (alloying or "doping"). This may have two purposes. During further process steps that occur at high temperature, the impurities may diffuse from the oxide into adjacent layers (most notably silicon) and dope them. Oxides containing 5–15% impurities by mass are often used for this purpose. In addition, silicon dioxide alloyed with phosphorus pentoxide ("P-glass") can be used to smooth out uneven surfaces. P-glass softens and reflows at temperatures above 1000 °C. This process requires a phosphorus concentration of at least 6%, but concentrations above 8% can corrode aluminium. Phosphorus is deposited from phosphine gas and oxygen:
4 PH3 + 5 O2 → 2 P2O5 + 6 H2
Glasses containing both boron and phosphorus (borophosphosilicate glass, BPSG) undergo viscous flow at lower temperatures; around 850 °C is achievable with glasses containing around 5 weight % of both constituents, but stability in air can be difficult to achieve. Phosphorus oxide in high concentrations interacts with ambient moisture to produce phosphoric acid. Crystals of BPO4 can also precipitate from the flowing glass on cooling; these crystals are not readily etched in the standard reactive plasmas used to pattern oxides, and will result in circuit defects in integrated circuit manufacturing.
Besides these intentional impurities, CVD oxide may contain byproducts of the deposition. TEOS produces a relatively pure oxide, whereas silane introduces hydrogen impurities, and dichlorosilane introduces chlorine.
Lower temperature deposition of silicon dioxide and doped glasses from TEOS using ozone rather than oxygen has also been explored (350 to 500 °C). Ozone glasses have excellent conformality but tend to be hygroscopic – that is, they absorb water from the air due to the incorporation of silanol (Si-OH) in the glass. Infrared spectroscopy and mechanical strain as a function of temperature are valuable diagnostic tools for diagnosing such problems.
Silicon nitride
Silicon nitride is often used as an insulator and chemical barrier in manufacturing ICs. The following two reactions deposit silicon nitride from the gas phase:
3 SiH4 + 4 NH3 → Si3N4 + 12 H2
3 SiCl2H2 + 4 NH3 → Si3N4 + 6 HCl + 6 H2
Silicon nitride deposited by LPCVD contains up to 8% hydrogen. It also experiences strong tensile stress, which may crack films thicker than 200 nm. However, it has higher resistivity and dielectric strength than most insulators commonly available in microfabrication (10^16 Ω·cm and 10 MV/cm, respectively).
Another two reactions may be used in plasma to deposit SiNH:
2 SiH4 + N2 → 2 SiNH + 3 H2
SiH4 + NH3 → SiNH + 3 H2
These films have much less tensile stress, but worse electrical properties (resistivity 10^6 to 10^15 Ω·cm, and dielectric strength 1 to 5 MV/cm).
Metals
Tungsten CVD, used for forming conductive contacts, vias, and plugs on a semiconductor device, is achieved from tungsten hexafluoride (WF6), which may be deposited in two ways:
WF6 → W + 3 F2
WF6 + 3 H2 → W + 6 HF
Other metals, notably aluminium and copper, can be deposited by CVD. Commercially cost-effective CVD for copper has not been established, although volatile sources exist, such as Cu(hfac)2. Copper is typically deposited by electroplating. Aluminium can be deposited from triisobutylaluminium (TIBAL) and related organoaluminium compounds.
CVD for molybdenum, tantalum, titanium, nickel is widely used. These metals can form useful silicides when deposited onto silicon. Mo, Ta and Ti are deposited by LPCVD, from their pentachlorides. Nickel, molybdenum, and tungsten can be deposited at low temperatures from their carbonyl precursors. In general, for an arbitrary metal M, the chloride deposition reaction is as follows:
2 MCl5 + 5 H2 → 2 M + 10 HCl
whereas the carbonyl decomposition reaction can happen spontaneously under thermal treatment or acoustic cavitation and is as follows:
M(CO)n → M + n CO
The decomposition of metal carbonyls is often violently precipitated by moisture or air, where oxygen reacts with the metal precursor to form metal or metal oxide along with carbon dioxide.
Niobium(V) oxide layers can be produced by the thermal decomposition of niobium(V) ethoxide with the loss of diethyl ether according to the equation:
2 Nb(OC2H5)5 → Nb2O5 + 5 C2H5OC2H5
Graphene
Many variations of CVD can be utilized to synthesize graphene. Although many advancements have been made, the processes listed below are not commercially viable yet.
Carbon source
The most popular carbon source that is used to produce graphene is methane gas. One of the less popular choices is petroleum asphalt, notable for being inexpensive but more difficult to work with.
Although methane is the most popular carbon source, hydrogen is required during the preparation process to promote carbon deposition on the substrate. If the flow ratio of methane to hydrogen is not appropriate, it will cause undesirable results. During the growth of graphene, methane provides the carbon source, while hydrogen supplies H atoms that etch away amorphous carbon and thereby improve the quality of the graphene. Excessive H atoms, however, can also etch the graphene itself, destroying the integrity of the crystal lattice and degrading its quality. Therefore, by optimizing the flow rates of methane and hydrogen in the growth process, the quality of graphene can be improved.
Use of catalyst
The use of a catalyst can alter the physical process of graphene production. Notable examples include iron nanoparticles, nickel foam, and gallium vapor. These catalysts can either be used in situ during graphene buildup, or situated at some distance from the deposition area. Some catalysts require another step to remove them from the sample material.
The direct growth of high-quality, large single-crystalline domains of graphene on a dielectric substrate is of vital importance for applications in electronics and optoelectronics. Combining the advantages of both catalytic CVD and the ultra-flat dielectric substrate, gaseous catalyst-assisted CVD paves the way for synthesizing high-quality graphene for device applications while avoiding the transfer process.
Physical conditions
Physical conditions such as surrounding pressure, temperature, carrier gas, and chamber material play a big role in production of graphene.
Most systems use LPCVD with pressures ranging from 1 to 1500 Pa. However, some still use APCVD. Low pressures are used more commonly as they help prevent unwanted reactions and produce more uniform thickness of deposition on the substrate.
On the other hand, temperatures used range from 800 to 1050 °C. High temperatures translate to an increase of the rate of reaction. Caution has to be exercised as high temperatures do pose higher danger levels in addition to greater energy costs.
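The effect of temperature on a thermally activated deposition step is often described with Arrhenius-type kinetics. The sketch below is only illustrative: the activation energy used is an assumed value, not one taken from the text.

import math

# Relative surface-reaction rate between two growth temperatures, assuming
# Arrhenius behavior k ~ exp(-Ea / (R*T)). The activation energy is an
# assumed, illustrative value.
R = 8.314           # gas constant, J/(mol*K)
Ea = 150e3          # activation energy, J/mol (assumed for illustration)
T1 = 800 + 273.15   # lower growth temperature, K
T2 = 1050 + 273.15  # higher growth temperature, K

ratio = math.exp(-Ea / (R * T2)) / math.exp(-Ea / (R * T1))
print(f"Relative rate at 1050 C versus 800 C: about {ratio:.0f}x")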
Carrier gas
Hydrogen gas and inert gases such as argon are flowed into the system. These gases act as a carrier, enhancing surface reaction and improving reaction rate, thereby increasing deposition of graphene onto the substrate.
Chamber material
Standard quartz tubing and chambers are used in CVD of graphene. Quartz is chosen because it has a very high melting point and is chemically inert. In other words, quartz does not interfere with any physical or chemical reactions regardless of the conditions.
Methods of analysis of results
Raman spectroscopy, X-ray spectroscopy, transmission electron microscopy (TEM), and scanning electron microscopy (SEM) are used to examine and characterize the graphene samples.
Raman spectroscopy is used to characterize and identify the graphene particles; X-ray spectroscopy is used to characterize chemical states; TEM is used to provide fine details regarding the internal composition of graphene; SEM is used to examine the surface and topography.
Sometimes, atomic force microscopy (AFM) is used to measure local properties such as friction and magnetism.
Cold wall CVD technique can be used to study the underlying surface science involved in graphene nucleation and growth as it allows unprecedented control of process parameters like gas flow rates, temperature and pressure as demonstrated in a recent study. The study was carried out in a home-built vertical cold wall system utilizing resistive heating by passing direct current through the substrate. It provided conclusive insight into a typical surface-mediated nucleation and growth mechanism involved in two-dimensional materials grown using catalytic CVD under conditions sought out in the semiconductor industry.
Graphene nanoribbon
In spite of graphene's exciting electronic and thermal properties, it is unsuitable as a transistor for future digital devices, due to the absence of a bandgap between the conduction and valence bands. This makes it impossible to switch between on and off states with respect to electron flow. Scaling things down, graphene nanoribbons of less than 10 nm in width do exhibit electronic bandgaps and are therefore potential candidates for digital devices. Precise control over their dimensions, and hence electronic properties, however, represents a challenging goal, and the ribbons typically possess rough edges that are detrimental to their performance.
Diamond
CVD can be used to produce a synthetic diamond by creating the circumstances necessary for carbon atoms in a gas to settle on a substrate in crystalline form. CVD of diamonds has received much attention in the materials sciences because it allows many new applications that had previously been considered too expensive. CVD diamond growth typically occurs under low pressure (1–27 kPa; 0.145–3.926 psi; 7.5–203 Torr) and involves feeding varying amounts of gases into a chamber, energizing them and providing conditions for diamond growth on the substrate. The gases always include a carbon source, and typically include hydrogen as well, though the amounts used vary greatly depending on the type of diamond being grown. Energy sources include hot filament, microwave power, and arc discharges, among others. The energy source is intended to generate a plasma in which the gases are broken down and more complex chemistries occur. The actual chemical process for diamond growth is still under study and is complicated by the very wide variety of diamond growth processes used.
Using CVD, films of diamond can be grown over large areas of substrate with control over the properties of the diamond produced. In the past, when high pressure high temperature (HPHT) techniques were used to produce a diamond, the result was typically very small free-standing diamonds of varying sizes. With CVD diamond, growth areas of greater than fifteen centimeters (six inches) in diameter have been achieved, and much larger areas are likely to be successfully coated with diamond in the future. Improving this process is key to enabling several important applications.
The growth of diamond directly on a substrate allows the addition of many of diamond's important qualities to other materials. Since diamond has the highest thermal conductivity of any bulk material, layering diamond onto high heat-producing electronics (such as optics and transistors) allows the diamond to be used as a heat sink. Diamond films are being grown on valve rings, cutting tools, and other objects that benefit from diamond's hardness and exceedingly low wear rate. In each case the diamond growth must be carefully done to achieve the necessary adhesion onto the substrate. Diamond's very high scratch resistance and thermal conductivity, combined with a lower coefficient of thermal expansion than Pyrex glass, a coefficient of friction close to that of Teflon (polytetrafluoroethylene) and strong lipophilicity would make it a nearly ideal non-stick coating for cookware if large substrate areas could be coated economically.
CVD growth allows one to control the properties of the diamond produced. In the area of diamond growth, the word "diamond" is used as a description of any material primarily made up of sp3-bonded carbon, and there are many different types of diamond included in this. By regulating the processing parameters—especially the gases introduced, but also including the pressure the system is operated under, the temperature of the diamond, and the method of generating plasma—many different materials that can be considered diamond can be made. Single-crystal diamond can be made containing various dopants. Polycrystalline diamond consisting of grain sizes from several nanometers to several micrometers can be grown. Some polycrystalline diamond grains are surrounded by thin, non-diamond carbon, while others are not. These different factors affect the diamond's hardness, smoothness, conductivity, optical properties and more.
Chalcogenides
Commercially, mercury cadmium telluride is of continuing interest for detection of infrared radiation. Consisting of an alloy of CdTe and HgTe, this material can be prepared from the dimethyl derivatives of the respective elements.
| Technology | Materials | null |
6113 | https://en.wikipedia.org/wiki/Chain%20rule | Chain rule | In calculus, the chain rule is a formula that expresses the derivative of the composition of two differentiable functions f and g in terms of the derivatives of f and g. More precisely, if h = f ∘ g is the function such that h(x) = f(g(x)) for every x, then the chain rule is, in Lagrange's notation, h′(x) = f′(g(x)) g′(x),
or, equivalently, (f ∘ g)′ = (f′ ∘ g) · g′.
The chain rule may also be expressed in Leibniz's notation. If a variable z depends on the variable y, which itself depends on the variable x (that is, y and z are dependent variables), then z depends on x as well, via the intermediate variable y. In this case, the chain rule is expressed as dz/dx = dz/dy · dy/dx,
and dz/dx|_x = dz/dy|_{y(x)} · dy/dx|_x,
for indicating at which points the derivatives have to be evaluated.
In integration, the counterpart to the chain rule is the substitution rule.
Intuitive explanation
Intuitively, the chain rule states that knowing the instantaneous rate of change of z relative to y and that of y relative to x allows one to calculate the instantaneous rate of change of z relative to x as the product of the two rates of change.
As put by George F. Simmons: "If a car travels twice as fast as a bicycle and the bicycle is four times as fast as a walking man, then the car travels 2 × 4 = 8 times as fast as the man."
The relationship between this example and the chain rule is as follows. Let z, y and x be the (variable) positions of the car, the bicycle, and the walking man, respectively. The rate of change of relative positions of the car and the bicycle is dz/dy = 2. Similarly, dy/dx = 4. So, the rate of change of the relative positions of the car and the walking man is dz/dx = dz/dy · dy/dx = 2 · 4 = 8.
The rate of change of positions is the ratio of the speeds, and the speed is the derivative of the position with respect to the time; that is, dz/dy = (dz/dt)/(dy/dt),
or, equivalently, dz/dt = dz/dy · dy/dt,
which is also an application of the chain rule.
History
The chain rule seems to have first been used by Gottfried Wilhelm Leibniz. He used it to calculate the derivative of √(a + bz + cz²) as the composite of the square root function and the function a + bz + cz². He first mentioned it in a 1676 memoir (with a sign error in the calculation). The common notation of the chain rule is due to Leibniz. Guillaume de l'Hôpital used the chain rule implicitly in his Analyse des infiniment petits. The chain rule does not appear in any of Leonhard Euler's analysis books, even though they were written over a hundred years after Leibniz's discovery. It is believed that the first "modern" version of the chain rule appears in Lagrange's 1797 Théorie des fonctions analytiques; it also appears in Cauchy's 1823 Résumé des Leçons données a L’École Royale Polytechnique sur Le Calcul Infinitesimal.
Statement
The simplest form of the chain rule is for real-valued functions of one real variable. It states that if g is a function that is differentiable at a point c (i.e. the derivative g′(c) exists) and f is a function that is differentiable at g(c), then the composite function f ∘ g is differentiable at c, and the derivative is (f ∘ g)′(c) = f′(g(c)) · g′(c).
The rule is sometimes abbreviated as (f ∘ g)′ = (f′ ∘ g) · g′.
If y = f(u) and u = g(x), then this abbreviated form is written in Leibniz notation as: dy/dx = dy/du · du/dx.
The points where the derivatives are evaluated may also be stated explicitly: dy/dx|_{x=c} = dy/du|_{u=g(c)} · du/dx|_{x=c}.
Carrying the same reasoning further, given n functions f1, …, fn with the composite function f1 ∘ (f2 ∘ ⋯ (fn−1 ∘ fn)), if each function fi is differentiable at its immediate input, then the composite function is also differentiable by the repeated application of the chain rule, where the derivative is (in Leibniz's notation): df1/dx = (df1/df2)(df2/df3)⋯(dfn/dx).
Applications
Composites of more than two functions
The chain rule can be applied to composites of more than two functions. To take the derivative of a composite of more than two functions, notice that the composite of f, g, and h (in that order) is the composite of f with g ∘ h. The chain rule states that to compute the derivative of f ∘ g ∘ h, it is sufficient to compute the derivative of f and the derivative of g ∘ h. The derivative of f can be calculated directly, and the derivative of g ∘ h can be calculated by applying the chain rule again.
For concreteness, consider the function y = e^{sin(x²)}.
This can be decomposed as the composite of three functions: y = f(u) = e^u, u = g(v) = sin v, and v = h(x) = x², so that y = f(g(h(x))).
Their derivatives are: dy/du = e^u, du/dv = cos v, and dv/dx = 2x.
The chain rule states that the derivative of their composite at the point x = a is: (f ∘ g ∘ h)′(a) = f′((g ∘ h)(a)) · g′(h(a)) · h′(a).
In Leibniz's notation, this is: dy/dx = (dy/du evaluated at u = g(h(a))) · (du/dv evaluated at v = h(a)) · (dv/dx evaluated at x = a),
or for short, dy/dx = (dy/du)(du/dv)(dv/dx).
The derivative function is therefore: dy/dx = e^{sin(x²)} · cos(x²) · 2x.
Another way of computing this derivative is to view the composite function as the composite of f ∘ g and h. Applying the chain rule in this manner would yield: (f ∘ g ∘ h)′(a) = (f ∘ g)′(h(a)) · h′(a) = f′(g(h(a))) · g′(h(a)) · h′(a).
This is the same as what was computed above. This should be expected because (f ∘ g) ∘ h = f ∘ (g ∘ h).
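This computation can be checked symbolically. The minimal sketch below (using the sympy library) differentiates the composite e^{sin(x²)} directly and compares the result with the product of the three derivatives worked out above:

import sympy as sp

x = sp.symbols('x')
# The three-function decomposition: f = exp, g = sin, h(x) = x**2.
y = sp.exp(sp.sin(x**2))                 # the composite function
direct = sp.diff(y, x)                   # differentiate the composite directly
by_chain = sp.exp(sp.sin(x**2)) * sp.cos(x**2) * 2*x   # product of the three derivatives

print(sp.simplify(direct - by_chain))    # prints 0, confirming the chain rule result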
Sometimes, it is necessary to differentiate an arbitrarily long composition of the form . In this case, define
where and when . Then the chain rule takes the form
or, in the Lagrange notation,
Quotient rule
The chain rule can be used to derive some well-known differentiation rules. For example, the quotient rule is a consequence of the chain rule and the product rule. To see this, write the function f(x)/g(x) as the product f(x) · 1/g(x). First apply the product rule:
To compute the derivative of 1/g(x), notice that it is the composite of g with the reciprocal function, that is, the function that sends x to 1/x. The derivative of the reciprocal function is −1/x². By applying the chain rule, the last expression becomes:
which is the usual formula for the quotient rule.
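Written out, the derivation sketched above is the standard calculation:

\[
\frac{d}{dx}\,\frac{f(x)}{g(x)}
  = \frac{d}{dx}\!\left(f(x)\cdot\frac{1}{g(x)}\right)
  = f'(x)\cdot\frac{1}{g(x)} + f(x)\cdot\frac{d}{dx}\!\left(\frac{1}{g(x)}\right)
  = \frac{f'(x)}{g(x)} - \frac{f(x)\,g'(x)}{g(x)^2}
  = \frac{f'(x)\,g(x) - f(x)\,g'(x)}{g(x)^2}.
\]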
Derivatives of inverse functions
Suppose that y = g(x) has an inverse function. Call its inverse function f so that we have x = f(y). There is a formula for the derivative of f in terms of the derivative of g. To see this, note that f and g satisfy the formula f(g(x)) = x.
And because the functions f(g(x)) and x are equal, their derivatives must be equal. The derivative of x is the constant function with value 1, and the derivative of f(g(x)) is determined by the chain rule. Therefore, we have that: f′(g(x)) g′(x) = 1.
To express f′ as a function of an independent variable y, we substitute f(y) for x wherever it appears. Then we can solve for f′: f′(y) = 1 / g′(f(y)).
For example, consider the function g(x) = e^x. It has an inverse f(y) = ln y. Because g′(x) = e^x, the above formula says that f′(y) = 1/e^{ln y} = 1/y.
This formula is true whenever g is differentiable and its inverse f is also differentiable. This formula can fail when one of these conditions is not true. For example, consider g(x) = x³. Its inverse is f(y) = y^{1/3}, which is not differentiable at zero. If we attempt to use the above formula to compute the derivative of f at zero, then we must evaluate 1/g′(f(0)). Since f(0) = 0 and g′(0) = 0, we must evaluate 1/0, which is undefined. Therefore, the formula fails in this case. This is not surprising because f is not differentiable at zero.
Back propagation
The chain rule forms the basis of the back propagation algorithm, which is used in gradient descent of neural networks in deep learning (artificial intelligence).
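A minimal numerical sketch of this idea, for a single "neuron" with a squared-error loss (all functions and values below are chosen only for illustration):

import math

# Toy model: prediction p = sigmoid(w*x + b), squared-error loss L = (p - t)^2.
# Back propagation computes dL/dw by multiplying local derivatives along the chain.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, t = 2.0, 1.0          # input and target (illustrative values)
w, b = 0.5, -0.3         # parameters (illustrative values)

z = w * x + b            # forward pass
p = sigmoid(z)
L = (p - t) ** 2

dL_dp = 2 * (p - t)      # chain rule, one factor per stage
dp_dz = p * (1 - p)
dz_dw = x
dL_dw = dL_dp * dp_dz * dz_dw
print(L, dL_dw)          # loss and its gradient with respect to w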
Higher derivatives
Faà di Bruno's formula generalizes the chain rule to higher derivatives. Assuming that and , then the first few derivatives are:
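In terms of a composite f(g(x)), the first few of these take the following standard forms:

\[
\frac{d}{dx} f(g(x)) = f'(g(x))\,g'(x)
\]
\[
\frac{d^2}{dx^2} f(g(x)) = f''(g(x))\,g'(x)^2 + f'(g(x))\,g''(x)
\]
\[
\frac{d^3}{dx^3} f(g(x)) = f'''(g(x))\,g'(x)^3 + 3 f''(g(x))\,g'(x)\,g''(x) + f'(g(x))\,g'''(x)
\]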
Proofs
First proof
One proof of the chain rule begins by defining the derivative of the composite function , where we take the limit of the difference quotient for as approaches :
Assume for the moment that does not equal for any near . Then the previous expression is equal to the product of two factors:
If oscillates near , then it might happen that no matter how close one gets to , there is always an even closer such that . For example, this happens near for the continuous function defined by for and otherwise. Whenever this happens, the above expression is undefined because it involves division by zero. To work around this, introduce a function as follows:
We will show that the difference quotient for is always equal to:
Whenever is not equal to , this is clear because the factors of cancel. When equals , then the difference quotient for is zero because equals , and the above product is zero because it equals times zero. So the above product is always equal to the difference quotient, and to show that the derivative of at exists and to determine its value, we need only show that the limit as goes to of the above product exists and determine its value.
To do this, recall that the limit of a product exists if the limits of its factors exist. When this happens, the limit of the product of these two factors will equal the product of the limits of the factors. The two factors are and . The latter is the difference quotient for at , and because is differentiable at by assumption, its limit as tends to exists and equals .
As for , notice that is defined wherever is. Furthermore, is differentiable at by assumption, so is continuous at , by definition of the derivative. The function is continuous at because it is differentiable at , and therefore is continuous at . So its limit as goes to exists and equals , which is .
This shows that the limits of both factors exist and that they equal f′(g(a)) and g′(a), respectively. Therefore, the derivative of f ∘ g at a exists and equals f′(g(a)) g′(a).
Second proof
Another way of proving the chain rule is to measure the error in the linear approximation determined by the derivative. This proof has the advantage that it generalizes to several variables. It relies on the following equivalent definition of differentiability at a point: A function g is differentiable at a if there exists a real number g′(a) and a function ε(h) that tends to zero as h tends to zero, and furthermore
Here the left-hand side represents the true difference between the value of g at a and at , whereas the right-hand side represents the approximation determined by the derivative plus an error term.
In the situation of the chain rule, such a function ε exists because g is assumed to be differentiable at a. Again by assumption, a similar function also exists for f at g(a). Calling this function η, we have
The above definition imposes no constraints on η(0), even though it is assumed that η(k) tends to zero as k tends to zero. If we set , then η is continuous at 0.
Proving the theorem requires studying the difference as h tends to zero. The first step is to substitute for using the definition of differentiability of g at a:
The next step is to use the definition of differentiability of f at g(a). This requires a term of the form for some k. In the above equation, the correct k varies with h. Set and the right hand side becomes . Applying the definition of the derivative gives:
To study the behavior of this expression as h tends to zero, expand kh. After regrouping the terms, the right-hand side becomes:
Because ε(h) and η(kh) tend to zero as h tends to zero, the first two bracketed terms tend to zero as h tends to zero. Applying the same theorem on products of limits as in the first proof, the third bracketed term also tends to zero. Because the above expression is equal to the difference f(g(a + h)) − f(g(a)), by the definition of the derivative f ∘ g is differentiable at a and its derivative is f′(g(a)) g′(a).
The role of Q in the first proof is played by η in this proof. They are related by the equation:
The need to define Q at g(a) is analogous to the need to define η at zero.
Third proof
Constantin Carathéodory's alternative definition of the differentiability of a function can be used to give an elegant proof of the chain rule.
Under this definition, a function is differentiable at a point if and only if there is a function , continuous at and such that . There is at most one such function, and if is differentiable at then .
Given the assumptions of the chain rule and the fact that differentiable functions and compositions of continuous functions are continuous, we have that there exist functions , continuous at , and , continuous at , and such that,
and
Therefore,
but the function given by is continuous at , and we get, for this
A similar approach works for continuously differentiable (vector-)functions of many variables. This method of factoring also allows a unified approach to stronger forms of differentiability, when the derivative is required to be Lipschitz continuous, Hölder continuous, etc. Differentiation itself can be viewed as the polynomial remainder theorem (the little Bézout theorem, or factor theorem), generalized to an appropriate class of functions.
Proof via infinitesimals
If and then choosing infinitesimal we compute the corresponding and then the corresponding , so that
and applying the standard part we obtain
which is the chain rule.
Multivariable case
The full generalization of the chain rule to multi-variable functions (such as ) is rather technical. However, it is simpler to write in the case of functions of the form
where , and for each
As this case occurs often in the study of functions of a single variable, it is worth describing it separately.
Case of scalar-valued functions with multiple inputs
Let , and for each
To write the chain rule for the composition of functions
one needs the partial derivatives of with respect to its arguments. The usual notations for partial derivatives involve names for the arguments of the function. As these arguments are not named in the above formula, it is simpler and clearer to use D-Notation, and to denote by
the partial derivative of with respect to its th argument, and by
the value of this derivative at .
With this notation, the chain rule is
Example: arithmetic operations
If the function is addition, that is, if
then and . Thus, the chain rule gives
For multiplication
the partials are and . Thus,
The case of exponentiation
is slightly more complicated, as
and, as
It follows that
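Spelled out for differentiable functions g and h (and with g(x) > 0 in the last case), the three cases above give the familiar sum, product, and generalized power rules:

\[
\frac{d}{dx}\bigl(g(x) + h(x)\bigr) = g'(x) + h'(x)
\]
\[
\frac{d}{dx}\bigl(g(x)\,h(x)\bigr) = g'(x)\,h(x) + g(x)\,h'(x)
\]
\[
\frac{d}{dx}\,g(x)^{h(x)} = h(x)\,g(x)^{h(x)-1}\,g'(x) + g(x)^{h(x)}\ln\bigl(g(x)\bigr)\,h'(x)
\]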
General rule: Vector-valued functions with multiple inputs
The simplest way for writing the chain rule in the general case is to use the total derivative, which is a linear transformation that captures all directional derivatives in a single formula. Consider differentiable functions and , and a point in . Let denote the total derivative of at and denote the total derivative of at . These two derivatives are linear transformations and , respectively, so they can be composed. The chain rule for total derivatives is that their composite is the total derivative of at :
or for short,
The higher-dimensional chain rule can be proved using a technique similar to the second proof given above.
Because the total derivative is a linear transformation, the functions appearing in the formula can be rewritten as matrices. The matrix corresponding to a total derivative is called a Jacobian matrix, and the composite of two derivatives corresponds to the product of their Jacobian matrices. From this perspective the chain rule therefore says:
or for short,
That is, the Jacobian of a composite function is the product of the Jacobians of the composed functions (evaluated at the appropriate points).
The higher-dimensional chain rule is a generalization of the one-dimensional chain rule. If , , and are 1, so that and , then the Jacobian matrices of and are . Specifically, they are:
The Jacobian of is the product of these matrices, so it is , as expected from the one-dimensional chain rule. In the language of linear transformations, is the function which scales a vector by a factor of and is the function which scales a vector by a factor of . The chain rule says that the composite of these two linear transformations is the linear transformation , and therefore it is the function that scales a vector by .
Another way of writing the chain rule is used when f and g are expressed in terms of their components as and . In this case, the above rule for Jacobian matrices is usually written as:
The chain rule for total derivatives implies a chain rule for partial derivatives. Recall that when the total derivative exists, the partial derivative in the -th coordinate direction is found by multiplying the Jacobian matrix by the -th basis vector. By doing this to the formula above, we find:
Since the entries of the Jacobian matrix are partial derivatives, we may simplify the above formula to get:
More conceptually, this rule expresses the fact that a change in the direction may change all of through , and any of these changes may affect .
In the special case where , so that is a real-valued function, then this formula simplifies even further:
This can be rewritten as a dot product. Recalling that , the partial derivative is also a vector, and the chain rule says that:
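The Jacobian form of the rule can also be checked numerically. The sketch below uses functions chosen purely for illustration (they are not taken from the text) and compares the product of Jacobians with a finite-difference estimate of the composite's Jacobian:

import numpy as np

# g: R^2 -> R^2 and f: R^2 -> R, with hand-computed Jacobians.
def g(x):
    return np.array([x[0] * x[1], np.sin(x[0])])

def Jg(x):
    return np.array([[x[1], x[0]],
                     [np.cos(x[0]), 0.0]])

def f(y):
    return np.array([y[0] ** 2 + y[1]])

def Jf(y):
    return np.array([[2 * y[0], 1.0]])

a = np.array([0.7, -1.2])
chain = Jf(g(a)) @ Jg(a)    # chain rule: J_{f o g}(a) = J_f(g(a)) * J_g(a)

eps = 1e-6                  # central-difference estimate of the same Jacobian
numeric = np.array([[(f(g(a + eps * e)) - f(g(a - eps * e)))[0] / (2 * eps)
                     for e in np.eye(2)]])
print(chain, numeric)       # the two rows agree to numerical precision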
Example
Given where and , determine the value of and using the chain rule.
and
Higher derivatives of multivariable functions
Faà di Bruno's formula for higher-order derivatives of single-variable functions generalizes to the multivariable case. If is a function of as above, then the second derivative of is:
Further generalizations
All extensions of calculus have a chain rule. In most of these, the formula remains the same, though the meaning of that formula may be vastly different.
One generalization is to manifolds. In this situation, the chain rule represents the fact that the derivative of is the composite of the derivative of and the derivative of . This theorem is an immediate consequence of the higher dimensional chain rule given above, and it has exactly the same formula.
The chain rule is also valid for Fréchet derivatives in Banach spaces. The same formula holds as before. This case and the previous one admit a simultaneous generalization to Banach manifolds.
In differential algebra, the derivative is interpreted as a morphism of modules of Kähler differentials. A ring homomorphism of commutative rings determines a morphism of Kähler differentials which sends an element to , the exterior differential of . The formula holds in this context as well.
The common feature of these examples is that they are expressions of the idea that the derivative is part of a functor. A functor is an operation on spaces and functions between them. It associates to each space a new space and to each function between two spaces a new function between the corresponding new spaces. In each of the above cases, the functor sends each space to its tangent bundle and it sends each function to its derivative. For example, in the manifold case, the derivative sends a -manifold to a -manifold (its tangent bundle) and a -function to its total derivative. There is one requirement for this to be a functor, namely that the derivative of a composite must be the composite of the derivatives. This is exactly the formula .
There are also chain rules in stochastic calculus. One of these, Itō's lemma, expresses the composite of an Itō process (or more generally a semimartingale) dXt with a twice-differentiable function f. In Itō's lemma, the derivative of the composite function depends not only on dXt and the derivative of f but also on the second derivative of f. The dependence on the second derivative is a consequence of the non-zero quadratic variation of the stochastic process, which broadly speaking means that the process can move up and down in a very rough way. This variant of the chain rule is not an example of a functor because the two functions being composed are of different types.
| Mathematics | Differential calculus | null |
6115 | https://en.wikipedia.org/wiki/P%20versus%20NP%20problem | P versus NP problem | The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved.
Here, "quickly" means an algorithm that solves the task and runs in polynomial time (as opposed to, say, exponential time) exists, meaning the task completion time is bounded above by a polynomial function on the size of the input to the algorithm. The general class of questions that some algorithm can answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if provided with an answer, it can be verified quickly. The class of questions where an answer can be verified in polynomial time is "NP", standing for "nondeterministic polynomial time".
An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time.
The problem has been called the most important open problem in computer science. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields.
It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution.
Example
Consider the following yes/no problem: given an incomplete Sudoku grid of size n² × n², is there at least one legal solution where every row, column, and square contains the integers 1 through n²? It is straightforward to verify "yes" instances of this generalized Sudoku problem given a candidate solution. However, it is not known whether there is a polynomial-time algorithm that can correctly answer "yes" or "no" to all instances of this problem. Therefore, generalized Sudoku is in NP (quickly verifiable), but may or may not be in P (quickly solvable). (It is necessary to consider a generalized version of Sudoku, as any fixed size Sudoku has only a finite number of possible grids. In this case the problem is in P, as the answer can be found by table lookup.)
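The "quickly verifiable" half of this statement can be made concrete with a small sketch: the verifier below checks a completed grid in time polynomial in the grid size (a full verifier for the decision problem would also check, again in polynomial time, that the completion agrees with the given clues). The 4 × 4 instance is chosen only for illustration.

# Polynomial-time verifier for a completed generalized Sudoku grid.
# grid is an n^2 x n^2 list of lists containing integers 1..n^2.
def is_valid_solution(grid, n):
    size = n * n
    required = set(range(1, size + 1))
    rows_ok = all(set(row) == required for row in grid)
    cols_ok = all(set(grid[r][c] for r in range(size)) == required
                  for c in range(size))
    boxes_ok = all(
        set(grid[br + r][bc + c] for r in range(n) for c in range(n)) == required
        for br in range(0, size, n) for bc in range(0, size, n))
    return rows_ok and cols_ok and boxes_ok

# Example: the small 4 x 4 case with n = 2.
solution = [[1, 2, 3, 4],
            [3, 4, 1, 2],
            [2, 1, 4, 3],
            [4, 3, 2, 1]]
print(is_valid_solution(solution, 2))  # True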
History
The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" (and independently by Leonid Levin in 1973).
Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the NSA, speculating that cracking a sufficiently complex code would require time exponential in the length of the key. If proved (and Nash was suitably skeptical), this would imply what is now called P ≠ NP, since a proposed key can be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time, and pointed out one of the most important consequences—that if so, then the discovery of mathematical proofs could be automated.
Context
The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem).
In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is deterministic (given the computer's present state and any inputs, there is only one possible action that the computer might take) and sequential (it performs actions one after the other).
In this theory, the class P consists of all decision problems (defined below) solvable on a deterministic sequential machine in a duration polynomial in the size of the input; the class NP consists of all decision problems whose positive solutions are verifiable in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine. Clearly, P ⊆ NP. Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes:
Is P equal to NP?
Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions. Confidence that P ≠ NP has been increasing – in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, the 2019 answers became 99% believed P ≠ NP. These polls do not imply whether P = NP, Gasarch himself stated: "This does not bring us any closer to solving P=?NP or to knowing when it will be solved, but it attempts to be an objective report on the subjective opinion of this era."
NP-completeness
To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are problems that any other NP problem is reducible to in polynomial time and whose solution is still verifiable in polynomial time. That is, any NP problem can be transformed into any NP-complete problem. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP.
NP-hard problems are those at least as hard as NP problems; i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP; i.e., they need not have solutions verifiable in polynomial time.
For instance, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, so any instance of any problem in NP can be transformed mechanically into a Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems are NP-complete, and no fast algorithm for any of them is known.
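The contrast between verification and search can be illustrated with a small sketch (the CNF formula below is chosen only for illustration): checking a candidate assignment takes time polynomial in the formula size, while the naive search examines up to 2^n assignments.

from itertools import product

# CNF formula as a list of clauses; each literal is (variable_index, is_positive).
formula = [[(0, True), (1, False)],   # (x0 OR NOT x1) AND
           [(1, True), (2, True)],    # (x1 OR x2) AND
           [(0, False), (2, False)]]  # (NOT x0 OR NOT x2)

def verify(assignment, formula):
    # Polynomial-time check that an assignment satisfies every clause.
    return all(any(assignment[v] == pos for v, pos in clause) for clause in formula)

def brute_force(formula, n_vars):
    # Exhaustive search: up to 2^n candidate assignments in the worst case.
    for bits in product([False, True], repeat=n_vars):
        if verify(bits, formula):
            return bits
    return None

print(verify((True, True, False), formula))  # True: quick certificate check
print(brute_force(formula, 3))               # finds a satisfying assignment by search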
From the definition alone it is unintuitive that NP-complete problems exist; however, a trivial NP-complete problem can be formulated as follows: given a Turing machine M guaranteed to halt in polynomial time, does a polynomial-size input that M will accept exist? It is in NP because (given an input) it is simple to check whether M accepts the input by simulating M; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine M that takes the solution to be verified as input. Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists.
The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete, proof by reduction provided a simpler way to show that many other problems are also NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to complete Latin squares in polynomial time. This in turn gives a solution to the problem of partitioning tri-partite graphs into triangles, which could then be used to find solutions for the special case of SAT known as 3-SAT, which then provides a solution for general Boolean satisfiability. So a polynomial-time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial time solution of satisfiability, which in turn can be used to solve any other NP-problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem".
Harder problems
Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have exponential running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in O(2^p(n)) time, where p(n) is a polynomial function of n. A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an N × N board and similar problems for other board games.
The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974 that every algorithm that decides the truth of Presburger statements of length n has a runtime of at least 2^(2^(cn)) for some constant c. Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem. They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all.
It is also possible to consider questions other than decision problems. One such class, consisting of counting problems, is called #P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?". Clearly, a #P problem must be at least as hard as the corresponding NP problem, since a count of solutions immediately tells if at least one solution exists, if the count is greater than zero. Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial time solution to any of them would allow a polynomial time solution to all other #P problems.
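To make the contrast between deciding and counting concrete, here is a small illustrative sketch in Python (our own, not from the article; the list-of-terms encoding and the function names are our choices). Deciding whether a formula in disjunctive normal form (DNF) is satisfiable is easy, since a single term with no contradictory pair of literals suffices, while counting its satisfying assignments is #P-complete; the brute-force counter below therefore takes time exponential in the number of variables.

# A DNF formula is a list of terms; each term is a list of literals,
# where literal +i means "variable i is true" and -i means "variable i is false".

def dnf_satisfiable(terms):
    """Decision version: satisfiable iff some term has no contradictory literal pair (polynomial time)."""
    return any(all(-lit not in term for lit in term) for term in terms)

def dnf_count_models(terms, num_vars):
    """Counting version (#P): brute force over all 2**num_vars assignments (exponential time)."""
    count = 0
    for mask in range(2 ** num_vars):
        assignment = {i + 1: bool(mask >> i & 1) for i in range(num_vars)}
        if any(all(assignment[abs(l)] == (l > 0) for l in term) for term in terms):
            count += 1
    return count

formula = [[1, -2], [2, 3]]          # (x1 AND NOT x2) OR (x2 AND x3)
print(dnf_satisfiable(formula))      # True, found without enumerating assignments
print(dnf_count_models(formula, 3))  # 4 satisfying assignments out of 8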
Problems in NP not known to be in P or NP-complete
In 1975, Richard E. Ladner showed that if P ≠ NP, then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem, and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai, runs in quasi-polynomial time.
The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The most efficient known algorithm for integer factorization is the general number field sieve, which takes expected time O(exp(((64n/9)^(1/3)) (log n)^(2/3))) to factor an n-bit integer. The best known quantum algorithm for this problem, Shor's algorithm, runs in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes.
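As an informal illustration (our own sketch, not part of the article), factoring by search is slow in the worst case, while checking a claimed factor, i.e. verifying a certificate, takes a single division; this asymmetry is what places factoring in NP.

def trial_division_factor(x):
    """Find a nontrivial factor by brute-force search; exponential in the bit length of x in the worst case."""
    d = 2
    while d * d <= x:
        if x % d == 0:
            return d
        d += 1
    return None  # x is prime

def verify_factor(x, d):
    """Verifier: a single modular reduction checks the certificate d in polynomial time."""
    return 1 < d < x and x % d == 0

x = 2_199_023_255_551            # a 41-bit integer (2**41 - 1), known to be composite
d = trial_division_factor(x)     # the search
print(d, verify_factor(x, d))    # prints a nontrivial factor and True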
Does P mean "easy"?
All of the above discussion has assumed that P means "easy" and "not in P" means "difficult", an assumption known as Cobham's thesis. It is a common assumption in complexity theory, but there are caveats.
First, it can be false in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, rendering it impractical. For example, the problem of deciding whether a graph G contains H as a minor, where H is fixed, can be solved in a running time of O(n^2), where n is the number of vertices in G. However, the big O notation hides a constant that depends superexponentially on H. The constant is greater than 2↑↑(2↑↑(2↑↑(h/2))) in Knuth's up-arrow notation, where h is the number of vertices in H.
On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem, and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms.
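As a sketch of why worst-case hardness need not preclude practical algorithms (our own illustration; the article gives no code), the 0/1 knapsack problem is NP-complete in general, yet the classic dynamic program below runs in O(n·W) time, which is fast whenever the numeric capacity W is modest, even though it is exponential in the bit length of W.

def knapsack_max_value(items, capacity):
    """items: list of (weight, value) pairs; returns the best achievable total value.
    Pseudo-polynomial dynamic programming: O(len(items) * capacity)."""
    best = [0] * (capacity + 1)
    for weight, value in items:
        for c in range(capacity, weight - 1, -1):   # iterate downwards so each item is used at most once
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

print(knapsack_max_value([(3, 4), (4, 5), (2, 3)], capacity=6))  # 8: take the items of weight 4 and 2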
Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms.
Reasons to believe P ≠ NP or P = NP
Cook provides a restatement of the problem in The P Versus NP Problem as "Does P = NP?" According to polls, most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3,000 important known NP-complete problems (see List of NP-complete problems). These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP = co-NP and P = PH.
It is also intuitively argued that the existence of problems that are hard to solve but whose solutions are easy to verify matches real-world experience.
On the other hand, some researchers believe that it is overconfident to believe P ≠ NP and that researchers should also explore proofs of P = NP. For example, in 2002 these statements were made:
DLIN vs NLIN
When one substitutes "linear time on a multitape Turing machine" for "polynomial time" in the definitions of P and NP, one obtains the classes DLIN and NLIN.
It is known that DLIN ≠ NLIN.
Consequences of solution
One of the reasons the problem attracts so much attention is the consequences of the possible answers. Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well.
P = NP
A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. The potential consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields.
It is also very possible that a proof would not lead to practical algorithms for NP-complete problems. The formulation of the problem does not require that the bounding polynomial be small or even specifically known. A non-constructive proof might show a solution exists without specifying either an algorithm to obtain it or a specific bound. Even if the proof is constructive, showing an explicit bounding polynomial and algorithmic details, if the polynomial is not very low-order the algorithm might not be sufficiently efficient in practice. In this case the initial proof would be mainly of interest to theoreticians, but the knowledge that polynomial time solutions are possible would surely spur research into better (and possibly practical) methods to achieve them.
A solution showing P = NP could upend the field of cryptography, which relies on certain problems being difficult. A constructive and efficient solution to an NP-complete problem such as 3-SAT would break most existing cryptosystems including:
Existing implementations of public-key cryptography, a foundation for many modern security applications such as secure financial transactions over the Internet.
Symmetric ciphers such as AES or 3DES, used for the encryption of communications data.
Cryptographic hashing, which underlies blockchain cryptocurrencies such as Bitcoin, and is used to authenticate software updates. For these applications, finding a pre-image that hashes to a given value must be difficult, ideally taking exponential time. If P = NP, then this can take polynomial time, through reduction to SAT.
These would need modification or replacement with information-theoretically secure solutions that do not assume P ≠ NP.
There are also enormous benefits that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete; making these problems efficiently solvable could considerably advance life sciences and biotechnology.
These changes could be insignificant compared to the revolution that efficiently solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics:
Similarly, Stephen Cook (assuming not only a proof, but a practically efficient algorithm) says:
Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance, Fermat's Last Theorem took over three centuries to prove. A method guaranteed to find a proof if a "reasonable" size proof exists, would essentially end this struggle.
Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof:
P ≠ NP
A proof of P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would represent a great advance in computational complexity theory and guide future research. It would demonstrate that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place.
P ≠ NP still leaves open the average-case complexity of hard problems in NP. For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question. These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds.
Results about difficulty of proof
Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are insufficient for answering the question, suggesting novel technical approaches are required.
As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into one of the following classifications, each of which is known to be insufficient to prove P ≠ NP: relativizing proofs, natural proofs, and algebrizing proofs.
These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results.
These barriers lead some computer scientists to suggest the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). An independence result could imply that either P ≠ NP and this is unprovable in (e.g.) ZFC, or that P = NP but it is unprovable in ZFC that any polynomial-time algorithms are correct. However, if the problem is undecidable even with much weaker assumptions extending the Peano axioms for integer arithmetic, then nearly polynomial-time algorithms exist for all NP problems. Therefore, assuming (as most complexity theorists do) some NP problems don't have efficient algorithms, proofs of independence with those techniques are impossible. This also implies proving independence from PA or ZFC with current techniques is no easier than proving all NP problems have efficient algorithms.
Logical characterizations
The P = NP problem can be restated as certain classes of logical statements, as a result of work in descriptive complexity.
Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P are expressible in first-order logic with the addition of a suitable least fixed-point combinator. Recursive functions can be defined with this and the order relation. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P.
Similarly, NP is the set of languages expressible in existential second-order logic—that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic. Thus, the question "is P a proper subset of NP" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?". The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH).
Polynomial-time algorithms
No known algorithm for an NP-complete problem runs in polynomial time. However, there are algorithms known for NP-complete problems that, if P = NP, run in polynomial time on accepting instances (although with enormous constants, making them impractical). These algorithms still do not qualify as polynomial time, because their running time on rejecting instances is not polynomial. The following algorithm, due to Levin (without any citation), is such an example. It correctly accepts the NP-complete language SUBSET-SUM, and it runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP:
// Algorithm that accepts the NP-complete language SUBSET-SUM.
//
// this is a polynomial-time algorithm if and only if P = NP.
//
// "Polynomial-time" means it returns "yes" in polynomial time when
// the answer should be "yes", and runs forever when it is "no".
//
// Input: S = a finite set of integers
// Output: "yes" if any subset of S adds up to 0.
// Runs forever with no output otherwise.
// Note: "Program number M" is the program obtained by
// writing the integer M in binary, then
// considering that string of bits to be a
// program. Every possible program can be
// generated this way, though most do nothing
// because of syntax errors.
FOR K = 1...∞
  FOR M = 1...K
    Run program number M for K steps with input S
    IF the program outputs a list of distinct integers
      AND the integers are all in S
      AND the integers sum to 0
    THEN
      OUTPUT "yes" and HALT
This is a polynomial-time algorithm accepting an NP-complete language only if P = NP. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as a semi-algorithm).
This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time is b bits long, the above algorithm will try at least 2^b − 1 other programs first.
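The pseudocode above can be rendered concretely. The sketch below is our own (Python callables stand in for "program number M"; a real implementation of Levin's construction would enumerate binary strings and interpret them as programs). It shows the dovetailing structure of the universal search for SUBSET-SUM: the certificate check is fast, and the search halts exactly on "yes" instances.

from itertools import count, combinations

def is_subset_sum_certificate(candidate, S):
    """Checks a proposed certificate in polynomial time."""
    return (len(candidate) > 0 and len(set(candidate)) == len(candidate)
            and all(x in S for x in candidate) and sum(candidate) == 0)

def enumerate_programs():
    """Stand-in for 'program number M': here, a stream of simple guessing procedures."""
    def make_guesser(m):
        def guess(S):
            for size in range(1, len(S) + 1):
                for idx, combo in enumerate(combinations(sorted(S), size)):
                    if idx == m:                      # the m-th guess of each size
                        yield list(combo)
        return guess
    return (make_guesser(m) for m in count())

def universal_subset_sum(S):
    """Dovetail over (K, M): run guesser M for K 'steps' and check each output.
    Returns a certificate when some non-empty subset of S sums to 0; may run forever otherwise."""
    guessers = []
    programs = enumerate_programs()
    for K in count(1):
        guessers.append(next(programs))
        for guess in guessers[:K]:
            for _, candidate in zip(range(K), guess(S)):
                if is_subset_sum_certificate(candidate, S):
                    return candidate

print(universal_subset_sum({-7, -3, 2, 5, 8}))  # [-7, 2, 5]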
Formal definitions
P and NP
A decision problem is a problem that takes as input some string w over an alphabet Σ, and outputs "yes" or "no". If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that produces the correct answer for any input string of length n in at most cn^k steps, where k and c are constants independent of the input string, then we say that the problem can be solved in polynomial time and we place it in the class P. Formally, P is the set of languages that can be decided by a deterministic polynomial-time Turing machine. That is,
P = { L : L = L(M) for some deterministic polynomial-time Turing machine M },
where
L(M) = { w ∈ Σ* : M accepts w },
and a deterministic polynomial-time Turing machine is a deterministic Turing machine M that satisfies two conditions:
M halts on all inputs w; and
there exists k such that T_M(n) ∈ O(n^k), where O refers to the big O notation, T_M(n) = max{ t_M(w) : w ∈ Σ^n }, and t_M(w) is the number of steps M takes to halt on input w.
NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach uses the concept of certificate and verifier. Formally, NP is the set of languages with a finite alphabet and verifier that runs in polynomial time. The following defines a "verifier":
Let L be a language over a finite alphabet, Σ.
L ∈ NP if, and only if, there exists a binary relation R ⊆ Σ* × Σ* and a positive integer k such that the following two conditions are satisfied:
For all x ∈ Σ*, x ∈ L if and only if there exists y ∈ Σ* such that (x, y) ∈ R and |y| ∈ O(|x|^k); and
the language L_R = { x#y : (x, y) ∈ R } over Σ ∪ {#} is decidable by a deterministic Turing machine in polynomial time.
A Turing machine that decides L_R is called a verifier for L, and a y such that (x, y) ∈ R is called a certificate of membership of x in L.
Not all verifiers must be polynomial-time. However, for L to be in NP, there must be a verifier that runs in polynomial time.
Example
Let
COMPOSITE = { x ∈ ℕ : x = pq for some integers p, q > 1 }.
Whether a value of x is composite is equivalent to whether x is a member of COMPOSITE. It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations).
COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test.
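A short sketch of the certificate-and-verifier view for COMPOSITE (our own code, not from the article): the certificate for membership of x is a nontrivial divisor, and the verifier runs in time polynomial in the length of the binary representation of x.

def composite_verifier(x, certificate):
    """Accepts iff the certificate is a nontrivial divisor of x.
    One modular reduction: polynomial in the number of bits of x."""
    return 1 < certificate < x and x % certificate == 0

print(composite_verifier(91, 7))    # True: 91 = 7 * 13, so 91 is in COMPOSITE
print(composite_verifier(97, 7))    # False: 7 does not divide the prime 97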
NP-completeness
There are many equivalent ways of describing NP-completeness.
Let L be a language over a finite alphabet Σ.
L is NP-complete if, and only if, the following two conditions are satisfied:
L ∈ NP; and
any L′ in NP is polynomial-time reducible to L (written as L′ ≤_p L), where L′ ≤_p L if, and only if, the following two conditions are satisfied:
There exists f : Σ* → Σ* such that for all w in Σ* we have: w ∈ L′ if and only if f(w) ∈ L; and
there exists a polynomial-time Turing machine that halts with f(w) on its tape on any input w.
Alternatively, if L ∈ NP, and there is another NP-complete problem that can be polynomial-time reduced to L, then L is NP-complete. This is a common way of proving some new problem is NP-complete.
Claimed solutions
While the P versus NP problem is generally considered unsolved, many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger compiled a list of 116 purported proofs from 1986 to 2016, of which 61 were proofs of P = NP, 49 were proofs of P ≠ NP, and 6 proved other results, e.g. that the problem is undecidable. Some attempts at resolving P versus NP have received brief media attention, though these attempts have been refuted.
Popular culture
The film Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem.
In the sixth episode of The Simpsons seventh season "Treehouse of Horror VI", the equation P = NP is seen shortly after Homer accidentally stumbles into the "third dimension".
In the second episode of season 2 of Elementary, "Solve for X", Sherlock and Watson investigate the murders of mathematicians who were attempting to solve P versus NP.
Similar problems
The R vs. RE problem, where R is the analog of the class P and RE is the analog of NP. These classes are not equal, because undecidable but verifiable problems do exist; for example, Hilbert's tenth problem, which is RE-complete.
A similar problem exists in the theory of algebraic complexity: the VP vs. VNP problem. This problem has not been solved yet.
| Mathematics | Discrete mathematics | null |
6118 | https://en.wikipedia.org/wiki/Carnot%20heat%20engine | Carnot heat engine | A Carnot heat engine is a theoretical heat engine that operates on the Carnot cycle. The basic model for this engine was developed by Nicolas Léonard Sadi Carnot in 1824. The Carnot engine model was graphically expanded by Benoît Paul Émile Clapeyron in 1834 and mathematically explored by Rudolf Clausius in 1857, work that led to the fundamental thermodynamic concept of entropy. The Carnot engine is the most efficient heat engine which is theoretically possible. The efficiency depends only upon the absolute temperatures of the hot and cold heat reservoirs between which it operates.
A heat engine acts by transferring energy from a warm region to a cool region of space and, in the process, converting some of that energy to mechanical work. The cycle may also be reversed. The system may be worked upon by an external force, and in the process, it can transfer thermal energy from a cooler system to a warmer one, thereby acting as a refrigerator or heat pump rather than a heat engine.
Every thermodynamic system exists in a particular state. A thermodynamic cycle occurs when a system is taken through a series of different states, and finally returned to its initial state. In the process of going through this cycle, the system may perform work on its surroundings, thereby acting as a heat engine.
The Carnot engine is a theoretical construct, useful for exploring the efficiency limits of other heat engines. An actual Carnot engine, however, would be completely impractical to build.
Carnot's diagram
In the adjacent diagram, from Carnot's 1824 work, Reflections on the Motive Power of Fire, there are "two bodies A and B, kept each at a constant temperature, that of A being higher than that of B. These two bodies to which we can give, or from which we can remove the heat without causing their temperatures to vary, exercise the functions of two unlimited reservoirs of caloric. We will call the first the furnace and the second the refrigerator." Carnot then explains how we can obtain motive power, i.e., "work", by carrying a certain quantity of heat from body A to body B.
It also acts as a cooler and hence can also act as a refrigerator.
Modern diagram
The previous image shows the original piston-and-cylinder diagram used by Carnot in discussing his ideal engine. The figure at right shows a block diagram of a generic heat engine, such as the Carnot engine. In the diagram, the "working body" (system), a term introduced by Clausius in 1850, can be any fluid or vapor body through which heat Q can be introduced or transmitted to produce work. Carnot had postulated that the fluid body could be any substance capable of expansion, such as vapor of water, vapor of alcohol, vapor of mercury, a permanent gas, air, etc. Although in those early years, engines came in a number of configurations, typically QH was supplied by a boiler, wherein water was boiled over a furnace; QC was typically removed by a stream of cold flowing water in the form of a condenser located on a separate part of the engine. The output work, W, is transmitted by the movement of the piston as it is used to turn a crank-arm, which in turn was typically used to power a pulley so as to lift water out of flooded salt mines. Carnot defined work as "weight lifted through a height".
Carnot cycle
The Carnot cycle when acting as a heat engine consists of the following steps:
Reversible isothermal expansion of the gas at the "hot" temperature, T_H (isothermal heat addition or absorption). During this step (state 1 to state 2) the gas is allowed to expand, and it does work on the surroundings. The temperature of the gas (the system) does not change during the process, and thus the expansion is isothermal. The gas expansion is propelled by absorption of heat energy Q_H and of entropy ΔS_H = Q_H/T_H from the high temperature reservoir.
Isentropic (reversible adiabatic) expansion of the gas (isentropic work output). For this step (state 2 to state 3) the piston and cylinder are assumed to be thermally insulated, so they neither gain nor lose heat. The gas continues to expand, doing work on the surroundings and losing an equivalent amount of internal energy. The gas expansion causes it to cool to the "cold" temperature, T_C. The entropy remains unchanged.
Reversible isothermal compression of the gas at the "cold" temperature, T_C (isothermal heat rejection) (state 3 to state 4). Now the gas is exposed to the cold temperature reservoir while the surroundings do work on the gas by compressing it (such as through the return compression of a piston), causing an amount of waste heat Q_C < 0 (with the standard sign convention for heat) and of entropy ΔS_C = Q_C/T_C to flow out of the gas to the low temperature reservoir. (In magnitude, this is the same amount of entropy absorbed in step 1. The entropy decreases in isothermal compression since the multiplicity of the system decreases with the volume.) In terms of magnitude, the recompression work performed by the surroundings in this step is less than the work performed on the surroundings in step 1 because it occurs at a lower pressure due to the lower temperature (i.e., the resistance to compression is lower in step 3 than the force of expansion in step 1). The first law of thermodynamics, ΔU = Q − W (with W the work done by the gas), explains this behavior.
Isentropic compression of the gas (isentropic work input) (state 4 to state 1). Once again the piston and cylinder are assumed to be thermally insulated and the cold temperature reservoir is removed. During this step, the surroundings continue to do work to further compress the gas, and both the temperature and pressure rise now that the heat sink has been removed. This additional work increases the internal energy of the gas, compressing it and causing the temperature to rise to T_H. The entropy remains unchanged. At this point the gas is in the same state as at the start of step 1. (A numerical sketch of one such cycle for an ideal gas follows this list.)
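The following numerical sketch (our own illustration; the temperatures, mole number, and volume ratio are assumed values, not from Carnot's analysis) evaluates the four steps for one mole of an ideal gas and checks that the net work equals Q_H + Q_C and that the entropy taken from the hot reservoir balances the entropy given to the cold one.

import math

R = 8.314            # gas constant, J/(mol K)
n = 1.0              # moles (assumed)
T_hot, T_cold = 500.0, 300.0     # reservoir temperatures in kelvin (assumed)
expansion_ratio = 2.0            # V2 / V1 during the isothermal expansion (assumed)

# Step 1: isothermal expansion at T_hot
Q_hot = n * R * T_hot * math.log(expansion_ratio)     # heat absorbed equals work done by the gas
# Step 3: isothermal compression at T_cold (same volume ratio, hence the same log factor)
Q_cold = -n * R * T_cold * math.log(expansion_ratio)  # negative: heat rejected
# Steps 2 and 4: adiabatic, no heat exchanged; for an ideal gas their works cancel over the cycle

W_net = Q_hot + Q_cold
efficiency = W_net / Q_hot

print(f"Q_hot  = {Q_hot:.1f} J")
print(f"Q_cold = {Q_cold:.1f} J")
print(f"W_net  = {W_net:.1f} J, efficiency = {efficiency:.3f}")   # 0.400 = 1 - 300/500
print(f"entropy: {Q_hot/T_hot:.4f} J/K absorbed, {-Q_cold/T_cold:.4f} J/K rejected")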
Carnot's theorem
Carnot's theorem is a formal statement of this fact: No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between the same reservoirs.
Explanation
This maximum efficiency is defined as above:
η = W / Q_H = 1 − T_C / T_H, where
W is the work done by the system (energy exiting the system as work),
Q_H is the heat put into the system (heat energy entering the system),
T_C is the absolute temperature of the cold reservoir, and
T_H is the absolute temperature of the hot reservoir.
A corollary to Carnot's theorem states that: All reversible engines operating between the same heat reservoirs are equally efficient.
It is easily shown that the efficiency is maximum when the entire cyclic process is a reversible process. This means the total entropy of system and surroundings (the entropies of the hot furnace, the "working fluid" of the heat engine, and the cold sink) remains constant when the "working fluid" completes one cycle and returns to its original state. (In the general and more realistic case of an irreversible process, the total entropy of this combined system would increase.)
Since the "working fluid" comes back to the same state after one cycle, and entropy of the system is a state function, the change in entropy of the "working fluid" system is 0. Thus, it implies that the total entropy change of the furnace and sink is zero, for the process to be reversible and the efficiency of the engine to be maximum. This derivation is carried out in the next section.
The coefficient of performance (COP) of the heat engine is the reciprocal of its efficiency.
Efficiency of real heat engines
For a real heat engine, the total thermodynamic process is generally irreversible. The working fluid is brought back to its initial state after one cycle, and thus the change of entropy of the fluid system is 0, but the sum of the entropy changes in the hot and cold reservoir in this one cyclical process is greater than 0.
The internal energy of the fluid is also a state variable, so its total change in one cycle is 0. So the total work done by the system W is equal to the net heat put into the system, the sum of Q_H > 0 taken up and the waste heat Q_C < 0 given off: W = Q_H + Q_C.
For real engines, stages 1 and 3 of the Carnot cycle, in which heat is absorbed by the "working fluid" from the hot reservoir, and released by it to the cold reservoir, respectively, no longer remain ideally reversible, and there is a temperature differential between the temperature of the reservoir and the temperature of the fluid while heat exchange takes place.
During heat transfer from the hot reservoir at T_H to the fluid, the fluid would have a slightly lower temperature than T_H, and the process for the fluid may not necessarily remain isothermal.
Let ΔS_H be the total entropy change of the fluid in the process of intake of heat,
ΔS_H = ∫ dQ_H / T,
where the temperature of the fluid T is always slightly less than T_H in this process.
So, one would get: ΔS_H ≥ Q_H / T_H.
Similarly, at the time of heat rejection from the fluid to the cold reservoir one would have, for the magnitude of the total entropy change ΔS_C < 0 of the fluid in the process of expelling heat: −ΔS_C = ∫ (−dQ_C) / T ≤ −Q_C / T_C,
where, during this process of transfer of heat to the cold reservoir, the temperature of the fluid T is always slightly greater than T_C.
We have only considered the magnitude of the entropy change here. Since the total change of entropy of the fluid system for the cyclic process is 0, we must have ΔS_H + ΔS_C = 0, i.e. ΔS_H = −ΔS_C.
Combining the previous relations gives
Q_H / T_H ≤ ΔS_H = −ΔS_C ≤ −Q_C / T_C,
so that
Q_H / T_H ≤ −Q_C / T_C.
Since W = Q_H + Q_C, the efficiency of the engine is
η = W / Q_H = (Q_H + Q_C) / Q_H = 1 + Q_C / Q_H.
Dividing the inequality Q_H / T_H ≤ −Q_C / T_C by Q_H > 0 gives T_C / T_H ≤ −Q_C / Q_H, i.e. Q_C / Q_H ≤ −T_C / T_H, and hence
η = 1 + Q_C / Q_H ≤ 1 − T_C / T_H.
For the ideal Carnot engine the two isothermal steps are reversible: the fluid temperature equals the reservoir temperature during each heat exchange, so both inequalities become equalities and the efficiency is 1 − T_C / T_H. The two adiabatic steps exchange no heat, and because the volume ratios of the two isothermal steps are equal (a property of the pair of isentropic processes connecting them), the work of one adiabatic step is exactly undone by the other, one being work done by the system and the other work done on the system; the efficiency therefore concerns only the heat absorbed and the work obtained from it.
Hence,
η ≤ η_Carnot,
where η is the efficiency of the real engine, and η_Carnot = 1 − T_C / T_H is the efficiency of the Carnot engine working between the same two reservoirs at the temperatures T_H and T_C. For the Carnot engine, the entire process is reversible, and the inequality becomes an equality. Hence, the efficiency of the real engine is always less than that of the ideal Carnot engine.
The inequality Q_H / T_H ≤ −Q_C / T_C signifies that the total entropy of system and surroundings (the fluid and the two reservoirs) increases for the real engine, because (in a surroundings-based analysis) the entropy gain of the cold reservoir as −Q_C flows into it at the fixed temperature T_C is greater than the entropy loss of the hot reservoir as Q_H leaves it at its fixed temperature T_H. This inequality is essentially the statement of the Clausius theorem.
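A small numerical check of this inequality (our own sketch; the heat flows and temperatures are assumed values for a hypothetical irreversible engine): the measured heats must satisfy Q_H/T_H ≤ −Q_C/T_C, and the resulting efficiency stays below the Carnot value.

T_hot, T_cold = 500.0, 300.0          # reservoir temperatures in kelvin (assumed)
Q_hot, Q_cold = 2881.4, -2000.0       # heats exchanged by a hypothetical real engine, in joules (assumed)

eta_real = (Q_hot + Q_cold) / Q_hot
eta_carnot = 1.0 - T_cold / T_hot

print(Q_hot / T_hot <= -Q_cold / T_cold)          # True: the Clausius inequality holds for these numbers
print(round(eta_real, 3), round(eta_carnot, 3))   # 0.306 < 0.4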
According to the second theorem, "The efficiency of the Carnot engine is independent of the nature of the working substance".
The Carnot engine and Rudolf Diesel
In 1892 Rudolf Diesel patented an internal combustion engine inspired by the Carnot engine. Diesel knew a Carnot engine is an ideal that cannot be built, but he thought he had invented a working approximation. His principle was unsound, but in his struggle to implement it he developed a practical Diesel engine.
The conceptual problem was how to achieve isothermal expansion in an internal combustion engine, since burning fuel at the highest temperature of the cycle would only raise the temperature further. Diesel's patented solution was: having achieved the highest temperature just by compressing the air, to add a small amount of fuel at a controlled rate, such that heating caused by burning the fuel would be counteracted by cooling caused by air expansion as the piston moved. Hence all the heat from the fuel would be transformed into work during the isothermal expansion, as required by Carnot's theorem.
For the idea to work a small mass of fuel would have to be burnt in a huge mass of air. Diesel first proposed a working engine that would compress air to 250 atmospheres at , then cycle to one atmosphere at . However, this was well beyond the technological capabilities of the day, since it implied a compression ratio of 60:1. Such an engine, if it could have been built, would have had an efficiency of 73%. (In contrast, the best steam engines of his day achieved 7%.)
Accordingly, Diesel sought to compromise. He calculated that, were he to reduce the peak pressure to a less ambitious 90 atmospheres, he would sacrifice only 5% of the thermal efficiency. Seeking financial support, he published the "Theory and Construction of a Rational Heat Engine to Take the Place of the Steam Engine and All Presently Known Combustion Engines" (1893). Endorsed by scientific opinion, including Lord Kelvin, he won the backing of Krupp and . He clung to the Carnot cycle as a symbol. But years of practical work failed to achieve an isothermal combustion engine, nor could have done, since it requires such an enormous quantity of air that it cannot develop enough power to compress it. Furthermore, controlled fuel injection turned out to be no easy matter.
Even so, the Diesel engine slowly evolved over 25 years to become a practical high-compression air engine, its fuel injected near the end of the compression stroke and ignited by the heat of compression, capable by 1969 of 40% efficiency.
As a macroscopic construct
The Carnot heat engine is, ultimately, a theoretical construct based on an idealized thermodynamic system. On a practical human-scale level the Carnot cycle has proven a valuable model, as in advancing the development of the diesel engine. However, on a macroscopic scale limitations placed by the model's assumptions prove it impractical, and, ultimately, incapable of doing any work. As such, per Carnot's theorem, the Carnot engine may be thought as the theoretical limit of macroscopic scale heat engines rather than any practical device that could ever be built.
For example, for the isothermal expansion part of the Carnot cycle, the following infinitesimal conditions must be satisfied simultaneously at every step in the expansion:
The hot reservoir temperature TH is infinitesimally higher than the system gas temperature T so heat flow (energy transfer) from the hot reservoir to the gas is made without increasing T (via infinitesimal work on the surroundings by the gas as another energy transfer); if TH is significantly higher than T, then T may be not uniform through the gas so the system would deviate from thermal equilibrium as well as not being a reversible process (i.e. not a Carnot cycle) or T might increase noticeably so it would not be an isothermal process.
The force externally applied on the piston (opposite to the internal force on the piston by the gas) needs to be infinitesimally reduced externally. Without this assistance, it would not be possible to follow a gas PV (Pressure-Volume) curve downward at a constant T since following this curve means that the gas-to-piston force decreases (P decreases) as the volume expands (the piston moves outward). If this assistance is so strong that the volume expansion is significant, the system may deviate from thermal equilibrium, and the process fail to be reversible (and thus not a Carnot cycle).
Such "infinitesimal" requirements as these (and others) cause the Carnot cycle to take an infinite amount of time, rendering the production of work impossible.
Other practical requirements that make the Carnot cycle impractical to realize include fine control of the gas, and perfect thermal contact with the surroundings (including high and low temperature reservoirs).
| Physical sciences | Thermodynamics | Physics |
6122 | https://en.wikipedia.org/wiki/Continuous%20function | Continuous function | In mathematics, a continuous function is a function such that a small variation of the argument induces a small variation of the value of the function. This implies there are no abrupt changes in value, known as discontinuities. More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity.
Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology.
A stronger form of continuity is uniform continuity. In order theory, especially in domain theory, a related concept of continuity is Scott continuity.
As an example, the function denoting the height of a growing flower at time would be considered continuous. In contrast, the function denoting the amount of money in a bank account at time would be considered discontinuous since it "jumps" at each point in time when money is deposited or withdrawn.
History
A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy defined continuity of as follows: an infinitely small increment of the independent variable x always produces an infinitely small change of the dependent variable y (see e.g. Cours d'Analyse, p. 34). Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today (see microcontinuity). The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work wasn't published until the 1930s. Like Bolzano, Karl Weierstrass denied continuity of a function at a point c unless it was defined at and on both sides of c, but Édouard Goursat allowed the function to be defined only at and on one side of c, and Camille Jordan allowed it even if the function was defined only at c. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854.
Real functions
Definition
A real function that is a function from real numbers to real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve whose domain is the entire real line. A more mathematically rigorous definition is given below.
Continuity of real functions is usually defined in terms of limits. A function f with variable x is continuous at the real number c if the limit of f(x), as x tends to c, is equal to f(c).
There are several different definitions of the (global) continuity of a function, which depend on the nature of its domain.
A function is continuous on an open interval if the interval is contained in the function's domain and the function is continuous at every interval point. A function that is continuous on the interval (the whole real line) is often called simply a continuous function; one also says that such a function is continuous everywhere. For example, all polynomial functions are continuous everywhere.
A function is continuous on a semi-open or a closed interval if the interval is contained in the domain of the function, the function is continuous at every interior point of the interval, and the value of the function at each endpoint that belongs to the interval is the limit of the values of the function when the variable tends to the endpoint from the interior of the interval. For example, the function is continuous on its whole domain, which is the closed interval
Many commonly encountered functions are partial functions that have a domain formed by all real numbers, except some isolated points. Examples include the reciprocal function and the tangent function When they are continuous on their domain, one says, in some contexts, that they are continuous, although they are not continuous everywhere. In other contexts, mainly when one is interested in their behavior near the exceptional points, one says they are discontinuous.
A partial function is discontinuous at a point if the point belongs to the topological closure of its domain, and either the point does not belong to the domain of the function or the function is not continuous at the point. For example, the functions and are discontinuous at , and remain discontinuous whichever value is chosen for defining them at . A point where a function is discontinuous is called a discontinuity.
Using mathematical notation, several ways exist to define continuous functions in the three senses mentioned above.
Let f be a function defined on a subset D of the set ℝ of real numbers.
This subset D is the domain of f. Some possible choices include
D = ℝ: i.e., D is the whole set of real numbers; or, for a and b real numbers,
D = [a, b]: D is a closed interval, or
D = (a, b): D is an open interval.
In the case of the domain D being defined as an open interval, a and b do not belong to D, and the values f(a) and f(b) do not matter for continuity on D.
Definition in terms of limits of functions
The function f is continuous at some point c of its domain if the limit of f(x), as x approaches c through the domain of f, exists and is equal to f(c). In mathematical notation, this is written as
lim_{x→c} f(x) = f(c).
In detail this means three conditions: first, f has to be defined at c (guaranteed by the requirement that c is in the domain of f). Second, the limit in that equation has to exist. Third, the value of this limit must equal f(c).
(Here, we have assumed that the domain of f does not have any isolated points.)
Definition in terms of neighborhoods
A neighborhood of a point c is a set that contains, at least, all points within some fixed distance of c. Intuitively, a function is continuous at a point c if the range of f over the neighborhood of c shrinks to a single point f(c) as the width of the neighborhood around c shrinks to zero. More precisely, a function f is continuous at a point c of its domain if, for any neighborhood N_1 of f(c), there is a neighborhood N_2 of c in its domain such that f(x) ∈ N_1 whenever x ∈ N_2.
As neighborhoods are defined in any topological space, this definition of a continuous function applies not only for real functions but also when the domain and the codomain are topological spaces and is thus the most general definition. It follows that a function is automatically continuous at every isolated point of its domain. For example, every real-valued function on the integers is continuous.
Definition in terms of limits of sequences
One can instead require that for any sequence (x_n) of points in the domain which converges to c, the corresponding sequence (f(x_n)) converges to f(c). In mathematical notation: for every sequence (x_n) in the domain with lim_{n→∞} x_n = c, we have lim_{n→∞} f(x_n) = f(c).
Weierstrass and Jordan definitions (epsilon–delta) of continuous functions
Explicitly including the definition of the limit of a function, we obtain a self-contained definition: Given a function f : D → ℝ as above and an element c of the domain D, f is said to be continuous at the point c when the following holds: For any positive real number ε, however small, there exists some positive real number δ such that for all x in the domain of f with |x − c| < δ, the value of f(x) satisfies |f(x) − f(c)| < ε.
Alternatively written, continuity of f : D → ℝ at c ∈ D means that for every ε > 0 there exists a δ > 0 such that for all x ∈ D: |x − c| < δ implies |f(x) − f(c)| < ε.
More intuitively, we can say that if we want to get all the f(x) values to stay in some small neighborhood around f(c), we need to choose a small enough neighborhood for the x values around c. If we can do that no matter how small the f(x) neighborhood is, then f is continuous at c.
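As an informal numerical illustration of the quantifier structure "for every ε there exists δ" (our own sketch; the function, the point, and the tolerances are arbitrary choices, and sampling only suggests a δ rather than proving continuity):

def find_delta(f, c, eps, start=1.0, shrink=0.5, samples=1000, attempts=60):
    """Return a delta for which all sampled x with |x - c| <= delta satisfy |f(x) - f(c)| < eps."""
    delta = start
    for _ in range(attempts):
        xs = [c - delta + 2 * delta * k / samples for k in range(samples + 1)]
        if all(abs(f(x) - f(c)) < eps for x in xs):
            return delta
        delta *= shrink
    return None

f = lambda x: x * x
print(find_delta(f, c=2.0, eps=0.1))   # prints 0.015625 with these defaults; any smaller delta also works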
In modern terms, this is generalized by the definition of continuity of a function with respect to a basis for the topology, here the metric topology.
Weierstrass had required that the interval (c − δ, c + δ) be entirely within the domain D, but Jordan removed that restriction.
Definition in terms of control of the remainder
In proofs and numerical analysis, we often need to know how fast limits are converging, or in other words, control of the remainder. We can formalize this to a definition of continuity.
A function C : [0, ∞) → [0, ∞] is called a control function if
C is non-decreasing and
inf_{δ > 0} C(δ) = 0.
A function f : D → ℝ is C-continuous at c if there exists such a neighbourhood N(c) that
|f(x) − f(c)| ≤ C(|x − c|) for all x ∈ D ∩ N(c).
A function is continuous in c if it is C-continuous for some control function C.
This approach leads naturally to refining the notion of continuity by restricting the set of admissible control functions. For a given set of control functions 𝒞, a function is 𝒞-continuous if it is C-continuous for some C ∈ 𝒞. For example, the Lipschitz and Hölder continuous functions of exponent α below are defined by the sets of control functions
𝒞_Lipschitz = { C : C(δ) = K δ, K > 0 }, respectively
𝒞_Hölder = { C : C(δ) = K δ^α, K > 0 }.
Definition using oscillation
Continuity can also be defined in terms of oscillation: a function f is continuous at a point x_0 if and only if its oscillation at that point is zero; in symbols, ω_f(x_0) = 0. A benefit of this definition is that it quantifies discontinuity: the oscillation gives how much the function is discontinuous at a point.
This definition is helpful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than ε (hence a G_δ set) – and gives a rapid proof of one direction of the Lebesgue integrability condition.
The oscillation is equivalent to the ε–δ definition by a simple re-arrangement and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given ε_0 there is no δ that satisfies the ε–δ definition, then the oscillation is at least ε_0, and conversely if for every ε there is a desired δ, the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space.
Definition using the hyperreals
Cauchy defined the continuity of a function in the following intuitive terms: an infinitesimal change in the independent variable corresponds to an infinitesimal change of the dependent variable (see Cours d'analyse, page 34). Non-standard analysis is a way of making this mathematically rigorous. The real line is augmented by adding infinite and infinitesimal numbers to form the hyperreal numbers. In nonstandard analysis, continuity can be defined as follows.
A real-valued function f is continuous at x if its natural extension to the hyperreals has the property that for every infinitesimal dx, the difference f(x + dx) − f(x) is infinitesimal (see microcontinuity). In other words, an infinitesimal increment of the independent variable always produces an infinitesimal change of the dependent variable, giving a modern expression to Augustin-Louis Cauchy's definition of continuity.
Construction of continuous functions
Checking the continuity of a given function can be simplified by checking one of the above defining properties for the building blocks of the given function. It is straightforward to show that the sum of two functions, continuous on some domain, is also continuous on this domain. Given
f, g : D → ℝ continuous,
then the sum of functions s = f + g
(defined by s(x) = f(x) + g(x) for all x ∈ D) is continuous in D.
The same holds for the product of functions, p = f · g
(defined by p(x) = f(x) · g(x) for all x ∈ D),
which is continuous in D.
Combining the above preservations of continuity and the continuity of constant functions and of the identity function I(x) = x, one arrives at the continuity of all polynomial functions, such as the one pictured on the right.
In the same way, it can be shown that the reciprocal of a continuous function, r = 1/f
(defined by r(x) = 1/f(x) for all x ∈ D such that f(x) ≠ 0),
is continuous in D \ {x : f(x) = 0}.
This implies that, excluding the roots of g, the quotient of continuous functions, q = f/g
(defined by q(x) = f(x)/g(x) for all x ∈ D such that g(x) ≠ 0),
is also continuous on D \ {x : g(x) = 0}.
For example, the function (pictured)
is defined for all real numbers and is continuous at every such point. Thus, it is a continuous function. The question of continuity at does not arise since is not in the domain of There is no continuous function that agrees with for all
Since the function sine is continuous on all reals, the sinc function G(x) = sin(x)/x is defined and continuous for all real x ≠ 0. However, unlike the previous example, G can be extended to a continuous function on all real numbers, by defining the value G(0) to be 1, which is the limit of G(x) when x approaches 0, i.e.,
G(0) = lim_{x→0} sin(x)/x = 1.
Thus, by setting
G(0) = 1,
the sinc function becomes a continuous function on all real numbers. The term removable singularity is used in such cases, when (re)defining values of a function to coincide with the appropriate limits makes the function continuous at specific points.
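A short sketch of this repair in code (our own illustration): the extended function returns the limiting value 1 at 0 and sin(x)/x elsewhere.

import math

def sinc_extended(x):
    """sin(x)/x extended continuously to x = 0 by its limit value 1."""
    return 1.0 if x == 0 else math.sin(x) / x

print(sinc_extended(0.0))    # 1.0, the limit of sin(x)/x as x -> 0
print(sinc_extended(1e-8))   # very close to 1.0, consistent with continuity at 0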
A more involved construction of continuous functions is the function composition. Given two continuous functions
g : D_g → R_g ⊆ D_f and f : D_f → ℝ,
their composition, denoted as c = f ∘ g : D_g → ℝ,
and defined by c(x) = f(g(x)) for all x ∈ D_g, is continuous.
This construction allows stating, for example, that
is continuous for all
Examples of discontinuous functions
An example of a discontinuous function is the Heaviside step function H, defined by
H(x) = 1 if x ≥ 0, and H(x) = 0 if x < 0.
Pick for instance ε = 1/2. Then there is no δ-neighborhood around x = 0, i.e. no open interval (−δ, δ) with δ > 0, that will force all the H(x) values to be within the ε-neighborhood of H(0), i.e. within (1/2, 3/2). Intuitively, we can think of this type of discontinuity as a sudden jump in function values.
Similarly, the signum or sign function
sgn(x) = 1 for x > 0, sgn(x) = 0 for x = 0, and sgn(x) = −1 for x < 0,
is discontinuous at x = 0 but continuous everywhere else. Yet another example: the function
is continuous everywhere apart from x = 0.
Besides plausible continuities and discontinuities like above, there are also functions with a behavior, often coined pathological, for example, Thomae's function,
is continuous at all irrational numbers and discontinuous at all rational numbers. In a similar vein, Dirichlet's function, the indicator function for the set of rational numbers,
is nowhere continuous.
Properties
A useful lemma
Let f(x) be a function that is continuous at a point x_0, and y_0 be a value such that f(x_0) ≠ y_0. Then f(x) ≠ y_0 throughout some neighbourhood of x_0.
Proof: By the definition of continuity, take ε = |y_0 − f(x_0)| / 2 > 0; then there exists δ > 0 such that |f(x) − f(x_0)| < |y_0 − f(x_0)| / 2 whenever |x − x_0| < δ.
Suppose there is a point in the neighbourhood |x − x_0| < δ for which f(x) = y_0; then we have the contradiction |f(x_0) − y_0| < |f(x_0) − y_0| / 2.
Intermediate value theorem
The intermediate value theorem is an existence theorem, based on the real number property of completeness, and states:
If the real-valued function f is continuous on the closed interval [a, b] and k is some number between f(a) and f(b), then there is some number c in [a, b] such that f(c) = k.
For example, if a child grows from 1 m to 1.5 m between the ages of two and six years, then, at some time between two and six years of age, the child's height must have been 1.25 m.
As a consequence, if f is continuous on [a, b] and f(a) and f(b) differ in sign, then, at some point c in [a, b], f(c) must equal zero.
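The intermediate value theorem is what justifies the bisection method for root finding. The sketch below is our own (the sample function is an arbitrary choice); it assumes only that f is continuous on [a, b] and that f(a) and f(b) differ in sign.

def bisect_root(f, a, b, tol=1e-10):
    """Locate a zero of a continuous f on [a, b], given f(a) and f(b) of opposite signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must differ in sign"
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:
            b, fb = m, fm      # a zero lies in [a, m]
        else:
            a, fa = m, fm      # a zero lies in [m, b]
    return (a + b) / 2

print(bisect_root(lambda x: x**3 - 2, 0.0, 2.0))   # approximately 1.25992, the cube root of 2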
Extreme value theorem
The extreme value theorem states that if a function f is defined on a closed interval [a, b] (or any closed and bounded set) and is continuous there, then the function attains its maximum, i.e. there exists c ∈ [a, b] with f(c) ≥ f(x) for all x ∈ [a, b]. The same is true of the minimum of f. These statements are not, in general, true if the function is defined on an open interval (or any set that is not both closed and bounded), as, for example, the continuous function defined on the open interval (0,1) does not attain a maximum, being unbounded above.
Relation to differentiability and integrability
Every differentiable function
f : (a, b) → ℝ
is continuous, as can be shown. The converse does not hold: for example, the absolute value function
f(x) = |x|
is everywhere continuous. However, it is not differentiable at x = 0 (but is so everywhere else). Weierstrass's function is also everywhere continuous but nowhere differentiable.
The derivative f′(x) of a differentiable function f(x) need not be continuous. If f′(x) is continuous, f(x) is said to be continuously differentiable. The set of such functions is denoted C^1((a, b)). More generally, the set of functions
f : Ω → ℝ
(from an open interval (or open subset of ℝ) Ω to the reals) such that f is n times differentiable and such that the n-th derivative of f is continuous is denoted C^n(Ω). See differentiability class. In the field of computer graphics, properties related (but not identical) to C^0, C^1, C^2 are sometimes called G^0 (continuity of position), G^1 (continuity of tangency), and G^2 (continuity of curvature); see Smoothness of curves and surfaces.
Every continuous function
f : [a, b] → ℝ
is integrable (for example in the sense of the Riemann integral). The converse does not hold, as the (integrable but discontinuous) sign function shows.
Pointwise and uniform limits
Given a sequence
f_1, f_2, ... : I → ℝ
of functions such that the limit
f(x) := lim_{n→∞} f_n(x)
exists for all x ∈ I, the resulting function f is referred to as the pointwise limit of the sequence of functions (f_n). The pointwise limit function need not be continuous, even if all functions f_n are continuous, as the animation at the right shows. However, f is continuous if all functions f_n are continuous and the sequence converges uniformly, by the uniform convergence theorem. This theorem can be used to show that the exponential functions, logarithms, square root function, and trigonometric functions are continuous.
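A standard illustration, sketched in code (our own): f_n(x) = x^n on [0, 1] is continuous for every n, but the pointwise limit jumps from 0 to 1 at x = 1, so the convergence cannot be uniform.

def f_n(n, x):
    """The n-th function of the sequence, continuous on [0, 1]."""
    return x ** n

def pointwise_limit(x):
    """Limit of x**n as n grows: 0 for 0 <= x < 1, and 1 at x = 1 (discontinuous at 1)."""
    return 1.0 if x == 1.0 else 0.0

for x in (0.5, 0.9, 0.99, 1.0):
    print(x, f_n(200, x), pointwise_limit(x))   # f_200 is already small except near x = 1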
Directional Continuity
Discontinuous functions may be discontinuous in a restricted way, giving rise to the concept of directional continuity (or right- and left-continuous functions) and semi-continuity. Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right. Formally, f is said to be right-continuous at the point c if the following holds: For any number ε > 0, however small, there exists some number δ > 0 such that for all x in the domain with c < x < c + δ, the value of f(x) will satisfy |f(x) − f(c)| < ε.
This is the same condition as for continuous functions, except it is required to hold for x strictly larger than c only. Requiring it instead for all x with c − δ < x < c yields the notion of left-continuous functions. A function is continuous if and only if it is both right-continuous and left-continuous.
Semicontinuity
A function f is upper semi-continuous if, roughly, any jumps that might occur only go down, but not up. That is, for any ε > 0, there exists some number δ > 0 such that for all x in the domain with |x − c| < δ, the value of f(x) satisfies f(x) ≤ f(c) + ε.
The reverse condition is lower semi-continuity.
Continuous functions between metric spaces
The concept of continuous real-valued functions can be generalized to functions between metric spaces. A metric space is a set X equipped with a function (called metric) d_X that can be thought of as a measurement of the distance of any two elements in X. Formally, the metric is a function
d_X : X × X → ℝ
that satisfies a number of requirements, notably the triangle inequality. Given two metric spaces (X, d_X) and (Y, d_Y) and a function
f : X → Y,
then f is continuous at the point c ∈ X (with respect to the given metrics) if for any positive real number ε, there exists a positive real number δ such that all x ∈ X satisfying d_X(x, c) < δ will also satisfy d_Y(f(x), f(c)) < ε. As in the case of real functions above, this is equivalent to the condition that for every sequence (x_n) in X with limit lim x_n = c, we have lim f(x_n) = f(c). The latter condition can be weakened as follows: f is continuous at the point c if and only if for every convergent sequence (x_n) in X with limit c, the sequence (f(x_n)) is a Cauchy sequence, and c is in the domain of f.
The set of points at which a function between metric spaces is continuous is a G_δ set – this follows from the ε–δ definition of continuity.
This notion of continuity is applied, for example, in functional analysis. A key statement in this area says that a linear operator
T : V → W
between normed vector spaces V and W (which are vector spaces equipped with a compatible norm, denoted ‖x‖) is continuous if and only if it is bounded, that is, there is a constant K such that
‖T(x)‖ ≤ K ‖x‖
for all x ∈ V.
Uniform, Hölder and Lipschitz continuity
The concept of continuity for functions between metric spaces can be strengthened in various ways by limiting the way depends on and c in the definition above. Intuitively, a function f as above is uniformly continuous if the does
not depend on the point c. More precisely, it is required that for every real number there exists such that for every with we have that Thus, any uniformly continuous function is continuous. The converse does not generally hold but holds when the domain space X is compact. Uniformly continuous maps can be defined in the more general situation of uniform spaces.
A function is Hölder continuous with exponent α (a real number) if there is a constant K such that for all b, c ∈ X the inequality
d_Y(f(b), f(c)) ≤ K · (d_X(b, c))^α
holds. Any Hölder continuous function is uniformly continuous. The particular case α = 1 is referred to as Lipschitz continuity. That is, a function is Lipschitz continuous if there is a constant K such that the inequality
d_Y(f(b), f(c)) ≤ K · d_X(b, c)
holds for any b, c ∈ X. The Lipschitz condition occurs, for example, in the Picard–Lindelöf theorem concerning the solutions of ordinary differential equations.
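A hedged numerical sketch (added here for illustration): the square root function on [0, 1] satisfies the Hölder condition with exponent α = 1/2 and constant K = 1, but it is not Lipschitz continuous, because its difference quotients blow up near 0.

```python
import math, random

# sqrt on [0, 1] is Hölder continuous with exponent 1/2 (K = 1), but not Lipschitz.
random.seed(0)
pairs = [(random.random(), random.random()) for _ in range(100_000)]

holder_ratio = max(abs(math.sqrt(b) - math.sqrt(a)) / abs(b - a) ** 0.5
                   for a, b in pairs if a != b)
lipschitz_ratio = max(abs(math.sqrt(b) - math.sqrt(a)) / abs(b - a)
                      for a, b in pairs if a != b)

print(f"max |sqrt(b)-sqrt(a)| / |b-a|^0.5 = {holder_ratio:.3f}  (never exceeds 1)")
print(f"max |sqrt(b)-sqrt(a)| / |b-a|     = {lipschitz_ratio:.3f}  (unbounded as points approach 0)")
```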
Continuous functions between topological spaces
Another, more abstract, notion of continuity is the continuity of functions between topological spaces in which there generally is no formal notion of distance, as there is in the case of metric spaces. A topological space is a set X together with a topology on X, which is a set of subsets of X satisfying a few requirements with respect to their unions and intersections that generalize the properties of the open balls in metric spaces while still allowing one to talk about the neighborhoods of a given point. The elements of a topology are called open subsets of X (with respect to the topology).
A function
f : X → Y
between two topological spaces X and Y is continuous if for every open set V ⊆ Y, the inverse image
f⁻¹(V) = {x ∈ X | f(x) ∈ V}
is an open subset of X. That is, f is a function between the sets X and Y (not on the elements of the topology), but the continuity of f depends on the topologies used on X and Y.
This is equivalent to the condition that the preimages of the closed sets (which are the complements of the open subsets) in Y are closed in X.
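For finite topological spaces the preimage criterion can be checked directly. The following Python sketch (an added illustration; the spaces and map are made up for the example) tests whether the preimage of every open set is open.

```python
# Continuity check for a map between finite topological spaces (illustrative sketch):
# f is continuous iff the preimage of every open set of Y is open in X.
X = {1, 2, 3}
tau_X = [set(), {1}, {1, 2}, {1, 2, 3}]     # a topology on X
Y = {"a", "b"}
tau_Y = [set(), {"a"}, {"a", "b"}]          # the Sierpinski topology on Y

f = {1: "a", 2: "a", 3: "b"}                # the map X -> Y

def preimage(f, V):
    return {x for x in f if f[x] in V}

def is_continuous(f, tau_X, tau_Y):
    return all(preimage(f, V) in tau_X for V in tau_Y)

print(is_continuous(f, tau_X, tau_Y))       # True: preimages {}, {1,2}, {1,2,3} are all open
```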
An extreme example: if a set X is given the discrete topology (in which every subset is open), all functions
to any topological space T are continuous. On the other hand, if X is equipped with the indiscrete topology (in which the only open subsets are the empty set and X) and the space T is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose codomain is indiscrete is continuous.
Continuity at a point
The translation in the language of neighborhoods of the (ε, δ)-definition of continuity leads to the following definition of the continuity at a point: a function f : X → Y is continuous at a point x ∈ X if and only if for any neighborhood V of f(x) in Y, there is a neighborhood U of x such that f(U) ⊆ V.
This definition is equivalent to the same statement with neighborhoods restricted to open neighborhoods and can be restated in several ways by using preimages rather than images.
Also, as every set that contains a neighborhood is also a neighborhood, and f⁻¹(V) is the largest subset U of X such that f(U) ⊆ V, this definition may be simplified into: f is continuous at x if and only if f⁻¹(V) is a neighborhood of x for every neighborhood V of f(x).
As an open set is a set that is a neighborhood of all its points, a function f is continuous at every point of X if and only if it is a continuous function.
If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above definition of continuity in the context of metric spaces. In general topological spaces, there is no notion of nearness or distance. If, however, the target space is a Hausdorff space, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous.
Given a map f : X → Y, f is continuous at x if and only if whenever B is a filter on X that converges to x in X, which is expressed by writing B → x, then necessarily f(B) → f(x) in Y.
If N(x) denotes the neighborhood filter at x, then f is continuous at x if and only if f(N(x)) → f(x) in Y. Moreover, this happens if and only if the prefilter f(N(x)) is a filter base for the neighborhood filter of f(x) in Y.
Alternative definitions
Several equivalent definitions for a topological structure exist; thus, several equivalent ways exist to define a continuous function.
Sequences and nets
In several contexts, the topology of a space is conveniently specified in terms of limit points. This is often accomplished by specifying when a point is the limit of a sequence. Still, for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is (Heine-)continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition.
In detail, a function f : X → Y is sequentially continuous if whenever a sequence (xₙ) in X converges to a limit x, the sequence (f(xₙ)) converges to f(x). Thus, sequentially continuous functions "preserve sequential limits." Every continuous function is sequentially continuous. If X is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if X is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve the limits of nets, and this property characterizes continuous functions.
For instance, consider the case of real-valued functions of one real variable:
Proof. Assume that f is continuous at x₀ (in the sense of ε–δ continuity). Let (xₙ) be a sequence converging to x₀ (such a sequence always exists, for example, the constant sequence xₙ = x₀); since f is continuous at x₀, for any ε > 0 we can find δ > 0 such that |f(x) − f(x₀)| < ε for all x with |x − x₀| < δ.
For any such δ we can find a natural number N such that |xₙ − x₀| < δ for all n > N,
since (xₙ) converges to x₀; combining this with the previous inequality we obtain |f(xₙ) − f(x₀)| < ε for all n > N, so that f(xₙ) converges to f(x₀).
Assume on the contrary that f is sequentially continuous and proceed by contradiction: suppose f is not continuous at x₀;
then there is an ε > 0 such that for every δ = 1/n there is a point with |x − x₀| < 1/n but |f(x) − f(x₀)| ≥ ε; call the corresponding point xₙ: in this way we have defined a sequence (xₙ) such that
xₙ → x₀ by construction, but f(xₙ) does not converge to f(x₀), which contradicts the hypothesis of sequential continuity.
Closure operator and interior operator definitions
In terms of the interior operator, a function f : X → Y between topological spaces is continuous if and only if for every subset B ⊆ Y, f⁻¹(int B) ⊆ int(f⁻¹(B)).
In terms of the closure operator, f is continuous if and only if f(cl A) ⊆ cl(f(A)) for every subset A ⊆ X.
That is to say, given any element x ∈ X that belongs to the closure of a subset A ⊆ X, f(x) necessarily belongs to the closure of f(A) in Y. If we declare that a point x is close to a subset A ⊆ X if x ∈ cl A, then this terminology allows for a plain English description of continuity: f is continuous if and only if for every subset A ⊆ X, f maps points that are close to A to points that are close to f(A). Similarly, f is continuous at a fixed given point x ∈ X if and only if whenever x is close to a subset A ⊆ X, then f(x) is close to f(A).
Instead of specifying topological spaces by their open subsets, any topology on X can alternatively be determined by a closure operator or by an interior operator.
Specifically, the map that sends a subset A of a topological space X to its topological closure cl A satisfies the Kuratowski closure axioms. Conversely, for any closure operator A ↦ cl A there exists a unique topology on X (specifically, the topology whose closed sets are exactly the sets A with cl A = A) such that for every subset A ⊆ X, cl A is equal to the topological closure of A in that topology. If the sets X and Y are each associated with closure operators (both denoted by cl) then a map f : X → Y is continuous if and only if f(cl A) ⊆ cl(f(A)) for every subset A ⊆ X.
Similarly, the map that sends a subset A of X to its topological interior int A defines an interior operator. Conversely, any interior operator A ↦ int A induces a unique topology on X (specifically, the topology whose open sets are exactly the sets A with int A = A) such that for every A ⊆ X, int A is equal to the topological interior of A in that topology. If the sets X and Y are each associated with interior operators (both denoted by int) then a map f : X → Y is continuous if and only if f⁻¹(int B) ⊆ int(f⁻¹(B)) for every subset B ⊆ Y.
Filters and prefilters
Continuity can also be characterized in terms of filters. A function f : X → Y is continuous if and only if whenever a filter B on X converges in X to a point x ∈ X, then the prefilter f(B) converges in Y to f(x). This characterization remains true if the word "filter" is replaced by "prefilter."
Properties
If f : X → Y and g : Y → Z are continuous, then so is the composition g ∘ f : X → Z. If f : X → Y is continuous and
X is compact, then f(X) is compact.
X is connected, then f(X) is connected.
X is path-connected, then f(X) is path-connected.
X is Lindelöf, then f(X) is Lindelöf.
X is separable, then f(X) is separable.
The possible topologies on a fixed set X are partially ordered: a topology τ₁ is said to be coarser than another topology τ₂ (notation: τ₁ ⊆ τ₂) if every open subset with respect to τ₁ is also open with respect to τ₂. Then, the identity map
id_X : (X, τ₂) → (X, τ₁)
is continuous if and only if τ₁ ⊆ τ₂ (see also comparison of topologies). More generally, a continuous function
f : (X, τ_X) → (Y, τ_Y)
stays continuous if the topology τ_Y is replaced by a coarser topology and/or τ_X is replaced by a finer topology.
Homeomorphisms
Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. If an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function f⁻¹ need not be continuous. A bijective continuous function with a continuous inverse function is called a homeomorphism.
If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism.
Defining topologies via continuous functions
Given a function
where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which f⁻¹(A) is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus, the final topology is the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f.
Dually, for a function f from a set S to a topological space X, the initial topology on S is defined by designating as an open set every subset A of S such that A = f⁻¹(U) for some open subset U of X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus, the initial topology is the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X.
A topology on a set S is uniquely determined by the class of all continuous functions S → X into all topological spaces X. Dually, a similar idea can be applied to maps X → S.
Related notions
If f : S → Y is a continuous function from some subset S of a topological space X, then a continuous extension of f to X is any continuous function F : X → Y such that F(s) = f(s) for every s ∈ S, which is a condition often written as f = F|_S. In words, it is any continuous function F : X → Y that restricts to f on S. This notion is used, for example, in the Tietze extension theorem and the Hahn–Banach theorem. If f is not continuous, then it could not possibly have a continuous extension. If Y is a Hausdorff space and S is a dense subset of X, then a continuous extension of f to X, if one exists, will be unique. The Blumberg theorem states that if f : ℝ → ℝ is an arbitrary function then there exists a dense subset D of ℝ such that the restriction f|_D : D → ℝ is continuous; in other words, every function ℝ → ℝ can be restricted to some dense subset on which it is continuous.
Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function f : X → Y between particular types of partially ordered sets X and Y is continuous if for each directed subset A of X, we have sup f(A) = f(sup A). Here sup is the supremum with respect to the orderings in X and Y, respectively. This notion of continuity is the same as topological continuity when the partially ordered sets are given the Scott topology.
In category theory, a functor
F : C → D
between two categories is called continuous if it commutes with small limits. That is to say,
lim_{i ∈ I} F(C_i) ≅ F(lim_{i ∈ I} C_i)
for any small (that is, indexed by a set I as opposed to a class) diagram of objects (C_i)_{i ∈ I} in C.
A continuity space is a generalization of metric spaces and posets, which uses the concept of quantales, and that can be used to unify the notions of metric spaces and domains.
In measure theory, a function f defined on a Lebesgue measurable set E is called approximately continuous at a point x₀ if the approximate limit of f at x₀ exists and equals f(x₀). This generalizes the notion of continuity by replacing the ordinary limit with the approximate limit. A fundamental result known as the Stepanov–Denjoy theorem states that a function is measurable if and only if it is approximately continuous almost everywhere.
| Mathematics | Calculus and analysis | null |
6123 | https://en.wikipedia.org/wiki/Curl%20%28mathematics%29 | Curl (mathematics) | In vector calculus, the curl, also known as rotor, is a vector operator that describes the infinitesimal circulation of a vector field in three-dimensional Euclidean space. The curl at a point in the field is represented by a vector whose length and direction denote the magnitude and axis of the maximum circulation. The curl of a field is formally defined as the circulation density at each point of the field.
A vector field whose curl is zero is called irrotational. The curl is a form of differentiation for vector fields. The corresponding form of the fundamental theorem of calculus is Stokes' theorem, which relates the surface integral of the curl of a vector field to the line integral of the vector field around the boundary curve.
The notation curl F is more common in North America. In the rest of the world, particularly in 20th century scientific literature, the alternative notation rot F is traditionally used, which comes from the "rate of rotation" that it represents. To avoid confusion, modern authors tend to use the cross product notation with the del (nabla) operator, as in ∇ × F, which also reveals the relation between curl (rotor), divergence, and gradient operators.
Unlike the gradient and divergence, curl as formulated in vector calculus does not generalize simply to other dimensions; some generalizations are possible, but only in three dimensions is the geometrically defined curl of a vector field again a vector field. This deficiency is a direct consequence of the limitations of vector calculus; on the other hand, when expressed as an antisymmetric tensor field via the wedge operator of geometric calculus, the curl generalizes to all dimensions. The circumstance is similar to that attending the 3-dimensional cross product, and indeed the connection is reflected in the notation for the curl.
The name "curl" was first suggested by James Clerk Maxwell in 1871 but the concept was apparently first used in the construction of an optical field theory by James MacCullagh in 1839.
Definition
The curl of a vector field F, denoted by curl F, or ∇ × F, or rot F, is an operator that maps C^k functions in R³ to C^(k−1) functions in R³, and in particular, it maps continuously differentiable functions R³ → R³ to continuous functions R³ → R³. It can be defined in several ways, to be mentioned below:
One way to define the curl of a vector field F at a point is implicitly through its components along various axes passing through the point: if n̂ is any unit vector, the component of the curl of F along the direction n̂ may be defined to be the limiting value of a closed line integral in a plane perpendicular to n̂ divided by the area enclosed, as the path of integration is contracted indefinitely around the point.
More specifically, the curl is defined at a point p as
(∇ × F)(p) · n̂ = lim_{A → 0} (1/|A|) ∮_C F · dr,
where the line integral is calculated along the boundary C of the area A in question, |A| being the magnitude of the area. This equation defines the component of the curl of F along the direction n̂. The infinitesimal surfaces bounded by C have n̂ as their normal. C is oriented via the right-hand rule.
The above formula means that the component of the curl of a vector field along a certain axis is the infinitesimal area density of the circulation of the field in a plane perpendicular to that axis. This formula does not a priori define a legitimate vector field, for the individual circulation densities with respect to various axes a priori need not relate to each other in the same way as the components of a vector do; that they do indeed relate to each other in this precise manner must be proven separately.
To this definition fits naturally the Kelvin–Stokes theorem, as a global formula corresponding to the definition. It equates the surface integral of the curl of a vector field to the above line integral taken around the boundary of the surface.
Another way one can define the curl vector of a function F at a point p is explicitly as the limiting value of a vector-valued surface integral around a shell enclosing p divided by the volume enclosed, as the shell is contracted indefinitely around p.
More specifically, the curl may be defined by the vector formula
(∇ × F)(p) = lim_{V → 0} (1/|V|) ∮_S n̂ × F dS,
where the surface integral is calculated along the boundary S of the volume V, |V| being the magnitude of the volume, and n̂ pointing outward from the surface perpendicularly at every point in S.
In this formula, the cross product n̂ × F in the integrand measures the tangential component of F at each point on the surface S, and points along the surface at right angles to the tangential projection of F. Integrating this cross product over the whole surface results in a vector whose magnitude measures the overall circulation of F around S, and whose direction is at right angles to this circulation. The above formula says that the curl of a vector field at a point is the infinitesimal volume density of this "circulation vector" around the point.
To this definition fits naturally another global formula (similar to the Kelvin-Stokes theorem) which equates the volume integral of the curl of a vector field to the above surface integral taken over the boundary of the volume.
Whereas the above two definitions of the curl are coordinate free, there is another "easy to memorize" definition of the curl in curvilinear orthogonal coordinates, e.g. in Cartesian coordinates, spherical, cylindrical, or even elliptical or parabolic coordinates:
The equation for each component can be obtained by exchanging each occurrence of a subscript 1, 2, 3 in cyclic permutation: 1 → 2, 2 → 3, and 3 → 1 (where the subscripts represent the relevant indices).
If (x₁, x₂, x₃) are the Cartesian coordinates and (u₁, u₂, u₃) are the orthogonal coordinates, then
h_i = √((∂x₁/∂u_i)² + (∂x₂/∂u_i)² + (∂x₃/∂u_i)²)
is the length of the coordinate vector corresponding to u_i. The remaining two components of curl result from cyclic permutation of indices: 3,1,2 → 1,2,3 → 2,3,1.
Usage
In practice, the two coordinate-free definitions described above are rarely used because in virtually all cases, the curl operator can be applied using some set of curvilinear coordinates, for which simpler representations have been derived.
The notation has its origins in the similarities to the 3-dimensional cross product, and it is useful as a mnemonic in Cartesian coordinates if is taken as a vector differential operator del. Such notation involving operators is common in physics and algebra.
Expanded in 3-dimensional Cartesian coordinates (see Del in cylindrical and spherical coordinates for spherical and cylindrical coordinate representations), ∇ × F is, for F composed of (F_x, F_y, F_z) (where the subscripts indicate the components of the vector, not partial derivatives), the formal determinant
∇ × F = det [ i, j, k ; ∂/∂x, ∂/∂y, ∂/∂z ; F_x, F_y, F_z ],
where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively. This expands as follows:
∇ × F = (∂F_z/∂y − ∂F_y/∂z) i + (∂F_x/∂z − ∂F_z/∂x) j + (∂F_y/∂x − ∂F_x/∂y) k.
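The component formulas above can be evaluated symbolically. The following sketch (added here; the example field is arbitrary) computes the curl of a vector field in Cartesian coordinates with SymPy by taking the partial derivatives directly.

```python
import sympy as sp

# Curl in Cartesian coordinates from the partial derivatives (illustrative sketch;
# the field components below are just an arbitrary example).
x, y, z = sp.symbols('x y z')
Fx, Fy, Fz = y*z, x**2, sp.sin(x*y)

curl = (sp.diff(Fz, y) - sp.diff(Fy, z),
        sp.diff(Fx, z) - sp.diff(Fz, x),
        sp.diff(Fy, x) - sp.diff(Fx, y))

print(curl)   # components: x*cos(x*y), y - y*cos(x*y), 2*x - z
```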
Although expressed in terms of coordinates, the result is invariant under proper rotations of the coordinate axes but the result inverts under reflection.
In a general coordinate system, the curl is given by
where denotes the Levi-Civita tensor, the covariant derivative, is the determinant of the metric tensor and the Einstein summation convention implies that repeated indices are summed over. Due to the symmetry of the Christoffel symbols participating in the covariant derivative, this expression reduces to the partial derivative:
where are the local basis vectors. Equivalently, using the exterior derivative, the curl can be expressed as:
Here and are the musical isomorphisms, and is the Hodge star operator. This formula shows how to calculate the curl of in any coordinate system, and how to extend the curl to any oriented three-dimensional Riemannian manifold. Since this depends on a choice of orientation, curl is a chiral operation. In other words, if the orientation is reversed, then the direction of the curl is also reversed.
Examples
Example 1
Suppose the vector field describes the velocity field of a fluid flow (such as a large tank of liquid or gas) and a small ball is located within the fluid or gas (the center of the ball being fixed at a certain point). If the ball has a rough surface, the fluid flowing past it will make it rotate. The rotation axis (oriented according to the right hand rule) points in the direction of the curl of the field at the center of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point.
The curl of the vector field at any point is given by the rotation of an infinitesimal area in the xy-plane (for z-axis component of the curl), zx-plane (for y-axis component of the curl) and yz-plane (for x-axis component of the curl vector). This can be seen in the examples below.
Example 2
The vector field
can be decomposed as
Upon visual inspection, the field can be described as "rotating". If the vectors of the field were to represent a linear force acting on objects present at that point, and an object were to be placed inside the field, the object would start to rotate clockwise around itself. This is true regardless of where the object is placed.
Calculating the curl:
The resulting vector field describing the curl would at all points be pointing in the negative direction. The results of this equation align with what could have been predicted using the right-hand rule using a right-handed coordinate system. Being a uniform vector field, the object described before would have the same rotational intensity regardless of where it was placed.
Example 3
For the vector field
the curl is not as obvious from the graph. However, taking the object in the previous example, and placing it anywhere on the line , the force exerted on the right side would be slightly greater than the force exerted on the left, causing it to rotate clockwise. Using the right-hand rule, it can be predicted that the resulting curl would be straight in the negative direction. Inversely, if placed on , the object would rotate counterclockwise and the right-hand rule would result in a positive direction.
Calculating the curl:
The curl points in the negative direction when is positive and vice versa. In this field, the intensity of rotation would be greater as the object moves away from the plane .
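The explicit example fields are not reproduced in this text. Two commonly used fields with the behaviour described in Examples 2 and 3 — assumed here purely for illustration — are F = (y, −x, 0), a uniform clockwise rotation whose curl is the constant vector (0, 0, −2), and G = (0, −x², 0), whose curl (0, 0, −2x) changes sign across the plane x = 0. The SymPy sketch below computes both.

```python
import sympy as sp

# Assumed example fields (not taken from the article) with the behaviour described above.
x, y, z = sp.symbols('x y z')

def curl(F):
    Fx, Fy, Fz = F
    return (sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y))

print(curl((y, -x, sp.Integer(0))))              # (0, 0, -2): constant, pointing in -z
print(curl((sp.Integer(0), -x**2, sp.Integer(0))))  # (0, 0, -2*x): sign depends on x
```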
Further examples
In a vector field describing the linear velocities of each part of a rotating disk in uniform circular motion, the curl has the same value at all points, and this value turns out to be exactly two times the vectorial angular velocity of the disk (oriented as usual by the right-hand rule). More generally, for any flowing mass, the linear velocity vector field at each point of the mass flow has a curl (the vorticity of the flow at that point) equal to exactly two times the local vectorial angular velocity of the mass about the point.
For any solid object subject to an external physical force (such as gravity or the electromagnetic force), one may consider the vector field representing the infinitesimal force-per-unit-volume contributions acting at each of the points of the object. This force field may create a net torque on the object about its center of mass, and this torque turns out to be directly proportional and vectorially parallel to the (vector-valued) integral of the curl of the force field over the whole volume.
Of the four Maxwell's equations, two—Faraday's law and Ampère's law—can be compactly expressed using curl. Faraday's law states that the curl of an electric field is equal to the opposite of the time rate of change of the magnetic field, while Ampère's law relates the curl of the magnetic field to the current and the time rate of change of the electric field.
Identities
In general curvilinear coordinates (not only in Cartesian coordinates), the curl of a cross product of vector fields and can be shown to be
Interchanging the vector field and operator, we arrive at the cross product of a vector field with curl of a vector field:
where is the Feynman subscript notation, which considers only the variation due to the vector field (i.e., in this case, is treated as being constant in space).
Another example is the curl of a curl of a vector field. It can be shown that in general coordinates
∇ × (∇ × F) = ∇(∇ · F) − ∇²F,
and this identity defines the vector Laplacian of F, symbolized as ∇²F.
The curl of the gradient of any scalar field φ is always the zero vector field,
∇ × (∇φ) = 0,
which follows from the antisymmetry in the definition of the curl, and the symmetry of second derivatives.
The divergence of the curl of any vector field is equal to zero:
∇ · (∇ × F) = 0.
If φ is a scalar valued function and F is a vector field, then
∇ × (φF) = ∇φ × F + φ ∇ × F.
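These identities can be checked symbolically for generic smooth functions. The following SymPy sketch (an added illustration, not a proof) verifies curl grad = 0, div curl = 0, and the product rule stated above.

```python
import sympy as sp

# Symbolic check of the identities for generic smooth functions (illustrative sketch).
x, y, z = sp.symbols('x y z')
phi = sp.Function('phi')(x, y, z)
F = [sp.Function(n)(x, y, z) for n in ('F1', 'F2', 'F3')]

def grad(f):  return [sp.diff(f, v) for v in (x, y, z)]
def div(V):   return sum(sp.diff(V[i], v) for i, v in enumerate((x, y, z)))
def curl(V):
    return [sp.diff(V[2], y) - sp.diff(V[1], z),
            sp.diff(V[0], z) - sp.diff(V[2], x),
            sp.diff(V[1], x) - sp.diff(V[0], y)]
def cross(A, B):
    return [A[1]*B[2] - A[2]*B[1], A[2]*B[0] - A[0]*B[2], A[0]*B[1] - A[1]*B[0]]

print([sp.simplify(c) for c in curl(grad(phi))])         # [0, 0, 0]
print(sp.simplify(div(curl(F))))                         # 0
lhs = curl([phi*Fi for Fi in F])
rhs = [phi*c + g for c, g in zip(curl(F), cross(grad(phi), F))]
print([sp.simplify(l - r) for l, r in zip(lhs, rhs)])    # [0, 0, 0]
```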
Generalizations
The vector calculus operations of grad, curl, and div are most easily generalized in the context of differential forms, which involves a number of steps. In short, they correspond to the derivatives of 0-forms, 1-forms, and 2-forms, respectively. The geometric interpretation of curl as rotation corresponds to identifying bivectors (2-vectors) in 3 dimensions with the special orthogonal Lie algebra of infinitesimal rotations (in coordinates, skew-symmetric 3 × 3 matrices), while representing rotations by vectors corresponds to identifying 1-vectors (equivalently, 2-vectors) and these all being 3-dimensional spaces.
Differential forms
In 3 dimensions, a differential 0-form is a real-valued function f(x₁, x₂, x₃); a differential 1-form is the following expression, where the coefficients are functions:
a₁ dx₁ + a₂ dx₂ + a₃ dx₃;
a differential 2-form is the formal sum, again with function coefficients:
a₁₂ dx₁ ∧ dx₂ + a₁₃ dx₁ ∧ dx₃ + a₂₃ dx₂ ∧ dx₃;
and a differential 3-form is defined by a single term with one function as coefficient:
a₁₂₃ dx₁ ∧ dx₂ ∧ dx₃.
(Here the a-coefficients are real functions of three variables; the "wedge products", e.g. dx₁ ∧ dx₂, can be interpreted as some kind of oriented area elements, dx₁ ∧ dx₂ = −dx₂ ∧ dx₁, etc.)
The exterior derivative of a -form in is defined as the -form from above—and in if, e.g.,
then the exterior derivative leads to
The exterior derivative of a 1-form is therefore a 2-form, and that of a 2-form is a 3-form. On the other hand, because of the interchangeability of mixed derivatives,
and antisymmetry,
the twofold application of the exterior derivative yields 0 (the zero form).
Thus, denoting the space of -forms by and the exterior derivative by one gets a sequence:
Here Ωᵏ(Rⁿ) is the space of sections of the exterior algebra vector bundle Λᵏ(Rⁿ) over Rⁿ, whose dimension is the binomial coefficient (n choose k); note that Ωᵏ(Rⁿ) = 0 for k > n or k < 0. Writing only dimensions, one obtains a row of Pascal's triangle:
0 → 1 → 3 → 3 → 1 → 0;
the 1-dimensional fibers correspond to scalar fields, and the 3-dimensional fibers to vector fields, as described below. Modulo suitable identifications, the three nontrivial occurrences of the exterior derivative correspond to grad, curl, and div.
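For concreteness, the middle occurrence of the exterior derivative reproduces the curl: applied to a 1-form in R³, d yields a 2-form whose coefficients are exactly the components of ∇ × F. This is a standard computation, written out here as an added illustration.

```latex
% Standard computation (added illustration): d of a 1-form in R^3 produces the
% curl components as coefficients of the basis 2-forms.
\[
\omega = F_1\,dx + F_2\,dy + F_3\,dz,
\qquad
d\omega =
\left(\frac{\partial F_3}{\partial y}-\frac{\partial F_2}{\partial z}\right) dy\wedge dz
+\left(\frac{\partial F_1}{\partial z}-\frac{\partial F_3}{\partial x}\right) dz\wedge dx
+\left(\frac{\partial F_2}{\partial x}-\frac{\partial F_1}{\partial y}\right) dx\wedge dy .
\]
```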
Differential forms and the differential can be defined on any Euclidean space, or indeed any manifold, without any notion of a Riemannian metric. On a Riemannian manifold, or more generally pseudo-Riemannian manifold, -forms can be identified with -vector fields (-forms are -covector fields, and a pseudo-Riemannian metric gives an isomorphism between vectors and covectors), and on an oriented vector space with a nondegenerate form (an isomorphism between vectors and covectors), there is an isomorphism between -vectors and -vectors; in particular on (the tangent space of) an oriented pseudo-Riemannian manifold. Thus on an oriented pseudo-Riemannian manifold, one can interchange -forms, -vector fields, -forms, and -vector fields; this is known as Hodge duality. Concretely, on this is given by:
1-forms and 1-vector fields: the 1-form a dx + b dy + c dz corresponds to the vector field (a, b, c).
1-forms and 2-forms: one replaces dx by the dual quantity dy ∧ dz (i.e., omit dx), and likewise, taking care of orientation: dy corresponds to dz ∧ dx, and dz corresponds to dx ∧ dy. Thus the form a dx + b dy + c dz corresponds to the "dual form" a dy ∧ dz + b dz ∧ dx + c dx ∧ dy.
Thus, identifying 0-forms and 3-forms with scalar fields, and 1-forms and 2-forms with vector fields:
grad takes a scalar field (0-form) to a vector field (1-form);
curl takes a vector field (1-form) to a pseudovector field (2-form);
div takes a pseudovector field (2-form) to a pseudoscalar field (3-form)
On the other hand, the fact that d² = 0 corresponds to the identities
∇ × (∇f) = 0
for any scalar field f, and
∇ · (∇ × v) = 0
for any vector field v.
Grad and div generalize to all oriented pseudo-Riemannian manifolds, with the same geometric interpretation, because the spaces of 0-forms and n-forms at each point are always 1-dimensional and can be identified with scalar fields, while the spaces of 1-forms and (n − 1)-forms are always fiberwise n-dimensional and can be identified with vector fields.
Curl does not generalize in this way to 4 or more dimensions (or down to 2 or fewer dimensions); in 4 dimensions the dimensions are
0 → 1 → 4 → 6 → 4 → 1 → 0,
so the curl of a 1-vector field (fiberwise 4-dimensional) is a 2-vector field, which at each point belongs to a 6-dimensional vector space, and so one has a sum over pairs of coordinates
ω = ∑_{i<k} a_{i,k} dx_i ∧ dx_k,
which yields a sum of six independent terms, and cannot be identified with a 1-vector field. Nor can one meaningfully go from a 1-vector field to a 2-vector field to a 3-vector field (4 → 6 → 4), as taking the differential twice yields zero (d² = 0). Thus there is no curl function from vector fields to vector fields in other dimensions arising in this way.
However, one can define a curl of a vector field as a 2-vector field in general, as described below.
Curl geometrically
2-vectors correspond to the exterior power Λ²V; in the presence of an inner product, in coordinates these are the skew-symmetric matrices, which are geometrically considered as the special orthogonal Lie algebra of infinitesimal rotations. This has n(n − 1)/2 dimensions, and allows one to interpret the differential of a 1-vector field as its infinitesimal rotations. Only in 3 dimensions (or trivially in 0 dimensions) do we have n = n(n − 1)/2, which is the most elegant and common case. In 2 dimensions the curl of a vector field is not a vector field but a function, as 2-dimensional rotations are given by an angle (a scalar – an orientation is required to choose whether one counts clockwise or counterclockwise rotations as positive); this is not the div, but is rather perpendicular to it. In 3 dimensions the curl of a vector field is a vector field as is familiar (in 1 and 0 dimensions the curl of a vector field is 0, because there are no non-trivial 2-vectors), while in 4 dimensions the curl of a vector field is, geometrically, at each point an element of the 6-dimensional Lie algebra so(4).
The curl of a 3-dimensional vector field which only depends on 2 coordinates (say and ) is simply a vertical vector field (in the direction) whose magnitude is the curl of the 2-dimensional vector field, as in the examples on this page.
Considering curl as a 2-vector field (an antisymmetric 2-tensor) has been used to generalize vector calculus and associated physics to higher dimensions.
Inverse
In the case where the divergence of a vector field V is zero, a vector field W exists such that V = ∇ × W. This is why the magnetic field, characterized by zero divergence, can be expressed as the curl of a magnetic vector potential.
If W is a vector field with ∇ × W = V, then adding any gradient vector field ∇f to W will result in another vector field W + ∇f such that ∇ × (W + ∇f) = V as well. This can be summarized by saying that the inverse curl of a three-dimensional vector field can be obtained up to an unknown irrotational field with the Biot–Savart law.
| Mathematics | Multivariable and vector calculus | null |
6136 | https://en.wikipedia.org/wiki/Carbon%20monoxide | Carbon monoxide | Carbon monoxide (chemical formula CO) is a poisonous, flammable gas that is colorless, odorless, tasteless, and slightly less dense than air. Carbon monoxide consists of one carbon atom and one oxygen atom connected by a triple bond. It is the simplest carbon oxide. In coordination complexes, the carbon monoxide ligand is called carbonyl. It is a key ingredient in many processes in industrial chemistry.
The most common source of carbon monoxide is the partial combustion of carbon-containing compounds. Numerous environmental and biological sources generate carbon monoxide. In industry, carbon monoxide is important in the production of many compounds, including drugs, fragrances, and fuels. Upon emission into the atmosphere, carbon monoxide affects several processes that contribute to climate change.
Indoors CO is one of the most acutely toxic contaminants affecting indoor air quality. CO may be emitted from tobacco smoke and generated from malfunctioning fuel burning stoves (wood, kerosene, natural gas, propane) and fuel burning heating systems (wood, oil, natural gas) and from blocked flues connected to these appliances. Carbon monoxide poisoning is the most common type of fatal air poisoning in many countries.
Carbon monoxide has important biological roles across phylogenetic kingdoms. It is produced by many organisms, including humans. In mammalian physiology, carbon monoxide is a classical example of hormesis where low concentrations serve as an endogenous neurotransmitter (gasotransmitter) and high concentrations are toxic resulting in carbon monoxide poisoning. It is isoelectronic with both cyanide anion CN− and molecular nitrogen N2.
Physical and chemical properties
Carbon monoxide is the simplest oxocarbon and is isoelectronic with other triply bonded diatomic species possessing 10 valence electrons, including the cyanide anion, the nitrosonium cation, boron monofluoride and molecular nitrogen. It has a molar mass of 28.0, which, according to the ideal gas law, makes it slightly less dense than air, whose average molar mass is 28.8.
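A rough back-of-the-envelope check of the density comparison (added here; the temperature, pressure, and the 28.8 g/mol figure for air are taken as given above) uses the ideal gas law, ρ = PM/(RT):

```python
# Ideal-gas density comparison at 25 degC and 1 atm (illustrative numbers only).
R = 0.082057  # L*atm/(mol*K)
T = 298.15    # K
P = 1.0       # atm

for name, molar_mass in (("CO", 28.0), ("air (average)", 28.8)):
    density = P * molar_mass / (R * T)   # g/L, from PV = nRT  =>  rho = P*M/(R*T)
    print(f"{name:15s} ~ {density:.3f} g/L")
# CO comes out slightly less dense than air, as stated above.
```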
The carbon and oxygen are connected by a triple bond that consists of a net two pi bonds and one sigma bond. The bond length between the carbon atom and the oxygen atom is 112.8 pm. This bond length is consistent with a triple bond, as in molecular nitrogen (N2), which has a similar bond length (109.76 pm) and nearly the same molecular mass. Carbon–oxygen double bonds are significantly longer, 120.8 pm in formaldehyde, for example. The boiling point (82 K) and melting point (68 K) are very similar to those of N2 (77 K and 63 K, respectively). The bond-dissociation energy of 1072 kJ/mol is stronger than that of N2 (942 kJ/mol) and represents the strongest chemical bond known.
The ground electronic state of carbon monoxide is a singlet state since there are no unpaired electrons.
Bonding and dipole moment
The strength of the C-O bond in carbon monoxide is indicated by the high frequency of its vibration, 2143 cm−1. For comparison, organic carbonyls such as ketones and esters absorb at around 1700 cm−1.
Carbon and oxygen together have a total of 10 electrons in the valence shell. Following the octet rule for both carbon and oxygen, the two atoms form a triple bond, with six shared electrons in three bonding molecular orbitals, rather than the usual double bond found in organic carbonyl compounds. Since four of the shared electrons come from the oxygen atom and only two from carbon, one bonding orbital is occupied by two electrons from oxygen, forming a dative or dipolar bond. This causes a C←O polarization of the molecule, with a small negative charge on carbon and a small positive charge on oxygen. The other two bonding orbitals are each occupied by one electron from carbon and one from oxygen, forming (polar) covalent bonds with a reverse C→O polarization since oxygen is more electronegative than carbon. In the free carbon monoxide molecule, a net negative charge δ– remains at the carbon end and the molecule has a small dipole moment of 0.122 D.
The molecule is therefore asymmetric: oxygen is more electron dense than carbon, yet it is also slightly positively charged compared to carbon, which is slightly negative.
Carbon monoxide has a computed fractional bond order of 2.6, indicating that the "third" bond is important but constitutes somewhat less than a full bond. Thus, in valence bond terms, –C≡O+ is the most important structure, while :C=O is non-octet, but has a neutral formal charge on each atom and represents the second most important resonance contributor. Because of the lone pair and divalence of carbon in this resonance structure, carbon monoxide is often considered to be an extraordinarily stabilized carbene. Isocyanides are compounds in which the O is replaced by an NR (R = alkyl or aryl) group and have a similar bonding scheme.
If carbon monoxide acts as a ligand, the polarity of the dipole may reverse with a net negative charge on the oxygen end, depending on the structure of the coordination complex.
| Physical sciences | Inorganic compounds | null |
6138 | https://en.wikipedia.org/wiki/Conjecture | Conjecture | In mathematics, a conjecture is a conclusion or a proposition that is proffered on a tentative basis without proof. Some conjectures, such as the Riemann hypothesis or Fermat's conjecture (now a theorem, proven in 1995 by Andrew Wiles), have shaped much of mathematical history as new areas of mathematics are developed in order to prove them.
Resolution of conjectures
Proof
Formal mathematics is based on provable truth. In mathematics, any number of cases supporting a universally quantified conjecture, no matter how large, is insufficient for establishing the conjecture's veracity, since a single counterexample could immediately bring down the conjecture. Mathematical journals sometimes publish the minor results of research teams having extended the search for a counterexample farther than previously done. For instance, the Collatz conjecture, which concerns whether or not certain sequences of integers terminate, has been tested for all integers up to 1.2 × 10¹² (1.2 trillion). However, the failure to find a counterexample after extensive search does not constitute a proof that the conjecture is true—because the conjecture might be false but with a very large minimal counterexample.
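For a sense of what such a finite verification looks like in practice — this is only an illustrative sketch, not the computation cited above — the following Python code checks that the Collatz iteration reaches 1 for a small range of starting values.

```python
# Minimal sketch of a finite Collatz check (added illustration).
def collatz_reaches_one(n, max_steps=10_000):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps > max_steps:
            return False   # give up: would call for closer inspection, not a disproof
    return True

print(all(collatz_reaches_one(n) for n in range(1, 100_000)))   # True
```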
Nevertheless, mathematicians often regard a conjecture as strongly supported by evidence even though not yet proved. That evidence may be of various kinds, such as verification of consequences of it or strong interconnections with known results.
A conjecture is considered proven only when it has been shown that it is logically impossible for it to be false. There are various methods of doing so; see methods of mathematical proof for more details.
One method of proof, applicable when there are only a finite number of cases that could lead to counterexamples, is known as "brute force": in this approach, all possible cases are considered and shown not to give counterexamples. On some occasions, the number of cases is quite large, in which case a brute-force proof may require as a practical matter the use of a computer algorithm to check all the cases. For example, the validity of the 1976 and 1997 brute-force proofs of the four color theorem by computer was initially doubted, but was eventually confirmed in 2005 by theorem-proving software.
When a conjecture has been proven, it is no longer a conjecture but a theorem. Many important theorems were once conjectures, such as the Geometrization theorem (which resolved the Poincaré conjecture), Fermat's Last Theorem, and others.
Disproof
Conjectures disproven through counterexample are sometimes referred to as false conjectures (cf. the Pólya conjecture and Euler's sum of powers conjecture). In the case of the latter, the first counterexample found for the n=4 case involved numbers in the millions, although it has been subsequently found that the minimal counterexample is actually smaller.
Independent conjectures
Not every conjecture ends up being proven true or false. The continuum hypothesis, which tries to ascertain the relative cardinality of certain infinite sets, was eventually shown to be independent from the generally accepted set of Zermelo–Fraenkel axioms of set theory. It is therefore possible to adopt this statement, or its negation, as a new axiom in a consistent manner (much as Euclid's parallel postulate can be taken either as true or false in an axiomatic system for geometry).
In this case, if a proof uses this statement, researchers will often look for a new proof that does not require the hypothesis (in the same way that it is desirable that statements in Euclidean geometry be proved using only the axioms of neutral geometry, i.e. without the parallel postulate). The one major exception to this in practice is the axiom of choice, as the majority of researchers usually do not worry whether a result requires it—unless they are studying this axiom in particular.
Conditional proofs
Sometimes, a conjecture is called a hypothesis when it is used frequently and repeatedly as an assumption in proofs of other results. For example, the Riemann hypothesis is a conjecture from number theory that — amongst other things — makes predictions about the distribution of prime numbers. Few number theorists doubt that the Riemann hypothesis is true. In fact, in anticipation of its eventual proof, some have even proceeded to develop further proofs which are contingent on the truth of this conjecture. These are called conditional proofs: the conjectures assumed appear in the hypotheses of the theorem, for the time being.
These "proofs", however, would fall apart if it turned out that the hypothesis was false, so there is considerable interest in verifying the truth or falsity of conjectures of this type.
Important examples
Fermat's Last Theorem
In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers a, b, and c can satisfy the equation aⁿ + bⁿ = cⁿ for any integer value of n greater than two.
This theorem was first conjectured by Pierre de Fermat in 1637 in the margin of a copy of Arithmetica, where he claimed that he had a proof that was too large to fit in the margin. The first successful proof was released in 1994 by Andrew Wiles, and formally published in 1995, after 358 years of effort by mathematicians. The unsolved problem stimulated the development of algebraic number theory in the 19th century, and the proof of the modularity theorem in the 20th century. It is among the most notable theorems in the history of mathematics, and prior to its proof it was in the Guinness Book of World Records for "most difficult mathematical problems".
Four color theorem
In mathematics, the four color theorem, or the four color map theorem, states that given any separation of a plane into contiguous regions, producing a figure called a map, no more than four colors are required to color the regions of the map—so that no two adjacent regions have the same color. Two regions are called adjacent if they share a common boundary that is not a corner, where corners are the points shared by three or more regions. For example, in the map of the United States of America, Utah and Arizona are adjacent, but Utah and New Mexico, which only share a point that also belongs to Arizona and Colorado, are not.
Möbius mentioned the problem in his lectures as early as 1840. The conjecture was first proposed on October 23, 1852 when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. The five color theorem, which has a short elementary proof, states that five colors suffice to color a map and was proven in the late 19th century; however, proving that four colors suffice turned out to be significantly harder. A number of false proofs and false counterexamples have appeared since the first statement of the four color theorem in 1852.
The four color theorem was ultimately proven in 1976 by Kenneth Appel and Wolfgang Haken. It was the first major theorem to be proved using a computer. Appel and Haken's approach started by showing that there is a particular set of 1,936 maps, each of which cannot be part of a smallest-sized counterexample to the four color theorem (i.e., if they did appear, one could make a smaller counter-example). Appel and Haken used a special-purpose computer program to confirm that each of these maps had this property. Additionally, any map that could potentially be a counterexample must have a portion that looks like one of these 1,936 maps. Showing this with hundreds of pages of hand analysis, Appel and Haken concluded that no smallest counterexample exists because any must contain, yet do not contain, one of these 1,936 maps. This contradiction means there are no counterexamples at all and that the theorem is therefore true. Initially, their proof was not accepted by all mathematicians because the computer-assisted proof was infeasible for a human to check by hand. However, the proof has since then gained wider acceptance, although doubts still remain.
Hauptvermutung
The Hauptvermutung (German for main conjecture) of geometric topology is the conjecture that any two triangulations of a triangulable space have a common refinement, a single triangulation that is a subdivision of both of them. It was originally formulated in 1908, by Steinitz and Tietze.
This conjecture is now known to be false. The non-manifold version was disproved by John Milnor in 1961 using Reidemeister torsion.
The manifold version is true in dimensions m ≤ 3. The cases m = 2 and 3 were proved by Tibor Radó and Edwin E. Moise in the 1920s and 1950s, respectively.
Weil conjectures
In mathematics, the Weil conjectures were some highly influential proposals by André Weil on the generating functions (known as local zeta-functions) derived from counting the number of points on algebraic varieties over finite fields.
A variety V over a finite field with q elements has a finite number of rational points, as well as points over every finite field with qk elements containing that field. The generating function has coefficients derived from the numbers Nk of points over the (essentially unique) field with qk elements.
Weil conjectured that such zeta-functions should be rational functions, should satisfy a form of functional equation, and should have their zeroes in restricted places. The last two parts were quite consciously modeled on the Riemann zeta function and Riemann hypothesis. The rationality was proved by Bernard Dwork, the functional equation by Alexander Grothendieck, and the analogue of the Riemann hypothesis was proved by Pierre Deligne.
Poincaré conjecture
In mathematics, the Poincaré conjecture is a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space. The conjecture states that every simply connected, closed 3-manifold is homeomorphic to the 3-sphere. An equivalent form of the conjecture involves a coarser form of equivalence than homeomorphism called homotopy equivalence: if a 3-manifold is homotopy equivalent to the 3-sphere, then it is necessarily homeomorphic to it.
Originally conjectured by Henri Poincaré in 1904, the theorem concerns a space that locally looks like ordinary three-dimensional space but is connected, finite in size, and lacks any boundary (a closed 3-manifold). The Poincaré conjecture claims that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. An analogous result has been known in higher dimensions for some time.
After nearly a century of effort by mathematicians, Grigori Perelman presented a proof of the conjecture in three papers made available in 2002 and 2003 on arXiv. The proof followed on from the program of Richard S. Hamilton to use the Ricci flow to attempt to solve the problem. Hamilton later introduced a modification of the standard Ricci flow, called Ricci flow with surgery to systematically excise singular regions as they develop, in a controlled way, but was unable to prove this method "converged" in three dimensions. Perelman completed this portion of the proof. Several teams of mathematicians have verified that Perelman's proof is correct.
The Poincaré conjecture, before being proven, was one of the most important open questions in topology.
Riemann hypothesis
In mathematics, the Riemann hypothesis, proposed by Bernhard Riemann in 1859, is a conjecture that the non-trivial zeros of the Riemann zeta function all have real part 1/2. The name is also used for some closely related analogues, such as the Riemann hypothesis for curves over finite fields.
The Riemann hypothesis implies results about the distribution of prime numbers. Along with suitable generalizations, some mathematicians consider it the most important unresolved problem in pure mathematics. The Riemann hypothesis, along with the Goldbach conjecture, is part of Hilbert's eighth problem in David Hilbert's list of 23 unsolved problems; it is also one of the Clay Mathematics Institute Millennium Prize Problems.
P versus NP problem
The P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer; it is widely conjectured that the answer is no. It was essentially first mentioned in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether a certain NP-complete problem could be solved in quadratic or linear time. The precise statement of the P=NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" and is considered by many to be the most important open problem in the field. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct solution.
Other conjectures
Goldbach's conjecture
The twin prime conjecture
The Collatz conjecture
The Manin conjecture
The Maldacena conjecture
The Euler conjecture, proposed by Euler in the 18th century but for which counterexamples for a number of exponents (starting with n=4) were found beginning in the mid 20th century
The Hardy–Littlewood conjectures are a pair of conjectures concerning the distribution of prime numbers, the first of which expands upon the aforementioned twin prime conjecture. Neither one has been proven or disproven, but it has been proven that both cannot simultaneously be true (i.e., at least one must be false). It has not been proven which one is false, but it is widely believed that the first conjecture is true and the second one is false.
The Langlands program is a far-reaching web of these ideas of 'unifying conjectures' that link different subfields of mathematics (e.g. between number theory and representation theory of Lie groups). Some of these conjectures have since been proved.
In other sciences
Karl Popper pioneered the use of the term "conjecture" in scientific philosophy. Conjecture is related to hypothesis, which in science refers to a testable conjecture.
| Mathematics | Basics | null |
6172 | https://en.wikipedia.org/wiki/Cantor%20set | Cantor set | In mathematics, the Cantor set is a set of points lying on a single line segment that has a number of unintuitive properties. It was discovered in 1874 by Henry John Stephen Smith and mentioned by German mathematician Georg Cantor in 1883.
Through consideration of this set, Cantor and others helped lay the foundations of modern point-set topology. The most common construction is the Cantor ternary set, built by removing the middle third of a line segment and then repeating the process with the remaining shorter segments. Cantor mentioned this ternary construction only in passing, as an example of a perfect set that is nowhere dense (Cantor 1883, Anmerkungen zu §10, p. 590).
More generally, in topology, a Cantor space is a topological space homeomorphic to the Cantor ternary set (equipped with its subspace topology). The Cantor set is naturally homeomorphic to the countable product of the discrete two-point space {0, 1}. By a theorem of L. E. J. Brouwer, this is equivalent to being perfect, nonempty, compact, metrizable and zero dimensional.
Construction and formula of the ternary set
The Cantor ternary set is created by iteratively deleting the open middle third from a set of line segments. One starts by deleting the open middle third (1/3, 2/3) from the interval [0, 1], leaving two line segments: [0, 1/3] ∪ [2/3, 1]. Next, the open middle third of each of these remaining segments is deleted, leaving four line segments: [0, 1/9] ∪ [2/9, 1/3] ∪ [2/3, 7/9] ∪ [8/9, 1].
The Cantor ternary set contains all points in the interval [0, 1] that are not deleted at any step in this infinite process. The same facts can be described recursively by setting
C₀ := [0, 1]
and
Cₙ := Cₙ₋₁/3 ∪ (2/3 + Cₙ₋₁/3)
for n ≥ 1, so that
C := ⋂_{n=0}^∞ Cₙ = ⋂_{n=m}^∞ Cₙ
for any m ≥ 0.
The first six steps of this process are illustrated below.
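The following Python sketch (added here as an illustration, separate from the original figure) generates the first few sets Cₙ explicitly as lists of closed intervals.

```python
from fractions import Fraction

# Sketch of the iterative construction: C_0 = [0, 1], and each step keeps the
# first and last third of every remaining interval.
def cantor_step(intervals):
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))
        out.append((b - third, b))
    return out

intervals = [(Fraction(0), Fraction(1))]
for n in range(4):
    print(f"C_{n}: " + "  ".join(f"[{a}, {b}]" for a, b in intervals))
    intervals = cantor_step(intervals)
```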
Using the idea of self-similar transformations, T_L(x) = x/3 and T_R(x) = (2 + x)/3 with C = T_L(C) ∪ T_R(C), explicit closed formulas for the Cantor set are
C = [0, 1] ∖ ⋃_{n=0}^∞ ⋃_{k=0}^{3ⁿ−1} ((3k + 1)/3^(n+1), (3k + 2)/3^(n+1)),
where every middle third is removed as the open interval ((3k + 1)/3^(n+1), (3k + 2)/3^(n+1)) from the closed interval [3k/3^(n+1), (3k + 3)/3^(n+1)] = [k/3ⁿ, (k + 1)/3ⁿ] surrounding it, or
C = ⋂_{n=1}^∞ ⋃_{k=0}^{3^(n−1)−1} ([3k/3ⁿ, (3k + 1)/3ⁿ] ∪ [(3k + 2)/3ⁿ, (3k + 3)/3ⁿ]),
where the middle third ((3k + 1)/3ⁿ, (3k + 2)/3ⁿ) of the foregoing closed interval [3k/3ⁿ, (3k + 3)/3ⁿ] is removed by intersecting with [3k/3ⁿ, (3k + 1)/3ⁿ] ∪ [(3k + 2)/3ⁿ, (3k + 3)/3ⁿ].
This process of removing middle thirds is a simple example of a finite subdivision rule. The complement of the Cantor ternary set is an example of a fractal string.
In arithmetical terms, the Cantor set consists of all real numbers of the unit interval that do not require the digit 1 in order to be expressed as a ternary (base 3) fraction. As the above diagram illustrates, each point in the Cantor set is uniquely located by a path through an infinitely deep binary tree, where the path turns left or right at each level according to which side of a deleted segment the point lies on. Representing each left turn with 0 and each right turn with 2 yields the ternary fraction for a point.
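The digit characterization can be turned into a simple membership test. The sketch below (an added illustration) reads off ternary digits with exact rational arithmetic; note that it checks only finitely many digits and that it uses the greedy expansion, so endpoints such as 1/3 = 0.1₃ = 0.0222...₃ would need the alternative all-0-and-2 expansion.

```python
from fractions import Fraction

# Approximate membership test: a point of [0, 1] lies in the Cantor set exactly when
# it admits a ternary expansion using only the digits 0 and 2.  Checking `depth`
# digits of the greedy expansion decides membership only in C_depth.
def in_cantor_approx(x, depth=30):
    x = Fraction(x)
    for _ in range(depth):
        if x == 0 or x == 1:
            return True
        digit = int(x * 3)          # next ternary digit of the greedy expansion
        if digit == 1:
            return False            # falls into a removed middle third
        x = x * 3 - digit
    return True

print(in_cantor_approx(Fraction(1, 4)))   # True:  1/4 = 0.020202..._3
print(in_cantor_approx(Fraction(1, 2)))   # False: 1/2 = 0.111..._3
print(in_cantor_approx(Fraction(2, 3)))   # True:  2/3 = 0.2000..._3
```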
Mandelbrot's construction by "curdling"
In The Fractal Geometry of Nature, mathematician Benoit Mandelbrot provides a whimsical thought experiment to assist non-mathematical readers in imagining the construction of the Cantor set. His narrative begins with imagining a bar, perhaps of lightweight metal, in which the bar's matter "curdles" by iteratively shifting towards its extremities. As the bar's segments become smaller, they become thin, dense slugs that eventually grow too small and faint to see. CURDLING: The construction of the Cantor bar results from the process I call curdling. It begins with a round bar. It is best to think of it as having a very low density. Then matter "curdles" out of this bar's middle third into the end thirds, so that the positions of the latter remain unchanged. Next matter curdles out of the middle third of each end third into its end thirds, and so on ad infinitum until one is left with an infinitely large number of infinitely thin slugs of infinitely high density. These slugs are spaced along the line in the very specific fashion induced by the generating process. In this illustration, curdling (which eventually requires hammering!) stops when both the printer's press and our eye cease to follow; the last line is indistinguishable from the last but one: each of its ultimate parts is seen as a gray slug rather than two parallel black slugs.
Composition
Since the Cantor set is defined as the set of points not excluded, the proportion (i.e., measure) of the unit interval remaining can be found by totalling the length removed. This total is the geometric progression
∑_{n=0}^∞ 2ⁿ/3^(n+1) = 1/3 + 2/9 + 4/27 + 8/81 + ⋯ = (1/3) · (1/(1 − 2/3)) = 1.
So that the proportion left is 1 − 1 = 0.
This calculation suggests that the Cantor set cannot contain any interval of non-zero length. It may seem surprising that there should be anything left—after all, the sum of the lengths of the removed intervals is equal to the length of the original interval. However, a closer look at the process reveals that there must be something left, since removing the "middle third" of each interval involved removing open sets (sets that do not include their endpoints). So removing the line segment (, ) from the original interval [0, 1] leaves behind the points and . Subsequent steps do not remove these (or other) endpoints, since the intervals removed are always internal to the intervals remaining. So the Cantor set is not empty, and in fact contains an uncountably infinite number of points (as follows from the above description in terms of paths in an infinite binary tree).
It may appear that only the endpoints of the construction segments are left, but that is not the case either. The number 1/4, for example, has the unique ternary form 0.020202...₃ (the block 02 repeating). It is in the bottom third, and the top third of that third, and the bottom third of that top third, and so on. Since it is never in one of the middle segments, it is never removed. Yet it is also not an endpoint of any middle segment, because it is not a multiple of any power of 1/3.
All endpoints of segments are terminating ternary fractions and are contained in the set
which is a countably infinite set.
As to cardinality, almost all elements of the Cantor set are not endpoints of intervals, nor rational points like 1/4. The whole Cantor set is in fact not countable.
Properties
Cardinality
It can be shown that there are as many points left behind in this process as there were to begin with, and that therefore, the Cantor set is uncountable. To see this, we show that there is a function f from the Cantor set C to the closed interval [0,1] that is surjective (i.e. f maps from C onto [0,1]) so that the cardinality of C is no less than that of [0,1]. Since C is a subset of [0,1], its cardinality is also no greater, so the two cardinalities must in fact be equal, by the Cantor–Bernstein–Schröder theorem.
To construct this function, consider the points in the [0, 1] interval in terms of base 3 (or ternary) notation. Recall that the proper ternary fractions (those whose denominator is a power of 3) admit more than one representation in this notation; for example 1/3 can be written as 0.1₃ but also as 0.0222...₃, and 2/3 can be written as 0.2₃ but also as 0.1222...₃.
When we remove the middle third, this contains the numbers with ternary numerals of the form 0.1xxxxx...₃ where xxxxx...₃ is strictly between 00000...₃ and 22222...₃. So the numbers remaining after the first step consist of
Numbers of the form 0.0xxxxx...₃ (including 0.022222...₃ = 1/3)
Numbers of the form 0.2xxxxx...₃ (including 0.222222...₃ = 1)
This can be summarized by saying that those numbers with a ternary representation such that the first digit after the radix point is not 1 are the ones remaining after the first step.
The second step removes numbers of the form 0.01xxxx...₃ and 0.21xxxx...₃, and (with appropriate care for the endpoints) it can be concluded that the remaining numbers are those with a ternary numeral where neither of the first two digits is 1.
Continuing in this way, for a number not to be excluded at step n, it must have a ternary representation whose nth digit is not 1. For a number to be in the Cantor set, it must not be excluded at any step; that is, it must admit a numeral representation consisting entirely of 0s and 2s.
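This digit characterization is effectively checkable for rational inputs. The following is a minimal sketch (the function name in_cantor_set is illustrative, not standard); it generates the greedy ternary digits of a rational number, treats a terminating digit 1 as the equivalent expansion ending in 2s, and uses the eventual periodicity of rational expansions to terminate.

```python
from fractions import Fraction

def in_cantor_set(x: Fraction) -> bool:
    """Decide whether a rational x in [0, 1] lies in the middle-thirds Cantor set."""
    if not 0 <= x <= 1:
        return False
    seen = set()
    while x not in seen:                  # remainders of a rational repeat, so this halts
        seen.add(x)
        digit, x = divmod(3 * x, 1)       # next greedy ternary digit and remainder
        if digit == 1:
            # 0.d1...d(k-1)1 (ternary) can be rewritten as 0.d1...d(k-1)0222...,
            # so a trailing 1 is harmless; a 1 with a nonzero tail lies strictly
            # inside a removed middle third.
            return x == 0
    return True                           # periodic expansion uses only 0s and 2s

assert in_cantor_set(Fraction(1, 4))      # 0.020202... (ternary), not an endpoint
assert in_cantor_set(Fraction(1, 3)) and in_cantor_set(Fraction(2, 3))
assert not in_cantor_set(Fraction(1, 2))  # 0.111... (ternary)
```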
It is worth emphasizing that numbers like 1, 1/3 = 0.1₃ and 7/9 = 0.21₃ are in the Cantor set, as they have ternary numerals consisting entirely of 0s and 2s: 1 = 0.222...₃, 1/3 = 0.0222...₃ and 7/9 = 0.20222...₃.
All the latter numbers are "endpoints", and these examples are right limit points of the Cantor set C. The same is true for the left limit points of C, e.g. 2/3 = 0.1222...₃ = 0.2₃ and 8/9 = 0.21222...₃ = 0.22₃. All these endpoints are proper ternary fractions of the form p/q, where the denominator q is a power of 3 when the fraction is in its irreducible form. The ternary representation of these fractions terminates (i.e., is finite) or — recall from above that proper ternary fractions each have 2 representations — is infinite and "ends" in either infinitely many recurring 0s or infinitely many recurring 2s. Such a fraction is a left limit point of C if its ternary representation contains no 1's and "ends" in infinitely many recurring 0s. Similarly, a proper ternary fraction is a right limit point of C if, again, its ternary expansion contains no 1's and "ends" in infinitely many recurring 2s.
This set of endpoints is dense in C (but not dense in [0, 1]) and makes up a countably infinite set. The numbers in C which are not endpoints also have only 0s and 2s in their ternary representation, but they cannot end in an infinite repetition of the digit 0, nor of the digit 2, because then it would be an endpoint.
The function f from C to [0,1] is defined by taking the ternary numerals that do consist entirely of 0s and 2s, replacing all the 2s by 1s, and interpreting the sequence as a binary representation of a real number. In a formula,
f( Σ_k a_k 3^(−k) ) = Σ_k (a_k/2) 2^(−k),
where each digit a_k is 0 or 2.
For any number y in [0,1], its binary representation can be translated into a ternary representation of a number x in C by replacing all the 1s by 2s. With this, f(x) = y so that y is in the range of f. For instance if y = 3/5 = 0.100110011001...₂, we write x = 7/10 = 0.200220022002...₃. Consequently, f is surjective. However, f is not injective — the values for which f(x) coincides are those at opposing ends of one of the middle thirds removed. For instance, take
1/3 = 0.0222...₃ (which is a right limit point of C and a left limit point of the middle third [1/3, 2/3]) and
2/3 = 0.2000...₃ (which is a left limit point of C and a right limit point of the middle third [1/3, 2/3]),
so
f(1/3) = 0.0111...₂ = 0.1₂ = 0.1000...₂ = f(2/3).
Thus there are as many points in the Cantor set as there are in the interval [0, 1] (which has the uncountable cardinality 𝔠 = 2^ℵ₀). However, the set of endpoints of the removed intervals is countable, so there must be uncountably many numbers in the Cantor set which are not interval endpoints. As noted above, one example of such a number is 1/4, which can be written as 0.020202...₃ in ternary notation. In fact, given any a ∈ [−1, 1], there exist x, y in the Cantor set C such that a = y − x. This was first demonstrated by Steinhaus in 1917, who proved, via a geometric argument, the equivalent assertion that the line y = x + a meets C × C for every a ∈ [−1, 1]. Since this construction provides an injection from [−1, 1] to C × C, we have |C × C| ≥ |[−1, 1]| = 𝔠 as an immediate corollary. Assuming that |A × A| = |A| for any infinite set A (a statement shown to be equivalent to the axiom of choice by Tarski), this provides another demonstration that |C| = 𝔠.
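The map f described above can be traced numerically. The sketch below (illustrative names, truncating the expansions after a fixed number of digits) reads off ternary digits, halves them, and reinterprets them in base 2; it reproduces the examples f(7/10) = 3/5 and f(1/3) = f(2/3) = 1/2.

```python
from fractions import Fraction

def f(x: Fraction, digits: int = 60) -> Fraction:
    """Map a point of the Cantor set to [0, 1]: halve each ternary digit (0 or 2)
    and read the result as a binary expansion.  Truncation after `digits` ternary
    digits introduces an error of at most 2**-digits."""
    total = Fraction(0)
    for k in range(1, digits + 1):
        digit, x = divmod(3 * x, 1)            # next ternary digit and remainder
        if digit == 1:                         # terminating ...1 equals ...0222..., whose
            if x != 0:                         # image 0111... has the exact value 2**-k
                raise ValueError("not a Cantor set point")
            return total + Fraction(1, 2 ** k)
        total += Fraction(digit // 2, 2 ** k)
    return total

eps = Fraction(1, 2 ** 50)
assert abs(f(Fraction(7, 10)) - Fraction(3, 5)) < eps            # 0.2002... (ternary) -> 0.1001... (binary)
assert f(Fraction(1, 3)) == f(Fraction(2, 3)) == Fraction(1, 2)  # f is not injective
```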
The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers which are dense in every interval.
It has been conjectured that all algebraic irrational numbers are normal. Since members of the Cantor set are not normal in base 3, this would imply that all members of the Cantor set are either rational or transcendental.
Self-similarity
The Cantor set is the prototype of a fractal. It is self-similar, because it is equal to two copies of itself, if each copy is shrunk by a factor of 3 and translated. More precisely, the Cantor set is equal to the union of its two images under the left and right self-similarity transformations T_L(x) = x/3 and T_R(x) = (x + 2)/3, which leave the Cantor set invariant up to homeomorphism: T_L(C) ≅ T_R(C) ≅ C = T_L(C) ∪ T_R(C).
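A short sketch of this self-similarity (assuming the transformations T_L(x) = x/3 and T_R(x) = (x + 2)/3 given above): applying the two maps repeatedly to [0, 1] reproduces exactly the intervals of each stage of the middle-thirds construction.

```python
from fractions import Fraction

def T_L(x):
    return x / 3

def T_R(x):
    return (x + 2) / 3

def stage(n):
    """Intervals of the nth stage of the construction, obtained by applying
    T_L and T_R to every interval of the previous stage."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        intervals = [(T(a), T(b)) for a, b in intervals for T in (T_L, T_R)]
    return sorted(intervals)

# Stage 2 gives the familiar four intervals of length 1/9.
assert stage(2) == [(Fraction(0), Fraction(1, 9)), (Fraction(2, 9), Fraction(1, 3)),
                    (Fraction(2, 3), Fraction(7, 9)), (Fraction(8, 9), Fraction(1))]
```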
Repeated iteration of T_L and T_R can be visualized as an infinite binary tree. That is, at each node of the tree, one may consider the subtree to the left or to the right. Taking the set {T_L, T_R} together with function composition forms a monoid, the dyadic monoid.
The automorphisms of the binary tree are its hyperbolic rotations, and are given by the modular group. Thus, the Cantor set is a homogeneous space in the sense that for any two points x and y in the Cantor set C, there exists a homeomorphism h : C → C with h(x) = y. An explicit construction of h can be described more easily if we see the Cantor set as a product space of countably many copies of the discrete space {0, 1}. Then the map π : C → C defined coordinatewise by π(z)_n := z_n + x_n + y_n (mod 2) is an involutive homeomorphism exchanging x and y.
Conservation law
It has been found that some form of conservation law always underlies scaling and self-similarity. In the case of the Cantor set it can be seen that the d_f-th moment (where d_f = ln 2/ln 3 is the fractal dimension) of all the surviving intervals at any stage of the construction process is equal to a constant, which is equal to one in the case of the Cantor set.
We know that there are 2^n intervals of size 1/3^n present in the system at the nth step of its construction. Then if we label the surviving intervals as x_1, x_2, ..., x_(2^n), the d_f-th moment is x_1^(d_f) + x_2^(d_f) + ⋯ + x_(2^n)^(d_f) = 2^n × (1/3^n)^(d_f) = 1, since d_f = ln 2/ln 3.
The Hausdorff dimension of the Cantor set is equal to ln(2)/ln(3) ≈ 0.631.
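A quick numerical check of the conserved moment (a sketch, using only the stage-n interval counts and lengths stated above):

```python
from math import log

d_f = log(2) / log(3)                       # fractal dimension of the Cantor set, about 0.6309

for n in range(1, 8):
    moment = 2 ** n * (3.0 ** -n) ** d_f    # sum of x_i ** d_f over the 2**n surviving intervals
    assert abs(moment - 1.0) < 1e-12        # the d_f-th moment stays equal to 1
```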
Topological and analytical properties
Although "the" Cantor set typically refers to the original, middle-thirds Cantor set described above, topologists often talk about "a" Cantor set, which means any topological space that is homeomorphic (topologically equivalent) to it.
As the above summation argument shows, the Cantor set is uncountable but has Lebesgue measure 0. Since the Cantor set is the complement of a union of open sets, it itself is a closed subset of the reals, and therefore a complete metric space. Since it is also totally bounded, the Heine–Borel theorem says that it must be compact.
For any point in the Cantor set and any arbitrarily small neighborhood of the point, there is some other number with a ternary numeral of only 0s and 2s, as well as numbers whose ternary numerals contain 1s. Hence, every point in the Cantor set is an accumulation point (also called a cluster point or limit point) of the Cantor set, but none is an interior point. A closed set in which every point is an accumulation point is also called a perfect set in topology, while a closed subset of the interval with no interior points is nowhere dense in the interval.
Every point of the Cantor set is also an accumulation point of the complement of the Cantor set.
For any two points in the Cantor set, there will be some ternary digit where they differ — one will have 0 and the other 2. By splitting the Cantor set into "halves" depending on the value of this digit, one obtains a partition of the Cantor set into two closed sets that separate the original two points. In the relative topology on the Cantor set, the points have been separated by a clopen set. Consequently, the Cantor set is totally disconnected. As a compact totally disconnected Hausdorff space, the Cantor set is an example of a Stone space.
As a topological space, the Cantor set is naturally homeomorphic to the product of countably many copies of the space {0, 1}, where each copy carries the discrete topology. This is the space of all sequences in two digits
2^ℕ = { (x_1, x_2, x_3, ...) : x_n ∈ {0, 1} for n ∈ ℕ },
which can also be identified with the set of 2-adic integers. The basis for the open sets of the product topology are cylinder sets; the homeomorphism maps these to the subspace topology that the Cantor set inherits from the natural topology on the real line. This characterization of the Cantor space as a product of compact spaces gives a second proof that Cantor space is compact, via Tychonoff's theorem.
From the above characterization, the Cantor set is homeomorphic to the p-adic integers, and, if one point is removed from it, to the p-adic numbers.
The Cantor set is a subset of the reals, which are a metric space with respect to the ordinary distance metric; therefore the Cantor set itself is a metric space, by using that same metric. Alternatively, one can use the p-adic metric on 2^ℕ: given two sequences (x_n) and (y_n), the distance between them is d((x_n), (y_n)) = 2^(−k), where k is the smallest index such that x_k ≠ y_k; if there is no such index, then the two sequences are the same, and one defines the distance to be zero. These two metrics generate the same topology on the Cantor set.
We have seen above that the Cantor set is a totally disconnected perfect compact metric space. Indeed, in a sense it is the only one: every nonempty totally disconnected perfect compact metric space is homeomorphic to the Cantor set. See Cantor space for more on spaces homeomorphic to the Cantor set.
The Cantor set is sometimes regarded as "universal" in the category of compact metric spaces, since any compact metric space is a continuous image of the Cantor set; however this construction is not unique and so the Cantor set is not universal in the precise categorical sense. The "universal" property has important applications in functional analysis, where it is sometimes known as the representation theorem for compact metric spaces.
For any integer q ≥ 2, the topology on the group G = Z_q^ω (the countable direct sum) is discrete. Although the Pontrjagin dual Γ is also Z_q^ω, the topology of Γ is compact. One can see that Γ is totally disconnected and perfect; thus it is homeomorphic to the Cantor set. It is easiest to write out the homeomorphism explicitly in the case q = 2. (See Rudin 1962, p. 40.)
Measure and probability
The Cantor set can be seen as the compact group of binary sequences, and as such, it is endowed with a natural Haar measure. When normalized so that the measure of the set is 1, it is a model of an infinite sequence of coin tosses. Furthermore, one can show that the usual Lebesgue measure on the interval is an image of the Haar measure on the Cantor set, while the natural injection into the ternary set is a canonical example of a singular measure. It can also be shown that the Haar measure is an image of any probability, making the Cantor set a universal probability space in some ways.
In Lebesgue measure theory, the Cantor set is an example of a set which is uncountable and has zero measure. In contrast, the set has a Hausdorff measure of 1 in its dimension of log 2 / log 3.
Cantor numbers
If we define a Cantor number as a member of the Cantor set, then
Every real number in [0, 2] is the sum of two Cantor numbers.
Between any two Cantor numbers there is a number that is not a Cantor number.
Descriptive set theory
The Cantor set is a meagre set (or a set of first category) as a subset of [0,1] (although not as a subset of itself, since it is a Baire space). The Cantor set thus demonstrates that notions of "size" in terms of cardinality, measure, and (Baire) category need not coincide. Like the set ℚ ∩ [0,1] of rational numbers in the unit interval, the Cantor set C is "small" in the sense that it is a null set (a set of measure zero) and it is a meagre subset of [0,1]. However, unlike ℚ ∩ [0,1], which is countable and has a "small" cardinality, ℵ₀, the cardinality of C is the same as that of [0,1], the continuum 𝔠, and C is "large" in the sense of cardinality. In fact, it is also possible to construct a subset of [0,1] that is meagre but of positive measure and a subset that is non-meagre but of measure zero: by taking the countable union of "fat" Cantor sets C^(n) of measure λ = (n−1)/n (see Smith–Volterra–Cantor set below for the construction), we obtain a set A := ⋃_n C^(n) which has positive measure (equal to 1) but is meagre in [0,1], since each C^(n) is nowhere dense. Then consider the set A^c := [0,1] \ A. Since A ∪ A^c = [0,1], A^c cannot be meagre, but since μ(A) = 1, A^c must have measure zero.
Variants
Smith–Volterra–Cantor set
Instead of repeatedly removing the middle third of every piece as in the Cantor set, we could also keep removing any other fixed percentage (other than 0% and 100%) from the middle. In the case where the middle 8/10 of the interval is removed, we get a remarkably accessible case — the set consists of all numbers in [0,1] that can be written as a decimal consisting entirely of 0s and 9s. If a fixed percentage is removed at each stage, then the limiting set will have measure zero, since the length of the remainder (1 − f)^n → 0 as n → ∞ for any f such that 0 < f ≤ 1.
On the other hand, "fat Cantor sets" of positive measure can be generated by removal of smaller fractions of the middle of the segment in each iteration. Thus, one can construct sets homeomorphic to the Cantor set that have positive Lebesgue measure while still being nowhere dense. If an interval of length r^n (r ≤ 1/3) is removed from the middle of each segment at the nth iteration, then the total length removed is Σ_{n≥1} 2^(n−1) r^n = r/(1 − 2r), and the limiting set will have a Lebesgue measure of λ = (1 − 3r)/(1 − 2r). Thus, in a sense, the middle-thirds Cantor set is a limiting case with r = 1/3. If 0 < r < 1/3, then the remainder will have positive measure with 0 < λ < 1. The case r = 1/4 is known as the Smith–Volterra–Cantor set, which has a Lebesgue measure of 1/2.
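These formulas are easy to verify numerically. A small sketch (assuming only the removal rule just stated, with an interval of length r^n cut from each of the 2^(n−1) segments at step n):

```python
def removed_length(r: float, steps: int = 60) -> float:
    """Total length removed after `steps` iterations of the fat-Cantor construction."""
    return sum(2 ** (n - 1) * r ** n for n in range(1, steps + 1))

# r = 1/3 is the middle-thirds case: everything is removed (remainder has measure 0).
assert abs(removed_length(1 / 3) - 1.0) < 1e-9
# r = 1/4 is the Smith-Volterra-Cantor case: r/(1 - 2r) = 1/2 is removed,
# leaving a nowhere dense set of Lebesgue measure 1/2.
assert abs(removed_length(1 / 4) - 0.5) < 1e-9
```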
Stochastic Cantor set
One can modify the construction of the Cantor set by dividing randomly instead of equally. Moreover, to incorporate time, we can divide only one of the available intervals at each step instead of dividing all of them. In the case of the stochastic triadic Cantor set the resulting process can be described by the following rate equation
and for the stochastic dyadic Cantor set
where c(x, t) dx is the number of intervals of size between x and x + dx at time t. In the case of the stochastic triadic Cantor set the fractal dimension is smaller than that of its deterministic counterpart, ln 2/ln 3 ≈ 0.6309. In the case of the stochastic dyadic Cantor set the fractal dimension is again smaller than that of its deterministic counterpart. For the stochastic dyadic Cantor set the solution for c(x, t) exhibits dynamic scaling, as its solution in the long-time limit takes a scaling form governed by the fractal dimension of the stochastic dyadic Cantor set. In either case, as for the triadic Cantor set, the d_f-th moment of the stochastic triadic and dyadic Cantor sets is also a conserved quantity.
Cantor dust
Cantor dust is a multi-dimensional version of the Cantor set. It can be formed by taking a finite Cartesian product of the Cantor set with itself, making it a Cantor space. Like the Cantor set, Cantor dust has zero measure.
A different 2D analogue of the Cantor set is the Sierpinski carpet, where a square is divided up into nine smaller squares, and the middle one removed. The remaining squares are then further divided into nine each and the middle removed, and so on ad infinitum. One 3D analogue of this is the Menger sponge.
Historical remarks
Cantor introduced what we call today the Cantor ternary set as an example "of a perfect point-set, which is not everywhere-dense in any interval, however small." Cantor described it in terms of ternary expansions, as "the set of all real numbers given by the formula: z = c_1/3 + c_2/3² + c_3/3³ + ⋯ + c_ν/3^ν + ⋯, where the coefficients c_ν arbitrarily take the two values 0 and 2, and the series can consist of a finite number or an infinite number of elements."
A topological space P is perfect if all its points are limit points or, equivalently, if it coincides with its derived set P′. Subsets of the real line, like the Cantor set C, can be seen as topological spaces under the induced subspace topology.
Cantor was led to the study of derived sets by his results on uniqueness of trigonometric series. The latter did much to set him on the course for developing an abstract, general theory of infinite sets.
Benoit Mandelbrot wrote much on Cantor dusts and their relation to natural fractals and statistical physics. He further reflected on the puzzling or even upsetting nature of such structures to those in the mathematics and physics community. In The Fractal Geometry of Nature, he described how "When I started on this topic in 1962, everyone was agreeing that Cantor dusts are at least as monstrous as the Koch and Peano curves," and added that "every self-respecting physicist was automatically turned off by a mention of Cantor, ready to run a mile from anyone claiming to be interested in science."
| Mathematics | Other | null |
6173 | https://en.wikipedia.org/wiki/Cardinal%20number | Cardinal number | In mathematics, a cardinal number, or cardinal for short, is what is commonly called the number of elements of a set. In the case of a finite set, its cardinal number, or cardinality, is therefore a natural number. For dealing with the case of infinite sets, the infinite cardinal numbers have been introduced, which are often denoted with the Hebrew letter ℵ (aleph) marked with a subscript indicating their rank among the infinite cardinals.
Cardinality is defined in terms of bijective functions. Two sets have the same cardinality if, and only if, there is a one-to-one correspondence (bijection) between the elements of the two sets. In the case of finite sets, this agrees with the intuitive notion of number of elements. In the case of infinite sets, the behavior is more complex. A fundamental theorem due to Georg Cantor shows that it is possible for infinite sets to have different cardinalities, and in particular the cardinality of the set of real numbers is greater than the cardinality of the set of natural numbers. It is also possible for a proper subset of an infinite set to have the same cardinality as the original set—something that cannot happen with proper subsets of finite sets.
There is a transfinite sequence of cardinal numbers:
0, 1, 2, 3, ..., n, ...; ℵ₀, ℵ₁, ℵ₂, ..., ℵ_α, ...
This sequence starts with the natural numbers including zero (finite cardinals), which are followed by the aleph numbers. The aleph numbers are indexed by ordinal numbers. If the axiom of choice is true, this transfinite sequence includes every cardinal number. If the axiom of choice is not true, there are infinite cardinals that are not aleph numbers.
Cardinality is studied for its own sake as part of set theory. It is also a tool used in branches of mathematics including model theory, combinatorics, abstract algebra and mathematical analysis. In category theory, the cardinal numbers form a skeleton of the category of sets.
History
The notion of cardinality, as now understood, was formulated by Georg Cantor, the originator of set theory, in 1874–1884. Cardinality can be used to compare an aspect of finite sets. For example, the sets {1,2,3} and {4,5,6} are not equal, but have the same cardinality, namely three. This is established by the existence of a bijection (i.e., a one-to-one correspondence) between the two sets, such as the correspondence {1→4, 2→5, 3→6}.
Cantor applied his concept of bijection to infinite sets (for example the set of natural numbers N = {0, 1, 2, 3, ...}). Thus, he called all sets having a bijection with N denumerable (countably infinite) sets, which all share the same cardinal number. This cardinal number is called ℵ₀, aleph-null. He called the cardinal numbers of infinite sets transfinite cardinal numbers.
Cantor proved that any unbounded subset of N has the same cardinality as N, even though this might appear to run contrary to intuition. He also proved that the set of all ordered pairs of natural numbers is denumerable; this implies that the set of all rational numbers is also denumerable, since every rational can be represented by a pair of integers. He later proved that the set of all real algebraic numbers is also denumerable. Each real algebraic number z may be encoded as a finite sequence of integers, which are the coefficients in the polynomial equation of which it is a solution, i.e. the ordered n-tuple (a0, a1, ..., an), ai ∈ Z together with a pair of rationals (b0, b1) such that z is the unique root of the polynomial with coefficients (a0, a1, ..., an) that lies in the interval (b0, b1).
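Cantor's result that the pairs of natural numbers are denumerable can be made concrete with an explicit enumeration. The sketch below uses the standard Cantor pairing function (the name cantor_pair is illustrative), a bijection from N × N onto N, which lists the pairs diagonal by diagonal without repetition.

```python
def cantor_pair(i: int, j: int) -> int:
    """Cantor pairing function: a bijection from N x N onto N."""
    return (i + j) * (i + j + 1) // 2 + j

codes = {cantor_pair(i, j) for i in range(40) for j in range(40)}
assert len(codes) == 40 * 40                 # no two pairs share a code (injective)
assert set(range(40 * 41 // 2)) <= codes     # an initial segment of N is fully covered
```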
In his 1874 paper "On a Property of the Collection of All Real Algebraic Numbers", Cantor proved that there exist higher-order cardinal numbers, by showing that the set of real numbers has cardinality greater than that of N. His proof used an argument with nested intervals, but in an 1891 paper, he proved the same result using his ingenious and much simpler diagonal argument. The new cardinal number of the set of real numbers is called the cardinality of the continuum and Cantor used the symbol 𝔠 for it.
Cantor also developed a large portion of the general theory of cardinal numbers; he proved that there is a smallest transfinite cardinal number (ℵ₀, aleph-null), and that for every cardinal number there is a next-larger cardinal (ℵ₁, ℵ₂, ℵ₃, ...).
His continuum hypothesis is the proposition that the cardinality of the set of real numbers is the same as ℵ₁. This hypothesis is independent of the standard axioms of mathematical set theory, that is, it can neither be proved nor disproved from them. This was shown in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940.
Motivation
In informal use, a cardinal number is what is normally referred to as a counting number, provided that 0 is included: 0, 1, 2, .... They may be identified with the natural numbers beginning with 0. The counting numbers are exactly what can be defined formally as the finite cardinal numbers. Infinite cardinals only occur in higher-level mathematics and logic.
More formally, a non-zero number can be used for two purposes: to describe the size of a set, or to describe the position of an element in a sequence. For finite sets and sequences it is easy to see that these two notions coincide, since for every number describing a position in a sequence we can construct a set that has exactly the right size. For example, 3 describes the position of 'c' in the sequence <'a','b','c','d',...>, and we can construct the set {a,b,c}, which has 3 elements.
However, when dealing with infinite sets, it is essential to distinguish between the two, since the two notions are in fact different for infinite sets. Considering the position aspect leads to ordinal numbers, while the size aspect is generalized by the cardinal numbers described here.
The intuition behind the formal definition of cardinal is the construction of a notion of the relative size or "bigness" of a set, without reference to the kind of members which it has. For finite sets this is easy; one simply counts the number of elements a set has. In order to compare the sizes of larger sets, it is necessary to appeal to more refined notions.
A set Y is at least as big as a set X if there is an injective mapping from the elements of X to the elements of Y. An injective mapping identifies each element of the set X with a unique element of the set Y. This is most easily understood by an example; suppose we have the sets X = {1,2,3} and Y = {a,b,c,d}, then using this notion of size, we would observe that there is a mapping:
1 → a
2 → b
3 → c
which is injective, and hence conclude that Y has cardinality greater than or equal to X. The element d has no element mapping to it, but this is permitted as we only require an injective mapping, and not necessarily a bijective mapping. The advantage of this notion is that it can be extended to infinite sets.
We can then extend this to an equality-style relation. Two sets X and Y are said to have the same cardinality if there exists a bijection between X and Y. By the Schroeder–Bernstein theorem, this is equivalent to there being both an injective mapping from X to Y, and an injective mapping from Y to X. We then write |X| = |Y|. The cardinal number of X itself is often defined as the least ordinal a with |a| = |X|. This is called the von Neumann cardinal assignment; for this definition to make sense, it must be proved that every set has the same cardinality as some ordinal; this statement is the well-ordering principle. It is however possible to discuss the relative cardinality of sets without explicitly assigning names to objects.
The classic example used is that of the infinite hotel paradox, also called Hilbert's paradox of the Grand Hotel. Supposing there is an innkeeper at a hotel with an infinite number of rooms. The hotel is full, and then a new guest arrives. It is possible to fit the extra guest in by asking the guest who was in room 1 to move to room 2, the guest in room 2 to move to room 3, and so on, leaving room 1 vacant. We can explicitly write a segment of this mapping:
1 → 2
2 → 3
3 → 4
...
n → n + 1
...
With this assignment, we can see that the set {1,2,3,...} has the same cardinality as the set {2,3,4,...}, since a bijection between the first and the second has been shown. This motivates the definition of an infinite set being any set that has a proper subset of the same cardinality (i.e., a Dedekind-infinite set); in this case {2,3,4,...} is a proper subset of {1,2,3,...}.
When considering these large objects, one might also want to see if the notion of counting order coincides with that of cardinal defined above for these infinite sets. It happens that it does not; by considering the above example we can see that if some object "one greater than infinity" exists, then it must have the same cardinality as the infinite set we started out with. It is possible to use a different formal notion for number, called ordinals, based on the ideas of counting and considering each number in turn, and we discover that the notions of cardinality and ordinality are divergent once we move out of the finite numbers.
It can be proved that the cardinality of the real numbers is greater than that of the natural numbers just described. This can be visualized using Cantor's diagonal argument; classic questions of cardinality (for instance the continuum hypothesis) are concerned with discovering whether there is some cardinal between some pair of other infinite cardinals. In more recent times, mathematicians have been describing the properties of larger and larger cardinals.
Since cardinality is such a common concept in mathematics, a variety of names are in use. Sameness of cardinality is sometimes referred to as equipotence, equipollence, or equinumerosity. It is thus said that two sets with the same cardinality are, respectively, equipotent, equipollent, or equinumerous.
Formal definition
Formally, assuming the axiom of choice, the cardinality of a set X is the least ordinal number α such that there is a bijection between X and α. This definition is known as the von Neumann cardinal assignment. If the axiom of choice is not assumed, then a different approach is needed. The oldest definition of the cardinality of a set X (implicit in Cantor and explicit in Frege and Principia Mathematica) is as the class [X] of all sets that are equinumerous with X. This does not work in ZFC or other related systems of axiomatic set theory because if X is non-empty, this collection is too large to be a set. In fact, for X ≠ ∅ there is an injection from the universe into [X] by mapping a set m to {m} × X, and so by the axiom of limitation of size, [X] is a proper class. The definition does work however in type theory and in New Foundations and related systems. However, if we restrict from this class to those equinumerous with X that have the least rank, then it will work (this is a trick due to Dana Scott: it works because the collection of objects with any given rank is a set).
Von Neumann cardinal assignment implies that the cardinal number of a finite set is the common ordinal number of all possible well-orderings of that set, and cardinal and ordinal arithmetic (addition, multiplication, power, proper subtraction) then give the same answers for finite numbers. However, they differ for infinite numbers. For example, 2^ω = ω in ordinal arithmetic while 2^ℵ₀ > ℵ₀ in cardinal arithmetic, although the von Neumann assignment puts ℵ₀ = ω. On the other hand, Scott's trick implies that the cardinal number 0 is {∅}, which is also the ordinal number 1, and this may be confusing. A possible compromise (to take advantage of the alignment in finite arithmetic while avoiding reliance on the axiom of choice and confusion in infinite arithmetic) is to apply von Neumann assignment to the cardinal numbers of finite sets (those which can be well ordered and are not equipotent to proper subsets) and to use Scott's trick for the cardinal numbers of other sets.
Formally, the order among cardinal numbers is defined as follows: |X| ≤ |Y| means that there exists an injective function from X to Y. The Cantor–Bernstein–Schroeder theorem states that if |X| ≤ |Y| and |Y| ≤ |X| then |X| = |Y|. The axiom of choice is equivalent to the statement that given two sets X and Y, either |X| ≤ |Y| or |Y| ≤ |X|.
A set X is Dedekind-infinite if there exists a proper subset Y of X with |X| = |Y|, and Dedekind-finite if such a subset does not exist. The finite cardinals are just the natural numbers, in the sense that a set X is finite if and only if |X| = |n| = n for some natural number n. Any other set is infinite.
Assuming the axiom of choice, it can be proved that the Dedekind notions correspond to the standard ones. It can also be proved that the cardinal ℵ₀ (aleph null or aleph-0, where aleph is the first letter in the Hebrew alphabet, represented ℵ) of the set of natural numbers is the smallest infinite cardinal (i.e., any infinite set has a subset of cardinality ℵ₀). The next larger cardinal is denoted by ℵ₁, and so on. For every ordinal α, there is a cardinal number ℵ_α, and this list exhausts all infinite cardinal numbers.
Cardinal arithmetic
We can define arithmetic operations on cardinal numbers that generalize the ordinary operations for natural numbers. It can be shown that for finite cardinals, these operations coincide with the usual operations for natural numbers. Furthermore, these operations share many properties with ordinary arithmetic.
Successor cardinal
If the axiom of choice holds, then every cardinal κ has a successor, denoted κ⁺, where κ⁺ > κ and there are no cardinals between κ and its successor. (Without the axiom of choice, using Hartogs' theorem, it can be shown that for any cardinal number κ, there is a minimal cardinal κ⁺ such that κ⁺ ≰ κ.) For finite cardinals, the successor is simply κ + 1. For infinite cardinals, the successor cardinal differs from the successor ordinal.
Cardinal addition
If X and Y are disjoint, addition is given by the union of X and Y. If the two sets are not already disjoint, then they can be replaced by disjoint sets of the same cardinality (e.g., replace X by X×{0} and Y by Y×{1}).
Zero is an additive identity κ + 0 = 0 + κ = κ.
Addition is associative (κ + μ) + ν = κ + (μ + ν).
Addition is commutative κ + μ = μ + κ.
Addition is non-decreasing in both arguments:
κ ≤ μ → (κ + ν ≤ μ + ν and ν + κ ≤ ν + μ).
Assuming the axiom of choice, addition of infinite cardinal numbers is easy. If either κ or μ is infinite, then
κ + μ = max{κ, μ}.
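For finite sets the disjoint-union definition can be exercised directly; the following sketch (illustrative only) tags the elements exactly as described above, so that overlapping sets are not undercounted.

```python
def cardinal_sum(X, Y) -> int:
    """|X| + |Y| realized as the size of the tagged disjoint union (X x {0}) u (Y x {1})."""
    return len({(x, 0) for x in X} | {(y, 1) for y in Y})

X, Y = {1, 2, 3}, {2, 3, 4, 5}          # the sets overlap, so a plain union would give 5
assert cardinal_sum(X, Y) == len(X) + len(Y) == 7
```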
Subtraction
Assuming the axiom of choice and, given an infinite cardinal σ and a cardinal μ, there exists a cardinal κ such that μ + κ = σ if and only if μ ≤ σ. It will be unique (and equal to σ) if and only if μ < σ.
Cardinal multiplication
The product of cardinals comes from the Cartesian product.
κ·0 = 0·κ = 0.
κ·μ = 0 → (κ = 0 or μ = 0).
One is a multiplicative identity κ·1 = 1·κ = κ.
Multiplication is associative (κ·μ)·ν = κ·(μ·ν).
Multiplication is commutative κ·μ = μ·κ.
Multiplication is non-decreasing in both arguments:
κ ≤ μ → (κ·ν ≤ μ·ν and ν·κ ≤ ν·μ).
Multiplication distributes over addition:
κ·(μ + ν) = κ·μ + κ·ν and
(μ + ν)·κ = μ·κ + ν·κ.
Assuming the axiom of choice, multiplication of infinite cardinal numbers is also easy. If either κ or μ is infinite and both are non-zero, then
κ·μ = max{κ, μ}.
Thus the product of two infinite cardinal numbers is equal to their sum.
Division
Assuming the axiom of choice and, given an infinite cardinal π and a non-zero cardinal μ, there exists a cardinal κ such that μ · κ = π if and only if μ ≤ π. It will be unique (and equal to π) if and only if μ < π.
Cardinal exponentiation
Exponentiation is given by
κ^μ = |X^Y|,
where X^Y is the set of all functions from Y to X, |X| = κ and |Y| = μ. It is easy to check that the right-hand side depends only on |X| and |Y|.
κ^0 = 1 (in particular 0^0 = 1), see empty function.
If 1 ≤ μ, then 0^μ = 0.
1^μ = 1.
κ^1 = κ.
κ^(μ + ν) = κ^μ·κ^ν.
κ^(μ·ν) = (κ^μ)^ν.
(κ·μ)^ν = κ^ν·μ^ν.
Exponentiation is non-decreasing in both arguments:
(1 ≤ ν and κ ≤ μ) → (ν^κ ≤ ν^μ) and
(κ ≤ μ) → (κ^ν ≤ μ^ν).
2^|X| is the cardinality of the power set of the set X and Cantor's diagonal argument shows that 2^|X| > |X| for any set X. This proves that no largest cardinal exists (because for any cardinal κ, we can always find a larger cardinal 2^κ). In fact, the class of cardinals is a proper class. (This proof fails in some set theories, notably New Foundations.)
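For finite sets, both the function-space definition of exponentiation and its relation to the power set can be checked by brute force. A small sketch (the helper name all_functions is illustrative) encodes each function from Y to X as a set of argument-value pairs; functions into {0, 1} are exactly the characteristic functions of subsets.

```python
from itertools import product

def all_functions(Y, X):
    """The set X^Y of all functions from Y to X, each encoded as a frozenset of pairs."""
    Y, X = sorted(Y), sorted(X)
    return {frozenset(zip(Y, values)) for values in product(X, repeat=len(Y))}

X, Y = {0, 1, 2}, {'a', 'b'}
assert len(all_functions(Y, X)) == len(X) ** len(Y)     # kappa**mu with kappa = 3, mu = 2

# Functions from X into {0, 1} are characteristic functions of subsets of X,
# so 2**|X| equals the cardinality of the power set of X.
assert len(all_functions(X, {0, 1})) == 2 ** len(X)
```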
All the remaining propositions in this section assume the axiom of choice:
If κ and μ are both finite and greater than 1, and ν is infinite, then κ^ν = μ^ν.
If κ is infinite and μ is finite and non-zero, then κ^μ = κ.
If 2 ≤ κ and 1 ≤ μ and at least one of them is infinite, then:
max(κ, 2^μ) ≤ κ^μ ≤ max(2^κ, 2^μ).
Using König's theorem, one can prove κ < κ^cf(κ) and κ < cf(2^κ) for any infinite cardinal κ, where cf(κ) is the cofinality of κ.
Roots
Assuming the axiom of choice and, given an infinite cardinal κ and a finite cardinal μ greater than 0, the cardinal ν satisfying ν^μ = κ will be κ.
Logarithms
Assuming the axiom of choice and, given an infinite cardinal κ and a finite cardinal μ greater than 1, there may or may not be a cardinal λ satisfying μ^λ = κ. However, if such a cardinal exists, it is infinite and less than κ, and any finite cardinality ν greater than 1 will also satisfy ν^λ = κ.
The logarithm of an infinite cardinal number κ is defined as the least cardinal number μ such that κ ≤ 2μ. Logarithms of infinite cardinals are useful in some fields of mathematics, for example in the study of cardinal invariants of topological spaces, though they lack some of the properties that logarithms of positive real numbers possess.
The continuum hypothesis
The continuum hypothesis (CH) states that there are no cardinals strictly between ℵ₀ and 2^ℵ₀. The latter cardinal number is also often denoted by 𝔠; it is the cardinality of the continuum (the set of real numbers). In this case
2^ℵ₀ = ℵ₁.
Similarly, the generalized continuum hypothesis (GCH) states that for every infinite cardinal κ, there are no cardinals strictly between κ and 2^κ. Both the continuum hypothesis and the generalized continuum hypothesis have been proved to be independent of the usual axioms of set theory, the Zermelo–Fraenkel axioms together with the axiom of choice (ZFC).
Indeed, Easton's theorem shows that, for regular cardinals κ, the only restrictions ZFC places on the cardinality of 2^κ are that κ < cf(2^κ), and that the exponential function is non-decreasing.
| Mathematics | Basics | null |
6174 | https://en.wikipedia.org/wiki/Cardinality | Cardinality | In mathematics, cardinality describes a relationship between sets which compares their relative size. For example, the sets and are the same size as they each contain 3 elements. Beginning in the late 19th century, this concept was generalized to infinite sets, which allows one to distinguish between different types of infinity, and to perform arithmetic on them. There are two notions often used when referring to cardinality: one which compares sets directly using bijections and injections, and another which uses cardinal numbers.
The cardinality of a set may also be called its size, when no confusion with other notions of size is possible.
When two sets, A and B, have the same cardinality, it is usually written as |A| = |B|; however, if referring to the cardinal number of an individual set A, it is simply denoted |A|, with a vertical bar on each side; this is the same notation as absolute value, and the meaning depends on context. The cardinal number of a set A may alternatively be denoted by notations such as n(A), card(A), or #A.
History
A crude sense of cardinality, an awareness that groups of things or events compare with other groups by containing more, fewer, or the same number of instances, is observed in a variety of present-day animal species, suggesting an origin millions of years ago. Human expression of cardinality is seen as early as 40,000 years ago, with equating the size of a group with a group of recorded notches, or a representative collection of other things, such as sticks and shells. The abstraction of cardinality as a number is evident by 3000 BCE, in Sumerian mathematics and the manipulation of numbers without reference to a specific group of things or events.
From the 6th century BCE, the writings of Greek philosophers show hints of the cardinality of infinite sets. While they considered the notion of infinity as an endless series of actions, such as adding 1 to a number repeatedly, they did not consider the size of an infinite set of numbers to be a thing. The ancient Greek notion of infinity also considered the division of things into parts repeated without limit. In Euclid's Elements, commensurability was described as the ability to compare the length of two line segments, a and b, as a ratio, as long as there were a third segment, no matter how small, that could be laid end-to-end a whole number of times into both a and b. But with the discovery of irrational numbers, it was seen that even the infinite set of all rational numbers was not enough to describe the length of every possible line segment. Still, there was no concept of infinite sets as something that had cardinality.
To better understand infinite sets, a notion of cardinality was formulated by Georg Cantor, the originator of set theory. He examined the process of equating two sets with bijection, a one-to-one correspondence between the elements of two sets based on a unique relationship. In 1891, with the publication of Cantor's diagonal argument, he demonstrated that there are sets of numbers that cannot be placed in one-to-one correspondence with the set of natural numbers, i.e. uncountable sets that contain more elements than there are in the infinite set of natural numbers.
Comparing sets
While the cardinality of a finite set is simply comparable to its number of elements, extending the notion to infinite sets usually starts with defining the notion of comparison of arbitrary sets (some of which are possibly infinite).
Definition 1: =
Two sets A and B have the same cardinality if there exists a bijection (a.k.a., one-to-one correspondence) from A to B, that is, a function from A to B that is both injective and surjective. Such sets are said to be equipotent, equipollent, or equinumerous.
For example, the set E = {0, 2, 4, 6, ...} of non-negative even numbers has the same cardinality as the set N = {0, 1, 2, 3, ...} of natural numbers, since the function f(n) = 2n is a bijection from N to E.
For finite sets A and B, if some bijection exists from A to B, then each injective or surjective function from A to B is a bijection. This is no longer true for infinite A and B. For example, the function g from N to E, defined by g(n) = 4n is injective, but not surjective since 2, for instance, is not mapped to, and the function h from N to E, defined by h(n) = n − (n mod 2) (see: modulo operation) is surjective, but not injective, since 0 and 1 for instance both map to 0. Neither g nor h can challenge |E| = |N|, which was established by the existence of f.
Definition 2: ≤
A has cardinality less than or equal to the cardinality of B, if there exists an injective function from A into B.
If |A| ≤ |B| and |B| ≤ |A|, then |A| = |B| (a fact known as the Schröder–Bernstein theorem). The axiom of choice is equivalent to the statement that |A| ≤ |B| or |B| ≤ |A| for every A and B.
Definition 3: <
A has cardinality strictly less than the cardinality of B, if there is an injective function, but no bijective function, from A to B.
For example, the set N of all natural numbers has cardinality strictly less than its power set P(N), because g(n) = {n} is an injective function from N to P(N), and it can be shown that no function from N to P(N) can be bijective. By a similar argument, N has cardinality strictly less than the cardinality of the set R of all real numbers. For proofs, see Cantor's diagonal argument or Cantor's first uncountability proof.
Cardinal numbers
In the above section, "cardinality" of a set was defined functionally. In other words, it was not defined as a specific object itself. However, such an object can be defined as follows.
The relation of having the same cardinality is called equinumerosity, and this is an equivalence relation on the class of all sets. The equivalence class of a set A under this relation, then, consists of all those sets which have the same cardinality as A. There are two ways to define the "cardinality of a set":
The cardinality of a set A is defined as its equivalence class under equinumerosity.
A representative set is designated for each equivalence class. The most common choice is the initial ordinal in that class. This is usually taken as the definition of cardinal number in axiomatic set theory.
Assuming the axiom of choice, the cardinalities of the infinite sets are denoted
ℵ₀ < ℵ₁ < ℵ₂ < ....
For each ordinal α, ℵ_(α+1) is the least cardinal number greater than ℵ_α.
The cardinality of the natural numbers is denoted aleph-null (ℵ₀), while the cardinality of the real numbers is denoted by "𝔠" (a lowercase fraktur script "c"), and is also referred to as the cardinality of the continuum. Cantor showed, using the diagonal argument, that 𝔠 > ℵ₀. We can show that 𝔠 = 2^ℵ₀, this also being the cardinality of the set of all subsets of the natural numbers.
The continuum hypothesis says that ℵ₁ = 2^ℵ₀, i.e. 2^ℵ₀ is the smallest cardinal number bigger than ℵ₀, i.e. there is no set whose cardinality is strictly between that of the integers and that of the real numbers. The continuum hypothesis is independent of ZFC, a standard axiomatization of set theory; that is, it is impossible to prove the continuum hypothesis or its negation from ZFC—provided that ZFC is consistent. For more detail, see § Cardinality of the continuum below.
Finite, countable and uncountable sets
If the axiom of choice holds, the law of trichotomy holds for cardinality. Thus we can make the following definitions:
Any set X with cardinality less than that of the natural numbers, or | X | < | N |, is said to be a finite set.
Any set X that has the same cardinality as the set of the natural numbers, or | X | = | N | = ℵ₀, is said to be a countably infinite set.
Any set X with cardinality greater than that of the natural numbers, or | X | > | N |, for example | R | = 𝔠 > | N |, is said to be uncountable.
Infinite sets
Our intuition gained from finite sets breaks down when dealing with infinite sets. In the late 19th century Georg Cantor, Gottlob Frege, Richard Dedekind and others rejected the view that the whole cannot be the same size as the part. One example of this is Hilbert's paradox of the Grand Hotel.
Indeed, Dedekind defined an infinite set as one that can be placed into a one-to-one correspondence with a strict subset (that is, having the same size in Cantor's sense); this notion of infinity is called Dedekind infinite. Cantor introduced the cardinal numbers, and showed—according to his bijection-based definition of size—that some infinite sets are greater than others. The smallest infinite cardinality is that of the natural numbers (ℵ₀).
Cardinality of the continuum
One of Cantor's most important results was that the cardinality of the continuum (𝔠) is greater than that of the natural numbers (ℵ₀); that is, there are more real numbers R than natural numbers N. Namely, Cantor showed that 𝔠 = 2^ℵ₀ = ℶ₁ (see Beth one) satisfies
2^ℵ₀ > ℵ₀
(see Cantor's diagonal argument or Cantor's first uncountability proof).
The continuum hypothesis states that there is no cardinal number between the cardinality of the reals and the cardinality of the natural numbers, that is,
2^ℵ₀ = ℵ₁.
However, this hypothesis can neither be proved nor disproved within the widely accepted ZFC axiomatic set theory, if ZFC is consistent.
Cardinal arithmetic can be used to show not only that the number of points in a real number line is equal to the number of points in any segment of that line, but that this is equal to the number of points on a plane and, indeed, in any finite-dimensional space. These results are highly counterintuitive, because they imply that there exist proper subsets and proper supersets of an infinite set S that have the same size as S, although S contains elements that do not belong to its subsets, and the supersets of S contain elements that are not included in it.
The first of these results is apparent by considering, for instance, the tangent function, which provides a one-to-one correspondence between the interval (−½π, ½π) and R (see also Hilbert's paradox of the Grand Hotel).
The second result was first demonstrated by Cantor in 1878, but it became more apparent in 1890, when Giuseppe Peano introduced the space-filling curves, curved lines that twist and turn enough to fill the whole of any square, or cube, or hypercube, or finite-dimensional space. These curves are not a direct proof that a line has the same number of points as a finite-dimensional space, but they can be used to obtain such a proof.
Cantor also showed that sets with cardinality strictly greater than 𝔠 exist (see his generalized diagonal argument and theorem). They include, for instance:
the set of all subsets of R, i.e., the power set of R, written P(R) or 2^R
the set R^R of all functions from R to R
Both have cardinality
2^𝔠 = ℶ₂
(see Beth two).
The cardinal equalities 𝔠·𝔠 = 𝔠, 𝔠^ℵ₀ = 𝔠, and 𝔠^𝔠 = 2^𝔠 can be demonstrated using cardinal arithmetic:
𝔠·𝔠 = 2^ℵ₀ · 2^ℵ₀ = 2^(ℵ₀+ℵ₀) = 2^ℵ₀ = 𝔠,
𝔠^ℵ₀ = (2^ℵ₀)^ℵ₀ = 2^(ℵ₀·ℵ₀) = 2^ℵ₀ = 𝔠,
𝔠^𝔠 = (2^ℵ₀)^𝔠 = 2^(ℵ₀·𝔠) = 2^𝔠.
Examples and properties
If X = {a, b, c} and Y = {apples, oranges, peaches}, where a, b, and c are distinct, then | X | = | Y | because { (a, apples), (b, oranges), (c, peaches)} is a bijection between the sets X and Y. The cardinality of each of X and Y is 3.
If | X | ≤ | Y |, then there exists Z such that | X | = | Z | and Z ⊆ Y.
If | X | ≤ | Y | and | Y | ≤ | X |, then | X | = | Y |. This holds even for infinite cardinals, and is known as Cantor–Bernstein–Schroeder theorem.
Sets with cardinality of the continuum include the set of all real numbers, the set of all irrational numbers and the interval [0, 1].
Union and intersection
If A and B are disjoint sets, then
| A ∪ B | = | A | + | B |.
From this, one can show that in general, the cardinalities of unions and intersections are related by the following equation:
| C ∪ D | + | C ∩ D | = | C | + | D |.
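A quick finite-set check of this relation (an illustrative sketch using random subsets):

```python
import random

for _ in range(100):
    A = set(random.sample(range(30), 10))
    B = set(random.sample(range(30), 12))
    assert len(A | B) + len(A & B) == len(A) + len(B)
```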
Definition of cardinality in class theory (NBG or MK)
Here V denotes the class of all sets, and Ord denotes the class of all ordinal numbers.
We use the intersection of a class X, which is defined by ⋂X = {a : ∀x ∈ X (a ∈ x)}; therefore ⋂∅ = V.
In this case the cardinality of a set A is defined as
|A| := Ord ∩ ⋂{α ∈ Ord : there is a bijection between A and α}.
This definition also allows one to obtain the cardinality of any proper class P; in particular
|P| = Ord.
This definition is natural since it agrees with the axiom of limitation of size, which implies a bijection between V and any proper class.
| Mathematics | Set theory | null |
6206 | https://en.wikipedia.org/wiki/Computable%20number | Computable number | In mathematics, computable numbers are the real numbers that can be computed to within any desired precision by a finite, terminating algorithm. They are also known as the recursive numbers, effective numbers, computable reals, or recursive reals. The concept of a computable real number was introduced by Émile Borel in 1912, using the intuitive notion of computability available at the time.
Equivalent definitions can be given using μ-recursive functions, Turing machines, or λ-calculus as the formal representation of algorithms. The computable numbers form a real closed field and can be used in the place of real numbers for many, but not all, mathematical purposes.
Informal definition
In the following, Marvin Minsky defines the numbers to be computed in a manner similar to those defined by Alan Turing in 1936; i.e., as "sequences of digits interpreted as decimal fractions" between 0 and 1:
The key notions in the definition are (1) that some n is specified at the start, (2) for any n the computation only takes a finite number of steps, after which the machine produces the desired output and terminates.
An alternate form of (2) – the machine successively prints all n of the digits on its tape, halting after printing the nth – emphasizes Minsky's observation: (3) That by use of a Turing machine, a finite definition – in the form of the machine's state table – is being used to define what is a potentially infinite string of decimal digits.
This is however not the modern definition which only requires the result be accurate to within any given accuracy. The informal definition above is subject to a rounding problem called the table-maker's dilemma whereas the modern definition is not.
Formal definition
A real number a is computable if it can be approximated by some computable function f in the following manner: given any positive integer n, the function produces an integer f(n) such that
(f(n) − 1)/n ≤ a ≤ (f(n) + 1)/n.
A complex number is called computable if its real and imaginary parts are computable.
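A minimal sketch of the definition above (the function name is illustrative): using only integer arithmetic, the function below produces, for each positive integer n, an integer approximation of the square root of 2 satisfying the stated inequality.

```python
from math import isqrt

def sqrt2_approx(n: int) -> int:
    """Return an integer f(n) with (f(n) - 1)/n <= sqrt(2) <= (f(n) + 1)/n."""
    return isqrt(2 * n * n)            # floor(n * sqrt(2)), so the error is below 1/n

for n in (1, 10, 1000, 10 ** 6):
    f_n = sqrt2_approx(n)
    # (f_n - 1)/n <= sqrt(2) <= (f_n + 1)/n, checked with exact integer arithmetic
    # by squaring both sides (valid here since all quantities are non-negative).
    assert (f_n - 1) ** 2 <= 2 * n * n <= (f_n + 1) ** 2
```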
Equivalent definitions
There are two similar definitions that are equivalent:
There exists a computable function which, given any positive rational error bound ε, produces a rational number r such that |r − a| ≤ ε.
There is a computable sequence of rational numbers q_i converging to a such that |q_i − q_(i+1)| < 2^(−i) for each i.
There is another equivalent definition of computable numbers via computable Dedekind cuts. A computable Dedekind cut is a computable function D which, when provided with a rational number r as input, returns D(r) = true or D(r) = false, satisfying the following conditions:
D(r) = true for some rational number r;
D(r) = false for some rational number r;
if D(r) = true and D(s) = false, then r < s;
if D(r) = true, then there exists a rational s > r with D(s) = true (the lower part of the cut has no largest element).
An example is given by a program D that defines the cube root of 3. Assuming q > 0, this is defined by:
D(p/q) = true if p³ < 3·q³, and D(p/q) = false otherwise.
A real number is computable if and only if there is a computable Dedekind cut D corresponding to it. The function D is unique for each computable number (although of course two different programs may provide the same function).
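A brief sketch of such a cut (assuming the natural rule D(r) = true exactly when r³ < 3, matching the example above): the cut is decidable using exact rational arithmetic, and bisecting on it approximates the cube root of 3 to any requested accuracy.

```python
from fractions import Fraction

def D(r: Fraction) -> bool:
    """Computable Dedekind cut for the cube root of 3: true iff r lies below it."""
    return r ** 3 < 3

def approximate(cut, eps: Fraction, lo=Fraction(0), hi=Fraction(2)) -> Fraction:
    """Bisect until hi - lo < eps, assuming cut(lo) is true and cut(hi) is false."""
    while hi - lo >= eps:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if cut(mid) else (lo, mid)
    return lo

x = approximate(D, Fraction(1, 10 ** 6))
assert abs(x ** 3 - 3) < Fraction(1, 10 ** 4)    # x is within 10**-6 of 3**(1/3), about 1.4422
```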
Properties
Not computably enumerable
Assigning a Gödel number to each Turing machine definition produces a subset S of the natural numbers corresponding to the computable numbers and identifies a surjection from S to the computable numbers. There are only countably many Turing machines, showing that the computable numbers are subcountable. The set of these Gödel numbers, however, is not computably enumerable (and consequently, neither are subsets of S that are defined in terms of it). This is because there is no algorithm to determine which Gödel numbers correspond to Turing machines that produce computable reals. In order to produce a computable real, a Turing machine must compute a total function, but the corresponding decision problem is in Turing degree 0′′. Consequently, there is no surjective computable function from the natural numbers to the set of machines representing computable reals, and Cantor's diagonal argument cannot be used constructively to demonstrate uncountably many of them.
While the set of real numbers is uncountable, the set of computable numbers is classically countable and thus almost all real numbers are not computable. Here, for any given computable number x, the well-ordering principle provides that there is a minimal element in S which corresponds to x, and therefore there exists a subset consisting of the minimal elements, on which the map is a bijection. The inverse of this bijection is an injection into the natural numbers of the computable numbers, proving that they are countable. But, again, this subset is not computable, even though the computable reals are themselves ordered.
Properties as a field
The arithmetical operations on computable numbers are themselves computable in the sense that whenever real numbers a and b are computable then the following real numbers are also computable: a + b, a - b, ab, and a/b if b is nonzero.
These operations are actually uniformly computable; for example, there is a Turing machine which on input (A, B, ε) produces output r, where A is the description of a Turing machine approximating a, B is the description of a Turing machine approximating b, and r is an ε-approximation of a + b.
The fact that computable real numbers form a field was first proved by Henry Gordon Rice in 1954.
Computable reals however do not form a computable field, because the definition of a computable field requires effective equality.
Non-computability of the ordering
The order relation on the computable numbers is not computable. Let A be the description of a Turing machine approximating the number a. Then there is no Turing machine which on input A outputs "YES" if a > 0 and "NO" if a ≤ 0. To see why, suppose the machine described by A keeps outputting 0 as approximations. It is not clear how long to wait before deciding that the machine will never output an approximation which forces a to be positive. Thus the machine will eventually have to guess that the number will equal 0, in order to produce an output; the sequence may later become different from 0. This idea can be used to show that the machine is incorrect on some sequences if it computes a total function. A similar problem occurs when the computable reals are represented as Dedekind cuts. The same holds for the equality relation: the equality test is not computable.
While the full order relation is not computable, the restriction of it to pairs of unequal numbers is computable. That is, there is a program that takes as input two Turing machines A and B approximating numbers a and b, where a ≠ b, and outputs whether a < b or a > b. It is sufficient to use ε-approximations where ε < |b − a|/2; so by taking ε increasingly small (approaching 0), one eventually can decide whether a < b or a > b.
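A sketch of this restricted comparison (illustrative, not from the article's sources): each number is given by an ε-approximation function returning a rational within ε of it, and the loop refines ε until the two approximation intervals separate, which must eventually happen because the numbers are assumed unequal.

```python
from fractions import Fraction
from math import ceil, isqrt

def compare_unequal(approx_a, approx_b) -> bool:
    """Return True if a < b and False if a > b, given eps-approximation functions
    for computable reals a and b with a != b.  (If a == b the loop never halts,
    reflecting the fact that equality is not decidable.)"""
    eps = Fraction(1, 2)
    while True:
        a, b = approx_a(eps), approx_b(eps)
        if a + eps < b - eps:
            return True
        if b + eps < a - eps:
            return False
        eps /= 2

def approx_two_thirds(eps: Fraction) -> Fraction:
    return Fraction(2, 3)                          # exact, so any eps works

def approx_half_sqrt2(eps: Fraction) -> Fraction:
    n = ceil(1 / eps)                              # choose n with 1/n <= eps
    return Fraction(isqrt(2 * n * n), 2 * n)       # floor(n*sqrt(2))/(2n), within eps of sqrt(2)/2

assert compare_unequal(approx_two_thirds, approx_half_sqrt2)   # 2/3 < sqrt(2)/2, about 0.7071
```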
Other properties
The computable real numbers do not share all the properties of the real numbers used in analysis. For example, the least upper bound of a bounded increasing computable sequence of computable real numbers need not be a computable real number. A sequence with this property is known as a Specker sequence, as the first construction is due to Ernst Specker in 1949. Despite the existence of counterexamples such as these, parts of calculus and real analysis can be developed in the field of computable numbers, leading to the study of computable analysis.
Every computable number is arithmetically definable, but not vice versa. There are many arithmetically definable, noncomputable real numbers, including:
any number that encodes the solution of the halting problem (or any other undecidable problem) according to a chosen encoding scheme.
Chaitin's constant, Ω, which is a type of real number that is Turing equivalent to the halting problem.
Both of these examples in fact define an infinite set of definable, uncomputable numbers, one for each universal Turing machine.
A real number is computable if and only if the set of natural numbers it represents (when written in binary and viewed as a characteristic function) is computable.
The set of computable real numbers (as well as every countable, densely ordered subset of computable reals without ends) is order-isomorphic to the set of rational numbers.
Digit strings and the Cantor and Baire spaces
Turing's original paper defined computable numbers as follows:
(The decimal expansion of a only refers to the digits following the decimal point.)
Turing was aware that this definition is equivalent to the ε-approximation definition given above. The argument proceeds as follows: if a number is computable in the Turing sense, then it is also computable in the ε sense: if n > log₁₀(1/ε), then the first n digits of the decimal expansion for a provide an ε-approximation of a. For the converse, we pick an ε-computable real number a and generate increasingly precise approximations until the nth digit after the decimal point is certain. This always generates a decimal expansion equal to a, but it may improperly end in an infinite sequence of 9's, in which case it must have a finite (and thus computable) proper decimal expansion.
Unless certain topological properties of the real numbers are relevant, it is often more convenient to deal with elements of 2^ω (total 0,1-valued functions) instead of real numbers in [0, 1]. The members of 2^ω can be identified with binary decimal expansions, but since the decimal expansions 0.d₁d₂...dₙ0111... and 0.d₁d₂...dₙ1000... denote the same real number, the interval [0, 1] can only be bijectively (and homeomorphically under the subset topology) identified with the subset of 2^ω not ending in all 1's.
Note that this property of decimal expansions means that it is impossible to effectively identify the computable real numbers defined in terms of a decimal expansion with those defined in the approximation sense. Hirst has shown that there is no algorithm which takes as input the description of a Turing machine which produces approximations for the computable number a, and produces as output a Turing machine which enumerates the digits of a in the sense of Turing's definition. Similarly, it means that the arithmetic operations on the computable reals are not effective on their decimal representations: when adding decimal numbers, in order to produce one digit, it may be necessary to look arbitrarily far to the right to determine if there is a carry to the current location. This lack of uniformity is one reason why the contemporary definition of computable numbers uses ε-approximations rather than decimal expansions.
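The carry problem can be made concrete. In the illustrative Python sketch below (the helper name and digit-stream interface are hypothetical), two summands are given by their decimal digit streams; for 1/3 = 0.333... and 2/3 = 0.666... every digit pair sums to 9, so no finite lookahead settles whether the sum begins 0.9... or 1.0...:

```python
def first_digit_of_sum(a, b, depth):
    # a(k) and b(k) give the k-th digit after the decimal point (k = 1, 2, ...).
    # Returns the integer-part digit of a+b (for summands in [0, 1)) if
    # scanning `depth` digit pairs settles the carry, else None.
    for k in range(1, depth + 1):
        s = a(k) + b(k)
        if s < 9:
            return 0   # no carry can ever reach the integer part
        if s > 9:
            return 1   # a carry definitely propagates all the way left
    return None        # every pair so far sums to 9: still undecided

# 1/3 + 2/3: undecided at any finite depth, although the sum is exactly 1.
print(first_digit_of_sum(lambda k: 3, lambda k: 6, depth=1000))  # None
```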
However, from a computability theoretic or measure theoretic perspective, the two structures 2^ω and [0,1] are essentially identical. Thus, computability theorists often refer to members of 2^ω as reals. While 2^ω is totally disconnected, for questions about Π⁰₁ classes or randomness it is easier to work in 2^ω.
Elements of ω^ω are sometimes called reals as well and, though containing a homeomorphic image of ℝ, ω^ω isn't even locally compact (in addition to being totally disconnected). This leads to genuine differences in the computational properties. For instance the x ∈ ℝ satisfying ∀n φ(x, n), with φ quantifier free, must be computable, while the unique x ∈ ω^ω satisfying a universal formula may have an arbitrarily high position in the hyperarithmetic hierarchy.
Use in place of the reals
The computable numbers include the specific real numbers which appear in practice, including all real algebraic numbers, as well as e, π, and many other transcendental numbers. Though the computable reals exhaust those reals we can calculate or approximate, the assumption that all reals are computable leads to substantially different conclusions about the real numbers. The question naturally arises of whether it is possible to dispose of the full set of reals and use computable numbers for all of mathematics. This idea is appealing from a constructivist point of view, and has been pursued by the Russian school of constructive mathematics.
To actually develop analysis over computable numbers, some care must be taken. For example, if one uses the classical definition of a sequence, the set of computable numbers is not closed under the basic operation of taking the supremum of a bounded sequence (for example, consider a Specker sequence, see the section above). This difficulty is addressed by considering only sequences which have a computable modulus of convergence. The resulting mathematical theory is called computable analysis.
Implementations of exact arithmetic
Computer packages representing real numbers as programs computing approximations were proposed as early as 1985, under the name "exact arithmetic". Modern examples include the CoRN library (Coq) and the RealLib package (C++). A related line of work is based on taking a real RAM program and running it with rational or floating-point numbers of sufficient precision, such as the iRRAM package.
Electric current
An electric current is a flow of charged particles, such as electrons or ions, moving through an electrical conductor or space. It is defined as the net rate of flow of electric charge through a surface. The moving particles are called charge carriers, which may be one of several types of particles, depending on the conductor. In electric circuits the charge carriers are often electrons moving through a wire. In semiconductors they can be electrons or holes. In an electrolyte the charge carriers are ions, while in plasma, an ionized gas, they are ions and electrons.
In the International System of Units (SI), electric current is expressed in units of ampere (sometimes called an "amp", symbol A), which is equivalent to one coulomb per second. The ampere is an SI base unit and electric current is a base quantity in the International System of Quantities (ISQ). Electric current is also known as amperage and is measured using a device called an ammeter.
Electric currents create magnetic fields, which are used in motors, generators, inductors, and transformers. In ordinary conductors, they cause Joule heating, which creates light in incandescent light bulbs. Time-varying currents emit electromagnetic waves, which are used in telecommunications to broadcast information.
Symbol
The conventional symbol for current is I, which originates from the French phrase intensité du courant (current intensity). Current intensity is often referred to simply as current. The I symbol was used by André-Marie Ampère, after whom the unit of electric current is named, in formulating Ampère's force law (1820). The notation travelled from France to Great Britain, where it became standard, although at least one journal did not change from using C to I until 1896.
Conventions
The conventional direction of current, also known as conventional current, is arbitrarily defined as the direction in which positive charges flow. In a conductive material, the moving charged particles that constitute the electric current are called charge carriers. In metals, which make up the wires and other conductors in most electrical circuits, the positively charged atomic nuclei of the atoms are held in a fixed position, and the negatively charged electrons are the charge carriers, free to move about in the metal. In other materials, notably the semiconductors, the charge carriers can be positive or negative, depending on the dopant used. Positive and negative charge carriers may even be present at the same time, as happens in an electrolyte in an electrochemical cell.
A flow of positive charges gives the same electric current, and has the same effect in a circuit, as an equal flow of negative charges in the opposite direction. Since current can be the flow of either positive or negative charges, or both, a convention is needed for the direction of current that is independent of the type of charge carriers. Negatively charged carriers, such as the electrons (the charge carriers in metal wires and many other electronic circuit components), therefore flow in the opposite direction of conventional current flow in an electrical circuit.
Reference direction
A current in a wire or circuit element can flow in either of two directions. When defining a variable to represent the current, the direction representing positive current must be specified, usually by an arrow on the circuit schematic diagram. This is called the reference direction of the current . When analyzing electrical circuits, the actual direction of current through a specific circuit element is usually unknown until the analysis is completed. Consequently, the reference directions of currents are often assigned arbitrarily. When the circuit is solved, a negative value for the current implies the actual direction of current through that circuit element is opposite that of the chosen reference direction.
Ohm's law
Ohm's law states that the current through a conductor between two points is directly proportional to the potential difference across the two points. Introducing the constant of proportionality, the resistance, one arrives at the usual mathematical equation that describes this relationship:
I = V/R,
where I is the current through the conductor in units of amperes, V is the potential difference measured across the conductor in units of volts, and R is the resistance of the conductor in units of ohms. More specifically, Ohm's law states that the R in this relation is constant, independent of the current.
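A small numeric illustration in Python (the component values are arbitrary example figures):

```python
V = 9.0     # potential difference in volts
R = 450.0   # resistance in ohms
I = V / R   # Ohm's law: I = V/R = 0.02 A, i.e. 20 mA
```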
Alternating and direct current
In alternating current (AC) systems, the movement of electric charge periodically reverses direction. AC is the form of electric power most commonly delivered to businesses and residences. The usual waveform of an AC power circuit is a sine wave, though certain applications use alternative waveforms, such as triangular or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. An important goal in these applications is recovery of information encoded (or modulated) onto the AC signal.
In contrast, direct current (DC) refers to a system in which the movement of electric charge is in only one direction (sometimes called unidirectional flow). Direct current is produced by sources such as batteries, thermocouples, solar cells, and commutator-type electric machines of the dynamo type. Alternating current can also be converted to direct current through use of a rectifier. Direct current may flow in a conductor such as a wire, but can also flow through semiconductors, insulators, or even through a vacuum as in electron or ion beams. An old name for direct current was galvanic current.
Occurrences
Natural observable examples of electric current include lightning, static electric discharge, and the solar wind, the source of the polar auroras.
Man-made occurrences of electric current include the flow of conduction electrons in metal wires such as the overhead power lines that deliver electrical energy across long distances and the smaller wires within electrical and electronic equipment. Eddy currents are electric currents that occur in conductors exposed to changing magnetic fields. Similarly, electric currents occur, particularly at the surface, in conductors exposed to electromagnetic waves. When oscillating electric currents flow at the correct voltages within radio antennas, radio waves are generated.
In electronics, other forms of electric current include the flow of electrons through resistors or through the vacuum in a vacuum tube, the flow of ions inside a battery, and the flow of holes within metals and semiconductors.
A biological example of current is the flow of ions in neurons and nerves, responsible for both thought and sensory perception.
Measurement
Current can be measured using an ammeter.
Electric current can be directly measured with a galvanometer, but this method involves breaking the electrical circuit, which is sometimes inconvenient.
Current can also be measured without breaking the circuit by detecting the magnetic field associated with the current.
Devices, at the circuit level, use various techniques to measure current:
Shunt resistors
Hall effect current sensor transducers
Transformers (however, DC cannot be measured this way)
Magnetoresistive field sensors
Rogowski coils
Current clamps
Resistive heating
Joule heating, also known as ohmic heating and resistive heating, is the process of power dissipation by which the passage of an electric current through a conductor increases the internal energy of the conductor, converting thermodynamic work into heat. The phenomenon was first studied by James Prescott Joule in 1841. Joule immersed a length of wire in a fixed mass of water and measured the temperature rise due to a known current through the wire for a 30 minute period. By varying the current and the length of the wire he deduced that the heat produced was proportional to the square of the current multiplied by the electrical resistance of the wire.
This relationship is known as Joule's Law. The SI unit of energy was subsequently named the joule and given the symbol J. The commonly known SI unit of power, the watt (symbol: W), is equivalent to one joule per second.
Electromagnetism
Electromagnet
In an electromagnet a coil of wires behaves like a magnet when an electric current flows through it. When the current is switched off, the coil loses its magnetism immediately.
Electric current produces a magnetic field. The magnetic field can be visualized as a pattern of circular field lines surrounding the wire that persists as long as there is current.
Electromagnetic induction
Magnetic fields can also be used to make electric currents. When a changing magnetic field is applied to a conductor, an electromotive force (EMF) is induced, which starts an electric current, when there is a suitable path.
Radio waves
When an electric current flows in a suitably shaped conductor at radio frequencies, radio waves can be generated. These travel at the speed of light and can cause electric currents in distant conductors.
Conduction mechanisms in various media
In metallic solids, electric charge flows by means of electrons, from lower to higher electrical potential. In other media, any stream of charged objects (ions, for example) may constitute an electric current. To provide a definition of current independent of the type of charge carriers, conventional current is defined as moving in the same direction as the positive charge flow. So, in metals where the charge carriers (electrons) are negative, conventional current is in the opposite direction to the overall electron movement. In conductors where the charge carriers are positive, conventional current is in the same direction as the charge carriers.
In a vacuum, a beam of ions or electrons may be formed. In other conductive materials, the electric current is due to the flow of both positively and negatively charged particles at the same time. In still others, the current is entirely due to positive charge flow. For example, the electric currents in electrolytes are flows of positively and negatively charged ions. In a common lead-acid electrochemical cell, electric currents are composed of positive hydronium ions flowing in one direction, and negative sulfate ions flowing in the other. Electric currents in sparks or plasma are flows of electrons as well as positive and negative ions. In ice and in certain solid electrolytes, the electric current is entirely composed of flowing ions.
Metals
In a metal, some of the outer electrons in each atom are not bound to the individual molecules as they are in molecular solids, or in full bands as they are in insulating materials, but are free to move within the metal lattice. These conduction electrons can serve as charge carriers, carrying a current. Metals are particularly conductive because there are many of these free electrons. With no external electric field applied, these electrons move about randomly due to thermal energy but, on average, there is zero net current within the metal. At room temperature, the average speed of these random motions is 10⁶ metres per second. Given a surface through which a metal wire passes, electrons move in both directions across the surface at an equal rate. As George Gamow wrote in his popular science book, One, Two, Three...Infinity (1947), "The metallic substances differ from all other materials by the fact that the outer shells of their atoms are bound rather loosely, and often let one of their electrons go free. Thus the interior of a metal is filled up with a large number of unattached electrons that travel aimlessly around like a crowd of displaced persons. When a metal wire is subjected to electric force applied on its opposite ends, these free electrons rush in the direction of the force, thus forming what we call an electric current."
When a metal wire is connected across the two terminals of a DC voltage source such as a battery, the source places an electric field across the conductor. The moment contact is made, the free electrons of the conductor are forced to drift toward the positive terminal under the influence of this field. The free electrons are therefore the charge carrier in a typical solid conductor.
For a steady flow of charge through a surface, the current I (in amperes) can be calculated with the following equation:
I = Q/t,
where Q is the electric charge transferred through the surface over a time t. If Q and t are measured in coulombs and seconds respectively, I is in amperes.
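For instance (arbitrary example values):

```python
Q = 30.0    # charge in coulombs transferred through the surface
t = 12.0    # elapsed time in seconds
I = Q / t   # steady current: 2.5 amperes
```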
More generally, electric current can be represented as the rate at which charge flows through a given surface as:
I = dQ/dt.
Electrolytes
Electric currents in electrolytes are flows of electrically charged particles (ions). For example, if an electric field is placed across a solution of Na+ and Cl− (and conditions are right) the sodium ions move towards the negative electrode (cathode), while the chloride ions move towards the positive electrode (anode). Reactions take place at both electrode surfaces, neutralizing each ion.
Water-ice and certain solid electrolytes called proton conductors contain positive hydrogen ions ("protons") that are mobile. In these materials, electric currents are composed of moving protons, as opposed to the moving electrons in metals.
In certain electrolyte mixtures, brightly coloured ions are the moving electric charges. The slow progress of the colour makes the current visible.
Gases and plasmas
In air and other ordinary gases below the breakdown field, the dominant source of electrical conduction is via relatively few mobile ions produced by radioactive gases, ultraviolet light, or cosmic rays. Since the electrical conductivity is low, gases are dielectrics or insulators. However, once the applied electric field approaches the breakdown value, free electrons become sufficiently accelerated by the electric field to create additional free electrons by colliding, and ionizing, neutral gas atoms or molecules in a process called avalanche breakdown. The breakdown process forms a plasma that contains enough mobile electrons and positive ions to make it an electrical conductor. In the process, it forms a light emitting conductive path, such as a spark, arc or lightning.
Plasma is the state of matter where some of the electrons in a gas are stripped or "ionized" from their molecules or atoms. A plasma can be formed by high temperature, or by application of a high electric or alternating magnetic field as noted above. Due to their lower mass, the electrons in a plasma accelerate more quickly in response to an electric field than the heavier positive ions, and hence carry the bulk of the current. The free ions recombine to create new chemical compounds (for example, breaking atmospheric oxygen into single oxygen [O2 → 2O], which then recombine creating ozone [O3]).
Vacuum
Since a "perfect vacuum" contains no charged particles, it normally behaves as a perfect insulator. However, metal electrode surfaces can cause a region of the vacuum to become conductive by injecting free electrons or ions through either field electron emission or thermionic emission. Thermionic emission occurs when the thermal energy exceeds the metal's work function, while field electron emission occurs when the electric field at the surface of the metal is high enough to cause tunneling, which results in the ejection of free electrons from the metal into the vacuum. Externally heated electrodes are often used to generate an electron cloud as in the filament or indirectly heated cathode of vacuum tubes. Cold electrodes can also spontaneously produce electron clouds via thermionic emission when small incandescent regions (called cathode spots or anode spots) are formed. These are incandescent regions of the electrode surface that are created by a localized high current. These regions may be initiated by field electron emission, but are then sustained by localized thermionic emission once a vacuum arc forms. These small electron-emitting regions can form quite rapidly, even explosively, on a metal surface subjected to a high electrical field. Vacuum tubes and sprytrons are some of the electronic switching and amplifying devices based on vacuum conductivity.
Superconductivity
Superconductivity is a phenomenon of exactly zero electrical resistance and expulsion of magnetic fields occurring in certain materials when cooled below a characteristic critical temperature. It was discovered by Heike Kamerlingh Onnes on April 8, 1911 in Leiden. Like ferromagnetism and atomic spectral lines, superconductivity is a quantum mechanical phenomenon. It is characterized by the Meissner effect, the complete ejection of magnetic field lines from the interior of the superconductor as it transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of perfect conductivity in classical physics.
Semiconductor
In a semiconductor it is sometimes useful to think of the current as due to the flow of positive "holes" (the mobile positive charge carriers that are places where the semiconductor crystal is missing a valence electron). This is the case in a p-type semiconductor. A semiconductor has electrical conductivity intermediate in magnitude between that of a conductor and an insulator. This means a conductivity roughly in the range of 10⁻² to 10⁴ siemens per centimeter (S⋅cm⁻¹).
In the classic crystalline semiconductors, electrons can have energies only within certain bands (i.e. ranges of levels of energy). Energetically, these bands are located between the energy of the ground state, the state in which electrons are tightly bound to the atomic nuclei of the material, and the free electron energy, the latter describing the energy required for an electron to escape entirely from the material. The energy bands each correspond to many discrete quantum states of the electrons, and most of the states with low energy (closer to the nucleus) are occupied, up to a particular band called the valence band. Semiconductors and insulators are distinguished from metals because the valence band in any given metal is nearly filled with electrons under usual operating conditions, while very few (semiconductor) or virtually none (insulator) of them are available in the conduction band, the band immediately above the valence band.
The ease of exciting electrons in the semiconductor from the valence band to the conduction band depends on the band gap between the bands. The size of this energy band gap serves as an arbitrary dividing line (roughly 4 eV) between semiconductors and insulators.
With covalent bonds, an electron moves by hopping to a neighboring bond. The Pauli exclusion principle requires that the electron be lifted into the higher anti-bonding state of that bond. For delocalized states, for example in one dimension, that is, in a nanowire, for every energy there is a state with electrons flowing in one direction and another state with the electrons flowing in the other. For a net current to flow, more states for one direction than for the other direction must be occupied. For this to occur, energy is required, as in the semiconductor the next higher states lie above the band gap. Often this is stated as: full bands do not contribute to the electrical conductivity. However, as a semiconductor's temperature rises above absolute zero, there is more energy in the semiconductor to spend on lattice vibration and on exciting electrons into the conduction band. The current-carrying electrons in the conduction band are known as free electrons, though they are often simply called electrons if that is clear in context.
Current density and Ohm's law
Current density is the rate at which charge passes through a chosen unit area. It is defined as a vector whose magnitude is the current per unit cross-sectional area. As discussed in Reference direction, the direction is arbitrary. Conventionally, if the moving charges are positive, then the current density has the same sign as the velocity of the charges. For negative charges, the sign of the current density is opposite to the velocity of the charges. In SI units, current density (symbol: j) is expressed in the SI base units of amperes per square metre.
In linear materials such as metals, and under low frequencies, the current density across the conductor surface is uniform. In such conditions, Ohm's law states that the current is directly proportional to the potential difference between two ends (across) of that metal (ideal) resistor (or other ohmic device):
I = V/R,
where I is the current, measured in amperes; V is the potential difference, measured in volts; and R is the resistance, measured in ohms. For alternating currents, especially at higher frequencies, skin effect causes the current to spread unevenly across the conductor cross-section, with higher density near the surface, thus increasing the apparent resistance.
Drift speed
The mobile charged particles within a conductor move constantly in random directions, like the particles of a gas. (More accurately, a Fermi gas.) To create a net flow of charge, the particles must also move together with an average drift rate. Electrons are the charge carriers in most metals and they follow an erratic path, bouncing from atom to atom, but generally drifting in the opposite direction of the electric field. The speed they drift at can be calculated from the equation:
I = nAvQ,
where
I is the electric current,
n is the number of charged particles per unit volume (or charge carrier density),
A is the cross-sectional area of the conductor,
v is the drift velocity, and
Q is the charge on each particle.
Typically, electric charges in solids flow slowly. For example, in a copper wire of cross-section 0.5 mm2, carrying a current of 5 A, the drift velocity of the electrons is on the order of a millimetre per second. To take a different example, in the near-vacuum inside a cathode-ray tube, the electrons travel in near-straight lines at about a tenth of the speed of light.
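The copper-wire figure can be checked with a short calculation; the carrier density used below is a standard literature value assumed for the sketch:

```python
I = 5.0        # current in amperes
n = 8.5e28     # free-electron density of copper, per cubic metre (assumed)
A = 0.5e-6     # cross-sectional area: 0.5 mm^2 expressed in m^2
q = 1.602e-19  # elementary charge in coulombs
v = I / (n * A * q)   # drift speed, about 7.3e-4 m/s: under a millimetre per second
print(v)
```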
Any accelerating electric charge, and therefore any changing electric current, gives rise to an electromagnetic wave that propagates at very high speed outside the surface of the conductor. This speed is usually a significant fraction of the speed of light, as can be deduced from Maxwell's equations, and is therefore many times faster than the drift velocity of the electrons. For example, in AC power lines, the waves of electromagnetic energy propagate through the space between the wires, moving from a source to a distant load, even though the electrons in the wires only move back and forth over a tiny distance.
The ratio of the speed of the electromagnetic wave to the speed of light in free space is called the velocity factor, and depends on the electromagnetic properties of the conductor and the insulating materials surrounding it, and on their shape and size.
The magnitudes (not the natures) of these three velocities can be illustrated by an analogy with the three similar velocities associated with gases.
Circle
A circle is a shape consisting of all points in a plane that are at a given distance from a given point, the centre. The distance between any point of the circle and the centre is called the radius. The length of a line segment connecting two points on the circle and passing through the centre is called the diameter. A circle bounds a region of the plane called a disc.
The circle has been known since before the beginning of recorded history. Natural circles are common, such as the full moon or a slice of round fruit. The circle is the basis for the wheel, which, with related inventions such as gears, makes much of modern machinery possible. In mathematics, the study of the circle has helped inspire the development of geometry, astronomy and calculus.
Terminology
Annulus: a ring-shaped object, the region bounded by two concentric circles.
Arc: any connected part of a circle. Specifying two end points of an arc and a centre allows for two arcs that together make up a full circle.
Centre: the point equidistant from all points on the circle.
Chord: a line segment whose endpoints lie on the circle, thus dividing a circle into two segments.
Circumference: the length of one circuit along the circle, or the distance around the circle.
Diameter: a line segment whose endpoints lie on the circle and that passes through the centre; or the length of such a line segment. This is the largest distance between any two points on the circle. It is a special case of a chord, namely the longest chord for a given circle, and its length is twice the length of a radius.
Disc: the region of the plane bounded by a circle. In strict mathematical usage, a circle is only the boundary of the disc (or disk), while in everyday use the term "circle" may also refer to a disc.
Lens: the region common to (the intersection of) two overlapping discs.
Radius: a line segment joining the centre of a circle with any single point on the circle itself; or the length of such a segment, which is half (the length of) a diameter. Usually, the radius is denoted r and required to be a positive number. A circle with r = 0 is a degenerate case consisting of a single point.
Sector: a region bounded by two radii of equal length with a common centre and either of the two possible arcs, determined by this centre and the endpoints of the radii.
Segment: a region bounded by a chord and one of the arcs connecting the chord's endpoints. The length of the chord imposes a lower boundary on the diameter of possible arcs. Sometimes the term segment is used only for regions not containing the centre of the circle to which their arc belongs.
Secant: an extended chord, a coplanar straight line, intersecting a circle in two points.
Semicircle: one of the two possible arcs determined by the endpoints of a diameter, taking its midpoint as centre. In non-technical common usage it may mean the interior of the two-dimensional region bounded by a diameter and one of its arcs, that is technically called a half-disc. A half-disc is a special case of a segment, namely the largest one.
Tangent: a coplanar straight line that has one single point in common with a circle ("touches the circle at this point").
All of the specified regions may be considered as open, that is, not containing their boundaries, or as closed, including their respective boundaries.
Etymology
The word circle derives from the Greek κίρκος/κύκλος (kirkos/kuklos), itself a metathesis of the Homeric Greek κρίκος (krikos), meaning "hoop" or "ring". The origins of the words circus and circuit are closely related.
History
Prehistoric people made stone circles and timber circles, and circular elements are common in petroglyphs and cave paintings. Disc-shaped prehistoric artifacts include the Nebra sky disc and jade discs called Bi.
The Egyptian Rhind papyrus, dated to 1700 BCE, gives a method to find the area of a circle. The result corresponds to 256/81 (3.16049...) as an approximate value of π.
Book 3 of Euclid's Elements deals with the properties of circles. Euclid's definition of a circle is: "A circle is a plane figure bounded by one curved line, and such that all straight lines drawn from a certain point within it to the bounding line are equal. The bounding line is called its circumference and the point, its centre."
In Plato's Seventh Letter there is a detailed definition and explanation of the circle. Plato explains the perfect circle, and how it is different from any drawing, words, definition or explanation. Early science, particularly geometry and astrology and astronomy, was connected to the divine for most medieval scholars, and many believed that there was something intrinsically "divine" or "perfect" that could be found in circles.
In 1882 CE, Ferdinand von Lindemann proved that π is transcendental, proving that the millennia-old problem of squaring the circle cannot be performed with straightedge and compass.
With the advent of abstract art in the early 20th century, geometric objects became an artistic subject in their own right. Wassily Kandinsky in particular often used circles as an element of his compositions.
Symbolism and religious use
From the time of the earliest known civilisations – such as the Assyrians and ancient Egyptians, those in the Indus Valley and along the Yellow River in China, and the Western civilisations of ancient Greece and Rome during classical Antiquity – the circle has been used directly or indirectly in visual art to convey the artist's message and to express certain ideas.
However, differences in worldview (beliefs and culture) had a great impact on artists' perceptions. While some emphasised the circle's perimeter to demonstrate their democratic manifestation, others focused on its centre to symbolise the concept of cosmic unity. In mystical doctrines, the circle mainly symbolises the infinite and cyclical nature of existence, but in religious traditions it represents heavenly bodies and divine spirits.
The circle signifies many sacred and spiritual concepts, including unity, infinity, wholeness, the universe, divinity, balance, stability and perfection, among others. Such concepts have been conveyed in cultures worldwide through the use of symbols, for example, a compass, a halo, the vesica piscis and its derivatives (fish, eye, aureole, mandorla, etc.), the ouroboros, the Dharma wheel, a rainbow, mandalas, rose windows and so forth. Magic circles are part of some traditions of Western esotericism.
Analytic results
Circumference
The ratio of a circle's circumference to its diameter is π (pi), an irrational constant approximately equal to 3.141592654. The ratio of a circle's circumference to its radius is 2π. Thus the circumference C is related to the radius r and diameter d by:
C = 2πr = πd.
Area enclosed
As proved by Archimedes, in his Measurement of a Circle, the area enclosed by a circle is equal to that of a triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, which comes to π multiplied by the radius squared:
Area = πr².
Equivalently, denoting diameter by d,
Area = πd²/4 ≈ 0.7854 d²,
that is, approximately 79% of the circumscribing square (whose side is of length d).
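A quick numeric check of both formulas in Python (the radius is an arbitrary example value):

```python
import math

r = 3.0
C = 2 * math.pi * r    # circumference, about 18.850
A = math.pi * r ** 2   # enclosed area, about 28.274
d = 2 * r
assert math.isclose(A, math.pi * d**2 / 4)   # the two area forms agree
```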
The circle is the plane curve enclosing the maximum area for a given arc length. This relates the circle to a problem in the calculus of variations, namely the isoperimetric inequality.
Radian
If a circle of radius r is centred at the vertex of an angle, and that angle intercepts an arc of the circle with an arc length of s, then the radian measure of the angle is the ratio of the arc length to the radius:
θ = s/r.
The circular arc is said to subtend the angle, known as the central angle, at the centre of the circle. The angle subtended by a complete circle at its centre is a complete angle, which measures 2π radians, 360 degrees, or one turn.
Using radians, the formula for the arc length s of a circular arc of radius r subtending a central angle of measure θ is
s = rθ,
and the formula for the area A of a circular sector of radius r and with central angle of measure θ is
A = ½r²θ.
In the special case θ = 2π, these formulae yield the circumference of a complete circle and area of a complete disc, respectively.
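The same formulas in a short Python sketch (radius and angle are arbitrary example values):

```python
import math

r = 2.0
theta = math.pi / 3     # central angle of 60 degrees, expressed in radians
s = r * theta           # arc length, about 2.094
A = 0.5 * r**2 * theta  # sector area, about 2.094
# With theta = 2*pi the sector formula reduces to the full disc area pi*r^2:
assert math.isclose(0.5 * r**2 * (2 * math.pi), math.pi * r**2)
```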
Equations
Cartesian coordinates
Equation of a circle
In an x–y Cartesian coordinate system, the circle with centre coordinates (a, b) and radius r is the set of all points (x, y) such that
(x − a)² + (y − b)² = r².
This equation, known as the equation of the circle, follows from the Pythagorean theorem applied to any point on the circle: as shown in the adjacent diagram, the radius is the hypotenuse of a right-angled triangle whose other sides are of length |x − a| and |y − b|. If the circle is centred at the origin (0, 0), then the equation simplifies to
x² + y² = r².
One coordinate as a function of the other
The circle of radius r with center at (a, b) in the x–y plane can be broken into two semicircles, each of which is the graph of a function, y₊(x) and y₋(x), respectively:
y±(x) = b ± √(r² − (x − a)²),
for values of x ranging from a − r to a + r.
Parametric form
The equation can be written in parametric form using the trigonometric functions sine and cosine as
x = a + r cos t,
y = b + r sin t,
where t is a parametric variable in the range 0 to 2π, interpreted geometrically as the angle that the ray from (a, b) to (x, y) makes with the positive x axis.
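A minimal Python sketch sampling points from the parametric form and checking them against the Cartesian equation (centre and radius are arbitrary example values):

```python
import math

def circle_points(a, b, r, n=8):
    # Sample n points via x = a + r*cos(t), y = b + r*sin(t).
    return [(a + r * math.cos(2 * math.pi * k / n),
             b + r * math.sin(2 * math.pi * k / n)) for k in range(n)]

# Each sample satisfies (x - a)^2 + (y - b)^2 = r^2 up to rounding error.
for x, y in circle_points(1.0, 2.0, 3.0):
    assert math.isclose((x - 1.0)**2 + (y - 2.0)**2, 9.0)
```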
An alternative parametrisation of the circle is
x = a + r (1 − t²)/(1 + t²),
y = b + r · 2t/(1 + t²).
In this parameterisation, the ratio of t to r can be interpreted geometrically as the stereographic projection of the line passing through the centre parallel to the x axis (see Tangent half-angle substitution). However, this parameterisation works only if t is made to range not only through all reals but also to a point at infinity; otherwise, the leftmost point of the circle would be omitted.
3-point form
The equation of the circle determined by three points (x₁, y₁), (x₂, y₂), (x₃, y₃) not on a line is obtained by a conversion of the 3-point form of a circle equation; equivalently, it is given by the vanishing of the 4 × 4 determinant whose rows are (x² + y², x, y, 1) and (xᵢ² + yᵢ², xᵢ, yᵢ, 1) for i = 1, 2, 3.
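Expanding that determinant (or, equivalently, intersecting two perpendicular bisectors) gives a direct computation; the following Python sketch is one such derivation, with the function name chosen for the example:

```python
import math

def circle_through(p1, p2, p3):
    # Centre and radius of the unique circle through three noncollinear points.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("the points are collinear")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return (ux, uy), math.hypot(x1 - ux, y1 - uy)

# The circle through (0,0), (2,0) and (0,2) has centre (1,1) and radius sqrt(2).
print(circle_through((0, 0), (2, 0), (0, 2)))
```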
Homogeneous form
In homogeneous coordinates, each conic section with the equation of a circle has the form
x² + y² − 2axz − 2byz + cz² = 0.
It can be proven that a conic section is a circle exactly when it contains (when extended to the complex projective plane) the points I(1: i: 0) and J(1: −i: 0). These points are called the circular points at infinity.
Polar coordinates
In polar coordinates, the equation of a circle is
r² − 2 r r₀ cos(θ − φ) + r₀² = a²,
where a is the radius of the circle, (r, θ) are the polar coordinates of a generic point on the circle, and (r₀, φ) are the polar coordinates of the centre of the circle (i.e., r₀ is the distance from the origin to the centre of the circle, and φ is the anticlockwise angle from the positive x axis to the line connecting the origin to the centre of the circle). For a circle centred on the origin, i.e. r₀ = 0, this reduces to r = a. When r₀ = a, or when the origin lies on the circle, the equation becomes
r = 2a cos(θ − φ).
In the general case, the equation can be solved for r, giving
r = r₀ cos(θ − φ) ± √(a² − r₀² sin²(θ − φ)).
Without the ± sign, the equation would in some cases describe only half a circle.
Complex plane
In the complex plane, a circle with a centre at c and radius r has the equation
|z − c| = r.
In parametric form, this can be written as
z = re^(it) + c.
The slightly generalised equation
p z z̄ + g z + ḡ z̄ = q
for real p, q and complex g is sometimes called a generalised circle. This becomes the above equation for a circle with p = 1, g = −c̄ and q = r² − |c|², since |z − c|² = z z̄ − c̄ z − c z̄ + |c|². Not all generalised circles are actually circles: a generalised circle is either a (true) circle or a line.
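A short check of the complex-plane form in Python (centre and radius are arbitrary example values):

```python
import cmath
from math import pi

c, r = 1 + 2j, 3.0
# Parametric form z = c + r*e^(it); every sample satisfies |z - c| = r.
points = [c + r * cmath.exp(1j * 2 * pi * k / 100) for k in range(100)]
assert all(abs(abs(z - c) - r) < 1e-9 for z in points)
```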
Tangent lines
The tangent line through a point P on the circle is perpendicular to the diameter passing through P. If P = (x₁, y₁) and the circle has centre (a, b) and radius r, then the tangent line is perpendicular to the line from (a, b) to (x₁, y₁), so it has the form (x₁ − a)x + (y₁ − b)y = c. Evaluating at (x₁, y₁) determines the value of c, and the result is that the equation of the tangent is
(x₁ − a)x + (y₁ − b)y = (x₁ − a)x₁ + (y₁ − b)y₁,
or
(x₁ − a)(x − a) + (y₁ − b)(y − b) = r².
If y₁ ≠ b, then the slope of this line is
−(x₁ − a)/(y₁ − b).
This can also be found using implicit differentiation.
When the centre of the circle is at the origin, then the equation of the tangent line becomes
x₁x + y₁y = r²,
and its slope is
−x₁/y₁.
Properties
The circle is the shape with the largest area for a given length of perimeter (see Isoperimetric inequality).
The circle is a highly symmetric shape: every line through the centre forms a line of reflection symmetry, and it has rotational symmetry around the centre for every angle. Its symmetry group is the orthogonal group O(2,R). The group of rotations alone is the circle group T.
All circles are similar.
A circle's circumference and radius are proportional.
The area enclosed and the square of its radius are proportional.
The constants of proportionality are 2π and π respectively.
The circle that is centred at the origin with radius 1 is called the unit circle.
Thought of as a great circle of the unit sphere, it becomes the Riemannian circle.
Through any three points, not all on the same line, there lies a unique circle. In Cartesian coordinates, it is possible to give explicit formulae for the coordinates of the centre of the circle and the radius in terms of the coordinates of the three given points. See circumcircle.
Chord
Chords are equidistant from the centre of a circle if and only if they are equal in length.
The perpendicular bisector of a chord passes through the centre of a circle; equivalent statements stemming from the uniqueness of the perpendicular bisector are:
A perpendicular line from the centre of a circle bisects the chord.
The line segment through the centre bisecting a chord is perpendicular to the chord.
If a central angle and an inscribed angle of a circle are subtended by the same chord and on the same side of the chord, then the central angle is twice the inscribed angle.
If two angles are inscribed on the same chord and on the same side of the chord, then they are equal.
If two angles are inscribed on the same chord and on opposite sides of the chord, then they are supplementary.
For a cyclic quadrilateral, the exterior angle is equal to the interior opposite angle.
An inscribed angle subtended by a diameter is a right angle (see Thales' theorem).
The diameter is the longest chord of the circle.
Among all the circles with a chord AB in common, the circle with minimal radius is the one with diameter AB.
If the intersection of any two chords divides one chord into lengths a and b and divides the other chord into lengths c and d, then ab = cd.
If the intersection of any two perpendicular chords divides one chord into lengths a and b and divides the other chord into lengths c and d, then a² + b² + c² + d² equals the square of the diameter.
The sum of the squared lengths of any two chords intersecting at right angles at a given point is the same as that of any other two perpendicular chords intersecting at the same point and is given by 8r² − 4p², where r is the circle radius, and p is the distance from the centre point to the point of intersection.
The distance from a point on the circle to a given chord times the diameter of the circle equals the product of the distances from the point to the ends of the chord.
Tangent
A line drawn perpendicular to a radius through the end point of the radius lying on the circle is a tangent to the circle.
A line drawn perpendicular to a tangent through the point of contact with a circle passes through the centre of the circle.
Two tangents can always be drawn to a circle from any point outside the circle, and these tangents are equal in length.
If a tangent at A and a tangent at B intersect at the exterior point P, then denoting the centre as O, the angles ∠BOA and ∠BPA are supplementary.
If AD is tangent to the circle at A and if AQ is a chord of the circle, then ∠DAQ = ½ arc(AQ).
Theorems
The chord theorem states that if two chords, CD and EB, intersect at A, then AC × AD = AB × AE.
If two secants, AE and AD, also cut the circle at B and C respectively, then AC × AD = AB × AE (corollary of the chord theorem).
A tangent can be considered a limiting case of a secant whose ends are coincident. If a tangent from an external point A meets the circle at F and a secant from the external point A meets the circle at C and D respectively, then AF² = AC × AD (tangent–secant theorem).
The angle between a chord and the tangent at one of its endpoints is equal to one half the angle subtended at the centre of the circle, on the opposite side of the chord (tangent chord angle).
If the angle subtended by the chord at the centre is 90°, then ℓ = r√2, where ℓ is the length of the chord, and r is the radius of the circle.
If two secants are inscribed in the circle as shown at right, then the measurement of angle A is equal to one half the difference of the measurements of the enclosed arcs (DE and BC). That is, 2∠CAB = ∠DOE − ∠BOC, where O is the centre of the circle (secant–secant theorem).
Inscribed angles
An inscribed angle (examples are the blue and green angles in the figure) is exactly half the corresponding central angle (red). Hence, all inscribed angles that subtend the same arc (pink) are equal. Angles inscribed on the arc (brown) are supplementary. In particular, every inscribed angle that subtends a diameter is a right angle (since the central angle is 180°).
Sagitta
The sagitta (also known as the versine) is a line segment drawn perpendicular to a chord, between the midpoint of that chord and the arc of the circle.
Given the length y of a chord and the length x of the sagitta, the Pythagorean theorem can be used to calculate the radius of the unique circle that will fit around the two lines:
r = y²/(8x) + x/2.
Another proof of this result, which relies only on two chord properties given above, is as follows. Given a chord of length y and with sagitta of length x, since the sagitta intersects the midpoint of the chord, we know that it is a part of a diameter of the circle. Since the diameter is twice the radius, the "missing" part of the diameter is (2r − x) in length. Using the fact that one part of one chord times the other part is equal to the same product taken along a chord intersecting the first chord, we find that (2r − x)x = (y/2)². Solving for r, we find the required result.
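The formula is easy to apply; the chord and sagitta below are arbitrary example values:

```python
def radius_from_sagitta(y, x):
    # r = y^2/(8x) + x/2, from the intersecting-chords relation (2r - x)x = (y/2)^2.
    return y * y / (8 * x) + x / 2

assert radius_from_sagitta(8, 2) == 5.0   # chord 8, sagitta 2: radius 5
```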
Compass and straightedge constructions
There are many compass-and-straightedge constructions resulting in circles.
The simplest and most basic is the construction given the centre of the circle and a point on the circle. Place the fixed leg of the compass on the centre point, the movable leg on the point on the circle and rotate the compass.
Construction with given diameter
Construct the midpoint M of the diameter.
Construct the circle with centre M passing through one of the endpoints of the diameter (it will also pass through the other endpoint).
Construction through three noncollinear points
Name the points P, Q and R,
Construct the perpendicular bisector of the segment PQ.
Construct the perpendicular bisector of the segment QR.
Label the point of intersection of these two perpendicular bisectors M. (They meet because the points are not collinear).
Construct the circle with centre M passing through one of the points P, Q or R (it will also pass through the other two points).
Circle of Apollonius
Apollonius of Perga showed that a circle may also be defined as the set of points in a plane having a constant ratio (other than 1) of distances to two fixed foci, A and B. (The set of points where the distances are equal is the perpendicular bisector of segment AB, a line.) That circle is sometimes said to be drawn about two points.
The proof is in two parts. First, one must prove that, given two foci A and B and a ratio of distances, any point P satisfying the ratio of distances must fall on a particular circle. Let C be another point, also satisfying the ratio and lying on segment AB. By the angle bisector theorem the line segment PC will bisect the interior angle APB, since the segments are similar:
AP/BP = AC/BC.
Analogously, a line segment PD through some point D on AB extended bisects the corresponding exterior angle BPQ where Q is on AP extended. Since the interior and exterior angles sum to 180 degrees, the angle CPD is exactly 90 degrees; that is, a right angle. The set of points P such that angle CPD is a right angle forms a circle, of which CD is a diameter.
Second, see for a proof that every point on the indicated circle satisfies the given ratio.
Cross-ratios
A closely related property of circles involves the geometry of the cross-ratio of points in the complex plane. If A, B, and C are as above, then the circle of Apollonius for these three points is the collection of points P for which the absolute value of the cross-ratio is equal to one:
|[A, B; C, P]| = 1.
Stated another way, P is a point on the circle of Apollonius if and only if the cross-ratio is on the unit circle in the complex plane.
Generalised circles
If C is the midpoint of the segment AB, then the collection of points P satisfying the Apollonius condition
|AP|/|BP| = |AC|/|BC|
is not a circle, but rather a line.
Thus, if A, B, and C are given distinct points in the plane, then the locus of points P satisfying the above equation is called a "generalised circle." It may either be a true circle or a line. In this sense a line is a generalised circle of infinite radius.
Inscription in or circumscription about other figures
In every triangle a unique circle, called the incircle, can be inscribed such that it is tangent to each of the three sides of the triangle.
About every triangle a unique circle, called the circumcircle, can be circumscribed such that it goes through each of the triangle's three vertices.
A tangential polygon, such as a tangential quadrilateral, is any convex polygon within which a circle can be inscribed that is tangent to each side of the polygon. Every regular polygon and every triangle is a tangential polygon.
A cyclic polygon is any convex polygon about which a circle can be circumscribed, passing through each vertex. A well-studied example is the cyclic quadrilateral. Every regular polygon and every triangle is a cyclic polygon. A polygon that is both cyclic and tangential is called a bicentric polygon.
A hypocycloid is a curve that is inscribed in a given circle by tracing a fixed point on a smaller circle that rolls within and tangent to the given circle.
Limiting case of other figures
The circle can be viewed as a limiting case of various other figures:
The series of regular polygons with n sides has the circle as its limit as n approaches infinity. This fact was applied by Archimedes to approximate π.
A Cartesian oval is a set of points such that a weighted sum of the distances from any of its points to two fixed points (foci) is a constant. An ellipse is the case in which the weights are equal. A circle is an ellipse with an eccentricity of zero, meaning that the two foci coincide with each other as the centre of the circle. A circle is also a different special case of a Cartesian oval in which one of the weights is zero.
A superellipse has an equation of the form |x/a|ⁿ + |y/b|ⁿ = 1 for positive a, b, and n. A supercircle has b = a. A circle is the special case of a supercircle in which n = 2.
A Cassini oval is a set of points such that the product of the distances from any of its points to two fixed points is a constant. When the two fixed points coincide, a circle results.
A curve of constant width is a figure whose width, defined as the perpendicular distance between two distinct parallel lines each intersecting its boundary in a single point, is the same regardless of the direction of those two parallel lines. The circle is the simplest example of this type of figure.
Locus of constant sum
Consider a finite set of points in the plane. The locus of points such that the sum of the squares of the distances to the given points is constant is a circle, whose centre is at the centroid of the given points.
A generalisation for higher powers of distances is obtained if the vertices of a regular polygon are taken as the points. The locus of points such that the sum of the 2m-th power of distances to the vertices of a given regular polygon with circumradius R is constant is a circle, if the sum is greater than nR^(2m) (where n is the number of vertices), whose centre is the centroid of the polygon.
In the case of the equilateral triangle, the loci of the constant sums of the second and fourth powers are circles, whereas for the square, the loci are circles for the constant sums of the second, fourth, and sixth powers. For the regular pentagon the constant sum of the eighth powers of the distances will be added and so forth.
Squaring the circle
Squaring the circle is the problem, proposed by ancient geometers, of constructing a square with the same area as a given circle by using only a finite number of steps with compass and straightedge.
In 1882, the task was proven to be impossible, as a consequence of the Lindemann–Weierstrass theorem, which proves that pi () is a transcendental number, rather than an algebraic irrational number; that is, it is not the root of any polynomial with rational coefficients. Despite the impossibility, this topic continues to be of interest for pseudomath enthusiasts.
Generalisations
In other p-norms
Defining a circle as the set of points with a fixed distance from a point, different shapes can be considered circles under different definitions of distance. In p-norm, distance is determined by
‖(x₁, …, xₙ)‖_p = (|x₁|^p + |x₂|^p + ⋯ + |xₙ|^p)^(1/p).
In Euclidean geometry, p = 2, giving the familiar
‖(x₁, x₂)‖₂ = √(x₁² + x₂²).
In taxicab geometry, p = 1. Taxicab circles are squares with sides oriented at a 45° angle to the coordinate axes. While each side would have length √2·r using a Euclidean metric, where r is the circle's radius, its length in taxicab geometry is 2r, so a circle's circumference is 8r, and the value of a geometric analog to π is 4 in this geometry. The formula for the unit circle in taxicab geometry is |x| + |y| = 1 in Cartesian coordinates and
r = 1/(|sin θ| + |cos θ|)
in polar coordinates.
A circle of radius 1 (using this distance) is the von Neumann neighborhood of its centre.
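A small Python sketch of the taxicab case (the membership test and its tolerance are illustrative):

```python
def on_taxicab_circle(x, y, a, b, r, tol=1e-9):
    # Points at L1 (taxicab) distance r from the centre (a, b).
    return abs(abs(x - a) + abs(y - b) - r) < tol

# The locus is a square with corners on the axes; its taxicab perimeter is 8r,
# so the analogue of pi is 8r / (2r) = 4.
assert on_taxicab_circle(0.5, 0.5, 0.0, 0.0, 1.0)
```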
A circle of radius r for the Chebyshev distance (L∞ metric) on a plane is also a square with side length 2r parallel to the coordinate axes, so planar Chebyshev distance can be viewed as equivalent by rotation and scaling to planar taxicab distance. However, this equivalence between L1 and L∞ metrics does not generalise to higher dimensions.
Topological definition
The circle is the one-dimensional hypersphere (the 1-sphere).
In topology, a circle is not limited to the geometric concept, but includes all of its homeomorphic images. Two topological circles are equivalent if one can be transformed into the other via a deformation of R³ upon itself (known as an ambient isotopy).
Specially named circles
Apollonian circles
Archimedean circle
Archimedes' twin circles
Bankoff circle
Carlyle circle
Chromatic circle
Circle of antisimilitude
Ford circle
Geodesic circle
Johnson circles
Schoch circles
Woo circles
Of a triangle
Apollonius circle of the excircles
Brocard circle
Excircle
Incircle
Lemoine circle
Lester circle
Malfatti circles
Mandart circle
Nine-point circle
Orthocentroidal circle
Parry circle
Polar circle (geometry)
Spieker circle
Van Lamoen circle
Of certain quadrilaterals
Eight-point circle of an orthodiagonal quadrilateral
Of a conic section
Director circle
Directrix circle
Of a torus
Villarceau circles
Colossus computer
Colossus was a set of computers developed by British codebreakers in the years 1943–1945 to help in the cryptanalysis of the Lorenz cipher. Colossus used thermionic valves (vacuum tubes) to perform Boolean and counting operations. Colossus is thus regarded as the world's first programmable, electronic, digital computer, although it was programmed by switches and plugs and not by a stored program.
Colossus was designed by General Post Office (GPO) research telephone engineer Tommy Flowers based on plans developed by mathematician Max Newman at the Government Code and Cypher School (GC&CS) at Bletchley Park.
Alan Turing's use of probability in cryptanalysis (see Banburismus) contributed to its design. It has sometimes been erroneously stated that Turing designed Colossus to aid the cryptanalysis of the Enigma. (Turing's machine that helped decode Enigma was the electromechanical Bombe, not Colossus.)
The prototype, Colossus Mark 1, was shown to be working in December 1943 and was in use at Bletchley Park by early 1944. An improved Colossus Mark 2 that used shift registers to run five times faster first worked on 1 June 1944, just in time for the Normandy landings on D-Day. Ten Colossi were in use by the end of the war and an eleventh was being commissioned. Bletchley Park's use of these machines allowed the Allies to obtain a vast amount of high-level military intelligence from intercepted radiotelegraphy messages between the German High Command (OKW) and their army commands throughout occupied Europe.
The existence of the Colossus machines was kept secret until the mid-1970s. All but two machines were dismantled into such small parts that their use could not be inferred. The two retained machines were eventually dismantled in the 1960s. In January 2024, new photos were released by GCHQ that showed re-engineered Colossus in a very different environment from the Bletchley Park buildings, presumably at GCHQ Cheltenham. A functioning reconstruction of a Mark 2 Colossus was completed in 2008 by Tony Sale and a team of volunteers; it is on display in The National Museum of Computing at Bletchley Park.
Purpose and origins
The Colossus computers were used to help decipher intercepted radio teleprinter messages that had been encrypted using an unknown device. Intelligence information revealed that the Germans called the wireless teleprinter transmission systems "Sägefisch" (sawfish). This led the British to call encrypted German teleprinter traffic "Fish", and the unknown machine and its intercepted messages "Tunny" (tunafish).
Before the Germans increased the security of their operating procedures, British cryptanalysts diagnosed how the unseen machine functioned and built an imitation of it called "British Tunny".
It was deduced that the machine had twelve wheels and used a Vernam ciphering technique on message characters in the standard 5-bit ITA2 telegraph code. It did this by combining the plaintext characters with a stream of key characters using the XOR Boolean function to produce the ciphertext.
In August 1941, a blunder by German operators led to the transmission of two versions of the same message with identical machine settings. These were intercepted and worked on at Bletchley Park. First, John Tiltman, a very talented GC&CS cryptanalyst, derived a keystream of almost 4000 characters. Then Bill Tutte, a newly arrived member of the Research Section, used this keystream to work out the logical structure of the Lorenz machine. He deduced that the twelve wheels consisted of two groups of five, which he named the χ (chi) and ψ (psi) wheels, the remaining two he called μ (mu) or "motor" wheels. The chi wheels stepped regularly with each letter that was encrypted, while the psi wheels stepped irregularly, under the control of the motor wheels.
With a sufficiently random keystream, a Vernam cipher removes the natural language property of a plaintext message of having an uneven frequency distribution of the different characters, to produce a uniform distribution in the ciphertext. The Tunny machine did this well. However, the cryptanalysts worked out that by examining the frequency distribution of the character-to-character changes in the ciphertext, instead of the plain characters, there was a departure from uniformity which provided a way into the system. This was achieved by "differencing" in which each bit or character was XOR-ed with its successor. After Germany surrendered, allied forces captured a Tunny machine and discovered that it was the electromechanical Lorenz SZ (Schlüsselzusatzgerät, cipher attachment) in-line cipher machine.
In order to decrypt the transmitted messages, two tasks had to be performed. The first was "wheel breaking", which was the discovery of the cam patterns for all the wheels. These patterns were set up on the Lorenz machine and then used for a fixed period of time for a succession of different messages. Each transmission, which often contained more than one message, was enciphered with a different start position of the wheels. Alan Turing invented a method of wheel-breaking that became known as Turingery. Turing's technique was further developed into "Rectangling", for which Colossus could produce tables for manual analysis. Colossi 2, 4, 6, 7 and 9 had a "gadget" to aid this process.
The second task was "wheel setting", which worked out the start positions of the wheels for a particular message and could only be attempted once the cam patterns were known. It was this task for which Colossus was initially designed. To discover the start position of the chi wheels for a message, Colossus compared two character streams, counting statistics from the evaluation of programmable Boolean functions. The two streams were the ciphertext, which was read at high speed from a paper tape, and the keystream, which was generated internally, in a simulation of the unknown German machine. After a succession of different Colossus runs to discover the likely chi-wheel settings, they were checked by examining the frequency distribution of the characters in the processed ciphertext. Colossus produced these frequency counts.
Decryption processes
By using differencing and knowing that the psi wheels did not advance with each character, Tutte worked out that trying just two differenced bits (impulses) of the chi-stream against the differenced ciphertext would produce a statistic that was non-random. This became known as Tutte's "1+2 break in". It involved calculating the following Boolean function:
ΔZ₁ ⊕ ΔZ₂ ⊕ Δχ₁ ⊕ Δχ₂
and counting the number of times it yielded "false" (zero). If this number exceeded a pre-defined threshold value known as the "set total", it was printed out. The cryptanalyst would examine the printout to determine which of the putative start positions was most likely to be the correct one for the chi-1 and chi-2 wheels.
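A rough software sketch of such a counting run is given below, assuming the XOR form of the 1+2 function shown above. The wheel patterns, differenced ciphertext impulses and set total are placeholders, and a real Colossus stepped its simulated wheels electronically on each tape pass rather than re-indexing lists.
```python
# Sketch of the 1+2 break in: for every candidate pair of chi-1 and chi-2 start
# positions, count how often delta_Z1 XOR delta_Z2 XOR delta_chi1 XOR delta_chi2
# is false (0), and report any count that exceeds the "set total".
def delta(bits):
    return [a ^ b for a, b in zip(bits, bits[1:])]

def one_plus_two(dz1, dz2, chi1, chi2, set_total):
    """dz1, dz2: differenced ciphertext impulses; chi1, chi2: wheel cam patterns."""
    hits = []
    n = len(dz1) + 1                      # number of raw characters represented
    for s1 in range(len(chi1)):           # candidate start position of chi-1
        for s2 in range(len(chi2)):       # candidate start position of chi-2
            dchi1 = delta([chi1[(s1 + i) % len(chi1)] for i in range(n)])
            dchi2 = delta([chi2[(s2 + i) % len(chi2)] for i in range(n)])
            score = sum((a ^ b ^ c ^ d) == 0
                        for a, b, c, d in zip(dz1, dz2, dchi1, dchi2))
            if score > set_total:         # only counts above the threshold were printed
                hits.append((s1, s2, score))
    return hits
```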
This technique would then be applied to other pairs of, or single, impulses to determine the likely start position of all five chi wheels. From this, the de-chi (D) of a ciphertext could be obtained, from which the psi component could be removed by manual methods. If the frequency distribution of characters in the de-chi version of the ciphertext was within certain bounds, "wheel setting" of the chi wheels was considered to have been achieved, and the message settings and de-chi were passed to the "Testery". This was the section at Bletchley Park led by Major Ralph Tester where the bulk of the decrypting work was done by manual and linguistic methods.
Colossus could also derive the start position of the psi and motor wheels. The feasibility of utilizing this additional capability regularly was made possible in the last few months of the war when there were plenty of Colossi available and the number of Tunny messages had declined.
Design and construction
Colossus was developed for the "Newmanry", the section headed by the mathematician Max Newman that was responsible for machine methods against the twelve-rotor Lorenz SZ40/42 on-line teleprinter cipher machine (code-named Tunny, for tunafish). The Colossus design arose out of a parallel project that produced a less-ambitious counting machine dubbed "Heath Robinson". Although the Heath Robinson machine proved the concept of machine analysis for this part of the process, it had serious limitations. The electro-mechanical parts were relatively slow and it was difficult to synchronise two looped paper tapes, one containing the enciphered message, and the other representing part of the keystream of the Lorenz machine. Also the tapes tended to stretch and break when being read at up to 2000 characters per second.
Tommy Flowers MBE was a senior electrical engineer and Head of the Switching Group at the Post Office Research Station at Dollis Hill. Prior to his work on Colossus, he had been involved with GC&CS at Bletchley Park from February 1941 in an attempt to improve the Bombes that were used in the cryptanalysis of the German Enigma cipher machine. He was recommended to Max Newman by Alan Turing, who had been impressed by his work on the Bombes. The main components of the Heath Robinson machine were as follows.
A tape transport and reading mechanism that ran the looped key and message tapes at between 1000 and 2000 characters per second.
A combining unit that implemented the logic of Tutte's method.
A counting unit that had been designed by C. E. Wynn-Williams of the Telecommunications Research Establishment (TRE) at Malvern, which counted the number of times the logical function returned a specified truth value.
Flowers had been brought in to design the Heath Robinson's combining unit. He was not impressed by the system of a key tape that had to be kept synchronised with the message tape and, on his own initiative, he designed an electronic machine which eliminated the need for the key tape by having an electronic analogue of the Lorenz (Tunny) machine. He presented this design to Max Newman in February 1943, but the idea that the proposed one to two thousand thermionic valves (vacuum tubes and thyratrons) could work together reliably was greeted with great scepticism, so more Robinsons were ordered from Dollis Hill. Flowers, however, knew from his pre-war work that most thermionic valve failures occurred as a result of the thermal stresses at power-up, so not powering a machine down reduced failure rates to very low levels. Additionally, if the heaters were started at a low voltage then slowly brought up to full voltage, thermal stress was reduced. The valves themselves could be soldered in to avoid problems with plug-in bases, which could be unreliable. Flowers persisted with the idea and obtained support from the Director of the Research Station, W. Gordon Radley.
Flowers and his team of some fifty people in the switching group spent eleven months from early February 1943 designing and building a machine that dispensed with the second tape of the Heath Robinson, by generating the wheel patterns electronically. Flowers used some of his own money for the project. This prototype, Mark 1 Colossus, contained 1,600 thermionic valves (tubes). It performed satisfactorily at Dollis Hill on 8 December 1943 and was dismantled and shipped to Bletchley Park, where it was delivered on 18 January and re-assembled by Harry Fensom and Don Horwood. It was operational in January and it successfully attacked its first message on 5 February 1944. It was a large structure and was dubbed 'Colossus'. A memo held in the National Archives written by Max Newman on 18 January 1944 records that "Colossus arrives today".
During the development of the prototype, an improved design had been developed – the Mark 2 Colossus. Four of these were ordered in March 1944 and by the end of April the number on order had been increased to twelve. Dollis Hill was put under pressure to have the first of these working by 1 June. Allen Coombs took over leadership of the production Mark 2 Colossi, the first of which – containing 2,400 valves – became operational at 08:00 on 1 June 1944, just in time for the Allied Invasion of Normandy on D-Day. Subsequently, Colossi were delivered at the rate of about one a month. By the time of V-E Day there were ten Colossi working at Bletchley Park and a start had been made on assembling an eleventh. Seven of the Colossi were used for 'wheel setting' and three for 'wheel breaking'.
The main units of the Mark 2 design were as follows.
A tape transport with an 8-photocell reading mechanism.
A six character FIFO shift register.
Twelve thyratron ring stores that simulated the Lorenz machine generating a bit-stream for each wheel.
Panels of switches for specifying the program and the "set total".
A set of functional units that performed Boolean operations.
A "span counter" that could suspend counting for part of the tape.
A master control that handled clocking, start and stop signals, counter readout and printing.
Five electronic counters.
An electric typewriter.
Most of the design of the electronics was the work of Tommy Flowers, assisted by William Chandler, Sidney Broadhurst and Allen Coombs, with Erie Speight and Arnold Lynch developing the photoelectric reading mechanism. Coombs remembered Flowers, having produced a rough draft of his design, tearing it into pieces that he handed out to his colleagues for them to do the detailed design and get their team to manufacture it. The Mark 2 Colossi were both five times faster and simpler to operate than the prototype.
Data input to Colossus was by photoelectric reading of a paper tape transcription of the enciphered intercepted message. This was arranged in a continuous loop so that it could be read and re-read multiple times – there being no internal storage for the data. The design overcame the problem of synchronizing the electronics with the speed of the message tape by generating a clock signal from reading its sprocket holes. The speed of operation was thus limited by the mechanics of reading the tape. During development, the tape reader was tested up to 9700 characters per second (53 mph) before the tape disintegrated. So 5000 characters per second was settled on as the speed for regular use. Flowers designed a 6-character shift register, which was used both for computing the delta function (ΔZ) and for testing five different possible starting points of Tunny's wheels in the five processors. This five-way parallelism enabled five simultaneous tests and counts to be performed, giving an effective processing speed of 25,000 characters per second. The computation used algorithms devised by W. T. Tutte and colleagues to decrypt a Tunny message.
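The effect of the five-way parallelism can be suggested with a loose software analogy; this is not a description of the valve circuitry, and the evaluate function and streams are placeholders.
```python
# Loose analogy for the five parallel processors: a single pass over the message
# tape accumulates counts for five adjacent candidate start positions at once, so
# a tape read at 5,000 characters per second does the work of 25,000 characters
# per second. evaluate(), z_stream and key_stream are placeholders.
def counts_for_five_offsets(z_stream, key_stream, evaluate):
    counts = [0] * 5
    for i, z in enumerate(z_stream):
        for offset in range(5):                        # one "processor" per offset
            k = key_stream[(i + offset) % len(key_stream)]
            counts[offset] += evaluate(z, k)
    return counts
```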
Operation
The Newmanry was staffed by cryptanalysts, operators from the Women's Royal Naval Service (WRNS) – known as "Wrens" – and engineers who were permanently on hand for maintenance and repair. By the end of the war the staffing was 272 Wrens and 27 men.
The first job in operating Colossus for a new message was to prepare the paper tape loop. This was performed by the Wrens who stuck the two ends together using Bostik glue, ensuring that there was a 150-character length of blank tape between the end and the start of the message. Using a special hand punch they inserted a start hole between the third and fourth channels sprocket holes from the end of the blank section, and a stop hole between the fourth and fifth channels sprocket holes from the end of the characters of the message. These were read by specially positioned photocells and indicated when the message was about to start and when it ended. The operator would then thread the paper tape through the gate and around the pulleys of the bedstead and adjust the tension. The two-tape bedstead design had been carried on from Heath Robinson so that one tape could be loaded whilst the previous one was being run. A switch on the Selection Panel specified the "near" or the "far" tape.
After performing various resetting and zeroizing tasks, the Wren operators would, under instruction from the cryptanalyst, operate the "set total" decade switches and the K2 panel switches to set the desired algorithm. They would then start the bedstead tape motor and lamp and, when the tape was up to speed, operate the master start switch.
Programming
Howard Campaigne, a mathematician and cryptanalyst from the US Navy's OP-20-G, wrote the following in a foreword to Flowers' 1983 paper "The Design of Colossus".
Colossus was not a stored-program computer. The input data for the five parallel processors was read from the looped message paper tape and the electronic pattern generators for the chi, psi and motor wheels. The programs for the processors were set and held on the switches and jack panel connections. Each processor could evaluate a Boolean function and count and display the number of times it yielded the specified value of "false" (0) or "true" (1) for each pass of the message tape.
Input to the processors came from two sources, the shift registers from tape reading and the thyratron rings that emulated the wheels of the Tunny machine. The characters on the paper tape were called Z and the characters from the Tunny emulator were referred to by the Greek letters that Bill Tutte had given them when working out the logical structure of the machine. On the selection panel, switches specified either Z or ΔZ, either or Δ and either or Δ for the data to be passed to the jack field and 'K2 switch panel'. These signals from the wheel simulators could be specified as stepping on with each new pass of the message tape or not.
The K2 switch panel had a group of switches on the left-hand side to specify the algorithm. The switches on the right-hand side selected the counter to which the result was fed. The plugboard allowed less specialized conditions to be imposed. Overall the K2 switch panel switches and the plugboard allowed about five billion different combinations of the selected variables.
As an example: a set of runs for a message tape might initially involve two chi wheels, as in Tutte's 1+2 algorithm. Such a two-wheel run was called a long run, taking on average eight minutes unless the parallelism was utilised to cut the time by a factor of five. The subsequent runs might only involve setting one chi wheel, giving a short run taking about two minutes. At first, after the initial long run, the choice of the next algorithm to be tried was specified by the cryptanalyst. Experience showed, however, that decision trees for this iterative process could be produced for use by the Wren operators in a proportion of cases.
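The quoted run times can be compared against the bare tape-pass arithmetic. The sketch below counts tape passes only, using the Lorenz chi-1 and chi-2 wheel sizes (41 and 31 cams) and the 5,000 characters-per-second tape speed mentioned earlier; the message length is a placeholder, and real runs included set-up and printing overheads not modelled here.
```python
# Naive lower bound on run time from tape passes alone: one pass of the message
# tape per candidate start position, divided among the five parallel processors.
CHARS_PER_SECOND = 5000
MESSAGE_LENGTH = 2000                       # placeholder message length in characters

def passes_minutes(start_positions, parallelism=1):
    passes = start_positions / parallelism  # one tape pass per tested start position
    return passes * MESSAGE_LENGTH / CHARS_PER_SECOND / 60

print(passes_minutes(41 * 31))      # two-wheel "long run" on a single processor: ~8.5 minutes
print(passes_minutes(41 * 31, 5))   # the same run using the five-way parallelism
print(passes_minutes(41, 5))        # a single-wheel "short run" needs far fewer passes
```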
Influence and fate
Although the Colossus was the first of the electronic digital machines with programmability, albeit limited by modern standards, it was not a general-purpose machine, being designed for a range of cryptanalytic tasks, most involving counting the results of evaluating Boolean algorithms.
A Colossus computer was thus not a fully Turing complete machine. However, University of San Francisco professor Benjamin Wells has shown that if all ten Colossus machines made were rearranged in a specific cluster, then the entire set of computers could have simulated a universal Turing machine, and thus be Turing complete.
Colossus and the reasons for its construction were highly secret and remained so for 30 years after the War. Consequently, it was not included in the history of computing hardware for many years, and Flowers and his associates were deprived of the recognition they were due. All but two of the Colossi were dismantled after the war and parts returned to the Post Office. Some parts, sanitised as to their original purpose, were taken to Max Newman's Royal Society Computing Machine Laboratory at Manchester University. Two Colossi, along with two Tunny machines, were retained and moved to GCHQ's new headquarters at Eastcote in April 1946, and then to Cheltenham between 1952 and 1954. One of the Colossi, known as Colossus Blue, was dismantled in 1959; the other in the 1960s. Tommy Flowers was ordered to destroy all documentation. He duly burnt them in a furnace and later said of that order:
The Colossi were adapted for other purposes, with varying degrees of success; in their later years they were used for training. Jack Good related how he was the first to use Colossus after the war, persuading the US National Security Agency that it could be used to perform a function for which they were planning to build a special-purpose machine. Colossus was also used to perform character counts on one-time pad tape to test for non-randomness.
A small number of people who were associated with Colossus—and knew that large-scale, reliable, high-speed electronic digital computing devices were feasible—played significant roles in early computer work in the UK and probably in the US. However, being so secret, it had little direct influence on the development of later computers; it was EDVAC that was the seminal computer architecture of the time. In 1972, Herman Goldstine, who was unaware of Colossus and its legacy to the projects of people such as Alan Turing (ACE), Max Newman (Manchester computers) and Harry Huskey (Bendix G-15), wrote that,
Professor Brian Randell, who unearthed information about Colossus in the 1970s, commented on this, saying that:
Randell's efforts started to bear fruit in the mid-1970s. The secrecy about Bletchley Park had been broken when Group Captain Winterbotham published his book The Ultra Secret in 1974. Randell was researching the history of computer science in Britain for a conference on the history of computing held at the Los Alamos Scientific Laboratory, New Mexico on 10–15 June 1976, and got permission to present a paper on wartime development of the COLOSSI at the Post Office Research Station, Dollis Hill (in October 1975 the British Government had released a series of captioned photographs from the Public Record Office). The interest in the "revelations" in his paper resulted in a special evening meeting when Randell and Coombs answered further questions. Coombs later wrote that "no member of our team could ever forget the fellowship, the sense of purpose and, above all, the breathless excitement of those days". In 1977 Randell published an article, "The First Electronic Computer", in several journals.
In October 2000, a 500-page technical report on the Tunny cipher and its cryptanalysis—entitled General Report on Tunny—was released by GCHQ to the national Public Record Office, and it contains a fascinating paean to Colossus by the cryptographers who worked with it:
Reconstruction
A team led by Tony Sale built a fully functional reconstruction of a Colossus Mark 2 between 1993 and 2008. In spite of the blueprints and hardware being destroyed, a surprising amount of material had survived, mainly in engineers' notebooks, but a considerable amount of it in the U.S. The optical tape reader might have posed the biggest problem, but Dr. Arnold Lynch, its original designer, was able to redesign it to his own original specification. The reconstruction is on display, in the historically correct place for Colossus No. 9, at The National Museum of Computing, in H Block Bletchley Park in Milton Keynes, Buckinghamshire.
In November 2007, to celebrate the project completion and to mark the start of a fundraising initiative for The National Museum of Computing, a Cipher Challenge pitted the rebuilt Colossus against radio amateurs worldwide in being first to receive and decode three messages enciphered using the Lorenz SZ42 and transmitted from radio station DL0HNF in the Heinz Nixdorf MuseumsForum computer museum. The challenge was easily won by radio amateur Joachim Schüth, who had carefully prepared for the event and developed his own signal processing and code-breaking code using Ada. The Colossus team were hampered by their wish to use World War II radio equipment, delaying them by a day because of poor reception conditions. Nevertheless, the victor's 1.4 GHz laptop, running his own code, took less than a minute to find the settings for all 12 wheels. The German codebreaker said: "My laptop digested ciphertext at a speed of 1.2 million characters per second—240 times faster than Colossus. If you scale the CPU frequency by that factor, you get an equivalent clock of 5.8 MHz for Colossus. That is a remarkable speed for a computer built in 1944."
The Cipher Challenge verified the successful completion of the rebuilding project. "On the strength of today's performance Colossus is as good as it was six decades ago", commented Tony Sale. "We are delighted to have produced a fitting tribute to the people who worked at Bletchley Park and whose brainpower devised these fantastic machines which broke these ciphers and shortened the war by many months."
Other meanings
There was a fictional computer named Colossus in the 1970 film Colossus: The Forbin Project which was based on the 1966 novel Colossus by D. F. Jones. This was a coincidence as it pre-dates the public release of information about Colossus, or even its name.
Neal Stephenson's novel Cryptonomicon (1999) also contains a fictional treatment of the historical role played by Turing and Bletchley Park.
| Technology | Early computers | null |
6233 | https://en.wikipedia.org/wiki/Connected%20space | Connected space | In topology and related branches of mathematics, a connected space is a topological space that cannot be represented as the union of two or more disjoint non-empty open subsets. Connectedness is one of the principal topological properties that are used to distinguish topological spaces.
A subset S of a topological space X is a connected set if it is a connected space when viewed as a subspace of X.
Some related but stronger conditions are path connected, simply connected, and -connected. Another related notion is locally connected, which neither implies nor follows from connectedness.
Formal definition
A topological space X is said to be disconnected if it is the union of two disjoint non-empty open sets. Otherwise, X is said to be connected. A subset of a topological space is said to be connected if it is connected under its subspace topology. Some authors exclude the empty set (with its unique topology) as a connected space, but this article does not follow that practice.
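The same definition can be restated compactly in symbols (a standard formulation, with U and V denoting the two open sets):
```latex
X \text{ is disconnected} \iff
\exists\, U, V \subseteq X \text{ open, non-empty, with }
U \cap V = \emptyset \text{ and } U \cup V = X;
\qquad \text{otherwise } X \text{ is connected.}
```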
For a topological space X the following conditions are equivalent:
X is connected, that is, it cannot be divided into two disjoint non-empty open sets.
The only subsets of X which are both open and closed (clopen sets) are X and the empty set.
The only subsets of X with empty boundary are X and the empty set.
X cannot be written as the union of two non-empty separated sets (sets for which each is disjoint from the other's closure).
All continuous functions from X to {0, 1} are constant, where {0, 1} is the two-point space endowed with the discrete topology.
Historically this modern formulation of the notion of connectedness (in terms of no partition of X into two separated sets) first appeared (independently) with N. J. Lennes, Frigyes Riesz, and Felix Hausdorff at the beginning of the 20th century.
Connected components
Given some point x in a topological space X, the union of any collection of connected subsets such that each contains x will once again be a connected subset.
The connected component of a point x in X is the union of all connected subsets of X that contain x; it is the unique largest (with respect to inclusion) connected subset of X that contains x.
The maximal connected subsets (ordered by inclusion) of a non-empty topological space are called the connected components of the space.
The components of any topological space X form a partition of X: they are disjoint, non-empty and their union is the whole space.
Every component is a closed subset of the original space. It follows that, in the case where their number is finite, each component is also an open subset. However, if their number is infinite, this might not be the case; for instance, the connected components of the set of the rational numbers are the one-point sets (singletons), which are not open. Proof: Any two distinct rational numbers q < r lie in different components. Take an irrational number a with q < a < r, and then set A = {x ∈ ℚ : x < a} and B = {x ∈ ℚ : x > a}. Then (A, B) is a separation of ℚ, with q ∈ A and r ∈ B. Thus each component is a one-point set.
Let C be the connected component of a point x in a topological space X, and let C′ be the intersection of all clopen sets containing x (called the quasi-component of x). Then C ⊆ C′, where the equality holds if X is compact Hausdorff or locally connected.
Disconnected spaces
A space in which all components are one-point sets is called totally disconnected. Related to this property, a space X is called totally separated if, for any two distinct elements x and y of X, there exist disjoint open sets U containing x and V containing y such that X is the union of U and V. Clearly, any totally separated space is totally disconnected, but the converse does not hold. For example, take two copies of the rational numbers ℚ, and identify them at every point except zero. The resulting space, with the quotient topology, is totally disconnected. However, by considering the two copies of zero, one sees that the space is not totally separated. In fact, it is not even Hausdorff, and the condition of being totally separated is strictly stronger than the condition of being Hausdorff.
Examples
The closed interval [0, 2] in the standard subspace topology is connected; although it can, for example, be written as the union of [0, 1) and [1, 2], the second set is not open in the chosen topology of [0, 2].
The union of [0, 1) and (1, 2] is disconnected; both of these intervals are open in the standard topological space [0, 2].
The union of an open interval and a point outside its closure, such as (0, 1) ∪ {3}, is disconnected.
A convex subset of ℝⁿ is connected; it is actually simply connected.
A Euclidean plane excluding the origin is connected, but is not simply connected. The three-dimensional Euclidean space without the origin is connected, and even simply connected. In contrast, the one-dimensional Euclidean space without the origin is not connected.
A Euclidean plane with a straight line removed is not connected since it consists of two half-planes.
ℝ, the space of real numbers with the usual topology, is connected.
The Sorgenfrey line is disconnected.
If even a single point is removed from ℝ, the remainder is disconnected. However, if even a countable infinity of points are removed from ℝⁿ, where n ≥ 2, the remainder is connected. If n ≥ 3, then ℝⁿ remains simply connected after removal of countably many points.
Any topological vector space, e.g. any Hilbert space or Banach space, over a connected field (such as ℝ or ℂ), is simply connected.
Every discrete topological space with at least two elements is disconnected, in fact such a space is totally disconnected. The simplest example is the discrete two-point space.
On the other hand, a finite set might be connected. For example, the spectrum of a discrete valuation ring consists of two points and is connected. It is an example of a Sierpiński space.
The Cantor set is totally disconnected; since the set contains uncountably many points, it has uncountably many components.
If a space X is homotopy equivalent to a connected space, then X is itself connected.
The topologist's sine curve is an example of a set that is connected but is neither path connected nor locally connected.
The general linear group GL(n, ℝ) (that is, the group of n-by-n real, invertible matrices) consists of two connected components: the one with matrices of positive determinant and the other of negative determinant. In particular, it is not connected. In contrast, GL(n, ℂ) is connected. More generally, the set of invertible bounded operators on a complex Hilbert space is connected.
The spectra of commutative local rings and integral domains are connected. More generally, the following are equivalent:
The spectrum of a commutative ring R is connected.
Every finitely generated projective module over R has constant rank.
R has no idempotents other than 0 and 1 (i.e., R is not a product of two rings in a nontrivial way).
An example of a space that is not connected is a plane with an infinite line deleted from it. Other examples of disconnected spaces (that is, spaces which are not connected) include the plane with an annulus removed, as well as the union of two disjoint closed disks, where all examples of this paragraph bear the subspace topology induced by two-dimensional Euclidean space.
Path connectedness
A path-connected space is a stronger notion of connectedness, requiring the structure of a path. A path from a point x to a point y in a topological space X is a continuous function f from the unit interval [0, 1] to X with f(0) = x and f(1) = y. A path-component of X is an equivalence class of X under the equivalence relation which makes x equivalent to y if and only if there is a path from x to y. The space X is said to be path-connected (or pathwise connected or 0-connected) if there is exactly one path-component. For non-empty spaces, this is equivalent to the statement that there is a path joining any two points in X. Again, many authors exclude the empty space.
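In symbols, the definitions above read as follows (a standard restatement, with f denoting the path):
```latex
\text{a path from } x \text{ to } y:\quad
f : [0,1] \to X \text{ continuous},\; f(0) = x,\; f(1) = y;
\qquad
X \text{ is path-connected} \iff \forall\, x, y \in X\ \exists \text{ such a path.}
```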
Every path-connected space is connected. The converse is not always true: examples of connected spaces that are not path-connected include the extended long line and the topologist's sine curve.
Subsets of the real line ℝ are connected if and only if they are path-connected; these subsets are the intervals and rays of ℝ.
Also, open subsets of ℝⁿ or ℂⁿ are connected if and only if they are path-connected.
Additionally, connectedness and path-connectedness are the same for finite topological spaces.
Arc connectedness
A space X is said to be arc-connected or arcwise connected if any two topologically distinguishable points can be joined by an arc, which is an embedding of the unit interval [0, 1] into X. An arc-component of X is a maximal arc-connected subset of X; or equivalently an equivalence class of the equivalence relation of whether two points can be joined by an arc or by a path whose points are topologically indistinguishable.
Every Hausdorff space that is path-connected is also arc-connected; more generally this is true for a Δ-Hausdorff space, which is a space where each image of a path is closed. An example of a space which is path-connected but not arc-connected is given by the line with two origins; its two copies of 0 can be connected by a path but not by an arc.
Intuition for path-connected spaces does not readily transfer to arc-connected spaces. Let X be the line with two origins. The following are facts whose analogues hold for path-connected spaces, but do not hold for arc-connected spaces:
The continuous image of an arc-connected space may not be arc-connected: for example, the quotient of an arc-connected space that has countably many (at least 2) topologically distinguishable points cannot be arc-connected, because its cardinality is too small.
Arc-components may not be disjoint. For example, X has two overlapping arc-components.
An arc-connected product space may not be a product of arc-connected spaces. For example, X × ℝ is arc-connected, but X is not.
Arc-components of a product space may not be products of arc-components of the marginal spaces. For example, X × ℝ has a single arc-component, but X has two arc-components.
If arc-connected subsets have a non-empty intersection, then their union may not be arc-connected. For example, the arc-components of X intersect, but their union is not arc-connected.
Local connectedness
A topological space X is said to be locally connected at a point x if every neighbourhood of x contains a connected open neighbourhood. It is locally connected if it has a base of connected sets. It can be shown that a space X is locally connected if and only if every component of every open set of X is open.
Similarly, a topological space is said to be locally path-connected if it has a base of path-connected sets.
An open subset of a locally path-connected space is connected if and only if it is path-connected.
This generalizes the earlier statement about ℝⁿ and ℂⁿ, each of which is locally path-connected. More generally, any topological manifold is locally path-connected.
Locally connected does not imply connected, nor does locally path-connected imply path connected. A simple example of a locally connected (and locally path-connected) space that is not connected (or path-connected) is the union of two separated intervals in ℝ, such as (0, 1) ∪ (2, 3).
A classical example of a connected space that is not locally connected is the so-called topologist's sine curve, defined as the set {(x, sin(1/x)) : 0 < x ≤ 1} ∪ {(0, 0)}, with the Euclidean topology induced by inclusion in ℝ².
Set operations
The intersection of connected sets is not necessarily connected.
The union of connected sets is not necessarily connected, as can be seen by considering the union of two disjoint ellipses in the plane.
Each ellipse is a connected set, but the union is not connected, since it can be partitioned into two disjoint open sets U and V, one containing each ellipse.
This means that, if the union of a collection of connected sets is disconnected, then the collection can be partitioned into two sub-collections, such that the unions of the sub-collections are disjoint and open in the whole union. This implies that in several cases, a union of connected sets is necessarily connected. In particular:
If the common intersection of all sets is not empty, then obviously they cannot be partitioned into collections with disjoint unions. Hence the union of connected sets with non-empty intersection is connected.
If the intersection of each pair of sets is not empty, then again they cannot be partitioned into collections with disjoint unions, so their union must be connected.
If the sets can be ordered as a "linked chain", i.e. indexed by integer indices such that each set has non-empty intersection with the next, then again their union must be connected.
If the sets are pairwise disjoint and the quotient space obtained by collapsing each set to a point is connected, then the union must be connected. Otherwise, a separation of the union into two open sets would induce a separation of the quotient space into two disjoint open sets.
The set difference of connected sets is not necessarily connected. However, if X ⊇ Y with X and Y connected, and their difference X ∖ Y is disconnected (and thus can be written as a union of two disjoint open sets X₁ and X₂), then the union of Y with each such component is connected (i.e. Y ∪ Xᵢ is connected for all i).
Theorems
Main theorem of connectedness: Let X and Y be topological spaces and let f : X → Y be a continuous function. If X is (path-)connected then the image f(X) is (path-)connected. This result can be considered a generalization of the intermediate value theorem.
Every path-connected space is connected.
In a locally path-connected space, every open connected set is path-connected.
Every locally path-connected space is locally connected.
A locally path-connected space is path-connected if and only if it is connected.
The closure of a connected subset is connected. Furthermore, any subset between a connected subset and its closure is connected.
The connected components are always closed (but in general not open).
The connected components of a locally connected space are also open.
The connected components of a space are disjoint unions of the path-connected components (which in general are neither open nor closed).
Every quotient of a connected (resp. locally connected, path-connected, locally path-connected) space is connected (resp. locally connected, path-connected, locally path-connected).
Every product of a family of connected (resp. path-connected) spaces is connected (resp. path-connected).
Every open subset of a locally connected (resp. locally path-connected) space is locally connected (resp. locally path-connected).
Every manifold is locally path-connected.
An arc-wise connected space is path connected, but a path-wise connected space may not be arc-wise connected.
The continuous image of an arc-wise connected set is arc-wise connected.
Graphs
Graphs have path connected subsets, namely those subsets for which every pair of points has a path of edges joining them.
But it is not always possible to find a topology on the set of points which induces the same connected sets. The 5-cycle graph (and any n-cycle with n > 3 odd) is one such example.
As a consequence, a notion of connectedness can be formulated independently of the topology on a space. To wit, there is a category of connective spaces consisting of sets with collections of connected subsets satisfying connectivity axioms; their morphisms are those functions which map connected sets to connected sets. Topological spaces and graphs are special cases of connective spaces; indeed, the finite connective spaces are precisely the finite graphs.
However, every graph can be canonically made into a topological space, by treating vertices as points and edges as copies of the unit interval (see topological graph theory#Graphs as topological spaces). Then one can show that the graph is connected (in the graph theoretical sense) if and only if it is connected as a topological space.
Stronger forms of connectedness
There are stronger forms of connectedness for topological spaces, for instance:
If there exist no two disjoint non-empty open sets in a topological space X, then X must be connected, and thus hyperconnected spaces are also connected.
Since a simply connected space is, by definition, also required to be path connected, any simply connected space is also connected. If the "path connectedness" requirement is dropped from the definition of simple connectivity, a simply connected space does not need to be connected.
Yet stronger versions of connectivity include the notion of a contractible space. Every contractible space is path connected and thus also connected.
In general, any path connected space must be connected but there exist connected spaces that are not path connected. The deleted comb space furnishes such an example, as does the above-mentioned topologist's sine curve.
| Mathematics | Topology | null |
6235 | https://en.wikipedia.org/wiki/Cell%20nucleus | Cell nucleus | The cell nucleus (; : nuclei) is a membrane-bound organelle found in eukaryotic cells. Eukaryotic cells usually have a single nucleus, but a few cell types, such as mammalian red blood cells, have no nuclei, and a few others including osteoclasts have many. The main structures making up the nucleus are the nuclear envelope, a double membrane that encloses the entire organelle and isolates its contents from the cellular cytoplasm; and the nuclear matrix, a network within the nucleus that adds mechanical support.
The cell nucleus contains nearly all of the cell's genome. Nuclear DNA is often organized into multiple chromosomes – long strands of DNA dotted with various proteins, such as histones, that protect and organize the DNA. The genes within these chromosomes are structured in such a way to promote cell function. The nucleus maintains the integrity of genes and controls the activities of the cell by regulating gene expression.
Because the nuclear envelope is impermeable to large molecules, nuclear pores are required to regulate nuclear transport of molecules across the envelope. The pores cross both nuclear membranes, providing a channel through which larger molecules must be actively transported by carrier proteins while allowing free movement of small molecules and ions. Movement of large molecules such as proteins and RNA through the pores is required for both gene expression and the maintenance of chromosomes. Although the interior of the nucleus does not contain any membrane-bound subcompartments, a number of nuclear bodies exist, made up of unique proteins, RNA molecules, and particular parts of the chromosomes. The best-known of these is the nucleolus, involved in the assembly of ribosomes.
Chromosomes
The cell nucleus contains the majority of the cell's genetic material in the form of multiple linear DNA molecules organized into structures called chromosomes. Each human cell contains roughly two meters of DNA. During most of the cell cycle these are organized in a DNA-protein complex known as chromatin, and during cell division the chromatin can be seen to form the well-defined chromosomes familiar from a karyotype. A small fraction of the cell's genes are located instead in the mitochondria.
There are two types of chromatin. Euchromatin is the less compact DNA form, and contains genes that are frequently expressed by the cell. The other type, heterochromatin, is the more compact form, and contains DNA that is infrequently transcribed. This structure is further categorized into facultative heterochromatin, consisting of genes that are organized as heterochromatin only in certain cell types or at certain stages of development, and constitutive heterochromatin that consists of chromosome structural components such as telomeres and centromeres. During interphase the chromatin organizes itself into discrete individual patches, called chromosome territories. Active genes, which are generally found in the euchromatic region of the chromosome, tend to be located towards the chromosome's territory boundary.
Antibodies to certain types of chromatin organization, in particular, nucleosomes, have been associated with a number of autoimmune diseases, such as systemic lupus erythematosus. These are known as anti-nuclear antibodies (ANA) and have also been observed in concert with multiple sclerosis as part of general immune system dysfunction.
Nuclear structures and landmarks
The nucleus contains nearly all of the cell's DNA, surrounded by a network of fibrous intermediate filaments called the nuclear matrix, and is enveloped in a double membrane called the nuclear envelope. The nuclear envelope separates the fluid inside the nucleus, called the nucleoplasm, from the rest of the cell. The size of the nucleus is correlated to the size of the cell, and this ratio is reported across a range of cell types and species. In eukaryotes the nucleus in many cells typically occupies 10% of the cell volume. The nucleus is the largest organelle in animal cells. In human cells, the diameter of the nucleus is approximately six micrometres (μm).
Nuclear envelope and pores
The nuclear envelope consists of two membranes, an inner and an outer nuclear membrane, perforated by nuclear pores. Together, these membranes serve to separate the cell's genetic material from the rest of the cell contents, and allow the nucleus to maintain an environment distinct from the rest of the cell. Despite their close apposition around much of the nucleus, the two membranes differ substantially in shape and contents. The inner membrane surrounds the nuclear content, providing its defining edge. Embedded within the inner membrane, various proteins bind the intermediate filaments that give the nucleus its structure. The outer membrane encloses the inner membrane, and is continuous with the adjacent endoplasmic reticulum membrane. As part of the endoplasmic reticulum membrane, the outer nuclear membrane is studded with ribosomes that are actively translating proteins across the membrane. The space between the two membranes is called the perinuclear space, and is continuous with the endoplasmic reticulum lumen.
In a mammalian nuclear envelope there are between 3000 and 4000 nuclear pore complexes (NPCs) perforating the envelope. Each NPC contains an eightfold-symmetric ring-shaped structure at a position where the inner and outer membranes fuse. The number of NPCs can vary considerably across cell types; small glial cells only have about a few hundred, with large Purkinje cells having around 20,000. The NPC provides selective transport of molecules between the nucleoplasm and the cytosol. The nuclear pore complex is composed of approximately thirty different proteins known as nucleoporins. The pores are about 60–80 million daltons in molecular weight and consist of around 50 (in yeast) to several hundred proteins (in vertebrates). The pores are 100 nm in total diameter; however, the gap through which molecules freely diffuse is only about 9 nm wide, due to the presence of regulatory systems within the center of the pore. This size selectively allows the passage of small water-soluble molecules while preventing larger molecules, such as nucleic acids and larger proteins, from inappropriately entering or exiting the nucleus. These large molecules must be actively transported into the nucleus instead. Attached to the ring is a structure called the nuclear basket that extends into the nucleoplasm, and a series of filamentous extensions that reach into the cytoplasm. Both structures serve to mediate binding to nuclear transport proteins.
Most proteins, ribosomal subunits, and some RNAs are transported through the pore complexes in a process mediated by a family of transport factors known as karyopherins. Those karyopherins that mediate movement into the nucleus are also called importins, whereas those that mediate movement out of the nucleus are called exportins. Most karyopherins interact directly with their cargo, although some use adaptor proteins. Steroid hormones such as cortisol and aldosterone, as well as other small lipid-soluble molecules involved in intercellular signaling, can diffuse through the cell membrane and into the cytoplasm, where they bind nuclear receptor proteins that are trafficked into the nucleus. There they serve as transcription factors when bound to their ligand; in the absence of a ligand, many such receptors function as histone deacetylases that repress gene expression.
Nuclear lamina
In animal cells, two networks of intermediate filaments provide the nucleus with mechanical support: The nuclear lamina forms an organized meshwork on the internal face of the envelope, while less organized support is provided on the cytosolic face of the envelope. Both systems provide structural support for the nuclear envelope and anchoring sites for chromosomes and nuclear pores.
The nuclear lamina is composed mostly of lamin proteins. Like all proteins, lamins are synthesized in the cytoplasm and later transported to the nucleus interior, where they are assembled before being incorporated into the existing network of nuclear lamina. Lamins found on the cytosolic face of the membrane, such as emerin and nesprin, bind to the cytoskeleton to provide structural support. Lamins are also found inside the nucleoplasm where they form another regular structure, known as the nucleoplasmic veil, that is visible using fluorescence microscopy. The actual function of the veil is not clear, although it is excluded from the nucleolus and is present during interphase. Lamin structures that make up the veil, such as LEM3, bind chromatin and disrupting their structure inhibits transcription of protein-coding genes.
Like the components of other intermediate filaments, the lamin monomer contains an alpha-helical domain used by two monomers to coil around each other, forming a dimer structure called a coiled coil. Two of these dimer structures then join side by side, in an antiparallel arrangement, to form a tetramer called a protofilament. Eight of these protofilaments form a lateral arrangement that is twisted to form a ropelike filament. These filaments can be assembled or disassembled in a dynamic manner, meaning that changes in the length of the filament depend on the competing rates of filament addition and removal.
Mutations in lamin genes leading to defects in filament assembly cause a group of rare genetic disorders known as laminopathies. The most notable laminopathy is the family of diseases known as progeria, which causes the appearance of premature aging in those with the condition. The exact mechanism by which the associated biochemical changes give rise to the aged phenotype is not well understood.
Nucleolus
The nucleolus is the largest of the discrete densely stained, membraneless structures known as nuclear bodies found in the nucleus. It forms around tandem repeats of rDNA, DNA coding for ribosomal RNA (rRNA). These regions are called nucleolar organizer regions (NOR). The main roles of the nucleolus are to synthesize rRNA and assemble ribosomes. The structural cohesion of the nucleolus depends on its activity, as ribosomal assembly in the nucleolus results in the transient association of nucleolar components, facilitating further ribosomal assembly, and hence further association. This model is supported by observations that inactivation of rDNA results in intermingling of nucleolar structures.
In the first step of ribosome assembly, a protein called RNA polymerase I transcribes rDNA, which forms a large pre-rRNA precursor. This is cleaved into two large rRNA subunits – 5.8S and 28S – and a small rRNA subunit, 18S. The transcription, post-transcriptional processing, and assembly of rRNA occurs in the nucleolus, aided by small nucleolar RNA (snoRNA) molecules, some of which are derived from spliced introns from messenger RNAs encoding genes related to ribosomal function. The assembled ribosomal subunits are the largest structures passed through the nuclear pores.
When observed under the electron microscope, the nucleolus can be seen to consist of three distinguishable regions: the innermost fibrillar centers (FCs), surrounded by the dense fibrillar component (DFC) (that contains fibrillarin and nucleolin), which in turn is bordered by the granular component (GC) (that contains the protein nucleophosmin). Transcription of the rDNA occurs either in the FC or at the FC-DFC boundary, and, therefore, when rDNA transcription in the cell is increased, more FCs are detected. Most of the cleavage and modification of rRNAs occurs in the DFC, while the latter steps involving protein assembly onto the ribosomal subunits occur in the GC.
Splicing speckles
Speckles are subnuclear structures that are enriched in pre-messenger RNA splicing factors and are located in the interchromatin regions of the nucleoplasm of mammalian cells.
At the fluorescence-microscope level they appear as irregular, punctate structures, which vary in size and shape, and when examined by electron microscopy they are seen as clusters of interchromatin granules. Speckles are dynamic structures, and both their protein and RNA-protein components can cycle continuously between speckles and other nuclear locations, including active transcription sites. Speckles can work with p53 to directly enhance the activity of certain genes. Moreover, speckle-associating and non-associating p53 gene targets are functionally distinct.
Studies on the composition, structure and behaviour of speckles have provided a model for understanding the functional compartmentalization of the nucleus and the organization of the gene-expression machinery: splicing snRNPs and other splicing proteins necessary for pre-mRNA processing. Because of a cell's changing requirements, the composition and location of these bodies changes according to mRNA transcription and regulation via phosphorylation of specific proteins. The splicing speckles are also known as nuclear speckles (nuclear specks), splicing factor compartments (SF compartments), interchromatin granule clusters (IGCs), and B snurposomes.
B snurposomes are found in the amphibian oocyte nuclei and in Drosophila melanogaster embryos. B snurposomes appear alone or attached to the Cajal bodies in the electron micrographs of the amphibian nuclei. While nuclear speckles were originally thought to be storage sites for the splicing factors, a more recent study demonstrated that organizing genes and pre-mRNA substrates near speckles increases the kinetic efficiency of pre-mRNA splicing, ultimately boosting protein levels by modulation of splicing.
Cajal bodies and gems
A nucleus typically contains between one and ten compact structures called Cajal bodies or coiled bodies (CB), whose diameter measures between 0.2 μm and 2.0 μm depending on the cell type and species. When seen under an electron microscope, they resemble balls of tangled thread and are dense foci of distribution for the protein coilin. CBs are involved in a number of different roles relating to RNA processing, specifically small nucleolar RNA (snoRNA) and small nuclear RNA (snRNA) maturation, and histone mRNA modification.
Similar to Cajal bodies are Gemini of Cajal bodies, or gems, whose name is derived from the Gemini constellation in reference to their close "twin" relationship with CBs. Gems are similar in size and shape to CBs, and in fact are virtually indistinguishable under the microscope. Unlike CBs, gems do not contain small nuclear ribonucleoproteins (snRNPs), but do contain a protein called survival of motor neuron (SMN) whose function relates to snRNP biogenesis. Gems are believed to assist CBs in snRNP biogenesis, though it has also been suggested from microscopy evidence that CBs and gems are different manifestations of the same structure. Later ultrastructural studies have shown gems to be twins of Cajal bodies with the difference being in the coilin component; Cajal bodies are SMN positive and coilin positive, and gems are SMN positive and coilin negative.
Other nuclear bodies
Beyond the nuclear bodies first described by Santiago Ramón y Cajal above (e.g., nucleolus, nuclear speckles, Cajal bodies) the nucleus contains a number of other nuclear bodies. These include polymorphic interphase karyosomal association (PIKA), promyelocytic leukaemia (PML) bodies, and paraspeckles. Although little is known about a number of these domains, they are significant in that they show that the nucleoplasm is not a uniform mixture, but rather contains organized functional subdomains.
Other subnuclear structures appear as part of abnormal disease processes. For example, the presence of small intranuclear rods has been reported in some cases of nemaline myopathy. This condition typically results from mutations in actin, and the rods themselves consist of mutant actin as well as other cytoskeletal proteins.
PIKA and PTF domains
PIKA domains, or polymorphic interphase karyosomal associations, were first described in microscopy studies in 1991. Their function remains unclear, though they were not thought to be associated with active DNA replication, transcription, or RNA processing. They have been found to often associate with discrete domains defined by dense localization of the transcription factor PTF, which promotes transcription of small nuclear RNA (snRNA).
PML-nuclear bodies
Promyelocytic leukemia protein (PML-nuclear bodies) are spherical bodies found scattered throughout the nucleoplasm, measuring around 0.1–1.0 μm. They are known by a number of other names, including nuclear domain 10 (ND10), Kremer bodies, and PML oncogenic domains. PML-nuclear bodies are named after one of their major components, the promyelocytic leukemia protein (PML). They are often seen in the nucleus in association with Cajal bodies and cleavage bodies. Pml-/- mice, which are unable to create PML-nuclear bodies, develop normally without obvious ill effects, showing that PML-nuclear bodies are not required for most essential biological processes.
Paraspeckles
Discovered by Fox et al. in 2002, paraspeckles are irregularly shaped compartments in the interchromatin space of the nucleus. First documented in HeLa cells, where there are generally 10–30 per nucleus, paraspeckles are now known to also exist in all human primary cells, transformed cell lines, and tissue sections. Their name is derived from their distribution in the nucleus; the "para" is short for parallel and the "speckles" refers to the splicing speckles to which they are always in close proximity.
Paraspeckles sequester nuclear proteins and RNA and thus appear to function as a molecular sponge that is involved in the regulation of gene expression. Furthermore, paraspeckles are dynamic structures that are altered in response to changes in cellular metabolic activity. They are transcription dependent and in the absence of RNA Pol II transcription, the paraspeckle disappears and all of its associated protein components (PSP1, p54nrb, PSP2, CFI(m)68, and PSF) form a crescent shaped perinucleolar cap in the nucleolus. This phenomenon is demonstrated during the cell cycle. In the cell cycle, paraspeckles are present during interphase and during all of mitosis except for telophase. During telophase, when the two daughter nuclei are formed, there is no RNA Pol II transcription so the protein components instead form a perinucleolar cap.
Perichromatin fibrils
Perichromatin fibrils are visible only under electron microscope. They are located next to the transcriptionally active chromatin and are hypothesized to be the sites of active pre-mRNA processing.
Clastosomes
Clastosomes are small nuclear bodies (0.2–0.5 μm) described as having a thick ring-shape due to the peripheral capsule around these bodies. This name is derived from the Greek klastos (κλαστός), broken and soma (σῶμα), body. Clastosomes are not typically present in normal cells, making them hard to detect. They form under high proteolytic conditions within the nucleus and degrade once there is a decrease in activity or if cells are treated with proteasome inhibitors. The scarcity of clastosomes in cells indicates that they are not required for proteasome function. Osmotic stress has also been shown to cause the formation of clastosomes. These nuclear bodies contain catalytic and regulatory subunits of the proteasome and its substrates, indicating that clastosomes are sites for degrading proteins.
Function
The nucleus provides a site for genetic transcription that is segregated from the location of translation in the cytoplasm, allowing levels of gene regulation that are not available to prokaryotes. The main function of the cell nucleus is to control gene expression and mediate the replication of DNA during the cell cycle.
Cell compartmentalization
The nuclear envelope allows control of the nuclear contents, and separates them from the rest of the cytoplasm where necessary. This is important for controlling processes on either side of the nuclear membrane: In most cases where a cytoplasmic process needs to be restricted, a key participant is removed to the nucleus, where it interacts with transcription factors to downregulate the production of certain enzymes in the pathway. This regulatory mechanism occurs in the case of glycolysis, a cellular pathway for breaking down glucose to produce energy. Hexokinase is an enzyme responsible for the first step of glycolysis, forming glucose-6-phosphate from glucose. At high concentrations of fructose-6-phosphate, a molecule made later from glucose-6-phosphate, a regulator protein removes hexokinase to the nucleus, where it forms a transcriptional repressor complex with nuclear proteins to reduce the expression of genes involved in glycolysis.
In order to control which genes are being transcribed, the cell separates some transcription factor proteins responsible for regulating gene expression from physical access to the DNA until they are activated by other signaling pathways. This prevents even low levels of inappropriate gene expression. For example, in the case of NF-κB-controlled genes, which are involved in most inflammatory responses, transcription is induced in response to a signalling pathway such as that initiated when the signaling molecule TNF-α binds to a cell membrane receptor, resulting in the recruitment of signalling proteins and, eventually, activation of the transcription factor NF-κB. A nuclear localisation signal on the NF-κB protein allows it to be transported through the nuclear pore and into the nucleus, where it stimulates the transcription of the target genes.
The compartmentalization allows the cell to prevent translation of unspliced mRNA. Eukaryotic mRNA contains introns that must be removed before being translated to produce functional proteins. The splicing is done inside the nucleus before the mRNA can be accessed by ribosomes for translation. Without the nucleus, ribosomes would translate newly transcribed (unprocessed) mRNA, resulting in malformed and nonfunctional proteins.
Replication
DNA replication, which takes place during the S phase of interphase, has been found to occur in a localised way within the cell nucleus. Contrary to the traditional view of replication forks moving along stagnant DNA, the concept of replication factories emerged, meaning that replication forks are concentrated towards immobilised 'factory' regions through which the template DNA strands pass like conveyor belts.
Gene expression
Gene expression first involves transcription, in which DNA is used as a template to produce RNA. In the case of genes encoding proteins, that RNA produced from this process is messenger RNA (mRNA), which then needs to be translated by ribosomes to form a protein. As ribosomes are located outside the nucleus, mRNA produced needs to be exported.
Since the nucleus is the site of transcription, it also contains a variety of proteins that either directly mediate transcription or are involved in regulating the process. These proteins include helicases, which unwind the double-stranded DNA molecule to facilitate access to it, RNA polymerases, which bind to the DNA promoter to synthesize the growing RNA molecule, topoisomerases, which change the amount of supercoiling in DNA, helping it wind and unwind, as well as a large variety of transcription factors that regulate expression.
Processing of pre-mRNA
Newly synthesized mRNA molecules are known as primary transcripts or pre-mRNA. They must undergo post-transcriptional modification in the nucleus before being exported to the cytoplasm; mRNA that appears in the cytoplasm without these modifications is degraded rather than used for protein translation. The three main modifications are 5' capping, 3' polyadenylation, and RNA splicing. While in the nucleus, pre-mRNA is associated with a variety of proteins in complexes known as heterogeneous ribonucleoprotein particles (hnRNPs). Addition of the 5' cap occurs co-transcriptionally and is the first step in post-transcriptional modification. The 3' poly-adenine tail is only added after transcription is complete.
RNA splicing, carried out by a complex called the spliceosome, is the process by which introns, or regions of DNA that do not code for protein, are removed from the pre-mRNA and the remaining exons connected to re-form a single continuous molecule. This process normally occurs after 5' capping and 3' polyadenylation but can begin before synthesis is complete in transcripts with many exons. Many pre-mRNAs can be spliced in multiple ways to produce different mature mRNAs that encode different protein sequences. This process is known as alternative splicing, and allows production of a large variety of proteins from a limited amount of DNA.
Dynamics and regulation
Nuclear transport
The entry and exit of large molecules from the nucleus is tightly controlled by the nuclear pore complexes. Although small molecules can enter the nucleus without regulation, macromolecules such as RNA and proteins require association with karyopherins called importins to enter the nucleus and exportins to exit. "Cargo" proteins that must be translocated from the cytoplasm to the nucleus contain short amino acid sequences known as nuclear localization signals, which are bound by importins, while those transported from the nucleus to the cytoplasm carry nuclear export signals bound by exportins. The ability of importins and exportins to transport their cargo is regulated by GTPases, enzymes that hydrolyze the molecule guanosine triphosphate (GTP) to release energy. The key GTPase in nuclear transport is Ran, which is bound to either GTP or GDP (guanosine diphosphate), depending on whether it is located in the nucleus or the cytoplasm. Whereas importins depend on RanGTP to dissociate from their cargo, exportins require RanGTP in order to bind to their cargo.
Nuclear import depends on the importin binding its cargo in the cytoplasm and carrying it through the nuclear pore into the nucleus. Inside the nucleus, RanGTP acts to separate the cargo from the importin, allowing the importin to exit the nucleus and be reused. Nuclear export is similar, as the exportin binds the cargo inside the nucleus in a process facilitated by RanGTP, exits through the nuclear pore, and separates from its cargo in the cytoplasm.
Specialized export proteins exist for translocation of mature mRNA and tRNA to the cytoplasm after post-transcriptional modification is complete. This quality-control mechanism is important due to these molecules' central role in protein translation. Mis-expression of a protein due to incomplete excision of exons or mis-incorporation of amino acids could have negative consequences for the cell; thus, incompletely modified RNA that reaches the cytoplasm is degraded rather than used in translation.
Assembly and disassembly
During its lifetime, a nucleus may be broken down or destroyed, either in the process of cell division or as a consequence of apoptosis (the process of programmed cell death). During these events, the structural components of the nucleus — the envelope and lamina — can be systematically degraded.
In most cells, the disassembly of the nuclear envelope marks the end of the prophase of mitosis. However, this disassembly of the nucleus is not a universal feature of mitosis and does not occur in all cells. Some unicellular eukaryotes (e.g., yeasts) undergo so-called closed mitosis, in which the nuclear envelope remains intact. In closed mitosis, the daughter chromosomes migrate to opposite poles of the nucleus, which then divides in two. The cells of higher eukaryotes, however, usually undergo open mitosis, which is characterized by breakdown of the nuclear envelope. The daughter chromosomes then migrate to opposite poles of the mitotic spindle, and new nuclei reassemble around them.
At a certain point during the cell cycle in open mitosis, the cell divides to form two cells. In order for this process to be possible, each of the new daughter cells must have a full set of genes, a process requiring replication of the chromosomes as well as segregation of the separate sets. This occurs by the replicated chromosomes, the sister chromatids, attaching to microtubules, which in turn are attached to different centrosomes. The sister chromatids can then be pulled to separate locations in the cell. In many cells, the centrosome is located in the cytoplasm, outside the nucleus; the microtubules would be unable to attach to the chromatids in the presence of the nuclear envelope. Therefore, in the early stages of the cell cycle, beginning in prophase and lasting until around prometaphase, the nuclear membrane is dismantled. Likewise, during the same period, the nuclear lamina is also disassembled, a process regulated by phosphorylation of the lamins by protein kinases such as the CDC2 protein kinase. Towards the end of the cell cycle, the nuclear membrane is reformed, and around the same time, the nuclear lamina is reassembled by dephosphorylation of the lamins.
However, in dinoflagellates, the nuclear envelope remains intact, the centrosomes are located in the cytoplasm, and the microtubules come in contact with chromosomes, whose centromeric regions are incorporated into the nuclear envelope (the so-called closed mitosis with extranuclear spindle). In many other protists (e.g., ciliates, sporozoans) and fungi, the centrosomes are intranuclear, and their nuclear envelope also does not disassemble during cell division.
Apoptosis is a controlled process in which the cell's structural components are destroyed, resulting in death of the cell. Changes associated with apoptosis directly affect the nucleus and its contents, for example, in the condensation of chromatin and the disintegration of the nuclear envelope and lamina. The destruction of the lamin networks is controlled by specialized apoptotic proteases called caspases, which cleave the lamin proteins and, thus, degrade the nucleus' structural integrity. Lamin cleavage is sometimes used as a laboratory indicator of caspase activity in assays for early apoptotic activity. Cells that express mutant caspase-resistant lamins are deficient in nuclear changes related to apoptosis, suggesting that lamins play a role in initiating the events that lead to apoptotic degradation of the nucleus. Inhibition of lamin assembly itself is an inducer of apoptosis.
The nuclear envelope acts as a barrier that prevents both DNA and RNA viruses from entering the nucleus. Some viruses require access to proteins inside the nucleus in order to replicate and/or assemble. DNA viruses, such as herpesviruses, replicate and assemble in the cell nucleus, and exit by budding through the inner nuclear membrane. This process is accompanied by disassembly of the lamina on the nuclear face of the inner membrane.
Disease-related dynamics
Initially, it was suspected that immunoglobulins in general and autoantibodies in particular do not enter the nucleus. Now there is a body of evidence that under pathological conditions (e.g. lupus erythematosus) IgG can enter the nucleus.
Nuclei per cell
Most eukaryotic cell types usually have a single nucleus, but some have no nuclei, while others have several. This can result from normal development, as in the maturation of mammalian red blood cells, or from faulty cell division.
Anucleated cells
An anucleated cell contains no nucleus and is, therefore, incapable of dividing to produce daughter cells. The best-known anucleated cell is the mammalian red blood cell, or erythrocyte, which also lacks other organelles such as mitochondria, and serves primarily as a transport vessel to ferry oxygen from the lungs to the body's tissues. Erythrocytes mature through erythropoiesis in the bone marrow, where they lose their nuclei, organelles, and ribosomes. The nucleus is expelled during the process of differentiation from an erythroblast to a reticulocyte, which is the immediate precursor of the mature erythrocyte. The presence of mutagens may induce the release of some immature "micronucleated" erythrocytes into the bloodstream. Anucleated cells can also arise from flawed cell division in which one daughter lacks a nucleus and the other has two nuclei.
In flowering plants, this condition occurs in sieve tube elements.
Multinucleated cells
Multinucleated cells contain multiple nuclei. Most acantharean species of protozoa and some fungi in mycorrhizae have naturally multinucleated cells. Other examples include the intestinal parasites in the genus Giardia, which have two nuclei per cell. Ciliates have two kinds of nuclei in a single cell, a somatic macronucleus and a germline micronucleus. In humans, skeletal muscle cells (myocytes), which form a syncytium, become multinucleated during development; the resulting arrangement of nuclei near the periphery of the cells allows maximal intracellular space for myofibrils. Other multinucleate cells in the human body are osteoclasts, a type of bone cell. Multinucleated and binucleated cells can also be abnormal in humans; for example, cells arising from the fusion of monocytes and macrophages, known as giant multinucleated cells, sometimes accompany inflammation and are also implicated in tumor formation.
A number of dinoflagellates are known to have two nuclei. Unlike other multinucleated cells these nuclei contain two distinct lineages of DNA: one from the dinoflagellate and the other from a symbiotic diatom.
Evolution
As the major defining characteristic of the eukaryotic cell, the nucleus's evolutionary origin has been the subject of much speculation. Four major hypotheses have been proposed to explain the existence of the nucleus, although none have yet earned widespread support.
The first model known as the "syntrophic model" proposes that a symbiotic relationship between the archaea and bacteria created the nucleus-containing eukaryotic cell. (Organisms of the Archaeal and Bacterial domains have no cell nucleus.) It is hypothesized that the symbiosis originated when ancient archaea similar to modern methanogenic archaea, invaded and lived within bacteria similar to modern myxobacteria, eventually forming the early nucleus. This theory is analogous to the accepted theory for the origin of eukaryotic mitochondria and chloroplasts, which are thought to have developed from a similar endosymbiotic relationship between proto-eukaryotes and aerobic bacteria. One possibility is that the nuclear membrane arose as a new membrane system following the origin of mitochondria in an archaebacterial host. The nuclear membrane may have served to protect the genome from damaging reactive oxygen species produced by the protomitochondria. The archaeal origin of the nucleus is supported by observations that archaea and eukarya have similar genes for certain proteins, including histones. Observations that myxobacteria are motile, can form multicellular complexes, and possess kinases and G proteins similar to eukarya, support a bacterial origin for the eukaryotic cell.
A second model proposes that proto-eukaryotic cells evolved from bacteria without an endosymbiotic stage. This model is based on the existence of modern Planctomycetota bacteria that possess a nuclear structure with primitive pores and other compartmentalized membrane structures. A similar proposal states that a eukaryote-like cell, the chronocyte, evolved first and phagocytosed archaea and bacteria to generate the nucleus and the eukaryotic cell.
The most controversial model, known as viral eukaryogenesis, posits that the membrane-bound nucleus, along with other eukaryotic features, originated from the infection of a prokaryote by a virus. The suggestion is based on similarities between eukaryotes and viruses such as linear DNA strands, mRNA capping, and tight binding to proteins (analogizing histones to viral envelopes). One version of the proposal suggests that the nucleus evolved in concert with phagocytosis to form an early cellular "predator". Another variant proposes that eukaryotes originated from early archaea infected by poxviruses, on the basis of observed similarity between the DNA polymerases in modern poxviruses and eukaryotes. It has been suggested that the unresolved question of the evolution of sex could be related to the viral eukaryogenesis hypothesis.
A more recent proposal, the exomembrane hypothesis, suggests that the nucleus instead originated from a single ancestral cell that evolved a second exterior cell membrane; the interior membrane enclosing the original cell then became the nuclear membrane and evolved increasingly elaborate pore structures for passage of internally synthesized cellular components such as ribosomal subunits.
History
The nucleus was the first organelle to be discovered. What is most likely the oldest preserved drawing dates back to the early microscopist Antonie van Leeuwenhoek (1632–1723). He observed a "lumen", the nucleus, in the red blood cells of salmon. Unlike mammalian red blood cells, those of other vertebrates still contain nuclei.
The nucleus was also described by Franz Bauer in 1804 and in more detail in 1831 by Scottish botanist Robert Brown in a talk at the Linnean Society of London. Brown was studying orchids under the microscope when he observed an opaque area, which he called the "areola" or "nucleus", in the cells of the flower's outer layer. He did not suggest a potential function.
In 1838, Matthias Schleiden proposed that the nucleus plays a role in generating cells, thus introducing the name "cytoblast" ("cell builder"). He believed that he had observed new cells assembling around "cytoblasts". Franz Meyen was a strong opponent of this view, having already described cells multiplying by division and believing that many cells would have no nuclei. The idea that cells can be generated de novo, by the "cytoblast" or otherwise, contradicted work by Robert Remak (1852) and Rudolf Virchow (1855), who decisively propagated the new paradigm that cells are generated solely by cells ("omnis cellula e cellula"). The function of the nucleus remained unclear.
Between 1877 and 1878, Oscar Hertwig published several studies on the fertilization of sea urchin eggs, showing that the nucleus of the sperm enters the oocyte and fuses with its nucleus. This was the first time it was suggested that an individual develops from a (single) nucleated cell. This was in contradiction to Ernst Haeckel's theory that the complete phylogeny of a species would be repeated during embryonic development, including generation of the first nucleated cell from a "monerula", a structureless mass of primordial protoplasm ("Urschleim"). Therefore, the necessity of the sperm nucleus for fertilization was discussed for quite some time. However, Hertwig confirmed his observation in other animal groups, including amphibians and molluscs. Eduard Strasburger produced the same results for plants in 1884. This paved the way to assign the nucleus an important role in heredity. In 1873, August Weismann postulated the equivalence of the maternal and paternal germ cells for heredity. The function of the nucleus as carrier of genetic information became clear only later, after mitosis was discovered and the Mendelian rules were rediscovered at the beginning of the 20th century; the chromosome theory of heredity was therefore developed.
| Biology and health sciences | Organelles and other cell parts | null |
6246 | https://en.wikipedia.org/wiki/Covalent%20bond | Covalent bond | A covalent bond is a chemical bond that involves the sharing of electrons to form electron pairs between atoms. These electron pairs are known as shared pairs or bonding pairs. The stable balance of attractive and repulsive forces between atoms, when they share electrons, is known as covalent bonding. For many molecules, the sharing of electrons allows each atom to attain the equivalent of a full valence shell, corresponding to a stable electronic configuration. In organic chemistry, covalent bonding is much more common than ionic bonding.
Covalent bonding also includes many kinds of interactions, including σ-bonding, π-bonding, metal-to-metal bonding, agostic interactions, bent bonds, three-center two-electron bonds and three-center four-electron bonds. The term covalent bond dates from 1939. The prefix co- means jointly, associated in action, partnered to a lesser degree, etc.; thus a "co-valent bond", in essence, means that the atoms share "valence", such as is discussed in valence bond theory.
In the molecule H2, the hydrogen atoms share the two electrons via covalent bonding. Covalency is greatest between atoms of similar electronegativities. Thus, covalent bonding does not necessarily require that the two atoms be of the same elements, only that they be of comparable electronegativity. Covalent bonding that entails the sharing of electrons over more than two atoms is said to be delocalized.
History
The term covalence in regard to bonding was first used in 1919 by Irving Langmuir in a Journal of the American Chemical Society article entitled "The Arrangement of Electrons in Atoms and Molecules". Langmuir wrote that "we shall denote by the term covalence the number of pairs of electrons that a given atom shares with its neighbors."
The idea of covalent bonding can be traced several years before 1919 to Gilbert N. Lewis, who in 1916 described the sharing of electron pairs between atoms (and in 1926 he also coined the term "photon" for the smallest unit of radiant energy). He introduced the Lewis notation or electron dot notation or Lewis dot structure, in which valence electrons (those in the outer shell) are represented as dots around the atomic symbols. Pairs of electrons located between atoms represent covalent bonds. Multiple pairs represent multiple bonds, such as double bonds and triple bonds. An alternative form of representation, not shown here, has bond-forming electron pairs represented as solid lines.
Lewis proposed that an atom forms enough covalent bonds to form a full (or closed) outer electron shell. In the diagram of methane shown here, the carbon atom has a valence of four and is, therefore, surrounded by eight electrons (the octet rule), four from the carbon itself and four from the hydrogens bonded to it. Each hydrogen has a valence of one and is surrounded by two electrons (a duet rule) – its own one electron plus one from the carbon. The numbers of electrons correspond to full shells in the quantum theory of the atom; the outer shell of a carbon atom is the n = 2 shell, which can hold eight electrons, whereas the outer (and only) shell of a hydrogen atom is the n = 1 shell, which can hold only two.
While the idea of shared electron pairs provides an effective qualitative picture of covalent bonding, quantum mechanics is needed to understand the nature of these bonds and predict the structures and properties of simple molecules. Walter Heitler and Fritz London are credited with the first successful quantum mechanical explanation of a chemical bond (molecular hydrogen) in 1927. Their work was based on the valence bond model, which assumes that a chemical bond is formed when there is good overlap between the atomic orbitals of participating atoms.
Types of covalent bonds
Atomic orbitals (except for s orbitals) have specific directional properties leading to different types of covalent bonds. Sigma (σ) bonds are the strongest covalent bonds and are due to head-on overlapping of orbitals on two different atoms. A single bond is usually a σ bond. Pi (π) bonds are weaker and are due to lateral overlap between p (or d) orbitals. A double bond between two given atoms consists of one σ and one π bond, and a triple bond is one σ and two π bonds.
Covalent bonds are also affected by the electronegativity of the connected atoms which determines the chemical polarity of the bond. Two atoms with equal electronegativity will make nonpolar covalent bonds such as H–H. An unequal relationship creates a polar covalent bond such as with H−Cl. However polarity also requires geometric asymmetry, or else dipoles may cancel out, resulting in a non-polar molecule.
Covalent structures
There are several types of structures for covalent substances, including individual molecules, molecular structures, macromolecular structures and giant covalent structures. Individual molecules have strong bonds that hold the atoms together, but generally, there are negligible forces of attraction between molecules. Such covalent substances are usually gases, for example, HCl, SO2, CO2, and CH4. In molecular structures, there are weak forces of attraction. Such covalent substances are low-boiling-temperature liquids (such as ethanol), and low-melting-temperature solids (such as iodine and solid CO2). Macromolecular structures have large numbers of atoms linked by covalent bonds in chains, including synthetic polymers such as polyethylene and nylon, and biopolymers such as proteins and starch. Network covalent structures (or giant covalent structures) contain large numbers of atoms linked in sheets (such as graphite), or 3-dimensional structures (such as diamond and quartz). These substances have high melting and boiling points, are frequently brittle, and tend to have high electrical resistivity. Elements that have high electronegativity, and the ability to form three or four electron pair bonds, often form such large macromolecular structures.
One- and three-electron bonds
Bonds with one or three electrons can be found in radical species, which have an odd number of electrons. The simplest example of a 1-electron bond is found in the dihydrogen cation, H2+. One-electron bonds often have about half the bond energy of a 2-electron bond, and are therefore called "half bonds". However, there are exceptions: in the case of dilithium, the bond is actually stronger for the 1-electron Li2+ than for the 2-electron Li2. This exception can be explained in terms of hybridization and inner-shell effects.
The simplest example of three-electron bonding can be found in the helium dimer cation, He2+. It is considered a "half bond" because it consists of only one shared electron (rather than two); in molecular orbital terms, the third electron is in an anti-bonding orbital which cancels out half of the bond formed by the other two electrons. Another example of a molecule containing a 3-electron bond, in addition to two 2-electron bonds, is nitric oxide, NO. The oxygen molecule, O2, can also be regarded as having two 3-electron bonds and one 2-electron bond, which accounts for its paramagnetism and its formal bond order of 2. Chlorine dioxide and its heavier analogues bromine dioxide and iodine dioxide also contain three-electron bonds.
Molecules with odd-electron bonds are usually highly reactive. These types of bond are only stable between atoms with similar electronegativities.
Resonance
There are situations whereby a single Lewis structure is insufficient to explain the electron configuration in a molecule and its resulting experimentally-determined properties, hence a superposition of structures is needed. The same two atoms in such molecules can be bonded differently in different Lewis structures (a single bond in one, a double bond in another, or even none at all), resulting in a non-integer bond order. The nitrate ion is one such example with three equivalent structures. The bond between the nitrogen and each oxygen is a double bond in one structure and a single bond in the other two, so that the average bond order for each N–O interaction is (2 + 1 + 1)/3 = 4/3.
Aromaticity
In organic chemistry, when a molecule with a planar ring obeys Hückel's rule, where the number of π electrons fit the formula 4n + 2 (where n is an integer), it attains extra stability and symmetry. In benzene, the prototypical aromatic compound, there are 6 π bonding electrons (n = 1, 4n + 2 = 6). These occupy three delocalized π molecular orbitals (molecular orbital theory) or form conjugate π bonds in two resonance structures that linearly combine (valence bond theory), creating a regular hexagon exhibiting a greater stabilization than the hypothetical 1,3,5-cyclohexatriene.
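As a concrete illustration of the 4n + 2 count, the short Python sketch below (an assumed helper written for this article, not part of any standard chemistry package) tests whether a given number of π electrons satisfies Hückel's rule.

def satisfies_huckel_rule(pi_electrons: int) -> bool:
    # True when pi_electrons = 4n + 2 for some non-negative integer n
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

print(satisfies_huckel_rule(6))  # benzene: 6 pi electrons (n = 1) -> True
print(satisfies_huckel_rule(4))  # cyclobutadiene: 4 pi electrons -> False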
In the case of heterocyclic aromatics and substituted benzenes, the electronegativity differences between different parts of the ring may dominate the chemical behavior of aromatic ring bonds, which otherwise are equivalent.
Hypervalence
Certain molecules such as xenon difluoride and sulfur hexafluoride have higher coordination numbers than would be possible with strictly covalent bonding according to the octet rule. This is explained by the three-center four-electron bond ("3c–4e") model which interprets the molecular wavefunction in terms of non-bonding highest occupied molecular orbitals in molecular orbital theory and resonance of sigma bonds in valence bond theory.
Electron deficiency
In three-center two-electron bonds ("3c–2e") three atoms share two electrons in bonding. This type of bonding occurs in boron hydrides such as diborane (B2H6), which are often described as electron deficient because there are not enough valence electrons to form localized (2-centre 2-electron) bonds joining all the atoms. However, the more modern description using 3c–2e bonds does provide enough bonding orbitals to connect all the atoms so that the molecules can instead be classified as electron-precise.
Each such bond (2 per molecule in diborane) contains a pair of electrons which connect the boron atoms to each other in a banana shape, with a proton (the nucleus of a hydrogen atom) in the middle of the bond, sharing electrons with both boron atoms. In certain cluster compounds, so-called four-center two-electron bonds also have been postulated.
Quantum mechanical description
After the development of quantum mechanics, two basic theories were proposed to provide a quantum description of chemical bonding: valence bond (VB) theory and molecular orbital (MO) theory. A more recent quantum description is given in terms of atomic contributions to the electronic density of states.
Comparison of VB and MO theories
The two theories represent two ways to build up the electron configuration of the molecule. For valence bond theory, the atomic hybrid orbitals are filled with electrons first to produce a fully bonded valence configuration, followed by performing a linear combination of contributing structures (resonance) if there are several of them. In contrast, for molecular orbital theory, a linear combination of atomic orbitals is performed first, followed by filling of the resulting molecular orbitals with electrons.
The two approaches are regarded as complementary, and each provides its own insights into the problem of chemical bonding. As valence bond theory builds the molecular wavefunction out of localized bonds, it is more suited for the calculation of bond energies and the understanding of reaction mechanisms. As molecular orbital theory builds the molecular wavefunction out of delocalized orbitals, it is more suited for the calculation of ionization energies and the understanding of spectral absorption bands.
At the qualitative level, both theories contain incorrect predictions. Simple (Heitler–London) valence bond theory correctly predicts the dissociation of homonuclear diatomic molecules into separate atoms, while simple (Hartree–Fock) molecular orbital theory incorrectly predicts dissociation into a mixture of atoms and ions. On the other hand, simple molecular orbital theory correctly predicts Hückel's rule of aromaticity, while simple valence bond theory incorrectly predicts that cyclobutadiene has larger resonance energy than benzene.
Although the wavefunctions generated by both theories at the qualitative level do not agree and do not match the stabilization energy by experiment, they can be corrected by configuration interaction. This is done by combining the valence bond covalent function with the functions describing all possible ionic structures or by combining the molecular orbital ground state function with the functions describing all possible excited states using unoccupied orbitals. It can then be seen that the simple molecular orbital approach overestimates the weight of the ionic structures while the simple valence bond approach neglects them. This can also be described as saying that the simple molecular orbital approach neglects electron correlation while the simple valence bond approach overestimates it.
Modern calculations in quantum chemistry usually start from (but ultimately go far beyond) a molecular orbital rather than a valence bond approach, not because of any intrinsic superiority in the former but rather because the MO approach is more readily adapted to numerical computations. Molecular orbitals are orthogonal, which significantly increases the feasibility and speed of computer calculations compared to nonorthogonal valence bond orbitals.
Covalency from atomic contribution to the electronic density of states
Evaluation of bond covalency is dependent on the basis set for approximate quantum-chemical methods such as COOP (crystal orbital overlap population), COHP (Crystal orbital Hamilton population), and BCOOP (Balanced crystal orbital overlap population). To overcome this issue, an alternative formulation of the bond covalency can be provided in this way.
The mass center cm_A(n, l, ml, ms) of an atomic orbital with quantum numbers n, l, ml, ms for atom A is defined as
cm_A(n, l, ml, ms) = ∫ E g_A(n, l, ml, ms; E) dE / ∫ g_A(n, l, ml, ms; E) dE
where g_A(n, l, ml, ms; E) is the contribution of the atomic orbital of the atom A to the total electronic density of states g(E) of the solid
g(E) = Σ_A Σ_{n,l} Σ_{ml,ms} g_A(n, l, ml, ms; E)
where the outer sum runs over all atoms A of the unit cell. The energy window is chosen in such a way that it encompasses all of the relevant bands participating in the bond. If the range to select is unclear, it can be identified in practice by examining the molecular orbitals that describe the electron density along with the considered bond.
The relative position C_{A,B}(nl) of the mass center of the nA, lA levels of atom A with respect to the mass center of the nB, lB levels of atom B is given as
C_{A,B}(nl) = −|cm_A(nA, lA) − cm_B(nB, lB)|
where the contributions of the magnetic and spin quantum numbers are summed. According to this definition, the relative position of the A levels with respect to the B levels is
C_{A,B} = −|cm_A − cm_B|
where, for simplicity, we may omit the dependence from the principal quantum number n in the notation referring to C_{A,B}(nl).
In this formalism, the greater the value of C_{A,B}, the higher the overlap of the selected atomic bands, and thus the electron density described by those orbitals gives a more covalent A–B bond. The quantity C_{A,B} is denoted as the covalency of the A–B bond, which is specified in the same units of the energy E.
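As a numerical illustration of these definitions, the following Python sketch computes mass centers and the covalency measure C_{A,B} for two made-up orbital contributions to the density of states; the Gaussian profiles and the energy window are illustrative assumptions only, not values from any real calculation.

import numpy as np

def mass_center(energies, dos_contribution):
    # cm = integral(E * g(E) dE) / integral(g(E) dE), evaluated on a discrete grid
    return np.trapz(energies * dos_contribution, energies) / np.trapz(dos_contribution, energies)

E = np.linspace(-10.0, 0.0, 200)      # energy window in eV (illustrative)
g_A = np.exp(-(E + 6.0) ** 2)         # made-up orbital contribution of atom A
g_B = np.exp(-(E + 4.0) ** 2)         # made-up orbital contribution of atom B

cm_A = mass_center(E, g_A)
cm_B = mass_center(E, g_B)
C_AB = -abs(cm_A - cm_B)              # closer to zero means greater overlap, i.e. a more covalent bond
print(cm_A, cm_B, C_AB)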
Analogous effect in nuclear systems
An analogous effect to covalent binding is believed to occur in some nuclear systems, with the difference that the shared fermions are quarks rather than electrons. High-energy proton–proton scattering cross-sections indicate that quark interchange of either u or d quarks is the dominant process of the nuclear force at short distance. In particular, it dominates over the Yukawa interaction where a meson is exchanged. Therefore, covalent binding by quark interchange is expected to be the dominating mechanism of nuclear binding at small distance when the bound hadrons have covalence quarks in common.
| Physical sciences | Chemical bonds | null |
6271 | https://en.wikipedia.org/wiki/Chemical%20reaction | Chemical reaction | A chemical reaction is a process that leads to the chemical transformation of one set of chemical substances to another. When chemical reactions occur, the atoms are rearranged and the reaction is accompanied by an energy change as new products are generated. Classically, chemical reactions encompass changes that only involve the positions of electrons in the forming and breaking of chemical bonds between atoms, with no change to the nuclei (no change to the elements present), and can often be described by a chemical equation. Nuclear chemistry is a sub-discipline of chemistry that involves the chemical reactions of unstable and radioactive elements where both electronic and nuclear changes can occur.
The substance (or substances) initially involved in a chemical reaction are called reactants or reagents. Chemical reactions are usually characterized by a chemical change, and they yield one or more products, which usually have properties different from the reactants. Reactions often consist of a sequence of individual sub-steps, the so-called elementary reactions, and the information on the precise course of action is part of the reaction mechanism. Chemical reactions are described with chemical equations, which symbolically present the starting materials, end products, and sometimes intermediate products and reaction conditions.
Chemical reactions happen at a characteristic reaction rate at a given temperature and chemical concentration. Some reactions produce heat and are called exothermic reactions, while others may require heat to enable the reaction to occur, which are called endothermic reactions. Typically, reaction rates increase with increasing temperature because there is more thermal energy available to reach the activation energy necessary for breaking bonds between atoms.
A reaction may be classified as redox in which oxidation and reduction occur or non-redox in which there is no oxidation and reduction occurring. Most simple redox reactions may be classified as a combination, decomposition, or single displacement reaction.
Different chemical reactions are used during chemical synthesis in order to obtain the desired product. In biochemistry, a consecutive series of chemical reactions (where the product of one reaction is the reactant of the next reaction) form metabolic pathways. These reactions are often catalyzed by protein enzymes. Enzymes increase the rates of biochemical reactions, so that metabolic syntheses and decompositions impossible under ordinary conditions can occur at the temperature and concentrations present within a cell.
The general concept of a chemical reaction has been extended to reactions between entities smaller than atoms, including nuclear reactions, radioactive decays and reactions between elementary particles, as described by quantum field theory.
History
Chemical reactions such as combustion in fire, fermentation and the reduction of ores to metals were known since antiquity. Initial theories of transformation of materials were developed by Greek philosophers, such as the Four-Element Theory of Empedocles stating that any substance is composed of the four basic elements – fire, water, air and earth. In the Middle Ages, chemical transformations were studied by alchemists. They attempted, in particular, to convert lead into gold, for which purpose they used reactions of lead and lead-copper alloys with sulfur.
The artificial production of chemical substances already was a central goal for medieval alchemists. Examples include the synthesis of ammonium chloride from organic substances as described in the works (c. 850–950) attributed to Jābir ibn Ḥayyān, or the production of mineral acids such as sulfuric and nitric acids by later alchemists, starting from c. 1300. The production of mineral acids involved the heating of sulfate and nitrate minerals such as copper sulfate, alum and saltpeter. In the 17th century, Johann Rudolph Glauber produced hydrochloric acid and sodium sulfate by reacting sulfuric acid and sodium chloride. With the development of the lead chamber process in 1746 and the Leblanc process, allowing large-scale production of sulfuric acid and sodium carbonate, respectively, chemical reactions became implemented into the industry. Further optimization of sulfuric acid technology resulted in the contact process in the 1880s, and the Haber process was developed in 1909–1910 for ammonia synthesis.
From the 16th century, researchers including Jan Baptist van Helmont, Robert Boyle, and Isaac Newton tried to establish theories of experimentally observed chemical transformations. The phlogiston theory was proposed in 1667 by Johann Joachim Becher. It postulated the existence of a fire-like element called "phlogiston", which was contained within combustible bodies and released during combustion. This was proved false in 1785 by Antoine Lavoisier, who found the correct explanation of combustion as a reaction with oxygen from the air.
Joseph Louis Gay-Lussac recognized in 1808 that gases always react in a certain relationship with each other. Based on this idea and the atomic theory of John Dalton, Joseph Proust had developed the law of definite proportions, which later resulted in the concepts of stoichiometry and chemical equations.
Regarding the organic chemistry, it was long believed that compounds obtained from living organisms were too complex to be obtained synthetically. According to the concept of vitalism, organic matter was endowed with a "vital force" and distinguished from inorganic materials. This separation was ended however by the synthesis of urea from inorganic precursors by Friedrich Wöhler in 1828. Other chemists who brought major contributions to organic chemistry include Alexander William Williamson with his synthesis of ethers and Christopher Kelk Ingold, who, among many discoveries, established the mechanisms of substitution reactions.
Characteristics
The general characteristics of chemical reactions are:
Evolution of a gas
Formation of a precipitate
Change in temperature
Change in state
Equations
Chemical equations are used to graphically illustrate chemical reactions. They consist of chemical or structural formulas of the reactants on the left and those of the products on the right. They are separated by an arrow (→) which indicates the direction and type of the reaction; the arrow is read as the word "yields". The tip of the arrow points in the direction in which the reaction proceeds. A double arrow (⇌) pointing in opposite directions is used for equilibrium reactions. Equations should be balanced according to the stoichiometry: the number of atoms of each species should be the same on both sides of the equation. This is achieved by scaling the number of involved molecules (A, B, C and D in the schematic example below) by the appropriate integers a, b, c and d:
a A + b B -> c C + d D
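The balancing requirement can be checked mechanically. The short Python sketch below (a hypothetical helper, not a standard library routine) counts atoms on each side of 2H2 + O2 -> 2H2O and confirms that the equation is balanced.

from collections import Counter

def atom_count(side):
    # side is a list of (coefficient, composition) pairs, e.g. (2, {"H": 2}) for 2 H2
    total = Counter()
    for coefficient, composition in side:
        for element, count in composition.items():
            total[element] += coefficient * count
    return total

reactants = [(2, {"H": 2}), (1, {"O": 2})]            # 2 H2 + O2
products = [(2, {"H": 2, "O": 1})]                    # 2 H2O
print(atom_count(reactants) == atom_count(products))  # True: each element balances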
More elaborate reactions are represented by reaction schemes, which in addition to starting materials and products show important intermediates or transition states. Also, some relatively minor additions to the reaction can be indicated above the reaction arrow; examples of such additions are water, heat, illumination, a catalyst, etc. Similarly, some minor products can be placed below the arrow, often with a minus sign.
Retrosynthetic analysis can be applied to design a complex synthesis reaction. Here the analysis starts from the products, for example by splitting selected chemical bonds, to arrive at plausible initial reagents. A special arrow (⇒) is used in retro reactions.
Elementary reactions
The elementary reaction is the smallest division into which a chemical reaction can be decomposed; it has no intermediate products. Most experimentally observed reactions are built up from many elementary reactions that occur in parallel or sequentially. The actual sequence of the individual elementary reactions is known as the reaction mechanism. An elementary reaction involves a few molecules, usually one or two, because of the low probability for several molecules to meet at a certain time.
The most important elementary reactions are unimolecular and bimolecular reactions. Only one molecule is involved in a unimolecular reaction; it is transformed by isomerization or a dissociation into one or more other molecules. Such reactions require the addition of energy in the form of heat or light. A typical example of a unimolecular reaction is the cis–trans isomerization, in which the cis-form of a compound converts to the trans-form or vice versa.
In a typical dissociation reaction, a bond in a molecule splits (ruptures) resulting in two molecular fragments. The splitting can be homolytic or heterolytic. In the first case, the bond is divided so that each product retains an electron and becomes a neutral radical. In the second case, both electrons of the chemical bond remain with one of the products, resulting in charged ions. Dissociation plays an important role in triggering chain reactions, such as hydrogen–oxygen or polymerization reactions.
AB -> A + B
Dissociation of a molecule AB into fragments A and B
For bimolecular reactions, two molecules collide and react with each other. Their merger is called chemical synthesis or an addition reaction.
A + B -> AB
Another possibility is that only a portion of one molecule is transferred to the other molecule. This type of reaction occurs, for example, in redox and acid-base reactions. In redox reactions, the transferred particle is an electron, whereas in acid-base reactions it is a proton. This type of reaction is also called metathesis.
HA + B -> A + HB
for example
NaCl + AgNO3 -> NaNO3 + AgCl(v)
Chemical equilibrium
Most chemical reactions are reversible; that is, they can and do run in both directions. The forward and reverse reactions are competing with each other and differ in reaction rates. These rates depend on the concentration and therefore change with the time of the reaction: the reverse rate gradually increases and becomes equal to the rate of the forward reaction, establishing the so-called chemical equilibrium. The time to reach equilibrium depends on parameters such as temperature, pressure, and the materials involved, and is determined by the minimum free energy. In equilibrium, the Gibbs free energy of reaction must be zero. The pressure dependence can be explained with the Le Chatelier's principle. For example, an increase in pressure due to decreasing volume causes the reaction to shift to the side with fewer moles of gas.
The reaction yield stabilizes at equilibrium but can be increased by removing the product from the reaction mixture or changed by increasing the temperature or pressure. A change in the concentrations of the reactants does not affect the equilibrium constant but does affect the equilibrium position.
Thermodynamics
Chemical reactions are determined by the laws of thermodynamics. Reactions can proceed by themselves if they are exergonic, that is if they release free energy. The associated free energy change of the reaction is composed of the changes of two different thermodynamic quantities, enthalpy and entropy:
ΔG = ΔH − TΔS
G: free energy, H: enthalpy, T: temperature, S: entropy, Δ: difference (change between original and product)
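For example, the sign of the free-energy change can be evaluated directly from this relation; the following Python sketch uses made-up enthalpy and entropy values purely for illustration.

def gibbs_free_energy_change(delta_h_kj, temperature_k, delta_s_kj_per_k):
    # Delta G = Delta H - T * Delta S; a negative result indicates an exergonic reaction
    return delta_h_kj - temperature_k * delta_s_kj_per_k

# Illustrative values: an exothermic reaction with a small entropy decrease
print(gibbs_free_energy_change(-100.0, 298.0, -0.05))  # -85.1 kJ, so the reaction can proceed by itself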
Reactions can be exothermic, where ΔH is negative and energy is released. Typical examples of exothermic reactions are combustion, precipitation and crystallization, in which ordered solids are formed from disordered gaseous or liquid phases. In contrast, in endothermic reactions, heat is consumed from the environment. This can occur by increasing the entropy of the system, often through the formation of gaseous or dissolved reaction products, which have higher entropy. Since the entropy term in the free-energy change increases with temperature, many endothermic reactions preferably take place at high temperatures. On the contrary, many exothermic reactions such as crystallization occur preferably at lower temperatures. A change in temperature can sometimes reverse the sign of the enthalpy of a reaction, as for the carbon monoxide reduction of molybdenum dioxide:
2CO(g) + MoO2(s) -> 2CO2(g) + Mo(s);
This reaction to form carbon dioxide and molybdenum is endothermic at low temperatures, becoming less so with increasing temperature. ΔH° is zero at , and the reaction becomes exothermic above that temperature.
Changes in temperature can also reverse the direction tendency of a reaction. For example, the water gas shift reaction
CO(g) + H2O({v}) <=> CO2(g) + H2(g)
is favored by low temperatures, but its reverse is favored by high temperatures. The shift in reaction direction tendency occurs at .
Reactions can also be characterized by their internal energy change, which takes into account changes in the entropy, volume and chemical potentials. The latter depends, among other things, on the activities of the involved substances.
dU = T dS − p dV + Σ μi dNi
U: internal energy, S: entropy, p: pressure, μ: chemical potential, N: number of molecules, d: small change sign
Kinetics
The speed at which reactions take place is studied by reaction kinetics. The rate depends on various parameters, such as:
Reactant concentrations, which usually make the reaction happen at a faster rate if raised through increased collisions per unit of time. Some reactions, however, have rates that are independent of reactant concentrations, due to a limited number of catalytic sites. These are called zero order reactions.
Surface area available for contact between the reactants, in particular solid ones in heterogeneous systems. Larger surface areas lead to higher reaction rates.
Pressure – increasing the pressure decreases the volume between molecules and therefore increases the frequency of collisions between the molecules.
Activation energy, which is defined as the amount of energy required to make the reaction start and carry on spontaneously. Higher activation energy implies that the reactants need more energy to start than a reaction with lower activation energy.
Temperature, which hastens reactions if raised, since higher temperature increases the energy of the molecules, creating more collisions per unit of time,
The presence or absence of a catalyst. Catalysts are substances that make weak bonds with reactants or intermediates and change the pathway (mechanism) of a reaction which in turn increases the speed of a reaction by lowering the activation energy needed for the reaction to take place. A catalyst is not destroyed or changed during a reaction, so it can be used again.
For some reactions, the presence of electromagnetic radiation, most notably ultraviolet light, is needed to promote the breaking of bonds to start the reaction. This is particularly true for reactions involving radicals.
Several theories allow calculating the reaction rates at the molecular level. This field is referred to as reaction dynamics. The rate v of a first-order reaction, which could be the disintegration of a substance A, is given by:
v = −d[A]/dt = k[A]
Its integration yields:
[A](t) = [A]0 · e^(−kt)
Here k is the first-order rate constant, having dimension 1/time, [A](t) is the concentration at a time t and [A]0 is the initial concentration. The rate of a first-order reaction depends only on the concentration and the properties of the involved substance, and the reaction itself can be described with a characteristic half-life. More than one time constant is needed when describing reactions of higher order. The temperature dependence of the rate constant usually follows the Arrhenius equation:
k = k0 · e^(−Ea/(kB·T))
where Ea is the activation energy and kB is the Boltzmann constant. One of the simplest models of reaction rate is the collision theory. More realistic models are tailored to a specific problem and include the transition state theory, the calculation of the potential energy surface, the Marcus theory and the Rice–Ramsperger–Kassel–Marcus (RRKM) theory.
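To make these relationships concrete, the short Python sketch below evaluates an Arrhenius rate constant and the integrated first-order rate law; the pre-exponential factor and activation energy are made-up illustrative values, not data for any particular reaction.

import math

def arrhenius_rate_constant(k0, ea_joules, temperature_k):
    # k(T) = k0 * exp(-Ea / (kB * T)), with Ea expressed per molecule in joules
    k_b = 1.380649e-23  # Boltzmann constant in J/K
    return k0 * math.exp(-ea_joules / (k_b * temperature_k))

def remaining_fraction(k, t):
    # integrated first-order rate law: [A](t)/[A]0 = exp(-k t)
    return math.exp(-k * t)

k = arrhenius_rate_constant(k0=1.0e13, ea_joules=8.0e-20, temperature_k=300.0)
print(remaining_fraction(k, 1.0e-6))  # fraction of A left after one microsecond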
Reaction types
Four basic types
Synthesis
In a synthesis reaction, two or more simple substances combine to form a more complex substance. These reactions are in the general form:
A + B->AB
Two or more reactants yielding one product is another way to identify a synthesis reaction. One example of a synthesis reaction is the combination of iron and sulfur to form iron(II) sulfide:
8Fe + S8->8FeS
Another example is simple hydrogen gas combined with simple oxygen gas to produce a more complex substance, such as water.
Decomposition
A decomposition reaction is when a more complex substance breaks down into its more simple parts. It is thus the opposite of a synthesis reaction and can be written as
AB->A + B
One example of a decomposition reaction is the electrolysis of water to make oxygen and hydrogen gas:
2H2O->2H2 + O2
Single displacement
In a single displacement reaction, a single uncombined element replaces another in a compound; in other words, one element trades places with another element in a compound. These reactions come in the general form of:
A + BC->AC + B
One example of a single displacement reaction is when magnesium replaces hydrogen in water to make solid magnesium hydroxide and hydrogen gas:
Mg + 2H2O->Mg(OH)2 (v) + H2 (^)
Double displacement
In a double displacement reaction, the anions and cations of two compounds switch places and form two entirely different compounds. These reactions are in the general form:
AB + CD->AD + CB
For example, when barium chloride (BaCl2) and magnesium sulfate (MgSO4) react, the SO42− anion switches places with the two Cl− anions, giving the compounds BaSO4 and MgCl2.
Another example of a double displacement reaction is the reaction of lead(II) nitrate with potassium iodide to form lead(II) iodide and potassium nitrate:
Pb(NO3)2 + 2KI->PbI2(v) + 2KNO3
Forward and backward reactions
According to Le Chatelier's Principle, reactions may proceed in the forward or reverse direction until they end or reach equilibrium.
Forward reactions
Reactions that proceed in the forward direction (from left to right) to approach equilibrium are often called spontaneous reactions, that is, ΔG is negative, which means that if they occur at constant temperature and pressure, they decrease the Gibbs free energy of the reaction. They require less energy to proceed in the forward direction. Reactions are usually written as forward reactions in the direction in which they are spontaneous. Examples:
Reaction of hydrogen and oxygen to form water.
2H2 + O2 -> 2H2O
Dissociation of acetic acid in water into acetate ions and hydronium ions.
CH3COOH + H2O <=> CH3COO- + H3O+
Backward reactions
Reactions that proceed in the backward direction to approach equilibrium are often called non-spontaneous reactions, that is, ΔG is positive, which means that if they occur at constant temperature and pressure, they increase the Gibbs free energy of the reaction. They require input of energy to proceed in the forward direction. Examples include:
Charging a normal DC battery (consisting of electrolytic cells) from an external electrical power source
Photosynthesis driven by absorption of electromagnetic radiation usually in the form of sunlight
6CO2 + 6H2O -> C6H12O6 + 6O2
Combustion
In a combustion reaction, an element or compound reacts with an oxidant, usually oxygen, often producing energy in the form of heat or light. Combustion reactions frequently involve a hydrocarbon. For instance, the combustion of 1 mole (114 g) of octane in oxygen
C8H18(l) + 25/2 O2(g)->8CO2 + 9H2O(l)
releases 5500 kJ. A combustion reaction can also result from carbon, magnesium or sulfur reacting with oxygen.
2Mg(s) + O2->2MgO(s)
S(s) + O2(g)->SO2(g)
Oxidation and reduction
Redox reactions can be understood in terms of the transfer of electrons from one involved species (reducing agent) to another (oxidizing agent). In this process, the former species is oxidized and the latter is reduced. Though sufficient for many purposes, these descriptions are not precisely correct. Oxidation is better defined as an increase in oxidation state of atoms and reduction as a decrease in oxidation state. In practice, the transfer of electrons will always change the oxidation state, but there are many reactions that are classed as "redox" even though no electron transfer occurs (such as those involving covalent bonds).
In the following redox reaction, hazardous sodium metal reacts with toxic chlorine gas to form the ionic compound sodium chloride, or common table salt:
2Na(s) + Cl2(g)->2NaCl(s)
In the reaction, sodium metal goes from an oxidation state of 0 (a pure element) to +1: in other words, the sodium lost one electron and is said to have been oxidized. On the other hand, the chlorine gas goes from an oxidation state of 0 (also a pure element) to −1: the chlorine gains one electron and is said to have been reduced. Because the chlorine is the one reduced, it is considered the electron acceptor, or in other words, induces oxidation in the sodium – thus the chlorine gas is considered the oxidizing agent. Conversely, the sodium is oxidized or is the electron donor, and thus induces a reduction in the other species and is considered the reducing agent.
Which of the involved reactants would be a reducing or oxidizing agent can be predicted from the electronegativity of their elements. Elements with low electronegativities, such as most metals, easily donate electrons and oxidize – they are reducing agents. On the contrary, many oxides or ions with high oxidation numbers of their non-oxygen atoms, such as H2O2, MnO4−, CrO3, Cr2O72−, or OsO4, can gain one or two extra electrons and are strong oxidizing agents.
For some main-group elements the number of electrons donated or accepted in a redox reaction can be predicted from the electron configuration of the reactant element. Elements try to reach the low-energy noble gas configuration, and therefore alkali metals and halogens will donate and accept one electron, respectively. Noble gases themselves are chemically inactive.
The overall redox reaction can be balanced by combining the oxidation and reduction half-reactions multiplied by coefficients such that the number of electrons lost in the oxidation equals the number of electrons gained in the reduction.
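A small Python sketch of this bookkeeping step is shown below; the electron counts are taken from the sodium and chlorine half-reactions discussed above, while the helper function itself is an assumed illustration rather than an established tool.

from math import gcd

def half_reaction_multipliers(electrons_lost, electrons_gained):
    # scale both half-reactions to the least common multiple of the electron counts
    lcm = electrons_lost * electrons_gained // gcd(electrons_lost, electrons_gained)
    return lcm // electrons_lost, lcm // electrons_gained

# Na -> Na+ + e- loses 1 electron; Cl2 + 2e- -> 2Cl- gains 2 electrons
print(half_reaction_multipliers(1, 2))  # (2, 1), giving 2Na + Cl2 -> 2NaCl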
An important class of redox reactions are the electrolytic electrochemical reactions, where electrons from the power supply at the negative electrode are used as the reducing agent and electron withdrawal at the positive electrode as the oxidizing agent. These reactions are particularly important for the production of chemical elements, such as chlorine or aluminium. The reverse process, in which electrons are released in redox reactions and chemical energy is converted to electrical energy, is possible and used in batteries.
Complexation
In complexation reactions, several ligands react with a metal atom to form a coordination complex. This is achieved by the ligands donating lone pairs into empty orbitals of the metal atom, forming dipolar (coordinate) bonds. The ligands are Lewis bases; they can be either ions or neutral molecules, such as carbon monoxide, ammonia or water. The number of ligands that react with a central metal atom can be estimated using the 18-electron rule, which states that the valence shells of a transition metal will collectively accommodate 18 electrons, whereas the symmetry of the resulting complex can be predicted with crystal field theory and ligand field theory. Complexation reactions also include ligand exchange, in which one or more ligands are replaced by another, and redox processes which change the oxidation state of the central metal atom.
Acid–base reactions
In the Brønsted–Lowry acid–base theory, an acid–base reaction involves a transfer of protons (H+) from one species (the acid) to another (the base). When a proton is removed from an acid, the resulting species is termed that acid's conjugate base. When the proton is accepted by a base, the resulting species is termed that base's conjugate acid. In other words, acids act as proton donors and bases act as proton acceptors according to the following equation:
\underset{acid}{HA} + \underset{base}{B} <=> \underset{conjugate\ base}{A^-} + \underset{conjugate\ acid}{HB^+}
The reverse reaction is possible, and thus the acid/conjugate base and base/conjugate acid pairs are always in equilibrium. The equilibrium is determined by the acid and base dissociation constants (Ka and Kb) of the involved substances. A special case of the acid–base reaction is neutralization, in which an acid and a base, taken in exactly equivalent amounts, form a neutral salt.
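A hedged sketch of how a dissociation constant determines the equilibrium: for a weak monoprotic acid HA with initial concentration C, the expression Ka = x²/(C − x) can be solved for the hydrogen-ion concentration x and hence the pH. The acetic acid values in the example are illustrative and not from the source.

```python
# Hedged sketch: pH of a weak monoprotic acid HA from its dissociation
# constant Ka and initial concentration C, using Ka = x^2 / (C - x).
import math

def weak_acid_ph(Ka: float, C: float) -> float:
    # Solve x^2 + Ka*x - Ka*C = 0 for x = [H+]
    x = (-Ka + math.sqrt(Ka * Ka + 4 * Ka * C)) / 2
    return -math.log10(x)

# Example: 0.10 M acetic acid (Ka ~ 1.8e-5) gives a pH of about 2.9.
print(round(weak_acid_ph(1.8e-5, 0.10), 2))
```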
Acid-base reactions can have different definitions depending on the acid-base concept employed. Some of the most common are:
Arrhenius definition: Acids dissociate in water releasing H3O+ ions; bases dissociate in water releasing OH− ions.
Brønsted–Lowry definition: Acids are proton (H+) donors, bases are proton acceptors; this includes the Arrhenius definition.
Lewis definition: Acids are electron-pair acceptors, and bases are electron-pair donors; this includes the Brønsted-Lowry definition.
Precipitation
Precipitation is the formation of a solid in a solution or inside another solid during a chemical reaction. It usually takes place when the concentration of dissolved ions exceeds the solubility limit and forms an insoluble salt. This process can be assisted by adding a precipitating agent or by the removal of the solvent. Rapid precipitation results in an amorphous or microcrystalline residue and a slow process can yield single crystals. The latter can also be obtained by recrystallization from microcrystalline salts.
Solid-state reactions
Reactions can take place between two solids. However, because of the relatively small diffusion rates in solids, the corresponding chemical reactions are very slow in comparison to liquid and gas phase reactions. They are accelerated by increasing the reaction temperature and by finely dividing the reactants to increase the contacting surface area.
Reactions at the solid/gas interface
Reactions can take place at the solid–gas interface, on surfaces at very low pressure such as ultra-high vacuum. Via scanning tunneling microscopy, it is possible to observe reactions at the solid–gas interface in real space, if the time scale of the reaction is in the correct range. Reactions at the solid–gas interface are in some cases related to catalysis.
Photochemical reactions
In photochemical reactions, atoms and molecules absorb energy (photons) of the illumination light and convert it into an excited state. They can then release this energy by breaking chemical bonds, thereby producing radicals. Photochemical reactions include hydrogen–oxygen reactions, radical polymerization, chain reactions and rearrangement reactions.
Many important processes involve photochemistry. The premier example is photosynthesis, in which most plants use solar energy to convert carbon dioxide and water into glucose, disposing of oxygen as a side-product. Humans rely on photochemistry for the formation of vitamin D, and vision is initiated by a photochemical reaction of rhodopsin. In fireflies, an enzyme in the abdomen catalyzes a reaction that results in bioluminescence. Many significant photochemical reactions, such as ozone formation, occur in the Earth atmosphere and constitute atmospheric chemistry.
Catalysis
In catalysis, the reaction does not proceed directly, but through a reaction with a third substance known as a catalyst. Although the catalyst takes part in the reaction, forming weak bonds with reactants or intermediates, it is returned to its original state by the end of the reaction and so is not consumed. However, it can be inhibited, deactivated or destroyed by secondary processes. Catalysts can be used in a different phase (heterogeneous catalysis) or in the same phase (homogeneous catalysis) as the reactants. In heterogeneous catalysis, typical secondary processes include coking, where the catalyst becomes covered by polymeric side products. Additionally, heterogeneous catalysts can dissolve into the solution in a solid–liquid system or evaporate in a solid–gas system. Catalysts can only speed up the reaction; chemicals that slow down the reaction are called inhibitors. Substances that increase the activity of catalysts are called promoters, and substances that deactivate catalysts are called catalytic poisons. With a catalyst, a reaction that is kinetically hindered by a high activation energy can take place via an alternative pathway that circumvents this activation energy.
Heterogeneous catalysts are usually solids, powdered in order to maximize their surface area. Of particular importance in heterogeneous catalysis are the platinum group metals and other transition metals, which are used in hydrogenations, catalytic reforming and the synthesis of commodity chemicals such as nitric acid and ammonia. Acids are an example of a homogeneous catalyst: by protonating the carbonyl oxygen they increase the electrophilicity of carbonyl groups, allowing reactions with nucleophiles that would otherwise not proceed. The advantage of homogeneous catalysts is the ease of mixing them with the reactants, but they may also be difficult to separate from the products. Therefore, heterogeneous catalysts are preferred in many industrial processes.
Reactions in organic chemistry
In organic chemistry, in addition to oxidation, reduction or acid–base reactions, a number of other reactions can take place which involve covalent bonds between carbon atoms or between carbon and heteroatoms (such as oxygen, nitrogen or the halogens). Many specific reactions in organic chemistry are name reactions, designated after their discoverers.
One of the most industrially important reactions is the cracking of heavy hydrocarbons at oil refineries to create smaller, simpler molecules. This process is used to manufacture gasoline. Specific types of organic reactions may be grouped by their reaction mechanisms (particularly substitution, addition and elimination) or by the types of products they produce (for example, methylation, polymerisation and halogenation).
Substitution
In a substitution reaction, a functional group in a particular chemical compound is replaced by another group. These reactions can be distinguished by the type of substituting species into a nucleophilic, electrophilic or radical substitution.
In the first type, a nucleophile, an atom or molecule with an excess of electrons and thus a negative charge or partial charge, replaces another atom or part of the "substrate" molecule. The electron pair from the nucleophile attacks the substrate forming a new bond, while the leaving group departs with an electron pair. The nucleophile may be electrically neutral or negatively charged, whereas the substrate is typically neutral or positively charged. Examples of nucleophiles are the hydroxide ion, alkoxides, amines and halides. This type of reaction is found mainly in aliphatic hydrocarbons, and rarely in aromatic hydrocarbons. The latter have high electron density and undergo nucleophilic aromatic substitution only with very strong electron-withdrawing groups. Nucleophilic substitution can take place by two different mechanisms, SN1 and SN2. In their names, S stands for substitution, N for nucleophilic, and the number represents the kinetic order of the reaction, unimolecular or bimolecular.
The SN1 reaction proceeds in two steps. First, the leaving group is eliminated creating a carbocation. This is followed by a rapid reaction with the nucleophile.
In the SN2 mechanism, the nucleophile forms a transition state with the attacked molecule, and only then is the leaving group cleaved. These two mechanisms differ in the stereochemistry of the products. The SN1 reaction proceeds through a planar carbocation, so the nucleophile can attack from either face and the original stereochemistry at the reacting carbon is not preserved, typically giving a mixture of both configurations. In contrast, a reversal (Walden inversion) of the previously existing stereochemistry is observed in the SN2 mechanism.
Electrophilic substitution is the counterpart of nucleophilic substitution in that the attacking atom or molecule, an electrophile, has low electron density and thus a positive charge. Typical electrophiles are the carbon atom of carbonyl groups, carbocations, and sulfur or nitronium cations. This reaction takes place almost exclusively in aromatic hydrocarbons, where it is called electrophilic aromatic substitution. The electrophile attack results in the so-called σ-complex, a transition state in which the aromatic system is abolished. Then, the leaving group, usually a proton, is split off and the aromaticity is restored. An alternative to aromatic substitution is electrophilic aliphatic substitution. It is similar to nucleophilic aliphatic substitution and also has two major types, SE1 and SE2.
In the third type of substitution reaction, radical substitution, the attacking particle is a radical. This process usually takes the form of a chain reaction, for example in the reaction of alkanes with halogens. In the first step, light or heat disintegrates the halogen-containing molecules producing radicals. Then the reaction proceeds as an avalanche until two radicals meet and recombine.
X. + R-H -> X-H + R.
R. + X2 -> R-X + X.
Reactions during the chain reaction of radical substitution
Addition and elimination
The addition and its counterpart, the elimination, are reactions that change the number of substituents on a carbon atom and form or cleave multiple bonds. Double and triple bonds can be produced by eliminating a suitable leaving group. Similarly to nucleophilic substitution, there are several possible reaction mechanisms that are named after the respective reaction order. In the E1 mechanism, the leaving group is ejected first, forming a carbocation. The next step, formation of the double bond, takes place with elimination of a proton (deprotonation). The leaving order is reversed in the E1cb mechanism, that is, the proton is split off first. This mechanism requires the participation of a base. Because of the similar conditions, both the E1 and E1cb eliminations always compete with the SN1 substitution.
The E2 mechanism also requires a base, but there the attack of the base and the elimination of the leaving group proceed simultaneously and produce no ionic intermediate. In contrast to the E1 eliminations, different stereochemical configurations are possible for the reaction product in the E2 mechanism, because the attack of the base preferentially occurs in the anti-position with respect to the leaving group. Because of the similar conditions and reagents, the E2 elimination is always in competition with the SN2-substitution.
The counterpart of elimination is an addition, where double or triple bonds are converted into single bonds. Similarly to substitution reactions, there are several types of additions distinguished by the type of the attacking particle. For example, in the electrophilic addition of hydrogen bromide, an electrophile (proton) attacks the double bond forming a carbocation, which then reacts with the nucleophile (bromide). The carbocation can be formed on either side of the double bond depending on the groups attached to its ends, and the preferred configuration can be predicted with Markovnikov's rule. This rule states that "In the heterolytic addition of a polar molecule to an alkene or alkyne, the more electronegative (nucleophilic) atom (or part) of the polar molecule becomes attached to the carbon atom bearing the smaller number of hydrogen atoms."
If the addition of a functional group is required at the less substituted carbon atom of the double bond, then acid-catalysed electrophilic addition is not suitable. In this case, one has to use the hydroboration–oxidation reaction, wherein in the first step the boron atom acts as the electrophile and adds to the less substituted carbon atom. In the second step, the nucleophilic hydroperoxide or halogen anion attacks the boron atom.
While the addition to the electron-rich alkenes and alkynes is mainly electrophilic, the nucleophilic addition plays an important role in the carbon-heteroatom multiple bonds, and especially its most important representative, the carbonyl group. This process is often associated with elimination so that after the reaction the carbonyl group is present again. It is, therefore, called an addition-elimination reaction and may occur in carboxylic acid derivatives such as chlorides, esters or anhydrides. This reaction is often catalyzed by acids or bases, where the acids increase the electrophilicity of the carbonyl group by binding to the oxygen atom, whereas the bases enhance the nucleophilicity of the attacking nucleophile.
Nucleophilic addition of a carbanion or another nucleophile to the double bond of an alpha, beta-unsaturated carbonyl compound can proceed via the Michael reaction, which belongs to the larger class of conjugate additions. This is one of the most useful methods for the mild formation of C–C bonds.
Some additions which cannot be carried out with nucleophiles or electrophiles can be achieved with free radicals. As with free-radical substitution, the radical addition proceeds as a chain reaction, and such reactions are the basis of free-radical polymerization.
Other organic reaction mechanisms
In a rearrangement reaction, the carbon skeleton of a molecule is rearranged to give a structural isomer of the original molecule. These include hydride shift reactions such as the Wagner–Meerwein rearrangement, where a hydrogen, alkyl or aryl group migrates from one carbon to a neighboring carbon. Most rearrangements are associated with the breaking and formation of new carbon–carbon bonds. Other examples are sigmatropic reactions such as the Cope rearrangement.
Cyclic rearrangements include cycloadditions and, more generally, pericyclic reactions, wherein two or more double bond-containing molecules form a cyclic molecule. An important example of cycloaddition reaction is the Diels–Alder reaction (the so-called [4+2] cycloaddition) between a conjugated diene and a substituted alkene to form a substituted cyclohexene system.
Whether a certain cycloaddition would proceed depends on the electronic orbitals of the participating species, as only orbitals with the same sign of wave function will overlap and interact constructively to form new bonds. Cycloaddition is usually assisted by light or heat. These perturbations result in a different arrangement of electrons in the excited state of the involved molecules and therefore in different effects. For example, the [4+2] Diels-Alder reactions can be assisted by heat whereas the [2+2] cycloaddition is selectively induced by light. Because of the orbital character, the potential for developing stereoisomeric products upon cycloaddition is limited, as described by the Woodward–Hoffmann rules.
Biochemical reactions
Biochemical reactions are mainly controlled by complex proteins called enzymes, which are usually specialized to catalyze only a single, specific reaction. The reaction takes place in the active site, a small part of the enzyme which is usually found in a cleft or pocket lined by amino acid residues, and the rest of the enzyme is used mainly for stabilization. The catalytic action of enzymes relies on several mechanisms including the molecular shape ("induced fit"), bond strain, proximity and orientation of molecules relative to the enzyme, proton donation or withdrawal (acid/base catalysis), electrostatic interactions and many others.
The biochemical reactions that occur in living organisms are collectively known as metabolism. Among the most important of its mechanisms is the anabolism, in which different DNA and enzyme-controlled processes result in the production of large molecules such as proteins and carbohydrates from smaller units. Bioenergetics studies the sources of energy for such reactions. Important energy sources are glucose and oxygen, which can be produced by plants via photosynthesis or assimilated from food and air, respectively. All organisms use this energy to produce adenosine triphosphate (ATP), which can then be used to energize other reactions. Decomposition of organic material by fungi, bacteria and other micro-organisms is also within the scope of biochemistry.
Applications
Chemical reactions are central to chemical engineering, where they are used for the synthesis of new compounds from natural raw materials such as petroleum, mineral ores, and oxygen in air. It is essential to make the reaction as efficient as possible, maximizing the yield and minimizing the number of reagents, energy inputs and waste. Catalysts are especially helpful for reducing the energy required for the reaction and increasing its reaction rate.
Some specific reactions have their niche applications. For example, the thermite reaction is used to generate light and heat in pyrotechnics and welding. Although it is less controllable than the more conventional oxy-fuel welding, arc welding and flash welding, it requires much less equipment and is still used to mend rails, especially in remote areas.
Monitoring
Mechanisms of monitoring chemical reactions depend strongly on the reaction rate. Relatively slow processes can be analyzed in situ for the concentrations and identities of the individual ingredients. Important tools of real-time analysis are the measurement of pH and the analysis of optical absorption (color) and emission spectra. A less accessible but rather efficient method is the introduction of a radioactive isotope into the reaction and monitoring how it changes over time and where it moves to; this method is often used to analyze the redistribution of substances in the human body. Faster reactions are usually studied with ultrafast laser spectroscopy, where the use of femtosecond lasers allows short-lived transition states to be monitored on time scales down to a few femtoseconds.
| Physical sciences | Chemistry | null |
6285 | https://en.wikipedia.org/wiki/Cube | Cube | In geometry, a cube or regular hexahedron is a three-dimensional solid object bounded by six congruent square faces, a type of polyhedron. It has twelve congruent edges and eight vertices. It is a type of parallelepiped, with pairs of parallel opposite faces, and more specifically a rhombohedron, with congruent edges, and a rectangular cuboid, with right angles between pairs of intersecting faces and pairs of intersecting edges. It is an example of many classes of polyhedra: Platonic solid, regular polyhedron, parallelohedron, zonohedron, and plesiohedron. The dual polyhedron of a cube is the regular octahedron.
The cube is the three-dimensional hypercube, a family of polytopes also including the two-dimensional square and four-dimensional tesseract. A cube with unit side length is the canonical unit of volume in three-dimensional space, relative to which other solid objects are measured.
The cube can be represented in many ways, one of which is the graph known as the cubical graph. It can be constructed by using the Cartesian product of graphs. The cube was discovered in antiquity. It was associated with the nature of earth by Plato, the founder of Platonic solid. It was used as the part of the Solar System, proposed by Johannes Kepler. It can be derived differently to create more polyhedrons, and it has applications to construct a new polyhedron by attaching others.
Properties
A cube is a special case of a rectangular cuboid in which the edges are equal in length. Like other cuboids, every face of a cube has four vertices, and three congruent edges meet at each vertex. These edges form square faces, so the dihedral angle of a cube between every two adjacent squares is the interior angle of a square, 90°. Hence, the cube has six faces, twelve edges, and eight vertices. Because of these properties, it is categorized as one of the five Platonic solids, a polyhedron in which all the regular polygons are congruent and the same number of faces meet at each vertex.
Measurement and other metric properties
Given a cube with edge length a, the face diagonal of the cube is the diagonal of a square, a√2, and the space diagonal of the cube is a line connecting two vertices that are not in the same face, a√3. Both formulas can be determined by using the Pythagorean theorem. The surface area of a cube is six times the area of a square: A = 6a².
The volume of a cuboid is the product of its length, width, and height. Because all the edges of a cube are equal in length, its volume is V = a³.
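A minimal sketch, in code, of the measurements described above for a cube of edge length a; the function name is illustrative.

```python
# Minimal sketch of the cube measurements described above, for edge length a:
# face diagonal a*sqrt(2), space diagonal a*sqrt(3), surface area 6*a^2, volume a^3.
import math

def cube_metrics(a: float) -> dict[str, float]:
    return {
        "face_diagonal": a * math.sqrt(2),
        "space_diagonal": a * math.sqrt(3),
        "surface_area": 6 * a * a,
        "volume": a ** 3,
    }

print(cube_metrics(1.0))  # unit cube: surface area 6, volume 1
```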
One special case is the unit cube, so-named for measuring a single unit of length along each edge. It follows that each face is a unit square and that the entire figure has a volume of 1 cubic unit. Prince Rupert's cube, named after Prince Rupert of the Rhine, is the largest cube that can pass through a hole cut into the unit cube, despite having sides approximately 6% longer. A polyhedron that can pass through a copy of itself of the same size or smaller is said to have the Rupert property.
A geometric problem of doubling the cube—alternatively known as the Delian problem—requires the construction of a cube with a volume twice the original by using a compass and straightedge solely. Ancient mathematicians could not solve this old problem until French mathematician Pierre Wantzel in 1837 proved it was impossible.
Relation to the spheres
With edge length a, the inscribed sphere of a cube is the sphere tangent to the faces of the cube at their centroids, with radius a/2. The midsphere of a cube is the sphere tangent to the edges of the cube, with radius a√2/2. The circumscribed sphere of a cube is the sphere passing through the vertices of the cube, with radius a√3/2.
For a cube whose circumscribed sphere has radius R, and for a given point in three-dimensional space with given distances from the cube's eight vertices, there is an identity relating these distances to R.
Symmetry
The cube has octahedral symmetry Oh. It has reflection symmetry, a symmetry obtained by cutting it into two halves with a plane. There are nine planes of reflection symmetry: three cut the cube through the midpoints of its edges, parallel to a pair of faces, and six cut it diagonally. It also has rotational symmetry, a symmetry obtained by rotating it around an axis, under which its appearance is unchanged. It has octahedral rotation symmetry O: three axes pass through the centroids of the cube's opposite faces, six through the midpoints of its opposite edges, and four through its opposite vertices; these axes have, respectively, four-fold rotational symmetry (0°, 90°, 180°, and 270°), two-fold rotational symmetry (0° and 180°), and three-fold rotational symmetry (0°, 120°, and 240°).
The dual polyhedron can be obtained from each of the polyhedron's vertices tangent to a plane by the process known as polar reciprocation. One property of dual polyhedra generally is that the polyhedron and its dual share the same three-dimensional symmetry point group. In this case, the dual polyhedron of a cube is the regular octahedron, and both of these polyhedra have the same symmetry, the octahedral symmetry.
The cube is face-transitive, meaning any two of its square faces are alike and can be mapped onto each other by rotation and reflection. It is vertex-transitive, meaning all of its vertices are equivalent and can be mapped isometrically under its symmetry. It is also edge-transitive, meaning the same kinds of faces surround each of its edges in the same or reverse order, and any two adjacent faces have the same dihedral angle. Therefore, the cube is a regular polyhedron, since it possesses all of these properties.
Classifications
The cube is a special case among all cuboids. As mentioned above, the cube can be represented as a rectangular cuboid with edges equal in length, so that all of its faces are squares. The cube may also be considered as a parallelepiped in which all of its edges are equal.
The cube is a plesiohedron, a special kind of space-filling polyhedron that can be defined as the Voronoi cell of a symmetric Delone set. The plesiohedra include the parallelohedra, which can be translated without rotating to fill a space, called a honeycomb, in which each face of any of its copies is attached to a like face of another copy. There are five kinds of parallelohedra, one of which is the cuboid. Every three-dimensional parallelohedron is a zonohedron, a centrally symmetric polyhedron whose faces are centrally symmetric polygons.
Construction
An elementary way to construct a cube is using its net, an arrangement of edge-joining polygons constructing a polyhedron by connecting along the edges of those polygons. Eleven nets for the cube are shown here.
In analytic geometry, a cube may be constructed using Cartesian coordinate systems. For a cube centered at the origin, with edges parallel to the axes and with an edge length of 2, the Cartesian coordinates of the vertices are (±1, ±1, ±1). Its interior consists of all points (x1, x2, x3) with −1 < xi < 1 for all i. A cube's surface with center (x0, y0, z0) and edge length 2a is the locus of all points (x, y, z) such that max{|x − x0|, |y − y0|, |z − z0|} = a.
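A small sketch of this coordinate description, assuming the origin-centered cube of edge length 2 from the paragraph above; the function name is illustrative.

```python
# Sketch of the coordinate description above: a cube centered at the origin
# with edge length 2 has vertices (+/-1, +/-1, +/-1); a point is interior when
# every coordinate satisfies |x_i| < 1, i.e. its Chebyshev (max) norm is below 1.
from itertools import product

vertices = list(product((-1, 1), repeat=3))        # the 8 vertices
assert len(vertices) == 8

def classify(point: tuple[float, float, float]) -> str:
    m = max(abs(c) for c in point)                 # Chebyshev norm
    if m < 1:
        return "interior"
    if m == 1:
        return "on the surface"
    return "exterior"

print(classify((0.2, -0.5, 0.9)))   # interior
print(classify((1.0, 0.0, 0.3)))    # on the surface
```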
The cube is a Hanner polytope, because it can be constructed by using the Cartesian product of three line segments. Its dual polyhedron, the regular octahedron, is constructed by the direct sum of three line segments.
Representation
As a graph
According to Steinitz's theorem, a graph can be represented as the skeleton of a polyhedron; roughly speaking, a framework of a polyhedron. Such a graph has two properties: it is planar, meaning its edges can be drawn so that they connect every vertex without crossing other edges, and it is 3-connected, meaning that whenever any two of its vertices are removed, the remaining graph stays connected. The skeleton of a cube can be represented as such a graph, and it is called the cubical graph, a Platonic graph. It has the same number of vertices and edges as the cube: eight vertices and twelve edges.
The cubical graph is a special case of the hypercube graph, or n-cube, denoted Qn, because it can be constructed by using the operation known as the Cartesian product of graphs. To put it plainly, its construction involves two graphs connecting pairs of vertices with an edge to form a new graph. The cubical graph can be built up from products of the two-vertex complete graph: the product of two such graphs is, roughly speaking, a graph resembling a square. In other words, the cubical graph is constructed by connecting each vertex of two squares with an edge. Notationally, the cubical graph can be denoted as Q3. As a part of the hypercube graph family, it is also an example of a unit distance graph.
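A brief sketch of an equivalent construction: labeling the vertices with 3-bit tuples and joining those that differ in exactly one coordinate produces the same graph as the Cartesian-product construction described above; the variable names are illustrative.

```python
# Sketch of the cubical graph Q3: vertices are 3-bit tuples, and two vertices
# are adjacent when they differ in exactly one coordinate. This is equivalent
# to the Cartesian-product construction described above.
from itertools import product, combinations

vertices = list(product((0, 1), repeat=3))
edges = [(u, v) for u, v in combinations(vertices, 2)
         if sum(a != b for a, b in zip(u, v)) == 1]

print(len(vertices), len(edges))   # 8 vertices, 12 edges
```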
Like other graphs of cuboids, the cubical graph is also classified as a prism graph.
In orthogonal projection
An object illuminated by parallel rays of light casts a shadow on a plane perpendicular to those rays, called an orthogonal projection. A polyhedron is considered equiprojective if, for some position of the light, its orthogonal projection is a regular polygon. The cube is equiprojective because, if the light is parallel to one of the four lines joining a vertex to the opposite vertex, its projection is a regular hexagon. Conventionally, the cube is 6-equiprojective.
As a configuration matrix
The cube can be represented as a configuration matrix. A configuration matrix is a matrix in which the rows and columns correspond to the elements of a polyhedron: its vertices, edges, and faces. The diagonal of the matrix gives the number of each element that appears in the polyhedron, whereas the off-diagonal entries give the number of the column's elements that occur in or at the row's element. As mentioned above, the cube has eight vertices, twelve edges, and six faces; each element in the matrix's diagonal is denoted as 8, 12, and 6. The first column of the middle row indicates that there are two vertices in (i.e., at the extremes of) each edge, denoted as 2; the middle column of the first row indicates that three edges meet at each vertex, denoted as 3. The configuration matrix of the cube is thus:
8 3 3
2 12 2
4 4 6
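As a quick consistency check (assuming the standard configuration matrix values given above), the incidence numbers must agree under double counting: for example, 8 vertices with 3 edges each count the same incidences as 12 edges with 2 vertices each. The sketch below verifies this for every pair of element types.

```python
# Sketch (assuming the standard configuration matrix of the cube): rows/columns
# correspond to vertices, edges, faces. Diagonal entries are element counts;
# off-diagonal entries are incidence numbers. Double counting requires
# M[i][i] * M[i][j] == M[j][j] * M[j][i] for every pair i != j.
M = [
    [8, 3, 3],   # each vertex: meets 3 edges and 3 faces
    [2, 12, 2],  # each edge: has 2 vertices, borders 2 faces
    [4, 4, 6],   # each face: has 4 vertices and 4 edges
]

for i in range(3):
    for j in range(3):
        if i != j:
            assert M[i][i] * M[i][j] == M[j][j] * M[j][i]
print("incidence counts are consistent")
```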
Appearances
In antiquity
The Platonic solids are a set of polyhedra known since antiquity. They were named after Plato, who in his Timaeus dialogue associated these solids with the elements of nature. One of them, the cube, represented the classical element of earth because of its stability. Euclid's Elements defined the Platonic solids, including the cube, and used these solids in the problem of finding the ratio of the circumscribed sphere's diameter to the edge length.
Following Plato's association of these solids with nature, Johannes Kepler in his Harmonices Mundi sketched each of the Platonic solids; one of them is the cube, which Kepler decorated with a tree. In his Mysterium Cosmographicum, Kepler also proposed a model of the Solar System using the Platonic solids, each set inside another and separated by six spheres corresponding to the six known planets. The solids were ordered from the innermost to the outermost: regular octahedron, regular icosahedron, regular dodecahedron, regular tetrahedron, and cube.
Polyhedron, honeycombs, and polytopes
The cube can appear in the construction of a polyhedron, and some of its types can be derived differently in the following:
When faceting a cube, meaning removing part of the polygonal faces without creating new vertices of a cube, the resulting polyhedron is the stellated octahedron.
The cube is a non-composite polyhedron, meaning it is a convex polyhedron that cannot be separated into two or more regular polyhedra. The cube can be used to construct a new convex polyhedron by attaching others to it. Attaching a square pyramid to each square face of a cube produces its Kleetope, a polyhedron known as the tetrakis hexahedron. If one or two equilateral square pyramids are attached to square faces, the results are an elongated square pyramid and an elongated square bipyramid respectively, both examples of Johnson solids.
Each of the cube's vertices can be truncated, and the resulting polyhedron is an Archimedean solid, the truncated cube. When its edges are also truncated, the result is a rhombicuboctahedron. Relatedly, the rhombicuboctahedron can also be constructed by separating the cube's faces and spreading them apart, then adding other triangular and square faces between them; this is known as the "expanded cube". It can also be constructed similarly from the cube's dual, the regular octahedron.
The corner region of a cube can also be truncated by a plane (e.g., spanned by the three neighboring vertices), resulting in a trirectangular tetrahedron.
The snub cube is an Archimedean solid that can be constructed by separating the cube's square faces and filling the gaps with twisted equilateral triangles; a process known as snub.
A honeycomb is a space-filling or tessellation of three-dimensional space, meaning it is an object whose construction begins by attaching polyhedra to one another along their faces without leaving a gap. The cube can serve as the cell of such a honeycomb; examples are the cubic honeycomb, the order-5 cubic honeycomb, the order-6 cubic honeycomb, and the order-7 cubic honeycomb. The cube can be constructed from six square pyramids, which tile space by attaching their apices.
A polycube is a polyhedron in which the faces of many cubes are attached. Analogously, it can be interpreted as the three-dimensional counterpart of the polyominoes. When four cubes are stacked vertically, and four more are attached to the second-from-top cube of the stack, the resulting polycube is the Dali cross, named after Salvador Dalí. The Dali cross is a space-filling polyhedron, and it can be represented as the net of a tesseract. A tesseract is the four-dimensional analogue of the cube; it is bounded by twenty-four squares and by eight cubes, known as its cells.
| Mathematics | Geometry | null |
6286 | https://en.wikipedia.org/wiki/Commuter%20rail | Commuter rail | Commuter rail or suburban rail is a passenger rail transport service that primarily operates within a metropolitan area, connecting commuters to a central city from adjacent suburbs or commuter towns. Commuter rail systems can use locomotive-hauled trains or multiple units, using electric or diesel propulsion. Distance charges or zone pricing may be used.
The term can refer to systems with a wide variety of different features and service frequencies, but is often used in contrast to rapid transit or light rail.
Some services share similarities with both commuter rail and high-frequency rapid transit; examples include German S-Bahn in some cities, the Réseau Express Régional (RER) in Paris, the S Lines in Milan, many Japanese commuter systems, the East Rail line in Hong Kong, and some Australasian suburban networks, such as Sydney Trains. Many commuter rail systems share tracks with other passenger services and freight.
In North America, commuter rail sometimes refers only to systems that primarily operate during rush hour and offer little to no service for the rest of the day, with regional rail being used to refer to systems that offer all-day service.
Characteristics
Most commuter (or suburban) trains are built to main line rail standards, differing from light rail or rapid transit (metro rail) systems by:
being larger
providing more seating and less standing room, owing to the longer distances involved
having (in most cases) a lower frequency of service
having scheduled services (i.e. trains run at specific times rather than at specific intervals)
serving lower-density suburban areas, typically connecting suburbs to the city center
sharing track or right-of-way with intercity and/or freight trains
not fully grade separated (containing at-grade crossings with crossing gates)
being able to skip certain stations as an express service due to normally being driver controlled
Train schedule
Compared to rapid transit (or metro rail), commuter/suburban rail often has lower frequency, following a schedule rather than fixed intervals, and fewer stations spaced further apart. They primarily serve lower density suburban areas (non inner-city), generally only having one or two stops in a city's central business district, and often share right-of-way with intercity or freight trains. Some services operate only during peak hours and others use fewer departures during off peak hours and weekends. Average speeds are high, often 50 km/h (30 mph) or higher. These higher speeds better serve the longer distances involved. Some services include express services which skip some stations in order to run faster and separate longer distance riders from short-distance ones.
The general range of commuter trains' travel distance varies between 15 and 200 km (10 and 125 miles), but longer distances can be covered when the trains run between two or several cities (e.g. S-Bahn in the Ruhr area of Germany). Distances between stations may vary, but are usually much longer than those of urban rail systems. In city centres the train either has a terminal station or passes through the city centre with notably fewer station stops than those of urban rail systems. Toilets are often available on-board trains and in stations.
Track
Their ability to coexist with freight or intercity services in the same right-of-way can drastically reduce system construction costs. However, frequently they are built with dedicated tracks within that right-of-way to prevent delays, especially where service densities have converged in the inner parts of the network.
Most such trains run on the local standard gauge track. Some systems may run on a narrower or broader gauge. Examples of narrow gauge systems are found in Japan, Indonesia, Malaysia, Thailand, Taiwan, Switzerland, in the Brisbane (Queensland Rail's City network) and Perth (Transperth) systems in Australia, in some systems in Sweden, and on the Genoa-Casella line in Italy. Some countries and regions, including Finland, India, Pakistan, Russia, Brazil and Sri Lanka, as well as San Francisco (BART) in the US and Melbourne and Adelaide in Australia, use broad gauge track.
Distinction between other modes of rail
Metro
Metro rail and rapid transit usually cover smaller inner-urban areas close to city centers, with shorter stop spacing, and use rolling stock with larger standing spaces, lower top speed and higher acceleration, designed for short-distance travel. They also run more frequently, to a headway rather than a published timetable, and use dedicated tracks (underground or elevated), whereas commuter rail often shares tracks, technology and the legal framework of mainline railway systems, and uses rolling stock with more seating and higher speed for comfort on longer city-suburban journeys.
However, the classification as a metro or rapid rail can be difficult as both may typically cover a metropolitan area exclusively, run on separate tracks in the centre, and often feature purpose-built rolling stock. The fact that the terminology is not standardised across countries (even across English-speaking countries) further complicates matters. This distinction is most easily made when there are two (or more) systems such as New York's subway and the LIRR and Metro-North Railroad, Paris' Métro and RER along with Transilien, Washington D.C.'s Metro along with its MARC and VRE, London's tube lines of the Underground and the Overground, Elizabeth line, Thameslink along with other commuter rail operators, Madrid's Metro and Cercanías, Barcelona's Metro and Rodalies, and Tokyo's subway and the JR lines along with various privately owned and operated commuter rail systems.
Regional rail
Regional rail usually provides rail services between towns and cities, rather than purely linking major population hubs in the way inter-city rail does. Regional rail operates outside major cities. Unlike Inter-city, it stops at most or all stations between cities. It provides a service between smaller communities along the line that are often byproducts of ribbon developments, and also connects with long-distance services at interchange stations located at junctions, terminals, or larger towns along the line. Alternative names are "local train" or "stopping train". Examples include the former BR's Regional Railways, France's TER (Transport express régional), Germany's Regionalexpress and Regionalbahn, and South Korea's Tonggeun and Mugunghwa-ho services.
Inter-city rail
In some European countries, the distinction between commuter trains and long-distance/intercity trains is subtle, due to the relatively short distances involved. For example, so-called "intercity" trains in Belgium and the Netherlands carry many commuters, while their equipment, range, and speeds are similar to those of commuter trains in some larger countries.
The United Kingdom has a privatised rail system, with different routes and services covered by different private operators. The distinction between commuter and intercity rail is not as clear as it was before privatisation (when InterCity existed as a brand of its own), but usually it is still possible to tell them apart. Some operators, for example Thameslink, focus solely on commuter services. Others, such as Avanti West Coast and LNER, run solely intercity services. Others still, such as GWR and EMR, run a mixture of commuter, regional and intercity services. Some of these operators use different branding for different types of service (for example, EMR brands its trains as "InterCity", "Connect" for London commuter services, or "Regional"), but even for those operators that do not, the type of train, amenities offered, and stopping pattern usually tell the services apart.
Russian commuter trains, on the other hand, frequently cover areas larger than Belgium itself, although these are still short distances by Russian standards. They have a different ticketing system from long-distance trains, and in major cities they often operate from a separate section of the train station.
Some consider "inter-city" service to be that which operates as an express service between two main city stations, bypassing intermediate stations. However, this term is used in Australia (Sydney for example) to describe the regional trains operating beyond the boundaries of the suburban services, even though some of these "inter-city" services stop all stations similar to German regional services. In this regard, the German service delineations and naming conventions are clearer and better used for academic purposes.
High-speed rail
Sometimes high-speed rail can serve daily commuters. The Japanese Shinkansen high-speed rail system is heavily used by commuters in the Greater Tokyo Area, who commute by Shinkansen between the capital and surrounding areas. To meet commuter demand, JR sells commuter discount passes. Before 2021, they operated 16-car bilevel E4 Series Shinkansen trains at rush hour, providing a capacity of 1,600 seats. Several lines in China, such as the Beijing–Tianjin Intercity Railway and the Shanghai–Nanjing High-Speed Railway, serve a similar role, with many more under construction or planned.
In South Korea, some sections of the high-speed rail network are also heavily used by commuters, such as the section between Gwangmyeong Station and Seoul Station on the KTX network (Gyeongbu HSR Line), or the section between Dongtan Station and Suseo station on the SRT Line.
The high-speed services linking Zurich, Bern and Basel in Switzerland have brought the central business districts (CBDs) of these three cities within one hour of each other. This has resulted in unexpectedly high demand for new commuter trips between the three cities and a corresponding increase in suburban rail passengers accessing the high-speed services at the main city-centre stations. The Regional-Express commuter service between Munich and Nuremberg in Germany runs on the Nuremberg–Ingolstadt high-speed railway.
The regional trains Stockholm–Uppsala, Stockholm–Västerås, Stockholm–Eskilstuna and Gothenburg–Trollhättan in Sweden reach high speeds and carry many daily commuters.
In Great Britain, the HS1 domestic services between London and Ashford runs at a top speed of 225 km/h, and in peak hours the trains can be full with commuters standing.
The Athens Suburban Railway in Greece consists of five lines, four of which are electrified. The Kiato–Piraeus and Aigio–Airport lines reach relatively high speeds, and the Athens–Chalcis line is expected to do so as well once the SKA–Oinoi railway section is upgraded. These lines also carry many daily commuters, with numbers expected to rise further upon full completion of the Acharnes Railway Center.
The Eskişehir–Ankara and Konya–Ankara high-speed train routes serve as high-speed commuter trains in Turkey.
Train types
Commuter/suburban trains are usually optimized for maximum passenger volume, in most cases without sacrificing too much comfort and luggage space, though they seldom have all the amenities of long-distance trains. Cars may be single- or double-level, and aim to provide seating for all. Compared to intercity trains, they have less space, fewer amenities and limited baggage areas.
Multiple unit type
Commuter rail trains are usually composed of multiple units, which are self-propelled, bidirectional, articulated passenger rail cars with driving motors on each (or every other) bogie. Depending on local circumstances and tradition they may be powered either by diesel engines located below the passenger compartment (diesel multiple units) or by electricity picked up from third rails or overhead lines (electric multiple units). Multiple units are almost invariably equipped with control cabs at both ends, which is why such units are so frequently used to provide commuter services, due to the associated short turn-around time.
Locomotive hauled services
Locomotive hauled services are used in some countries or locations. This is often a case of asset sweating, by using a single large combined fleet for intercity and regional services. Loco hauled services are usually run in push-pull formation, that is, the train can run with the locomotive at the "front" or "rear" of the train (pushing or pulling). Trains are often equipped with a control cab at the other end of the train from the locomotive, allowing the train operator to operate the train from either end. The motive power for locomotive-hauled commuter trains may be either electric or diesel–electric, although some countries, such as Germany and some of the former Soviet-bloc countries, also use diesel–hydraulic locomotives.
Seat plans
In the US and some other countries, a three-and-two seat plan is used. Middle seats on these trains are often less popular because passengers feel crowded and uncomfortable.
In Japan, South Korea and Indonesia, longitudinal (sideways, window-lining) seating is widely used in many commuter rail trains to increase capacity during rush hours. Carriages are usually not arranged to maximize seating capacity (although in some trains at least one carriage features more doors to facilitate boarding and alighting, and bench seats that can be folded up during rush hour to provide more standing room), even for commutes longer than 50 km, and commuters in the Greater Tokyo Area, Seoul metropolitan area, and Jabodetabek area may have to stand in the train for more than an hour.
Commuter rail systems around the world
Africa
Currently there are not many examples of commuter rail in Africa. Metrorail operates in the major cities of South Africa, and there are some commuter rail services in Algeria, Botswana, Kenya, Morocco, Egypt and Tunisia.
In Algeria, SNTF operates commuter rail lines between the capital Algiers and its southern and eastern suburbs. They also serve to connect Algiers' main universities to each other. The Dar es Salaam commuter rail offers intracity services in Dar es Salaam, Tanzania. In Botswana, Botswana Railways' "BR Express" runs a commuter train between Lobatse and Gaborone.
Asia
East Asia
In Japan, commuter rail systems have extensive network and frequent service and are heavily used. In many cases, Japanese commuter rail is operationally more like a typical metro system (frequent trains, an emphasis on standing passengers, short station spacings) than it is like commuter rail in other countries. Japanese commuter rail commonly interline with city center subway lines, with commuter rail trains continuing into the subway network, and then out onto different commuter rail systems on the other side of the city. Many Japanese commuter systems operate various stopping patterns to reduce the travel time to distant locations, often using station passing loops instead of dedicated express tracks. It is notable that the larger Japanese commuter rail systems are owned and operated by for-profit private railway companies, without public subsidy.
East Japan Railway Company operates a large suburban train network in Tokyo with various lines connecting the suburban areas to the city center. While the Yamanote Line, Keihin-Tōhoku Line and Chūō–Sōbu Line services are arguably more akin to rapid transit, with frequent stops, simple stopping patterns (relative to other JR East lines), no branching services and largely serving the inner suburbs, other services along the Chūō Rapid Line, Sōbu Rapid Line/Yokosuka Line, Ueno–Tokyo Line, Shōnan–Shinjuku Line, etc. are mid-distance services from suburban lines in the outer reaches of Greater Tokyo that through-operate onto these lines to form high-frequency corridors through central Tokyo.
Other commuter rail routes in Japan include:
Hanshin Namba Line and Kintetsu Namba Line have a busy east west underground section that allow trains from both Hanshin Electric Railway and Kintetsu Railway to access Namba, a major commercial center of Osaka, and service destinations east and west of Osaka.
Osaka Metro Sakaisuji Line is a north south line that allows Hankyu services from the Senri Line, Kyoto Main Line and Arashiyama Line to enter Osaka city center.
JR West Tozai Line is an underground east west corridor allowing trains from the Kobe Line, Takarazuka Line and Gakkentoshi Line to access Umeda in central Osaka.
JR West Osaka Loop Line is a mostly elevated loop line that allows for services from the Yamatoji Line, Hanwa Line and Sakurajima Line to loop around central Osaka.
JR West Kobe Line/Kyoto Line is a four track corridor allowing Biwako Line, Kosei Line, Takarazuka Line, San'yō Main Line and Akō Line services to service Kyoto, Osaka and Kobe.
A special private railway Kōbe Rapid Transit Railway owns two underground corridors (a north south and east west line) that allow for Sanyo Electric Railway, Hankyu railway, Hanshin Electric Railway and Kobe Electric Railway services to enter and cross Kobe city center.
Most of the trains on the Meitetsu network through operate into a high frequency trunk line on the Meitetsu Nagoya Main Line branching out to other lines on the other side of Nagoya.
Commuter rail systems have been inaugurated in several cities in China, such as Beijing, Shanghai, Zhengzhou, Wuhan, Changsha and the Pearl River Delta, with plans for large systems in the northeastern Zhejiang, Jingjinji, and Yangtze River Delta areas. The level of service varies considerably from line to line, with speeds ranging from conventional to near high-speed. More developed and established lines, such as the Guangshen Railway, have more frequent, metro-like service.
The two MTR lines which are owned and were formerly operated by the Kowloon-Canton Railway Corporation (then the "KCR"), namely the East Rail line and the Tuen Ma line (the latter formed in 2021 by integrating the former West Rail line and Ma On Shan line), together with MTR's own Tung Chung line, connect the new towns in the New Territories and the city centre in Kowloon at frequent intervals, and some New Territories-bound trains terminate at intermediate stations, providing more frequent services in Kowloon and the towns closer to Kowloon. These lines use rolling stock with a faster maximum speed and have longer stop spacing compared to other lines which only run in the inner urban area, but in order to maximise capacity and throughput, the rolling stock has longitudinal seating and five pairs of doors in each carriage with large standing spaces, like the urban lines, and runs just as frequently. Most sections of these lines are above ground, and some sections of the East Rail line share tracks with intercity trains to mainland China. The former KCR lines have been integrated into the MTR network since 2008, and most passengers do not need to exit and re-enter the system through separate fare gates or purchase separate tickets to transfer between these lines and the rest of the network (the exception is between the Tuen Ma line's East Tsim Sha Tsui station and the Tsuen Wan line's Tsim Sha Tsui station).
In Taiwan, the Western line in the Taipei-Taoyuan Metropolitan Area, Taichung Metropolitan Area and Tainan-Kaohsiung Metropolitan Area as well as the Neiwan-Liujia line in the Hsinchu Area are considered commuter rail.
In South Korea, the Seoul Metropolitan Subway includes a total of 22 lines, and some of its lines are suburban lines. This is especially the case for lines operated by Korail, such as the Gyeongui-Jungang Line, the Gyeongchun Line, the Suin-Bundang Line, or the Gyeonggang Line. Even some lines not operated by Korail, such as the AREX Line, the Seohae Line or the Shinbundang Line mostly function as commuter rail. Lastly, even for the "numbered lines" (1–9) of the Seoul Metropolitan Subway which mostly travel in the dense parts of Seoul, some track sections extend far outside of the city, and operate large sections at ground level, such as on the Line 1, Line 3 and Line 4. In Busan, the Donghae Line, while part of the Busan Metro system, mostly functions as a commuter rail line.
Southeast Asia
In Indonesia, the KRL Commuterline is the largest commuter rail system in the country, serving Greater Jakarta. It connects the Jakarta city center with surrounding cities and suburbs in Banten and West Java provinces, including Depok, Bogor, Tangerang, Serpong, Rangkasbitung, Bekasi and Cikarang. In July 2015, the KRL Commuterline served more than 850,000 passengers per day, almost triple the 2011 figure, but still less than 3.5% of all Jabodetabek commutes. Other commuter rail systems in Indonesia include the Metro Surabaya Commuter Line, Commuter Line Bandung, KAI Commuter Yogyakarta–Solo Line, Kedung Sepur, and the Sri Lelawangsa.
In the Philippines, the Philippine National Railways has two commuter rail systems currently operational; the PNR Metro Commuter Line in the Greater Manila Area and the PNR Bicol Commuter in the Bicol Region. A new commuter rail line in Metro Manila, the North–South Commuter Railway, is currently under construction. Its North section is set to be partially opened by 2021.
In Malaysia, there are two commuter services operated by Keretapi Tanah Melayu. They are the KTM Komuter that serves Kuala Lumpur and the surrounding Klang Valley area, and the KTM Komuter Northern Sector that serves Greater Penang, Perak, Kedah and Perlis in the northern region of Peninsular Malaysia.
In Thailand, the Greater Bangkok Commuter rail and the Airport Rail Link serve the Bangkok Metropolitan Region. The SRT Red Lines, a new commuter line in Bangkok, started construction in 2009. It opened in 2021.
Another commuter rail system in Southeast Asia is the Yangon Circular Railway in Myanmar.
South Asia
In India, commuter rail systems are present in major cities and form an important part of people's daily lives. Mumbai Suburban Railway, the oldest suburban rail system in Asia, carries more than 7.24 million commuters on a daily basis which constitutes more than half of the total daily passenger capacity of the Indian Railways itself. Kolkata Suburban Railway, one of the largest suburban railway networks in the world, consists of more than 450 stations and carries more than 3.5 million commuters per day. The Chennai Suburban Railway along with the Chennai MRTS, also covers over 300 stations and carries more than 2.5 million people daily to different areas in Chennai and its surroundings. Other commuter railways in India include the Hyderabad MMTS, Delhi Suburban Railway, Pune Suburban Railway and Lucknow-Kanpur Suburban Railway.
In 2020, the Government of India approved the Bengaluru Suburban Railway to connect Bengaluru and its suburbs. It will be the first of its kind in India, as it will have metro-like facilities and rolling stock.
In Bangladesh, there is one suburban rail called the Chittagong Circular Railway. Another suburban railway called the Dhaka Circular Railway is currently proposed.
Karachi in Pakistan has had a circular railway since 1969.
West Asia
Tehran Metro currently operates the Line 5 commuter line between Tehran and Karaj.
Turkey has commuter rail lines including Başkentray, İZBAN, Marmaray and Gaziray.
Europe
Major metropolitan areas in most European countries are usually served by extensive commuter/suburban rail systems. Well-known examples include BG Voz in Belgrade (Serbia), S-Bahn in Germany, Austria and German-speaking areas of Switzerland, Proastiakos in Greece, RER in France and Belgium, Servizio ferroviario suburbano in Italy, Cercanías and Rodalies (Catalonia) in Spain, CP Urban Services in Portugal, Esko in Prague and Ostrava (Czech Republic), HÉV in Budapest (Hungary) and DART in Dublin (Ireland).
Western Europe
London has multiple commuter rail routes:
The Elizabeth line runs through an east–west twin tunnel under central London (the Crossrail project) as its central core section.
Thameslink brings together several branches from northern and southern suburbs and satellite towns into a high-frequency central tunnel underneath London.
The London Overground, by contrast, skirts through the inner suburbs with lines mostly independent of each other, although there are several branches. The Watford DC line, partly shared with underground trains, uses third rail, but parallels a main line using overhead wires. The East London line and North London line run at metro-like frequencies in inner London, which make them nearly indistinguishable from metro systems apart from the fact that the tracks are shared with freight trains.
The Metropolitan line, despite being part of the London Underground, is a commuter rail route as it links the City of London to commuter towns outside Greater London such as Rickmansworth, Amersham and Chesham, where it runs to a timetable, being the only London Underground line with a public timetable published. It also shares tracks with Chiltern Railways main line services between London and Aylesbury.
The Merseyrail network in Liverpool consists of two commuter rail routes powered by third rail, both of which branch out at one end. At the other, the Northern line continues out of the city centre to a mainline rail interchange, while the Wirral line has a city-centre loop.
Birmingham has four suburban routes which operate out of Birmingham New Street & Birmingham Moor Street stations, one of which is operated using diesel trains.
The Tyneside Electrics system in Newcastle upon Tyne existed from 1904 to 1967 using DC third rail. British Rail did not have the budget to maintain the ageing electrification system. The Riverside Branch was closed, while the remaining lines were de-electrified. 13 years later, they were re-electrified using DC overhead wires, and now form the Tyne & Wear Metro Yellow Line.
Many of the rail services around Glasgow are branded as Strathclyde Partnership for Transport. The network includes most electrified Scottish rail routes.
The West Yorkshire Passenger Transport Executive runs eleven services which feed into Leeds, connecting the city with commuter areas and neighbouring urban centres in the West Yorkshire Built-up Area.
MetroWest is a proposed network in Bristol, northern Somerset & southern Gloucestershire. The four-tracking of the line between Bristol Temple Meads and Bristol Parkway stations will enable local rail services to be separated from long-distance trains.
The Réseau express régional d'Île-de-France (RER) is a commuter rail network in the agglomeration of Paris. In the centre, the RER runs through high-frequency underground corridors into which several suburban branches feed, similar to a rapid transit system.
Commuter rail systems in German-speaking regions are called S-Bahn. While in some major cities S-Bahn services run exclusively on separate lines, other systems use existing regional rail tracks.
In Italy fifteen cities have commuter rail systems:
Bari (Bari metropolitan railway service, 3 lines)
Bologna (Bologna metropolitan railway service, 8 lines)
Cagliari, 1 line
Catanzaro, 2 lines
Genoa (Genoa urban railway service, 3 lines)
Messina, 1 line
Milan (Milan suburban railway service, 12 lines)
Naples, 8 lines
Palermo (Palermo metropolitan railway service, 2 lines)
Perugia, 1 line
Potenza, 1 line
Reggio Calabria, 1 line
Rome (FL lines, 8 lines)
Salerno (Salerno metropolitan railway service, 1 line)
Turin (Turin metropolitan railway service, 8 lines)
Randstadspoor is a network of Sprinter train services in and around the city of Utrecht in the Netherlands. For the realisation of this network, new stations were opened. Separate tracks have been built for these trains, so they can call frequently without disturbing the frequent Intercity services running parallel to these routes. Similar systems are planned for The Hague and Rotterdam.
Northern Europe
In Sweden, electrified commuter rail systems known as Pendeltåg are present in the cities of Stockholm and Gothenburg. The Stockholm commuter rail system, which began in 1968, shares railway tracks with inter-city trains and freight trains, but for the most part runs on its own dedicated tracks. It is primarily used to transport passengers from nearby towns and other suburban areas into the city centre, not for transportation inside the city centre. The Gothenburg commuter rail system, which began in 1960, is similar to the Stockholm system, but does fully share tracks with long-distance trains.
In Norway, the Oslo commuter rail system has been more limited since 2022, but the remaining commuter lines run on tracks that are mostly little used by other trains. In 2022, several lines with hourly frequency and travel times to their endpoints of over one hour were redefined as regional trains. Before 2022, Oslo had the largest commuter rail system in the Nordic countries in terms of line lengths and number of stations. Bergen, Stavanger and Trondheim also have commuter rail systems; these have only one or two lines each and share tracks with other trains.
In Finland, the Helsinki commuter rail network runs on dedicated tracks from Helsinki Central railway station to Leppävaara and Kerava. The Ring Rail Line serves Helsinki Airport and northern suburbs of Vantaa and is exclusively used by the commuter rail network. On 15 December 2019, the Tampere region got its own commuter rail service, with trains running from Tampere to Nokia, Lempäälä and Orivesi.
Southern Europe
In Spain, Cercanías networks exist in Madrid, Sevilla, Murcia/Alicante, San Sebastián, Cádiz, Valencia, Asturias, Santander, Zaragoza, Bilbao and Málaga. All these systems include underground sections in the city centre. There is also a network of narrow-gauge commuter systems in northern Spain and Murcia.
Cercanías Madrid is one of the most important train services in the country, with more than 900,000 passengers moving through the system. It has underground stations in Madrid, such as Recoletos, Sol and Nuevos Ministerios, and in metropolitan-area cities such as Parla and Getafe.
In the autonomous community of Catalonia, and unlike the rest of Spain, the commuter service is not managed by Renfe Operadora. Since 2010, the Government of Catalonia has managed all the regular commuter services with the "transfer of Rodalies". There are two companies that manage the Catalan commuter network:
Rodalies de Catalunya: the transfer took effect at the beginning of 2010, after the Spanish government, in response to the "Catalan rail chaos" of 2007, promised to transfer the Renfe commuter service to the Generalitat, although the Generalitat does not control the entire service. After the transfer, responsibilities for the commuter trains were divided into three parts: the Generalitat (management, regulation, planning, coordination and inspection of services and activities, and the power to charge), Renfe (train operator, responsible for maintenance), and Adif (owner of the railway infrastructure). Lines R1, R2, R2 Nord, R2 Sud, R3 (to Sant Quirze de Besora; from there to Puigcerdà or La Tor de Querol it is considered a regional route), R4, R7 and R8 are run by Rodalies de Catalunya, all on Iberian gauge (1,668 mm).
Ferrocarrils de la Generalitat de Catalunya (FGC) is the railway company responsible for the Vallès, Llobregat-Anoia and Lleida-La Pobla de Segur lines. This company is mainly in charge of metro and suburban lines, although it also runs five commuter lines spread over two routes: four on the Llobregat-Anoia line (R5, R50, R6, R60), on metre gauge (1,000 mm), and a single line on the Lleida-La Pobla de Segur line (RL1), on Iberian gauge (1,668 mm). FGC is in charge of the entire service, unlike Rodalies de Catalunya, which is in charge of neither the trains nor the infrastructure.
The Government of Catalonia will assume full control of the current R12 regional line in 2024, and the line will be owned by FGC. The current service will be replaced by the new commuter lines RL3 and RL4, running from Lleida towards Cervera and Manresa respectively.
In Italy there are several commuter rail networks:
Roman FL lines cover most of the Latium regional railways.
Milan suburban railway service, operated by Trenord, has numerous services funneling into the underground Milan Passante railway.
Turin metropolitan railway service, operated by Trenitalia and GTT, with an underground railway line running through the city used by most services.
Naples Metro Line 2 is an underground corridor where commuter rail services operated by Trenitalia traverse and service the urban center.
Genoa urban railway service consists of three lines passing through the Giovi railway line and the Tyrrhenian railway lines Genoa–Ventimiglia and Genoa–Pisa.
Bologna metropolitan railway service. The system comprises 8 lines.
Bari metropolitan railway service
The suburban railway of the Swiss canton of Ticino (Tessin) reaches Italian cities such as Como and Varese, as well as Malpensa Airport.
Eastern Europe
In Poland, commuter rail systems exist in Tricity, Warsaw, Kraków (SKA) and Katowice (SKR). Similar systems are also planned in Wrocław and Szczecin. The terms used are "Szybka Kolej Miejska" (fast urban rail) and "kolej aglomeracyjna" (agglomeration rail). These systems are:
Szybka Kolej Miejska w Warszawie in the Warsaw urban area, with 4 lines and 46 stations.
Łódzka Kolej Aglomeracyjna is located in the center of Poland connecting satellite towns in and around Łódź. It also operates some trains between Łódź and Warsaw.
Szybka Kolej Miejska w Trójmieście is located in the Tricity/Trójmiasto urban area, the three cities of Gdańsk, Gdynia and Sopot.
The Proastiakos ("suburban") is Greece's suburban railway (commuter rail) service, run by TrainOSE on infrastructure owned by the Hellenic Railways Organisation (OSE). There are three Proastiakos networks, serving the country's three largest cities: Athens, Thessaloniki and Patras. In particular, the Athenian network is undergoing modifications to completely separate it from mainline traffic by re-routing the tracks via a tunnel underneath the city center. A similar project is planned for the Patras network, whereas a new line is due to be constructed for the Thessalonian network.
In Romania, the first commuter trains were introduced in December 2019. They operate between Bucharest and Fundulea or Buftea.
BG Voz is an urban rail system that serves Belgrade. It currently has only two routes, with plans for further expansion. Between the early 1990s and mid-2010s, there was another system, known as Beovoz, that was used to provide mass-transit service within the Belgrade metropolitan area, as well as to nearby towns, similarly to the RER in Paris. Beovoz had more lines and far more stops than the current system. However, it was abandoned in favor of the more reliable BG Voz, mostly due to inefficiency. While current services rely mostly on existing infrastructure, any further development will require expanded capacity (new railways and new trains). Plans for further extension of the system include another two lines, one of which should reach Belgrade Nikola Tesla Airport.
In Russia, Ukraine and some other countries of the former Soviet Union, electric multiple unit suburban passenger trains called Elektrichka are widespread. The first such system in Russia was the Oranienbaum Electric Line in St. Petersburg. In Moscow, the Beskudnikovskaya railway branch existed between the 1940s and 1980s; the trains that shuttled along it did not continue onto the main lines, so it functioned as a purely urban transport service. Today there are the Moscow Central Circle and the Moscow Central Diameters.
In Turkey, the Marmaray line stations from Sirkeci to Halkalı are located on the European side.
Americas
North America
In the United States, Canada, Costa Rica, El Salvador and Mexico, regional passenger rail services are provided by governmental or quasi-governmental agencies, with the busiest and most expansive rail networks located in the Northeastern US, California, and Eastern Canada. Most North American commuter railways use diesel locomotive propulsion, with the exception of services in New York City, Philadelphia, Chicago, Denver, and Mexico City; New York's commuter rail lines use a combination of third-rail and overhead-wire power supply, while only two of Chicago's twelve services are electrified. Many newer and proposed systems in Canada and the United States are geared to serving peak-hour commutes, as opposed to the all-day systems of Europe, East Asia, and Australia.
United States
Eight commuter rail systems in the United States carried over ten million trips each in 2018, those being in descending order:
Metropolitan Transportation Authority's Long Island Rail Road, serving New York City and Long Island
NJ Transit Rail Operations, serving New York City, New Jersey (Newark, Trenton) and Philadelphia
Metropolitan Transportation Authority's Metro-North Railroad, serving New York (Yonkers and New York City) and Southwest Connecticut (New Haven)
Metra, serving northeast Illinois (Chicago) and Kenosha, Wisconsin. The network consists of 11 services, of which only the Electric District service runs on tracks exclusively used for passenger traffic.
The South Shore Line is a commuter line that serves the South Side and northern Indiana. Although the line is operated by NICTD, an agency separate from Metra, the line runs along the Metra Electric Line north of Kensington/115th Street station.
SEPTA Regional Rail, serving southeast Pennsylvania (Philadelphia), as well as Wilmington, Delaware, and Trenton, New Jersey. The network features a tunneled corridor through the city center and through-routed services from several commuter lines. The arrangement of services through the corridor was originally proposed by Vukan Vuchic and Shinya Kikuchi in 1984 and 1985.
MBTA Commuter Rail, serving Massachusetts (Boston, Worcester, Lowell) and Providence, Rhode Island
Caltrain, serving California (San Francisco, San Jose, and the San Francisco Peninsula)
Metrolink, serving California (Los Angeles, Burbank, Anaheim, San Bernardino, and Southern California)
Other commuter rail systems in the United States (not in ridership order) are:
CTRail, serving Connecticut (Hartford, New Haven and New London)
Utah Transit Authority FrontRunner, serving Utah (Wasatch Front)
North County Transit District Coaster, serving California (San Diego County)
Maryland Area Regional Commuter, serving Maryland (Baltimore) and Washington, D.C.
Virginia Railway Express, serving suburbs of Northern Virginia and Washington, D.C.
Sounder commuter rail, serving Washington (Seattle / Tacoma)
Tri-Rail, serving Florida (Miami / Fort Lauderdale / West Palm Beach)
Trinity Railway Express, serving Texas (Dallas / Fort Worth)
Westside Express Service, serving Oregon (Beaverton / Wilsonville)
Altamont Corridor Express, serving California (San Jose / Stockton)
SunRail, serving Florida (Orlando/Poinciana)
New Mexico Rail Runner Express, serving New Mexico (Albuquerque)
Northstar Line, serving Minnesota (Big Lake and downtown Minneapolis)
Capital MetroRail, serving Texas (Austin)
A-train, serving Texas (Denton County)
SMART, serving California (Sonoma and Marin counties)
WeGo Star, serving Nashville and Lebanon, Tennessee.
Denver's RTD operates four electrified commuter rail lines (the A, B, G and N Lines), which run on segregated tracks. In its entirety, the system combines elements of tram-train and commuter rail.
Canada
Exo commuter rail in Montreal
GO Transit in Toronto
West Coast Express in Vancouver
UP Express in Toronto
Mexico
Suburban Railway of the Valley of Mexico Metropolitan Area serving Mexico City
Toluca–Mexico City commuter rail serving Toluca and Mexico City
Central America
Rail Transport in Costa Rica serving San Jose
South America
Examples include a commuter system in the Buenos Aires metropolitan area, the SuperVia in Rio de Janeiro, the Metrotrén in Santiago, Chile, and the Valparaíso Metro in Valparaíso, Chile.
Another example is Companhia Paulista de Trens Metropolitanos (CPTM) in Greater São Paulo, Brazil. CPTM has 94 stations on seven lines, numbered starting at 7 (lines 1 to 6 and line 15 belong to the São Paulo Metro). Trains operate at high frequencies on tracks used exclusively for commuter traffic. In Rio de Janeiro, SuperVia provides electrified commuter rail services.
Oceania
The five major cities in Australia have suburban railway systems in their metropolitan areas. These networks have frequent services, with frequencies varying from every 10 to every 30 minutes on most suburban lines, and up to 3–5 minutes in peak on bundled underground lines in the city centres of Sydney, Brisbane, Perth and Melbourne. The networks in each state developed from mainline railways and have never been completely operationally separate from long distance and freight traffic, unlike metro systems. The suburban networks are almost completely electrified.
The main suburban rail networks in Australia are:
The Sydney Trains suburban rail network consists of nine lines converging in the underground City Circle with frequencies as high as three minutes in this section, 5–10 minutes at most major stations all day and 15 minutes at most minor stations all day.
The Sydney rail network operated by Sydney Trains in Sydney (with connected suburban services in Newcastle and Wollongong run by its counterpart intercity operator, NSW TrainLink).
Melbourne's rail network features sixteen electrified commuter rail lines traversing the city centre in the underground City Loop providing a metro-like service in the central core. A second underground core is under construction, as the Metro Tunnel project. V/Line operates some commuter services between Melbourne and surrounding towns, as well as between Melbourne and some locations within the Melbourne metropolitan area.
Commuter rail services in Brisbane are provided under the Queensland Rail City network brand, featuring twelve electrified lines converging in the city centre. Cross River Rail is an underground cross-city tunnel under construction, intended to relieve pressure on this network.
Railways in Perth fall under the Transperth network, which is operated by the Public Transport Authority.
The Adelaide rail network operated by Adelaide Metro in Adelaide.
New Zealand has two frequent suburban rail services comparable to those in Australia: the Auckland rail network is operated by Auckland One Rail and the Wellington rail network is operated by Transdev Wellington.
Hybrid systems
Hybrid urban-suburban rail systems exhibiting characteristics of both rapid transit and commuter rail serving a metropolitan region are common in German-speaking countries, where they are known as S-Bahn. Other examples include: Lazio regional railways in Rome, the RER in France and the Elizabeth line, London Underground Metropolitan line, London Overground and Merseyrail in the UK. Comparable systems can be found in Australia such as Sydney Trains and Metro Trains Melbourne, and in Japan with many urban and suburban lines operated by JR East/West and third-party companies running at metro-style frequencies. In contrast, comparable systems of this type are generally rare in the United States and Canada, where peak hour frequencies are more common.
In Asia, the construction of higher-speed urban-suburban rail links has gained traction in various countries, such as in India, with the Delhi RRTS, in China, with the Pearl River Delta Metropolitan Region intercity railway, and in South Korea, with the Great Train eXpress system. These systems usually run on dedicated elevated or underground tracks for most of their route and have features comparable to higher-speed rail.
| Technology | Rail and cable transport | null |
6292 | https://en.wikipedia.org/wiki/Convex%20set | Convex set | In geometry, a set of points is convex if it contains every line segment between two points in the set. Equivalently, a convex set or a convex region is a set that intersects every line in a line segment, single point, or the empty set.
For example, a solid cube is a convex set, but anything that is hollow or has an indent, for example, a crescent shape, is not convex.
The boundary of a convex set in the plane is always a convex curve. The intersection of all the convex sets that contain a given subset A of Euclidean space is called the convex hull of A. It is the smallest convex set containing A.
A convex function is a real-valued function defined on an interval with the property that its epigraph (the set of points on or above the graph of the function) is a convex set. Convex minimization is a subfield of optimization that studies the problem of minimizing convex functions over convex sets. The branch of mathematics devoted to the study of properties of convex sets and convex functions is called convex analysis.
Spaces in which convex sets are defined include the Euclidean spaces, the affine spaces over the real numbers, and certain non-Euclidean geometries. The notion of a convex set in Euclidean spaces can be generalized in several ways by modifying its definition, for instance by restricting the line segments that such a set is required to contain.
Definitions
Let S be a vector space or an affine space over the real numbers, or, more generally, over some ordered field (this includes Euclidean spaces, which are affine spaces). A subset C of S is convex if, for all x and y in C, the line segment connecting x and y is included in C.
This means that the affine combination (1 − t)x + ty belongs to C for all x and y in C and t in the interval [0, 1]. This implies that convexity is invariant under affine transformations. Further, it implies that a convex set in a real or complex topological vector space is path-connected (and therefore also connected).
A set C is strictly convex if every point on the line segment connecting x and y other than the endpoints is inside the topological interior of C. A closed convex subset is strictly convex if and only if every one of its boundary points is an extreme point.
A set is absolutely convex if it is convex and balanced.
Examples
The convex subsets of (the set of real numbers) are the intervals and the points of . Some examples of convex subsets of the Euclidean plane are solid regular polygons, solid triangles, and intersections of solid triangles. Some examples of convex subsets of a Euclidean 3-dimensional space are the Archimedean solids and the Platonic solids. The Kepler-Poinsot polyhedra are examples of non-convex sets.
Non-convex set
A set that is not convex is called a non-convex set. A polygon that is not a convex polygon is sometimes called a concave polygon, and some sources more generally use the term concave set to mean a non-convex set, but most authorities prohibit this usage.
The complement of a convex set, such as the epigraph of a concave function, is sometimes called a reverse convex set, especially in the context of mathematical optimization.
Properties
Given r points u_1, ..., u_r in a convex set S, and r nonnegative numbers λ_1, ..., λ_r such that λ_1 + ... + λ_r = 1, the affine combination λ_1 u_1 + ... + λ_r u_r belongs to S. As the definition of a convex set is the case r = 2, this property characterizes convex sets.
Such an affine combination is called a convex combination of u_1, ..., u_r.
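The convex-combination property can be illustrated numerically: random nonnegative weights summing to one are applied to the vertices of a triangle, and every resulting point lands inside the triangle. This is only a sketch; it assumes NumPy and SciPy are available, and the vertex coordinates and sample count are arbitrary choices.

```python
# Convex combinations of a triangle's vertices always land inside the triangle.
import numpy as np
from scipy.spatial import Delaunay

vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
triangulation = Delaunay(vertices)               # one simplex: the triangle itself

rng = np.random.default_rng(0)
weights = rng.random((1000, 3))
weights /= weights.sum(axis=1, keepdims=True)    # nonnegative weights summing to 1

samples = weights @ vertices                     # convex combinations of the vertices
print(bool(np.all(triangulation.find_simplex(samples) >= 0)))   # True: all inside
```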
Intersections and unions
The collection of convex subsets of a vector space, an affine space, or a Euclidean space has the following properties:
The empty set and the whole space are convex.
The intersection of any collection of convex sets is convex.
The union of a sequence of convex sets is convex, if they form a non-decreasing chain for inclusion. For this property, the restriction to chains is important, as the union of two convex sets need not be convex.
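The intersection property listed above admits a one-line verification; the family name C_i and the points x, y below are assumed symbols, not notation from the original text.

```latex
% If x and y lie in every convex set C_i of the family, then for every t in [0,1]
% the point (1 - t)x + t y lies in each C_i, hence in the intersection.
x, y \in \bigcap_{i} C_i
  \;\Longrightarrow\; (1 - t)x + t y \in C_i \quad \text{for every } i
  \;\Longrightarrow\; (1 - t)x + t y \in \bigcap_{i} C_i .
```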
Closed convex sets
Closed convex sets are convex sets that contain all their limit points. They can be characterised as the intersections of closed half-spaces (sets of points in space that lie on and to one side of a hyperplane).
From what has just been said, it is clear that such intersections are convex, and they will also be closed sets. To prove the converse, i.e., every closed convex set may be represented as such intersection, one needs the supporting hyperplane theorem in the form that for a given closed convex set and point outside it, there is a closed half-space that contains and not . The supporting hyperplane theorem is a special case of the Hahn–Banach theorem of functional analysis.
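Written symbolically (with C standing for the closed convex set, an assumed name), the half-space characterisation reads:

```latex
% A closed convex set is the intersection of all closed half-spaces that contain it.
C \;=\; \bigcap \left\{ H \;:\; H \text{ is a closed half-space and } C \subseteq H \right\}.
```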
Convex sets and rectangles
Let K be a convex body in the plane (a convex set whose interior is non-empty). We can inscribe a rectangle r in K such that a homothetic copy R of r is circumscribed about K. The positive homothety ratio is at most 2 and:
$$\tfrac{1}{2} \cdot \operatorname{Area}(R) \le \operatorname{Area}(K) \le 2 \cdot \operatorname{Area}(r).$$
Blaschke-Santaló diagrams
The set of all planar convex bodies can be parameterized in terms of the convex body diameter D, its inradius r (the biggest circle contained in the convex body) and its circumradius R (the smallest circle containing the convex body). In fact, this set can be described by the set of inequalities given by
and can be visualized as the image of the function g that maps a convex body to the point given by (r/R, D/2R). The image of this function is known as an (r, D, R) Blaschke-Santaló diagram.
Alternatively, the set can also be parametrized by its width (the smallest distance between any two different parallel support hyperplanes), perimeter and area.
Other properties
Let X be a topological vector space and be convex.
and are both convex (i.e. the closure and interior of convex sets are convex).
If and then (where ).
If then:
, and
, where is the algebraic interior of C.
Convex hulls and Minkowski sums
Convex hulls
Every subset of the vector space is contained within a smallest convex set (called the convex hull of ), namely the intersection of all convex sets containing . The convex-hull operator Conv() has the characteristic properties of a hull operator:
extensive: S ⊆ Conv(S),
non-decreasing: S ⊆ T implies that Conv(S) ⊆ Conv(T), and
idempotent: Conv(Conv(S)) = Conv(S).
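These hull-operator properties can also be checked numerically on a small planar point set. The sketch below assumes NumPy and SciPy are available, and the sample points are arbitrary.

```python
# Illustrating the convex-hull operator Conv() on a finite planar point set.
import numpy as np
from scipy.spatial import ConvexHull

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                   [1.0, 1.0], [0.5, 0.5]])       # the last point is interior

hull = ConvexHull(points)
hull_vertices = points[hull.vertices]

# Extensive: Conv(S) contains S (the interior point lies inside the square hull).
print("hull vertices:\n", hull_vertices)

# Idempotent: re-hulling the hull's vertices reproduces the same vertex set.
hull_again = ConvexHull(hull_vertices)
print("idempotent:", len(hull_again.vertices) == len(hull.vertices))
```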
The convex-hull operation is needed for the set of convex sets to form a lattice, in which the "join" operation is the convex hull of the union of two convex sets
The intersection of any collection of convex sets is itself convex, so the convex subsets of a (real or complex) vector space form a complete lattice.
Minkowski addition
In a real vector space, the Minkowski sum of two (non-empty) sets, S_1 and S_2, is defined to be the set formed by the addition of vectors element-wise from the summand sets:
$$S_1 + S_2 = \{ s_1 + s_2 : s_1 \in S_1,\ s_2 \in S_2 \}.$$
More generally, the Minkowski sum of a finite family of (non-empty) sets is the set formed by element-wise addition of vectors
For Minkowski addition, the zero set {0} containing only the zero vector 0 has special importance: for every non-empty subset S of a vector space,
$$S + \{0\} = S;$$
in algebraic terminology, {0} is the identity element of Minkowski addition (on the collection of non-empty sets).
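For finite point sets the element-wise definition can be evaluated directly; the small sketch below uses two arbitrary example sets and assumes NumPy is available.

```python
# Minkowski sum of two finite subsets of the plane, formed element-wise.
import numpy as np

S1 = np.array([[0, 0], [1, 0], [0, 1]])            # vertices of a triangle
S2 = np.array([[0, 0], [2, 0], [0, 2], [2, 2]])    # vertices of a square

# S1 + S2 = { v + w : v in S1, w in S2 }
sums = (S1[:, None, :] + S2[None, :, :]).reshape(-1, 2)
print(np.unique(sums, axis=0))

# The zero set {0} is the identity element: S1 + {0} = S1.
zero = np.zeros((1, 2), dtype=int)
assert np.array_equal(np.unique(S1 + zero, axis=0), np.unique(S1, axis=0))
```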
Convex hulls of Minkowski sums
Minkowski addition behaves well with respect to the operation of taking convex hulls, as shown by the following proposition:
Let S_1 and S_2 be subsets of a real vector space; the convex hull of their Minkowski sum is the Minkowski sum of their convex hulls:
$$\operatorname{Conv}(S_1 + S_2) = \operatorname{Conv}(S_1) + \operatorname{Conv}(S_2).$$
This result holds more generally for each finite collection of non-empty sets:
In mathematical terminology, the operations of Minkowski summation and of forming convex hulls are commuting operations.
Minkowski sums of convex sets
The Minkowski sum of two compact convex sets is compact. The sum of a compact convex set and a closed convex set is closed.
The following famous theorem, proved by Dieudonné in 1966, gives a sufficient condition for the difference of two closed convex subsets to be closed. It uses the concept of a recession cone of a non-empty convex subset S, defined as:
where this set is a convex cone containing and satisfying . Note that if S is closed and convex then is closed and for all ,
Theorem (Dieudonné). Let A and B be non-empty, closed, and convex subsets of a locally convex topological vector space such that is a linear subspace. If A or B is locally compact then A − B is closed.
Generalizations and extensions for convexity
The notion of convexity in the Euclidean space may be generalized by modifying the definition in some or other aspects. The common name "generalized convexity" is used, because the resulting objects retain certain properties of convex sets.
Star-convex (star-shaped) sets
Let be a set in a real or complex vector space. is star convex (star-shaped) if there exists an in such that the line segment from to any point in is contained in . Hence a non-empty convex set is always star-convex but a star-convex set is not always convex.
Orthogonal convexity
An example of generalized convexity is orthogonal convexity.
A set in the Euclidean space is called orthogonally convex or ortho-convex, if any segment parallel to any of the coordinate axes connecting two points of lies totally within . It is easy to prove that an intersection of any collection of orthoconvex sets is orthoconvex. Some other properties of convex sets are valid as well.
Non-Euclidean geometry
The definition of a convex set and a convex hull extends naturally to geometries which are not Euclidean by defining a geodesically convex set to be one that contains the geodesics joining any two points in the set.
Order topology
Convexity can be extended for a totally ordered set endowed with the order topology.
Let . The subspace is a convex set if for each pair of points in such that , the interval is contained in . That is, is convex if and only if for all in , implies .
A convex set is not connected in general: a counter-example is given by the subspace {1,2,3} in Z (the integers), which is both convex and not connected.
Convexity spaces
The notion of convexity may be generalised to other objects, if certain properties of convexity are selected as axioms.
Given a set , a convexity over is a collection of subsets of satisfying the following axioms:
The empty set and are in
The intersection of any collection from is in .
The union of a chain (with respect to the inclusion relation) of elements of is in .
The elements of are called convex sets and the pair is called a convexity space. For the ordinary convexity, the first two axioms hold, and the third one is trivial.
For an alternative definition of abstract convexity, more suited to discrete geometry, see the convex geometries associated with antimatroids.
Convex spaces
Convexity can be generalised as an abstract algebraic structure: a space is convex if it is possible to take convex combinations of points.
| Mathematics | Geometry | null |
6295 | https://en.wikipedia.org/wiki/Chaos%20theory | Chaos theory | Chaos theory (or chaology) is an interdisciplinary area of scientific study and branch of mathematics. It focuses on underlying patterns and deterministic laws of dynamical systems that are highly sensitive to initial conditions. These were once thought to have completely random states of disorder and irregularities. Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnection, constant feedback loops, repetition, self-similarity, fractals and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning there is sensitive dependence on initial conditions). A metaphor for this behavior is that a butterfly flapping its wings in Brazil can cause a tornado in Texas.
Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general. This can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution and is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as:
Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through the analysis of a chaotic mathematical model or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in a variety of disciplines, including meteorology, anthropology, sociology, environmental science, computer science, engineering, economics, ecology, and pandemic crisis management. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory and self-assembly processes.
Introduction
Chaos theory concerns deterministic systems whose behavior can, in principle, be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time for which the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random.
Chaotic dynamics
In common usage, "chaos" means "a state of disorder". However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties:
it must be sensitive to initial conditions,
it must be topologically transitive,
it must have dense periodic orbits.
In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions. In the discrete-time case, this is true for all continuous maps on metric spaces. In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition.
If attention is restricted to intervals, the second property implies the other two. An alternative and a generally weaker definition of chaos uses only the first two properties in the above list.
Sensitivity to initial conditions
Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior.
Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different.
As suggested in Lorenz's book entitled The Essence of Chaos, published in 1993, "sensitive dependence can serve as an acceptable definition of chaos". In the same book, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions. A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories).
A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead. This does not mean that one cannot assert anything about events far in the future—only that some restrictions on the system are present. For example, we know that the temperature of the surface of the earth will not naturally reach or fall below on earth (during the current geologic era), but we cannot predict exactly which day will have the hottest temperature of the year.
In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of the rate of exponential divergence from the perturbed initial conditions. More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation δZ_0, the two trajectories end up diverging at a rate given by
$$|\delta \mathbf{Z}(t)| \approx e^{\lambda t}\, |\delta \mathbf{Z}_0|,$$
where t is the time and λ is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE is usually taken as an indication that the system is chaotic.
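The maximal exponent can be estimated numerically. A rough sketch for the logistic map x → 4x(1 − x), whose maximal Lyapunov exponent is known to equal ln 2; the initial value and iteration count are arbitrary choices.

```python
# Estimate the maximal Lyapunov exponent of the logistic map x -> 4x(1-x)
# as the long-run average of ln|f'(x_n)|, where f'(x) = 4 - 8x.
import math

x = 0.123456
total, n_steps = 0.0, 100_000
for _ in range(n_steps):
    total += math.log(abs(4.0 - 8.0 * x))
    x = 4.0 * x * (1.0 - x)

print("estimated MLE:", total / n_steps)   # close to ln 2, about 0.693
```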
In addition to the above property, other properties related to sensitivity of initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system.
Non-periodicity
A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact, will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior.
Topological mixing
Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system.
Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity.
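The doubling example can be reproduced in a few lines; the two starting values below are arbitrary and differ by 10^-9.

```python
# Repeated doubling: exponentially fast separation of nearby points,
# yet no chaos -- both orbits simply grow monotonically toward infinity.
x, y = 1.0, 1.0 + 1e-9
for step in range(1, 41):
    x, y = 2.0 * x, 2.0 * y
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.3e}")
```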
Topological transitivity
A map f is said to be topologically transitive if for any pair of non-empty open sets U and V, there exists k > 0 such that f^k(U) ∩ V is non-empty. Topological transitivity is a weaker version of topological mixing. Intuitively, if a map is topologically transitive then given a point x and a region V, there exists a point y near x whose orbit passes through V. This implies that it is impossible to decompose the system into two open sets.
An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity. The Birkhoff Transitivity Theorem states that if X is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in X that have dense orbits.
Density of periodic orbits
For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by x → 4 x (1 – x) is one of the simplest systems with density of periodic orbits. For example, (5 − √5)/8 → (5 + √5)/8 → (5 − √5)/8 (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem).
Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits.
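The period-2 orbit quoted above can be verified directly; the values are those given in the text, and the two-line check below only relies on applying the map twice.

```python
# Check the (unstable) period-2 orbit of the logistic map x -> 4x(1-x).
f = lambda x: 4.0 * x * (1.0 - x)

a = (5 - 5 ** 0.5) / 8          # ~ 0.3454915
b = f(a)                        # ~ 0.9045085
print(b, f(b))                  # applying the map twice returns (approximately) to a
```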
Strange attractors
Some dynamical systems, like the one-dimensional logistic map defined by x → 4 x (1 – x), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region.
An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed both orbits shown in the figure on the right give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly.
Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them.
Coexisting attractors
In contrast to single type chaotic solutions, recent studies using Lorenz models have emphasized the importance of considering various types of solutions. For example, coexisting chaotic and non-chaotic may appear within the same model (e.g., the double pendulum system) using the same modeling configurations but different initial conditions. The findings of attractor coexistence, obtained from classical and generalized Lorenz models, suggested a revised view that "the entirety of weather possesses a dual nature of chaos and order with distinct predictability", in contrast to the conventional view of "weather is chaotic".
Minimum complexity of a chaotic system
Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional.
The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed below is generated by a system of three differential equations such as:
$$\begin{aligned}
\frac{dx}{dt} &= \sigma(y - x), \\
\frac{dy}{dt} &= x(\rho - z) - y, \\
\frac{dz}{dt} &= xy - \beta z,
\end{aligned}$$
where x, y, and z make up the system state, t is time, and σ, ρ, β are the system parameters. Five of the terms on the right hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott found a three-dimensional system with just five terms, that had only one nonlinear term, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved.
While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can still exhibit some chaotic properties. Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite dimensional. A theory of linear chaos is being developed in a branch of mathematical analysis known as functional analysis.
The above set of three ordinary differential equations has been referred to as the three-dimensional Lorenz model. Since 1963, higher-dimensional Lorenz models have been developed in numerous studies for examining the impact of an increased degree of nonlinearity, as well as its collective effect with heating and dissipations, on solution stability.
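A short numerical integration of the three-dimensional Lorenz model illustrates how such solutions are typically explored in practice. This is only a sketch: it assumes SciPy is available and uses the classic parameter choice σ = 10, ρ = 28, β = 8/3 with an arbitrary initial state.

```python
# Integrate the three-dimensional Lorenz model and print the final state.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0],
                dense_output=True, rtol=1e-9, atol=1e-9)

t = np.linspace(0.0, 40.0, 4000)
x, y, z = sol.sol(t)
print("state at t = 40:", x[-1], y[-1], z[-1])
```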
Infinite dimensional maps
The straightforward generalization of coupled discrete maps is based upon a convolution integral which mediates the interaction between spatially distributed maps: the field at the next step is obtained by convolving a kernel with a function of the current field. The kernel is a propagator derived as the Green's function of a relevant physical system, and the underlying map might be a logistic-like map or a complex map; for examples of complex maps, the Julia set or the Ikeda map may serve. When wave propagation problems at a given distance and wavelength are considered, the kernel may take the form of a Green's function for the Schrödinger equation.
Jerk systems
In physics, jerk is the third derivative of position with respect to time. As such, differential equations of the form
$$J\left(\dddot{x},\ \ddot{x},\ \dot{x},\ x\right) = 0$$
are sometimes called jerk equations. It has been shown that a jerk equation, which is equivalent to a system of three first order, ordinary, non-linear differential equations, is in a certain sense the minimal setting for solutions showing chaotic behavior. This motivates mathematical interest in jerk systems. Systems involving a fourth or higher derivative are called accordingly hyperjerk systems.
A jerk system's behavior is described by a jerk equation, and for certain jerk equations, simple electronic circuits can model solutions. These circuits are known as jerk circuits.
One of the most interesting properties of jerk circuits is the possibility of chaotic behavior. In fact, certain well-known chaotic systems, such as the Lorenz attractor and the Rössler map, are conventionally described as a system of three first-order differential equations that can combine into a single (although rather complicated) jerk equation. Another example of a jerk equation with nonlinearity in the magnitude of x is:
$$\frac{d^3 x}{d t^3} + A \frac{d^2 x}{d t^2} + \frac{d x}{d t} - |x| + 1 = 0.$$
Here, A is an adjustable parameter. This equation has a chaotic solution for A=3/5 and can be implemented with the following jerk circuit; the required nonlinearity is brought about by the two diodes:
In the above circuit, all resistors are of equal value, except R_A = R/A = 5R/3, and all capacitors are of equal size. The dominant frequency is 1/(2πRC). The output of op amp 0 will correspond to the x variable, the output of 1 corresponds to the first derivative of x and the output of 2 corresponds to the second derivative.
Similar circuits only require one diode or no diodes at all.
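The jerk equation quoted above can also be integrated numerically by rewriting it as three first-order equations. The sketch below assumes SciPy is available and uses A = 3/5 as in the text; the initial conditions are arbitrary, and whether a given run lands on the chaotic solution also depends on that choice.

```python
# Integrate the jerk equation x''' + A x'' + x' - |x| + 1 = 0, i.e.
# x''' = -A x'' - x' + |x| - 1, as a first-order system (x, x', x'').
from scipy.integrate import solve_ivp

A = 0.6  # = 3/5

def jerk(t, s):
    x, v, a = s                     # position, velocity, acceleration
    return [v, a, -A * a - v + abs(x) - 1.0]

sol = solve_ivp(jerk, (0.0, 200.0), [0.0, 0.0, 0.0], max_step=0.01)
print("x range over the run:", sol.y[0].min(), "to", sol.y[0].max())
```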
| Mathematics | Other | null |
6312 | https://en.wikipedia.org/wiki/Cell%20wall | Cell wall | A cell wall is a structural layer that surrounds some cell types, found immediately outside the cell membrane. It can be tough, flexible, and sometimes rigid. Primarily, it provides the cell with structural support, shape, protection, and functions as a selective barrier. Another vital role of the cell wall is to help the cell withstand osmotic pressure and mechanical stress. While absent in many eukaryotes, including animals, cell walls are prevalent in other organisms such as fungi, algae and plants, and are commonly found in most prokaryotes, with the exception of mollicute bacteria.
The composition of cell walls varies across taxonomic groups, species, cell type, and the cell cycle. In land plants, the primary cell wall comprises polysaccharides like cellulose, hemicelluloses, and pectin. Often, other polymers such as lignin, suberin or cutin are anchored to or embedded in plant cell walls. Algae exhibit cell walls composed of glycoproteins and polysaccharides, such as carrageenan and agar, distinct from those in land plants. Bacterial cell walls contain peptidoglycan, while archaeal cell walls vary in composition, potentially consisting of glycoprotein S-layers, pseudopeptidoglycan, or polysaccharides. Fungi possess cell walls constructed from chitin, a polymer of N-acetylglucosamine. Diatoms have a unique cell wall composed of biogenic silica.
History
A plant cell wall was first observed and named (simply as a "wall") by Robert Hooke in 1665. However, "the dead excrusion product of the living protoplast" was forgotten for almost three centuries, being the subject of scientific interest mainly as a resource for industrial processing or in relation to animal or human health.
In 1804, Karl Rudolphi and J.H.F. Link proved that cells had independent cell walls. Before, it had been thought that cells shared walls and that fluid passed between them this way.
The mode of formation of the cell wall was controversial in the 19th century. Hugo von Mohl (1853, 1858) advocated the idea that the cell wall grows by apposition. Carl Nägeli (1858, 1862, 1863) believed that the growth of the wall in thickness and in area was due to a process termed intussusception. Each theory was improved in the following decades: the apposition (or lamination) theory by Eduard Strasburger (1882, 1889), and the intussusception theory by Julius Wiesner (1886).
In 1930, Ernst Münch coined the term apoplast in order to separate the "living" symplast from the "dead" plant region, the latter of which included the cell wall.
By the 1980s, some authors suggested replacing the term "cell wall", particularly as it was used for plants, with the more precise term "extracellular matrix", as used for animal cells, but others preferred the older term.
Properties
Cell walls serve similar purposes in those organisms that possess them. They may give cells rigidity and strength, offering protection against mechanical stress. The chemical composition and mechanical properties of the cell wall are linked with plant cell growth and morphogenesis. In multicellular organisms, they permit the organism to build and hold a definite shape. Cell walls also limit the entry of large molecules that may be toxic to the cell. They further permit the creation of stable osmotic environments by preventing osmotic lysis and helping to retain water. Their composition, properties, and form may change during the cell cycle and depend on growth conditions.
Rigidity of cell walls
In most cells, the cell wall is flexible, meaning that it will bend rather than holding a fixed shape, but has considerable tensile strength. The apparent rigidity of primary plant tissues is enabled by cell walls, but is not due to the walls' stiffness. Hydraulic turgor pressure creates this rigidity, along with the wall structure. The flexibility of the cell walls is seen when plants wilt, so that the stems and leaves begin to droop, or in seaweeds that bend in water currents. As John Howland explains
The apparent rigidity of the cell wall thus results from inflation of the cell contained within. This inflation is a result of the passive uptake of water.
In plants, a secondary cell wall is a thicker additional layer of cellulose which increases wall rigidity. Additional layers may be formed by lignin in xylem cell walls, or suberin in cork cell walls. These compounds are rigid and waterproof, making the secondary wall stiff. Both wood and bark cells of trees have secondary walls. Other parts of plants such as the leaf stalk may acquire similar reinforcement to resist the strain of physical forces.
Permeability
The primary cell wall of most plant cells is freely permeable to small molecules including small proteins, with size exclusion estimated to be 30-60 kDa. The pH is an important factor governing the transport of molecules through cell walls.
Evolution
Cell walls evolved independently in many groups.
The photosynthetic eukaryotes (the so-called plants and algae) are one group with cellulose cell walls, where the cell wall is closely related to the evolution of multicellularity, terrestrialization and vascularization. The CesA cellulose synthase evolved in Cyanobacteria and has been part of Archaeplastida since endosymbiosis; secondary endosymbiosis events transferred it (with the arabinogalactan proteins) further into brown algae and oomycetes. Plants later evolved various genes from CesA, including the Csl (cellulose synthase-like) family of proteins and additional Ces proteins. Combined with the various glycosyltransferases (GT), they enable more complex chemical structures to be built.
Fungi use a chitin-glucan-protein cell wall. They share the 1,3-β-glucan synthesis pathway with plants, using homologous GT48 family 1,3-Beta-glucan synthases to perform the task, suggesting that such an enzyme is very ancient within the eukaryotes. Their glycoproteins are rich in mannose. The cell wall might have evolved to deter viral infections. Proteins embedded in cell walls are variable, contained in tandem repeats subject to homologous recombination. An alternative scenario is that fungi started with a chitin-based cell wall and later acquired the GT-48 enzymes for the 1,3-β-glucans via horizontal gene transfer. The pathway leading to 1,6-β-glucan synthesis is not sufficiently known in either case.
Plant cell walls
The walls of plant cells must have sufficient tensile strength to withstand internal osmotic pressures of several times atmospheric pressure that result from the difference in solute concentration between the cell interior and external solutions. Plant cell walls vary from 0.1 to several μm in thickness.
Layers
Up to three strata or layers may be found in plant cell walls:
The primary cell wall, generally a thin, flexible and extensible layer formed while the cell is growing.
The secondary cell wall, a thick layer formed inside the primary cell wall after the cell is fully grown. It is not found in all cell types. Some cells, such as the conducting cells in xylem, possess a secondary wall containing lignin, which strengthens and waterproofs the wall.
The middle lamella, a layer rich in pectins. This outermost layer forms the interface between adjacent plant cells and glues them together.
Composition
In the primary (growing) plant cell wall, the major carbohydrates are cellulose, hemicellulose and pectin. The cellulose microfibrils are linked via hemicellulosic tethers to form the cellulose-hemicellulose network, which is embedded in the pectin matrix. The most common hemicellulose in the primary cell wall is xyloglucan. In grass cell walls, xyloglucan and pectin are reduced in abundance and partially replaced by glucuronoarabinoxylan, another type of hemicellulose. Primary cell walls characteristically extend (grow) by a mechanism called acid growth, mediated by expansins, extracellular proteins activated by acidic conditions that modify the hydrogen bonds between pectin and cellulose. This functions to increase cell wall extensibility. The outer part of the primary cell wall of the plant epidermis is usually impregnated with cutin and wax, forming a permeability barrier known as the plant cuticle.
Secondary cell walls contain a wide range of additional compounds that modify their mechanical properties and permeability. The major polymers that make up wood (largely secondary cell walls) include:
cellulose, 35-50%
xylan, 20-35%, a type of hemicellulose
lignin, 10-25%, a complex phenolic polymer that penetrates the spaces in the cell wall between cellulose, hemicellulose and pectin components, driving out water and strengthening the wall.
Additionally, structural proteins (1-5%) are found in most plant cell walls; they are classified as hydroxyproline-rich glycoproteins (HRGP), arabinogalactan proteins (AGP), glycine-rich proteins (GRPs), and proline-rich proteins (PRPs). Each class of glycoprotein is defined by a characteristic, highly repetitive protein sequence. Most are glycosylated, contain hydroxyproline (Hyp) and become cross-linked in the cell wall. These proteins are often concentrated in specialized cells and in cell corners. Cell walls of the epidermis may contain cutin. The Casparian strip in the endodermis roots and cork cells of plant bark contain suberin. Both cutin and suberin are polyesters that function as permeability barriers to the movement of water. The relative composition of carbohydrates, secondary compounds and proteins varies between plants and between the cell type and age. Plant cells walls also contain numerous enzymes, such as hydrolases, esterases, peroxidases, and transglycosylases, that cut, trim and cross-link wall polymers.
Secondary walls - especially in grasses - may also contain microscopic silica crystals, which may strengthen the wall and protect it from herbivores.
Cell walls in some plant tissues also function as storage deposits for carbohydrates that can be broken down and resorbed to supply the metabolic and growth needs of the plant. For example, the endosperm cell walls in the seeds of cereal grasses, nasturtium, and other species are rich in glucans and other polysaccharides that are readily digested by enzymes during seed germination to form simple sugars that nourish the growing embryo.
Formation
The middle lamella is laid down first, formed from the cell plate during cytokinesis, and the primary cell wall is then deposited inside the middle lamella. The actual structure of the cell wall is not clearly defined and several models exist - the covalently linked cross model, the tether model, the diffuse layer model and the stratified layer model. However, the primary cell wall can be defined as composed of cellulose microfibrils aligned at all angles. Cellulose microfibrils are produced at the plasma membrane by the cellulose synthase complex, which is proposed to be made of a hexameric rosette that contains three cellulose synthase catalytic subunits for each of the six units. Microfibrils are held together by hydrogen bonds to provide a high tensile strength. The cells are held together and share the gelatinous layer called the middle lamella, which contains magnesium and calcium pectates (salts of pectic acid). Cells interact through plasmodesmata, which are inter-connecting channels of cytoplasm that connect to the protoplasts of adjacent cells across the cell wall.
In some plants and cell types, after a maximum size or point in development has been reached, a secondary wall is constructed between the plasma membrane and primary wall. Unlike the primary wall, the cellulose microfibrils are aligned parallel in layers, the orientation changing slightly with each additional layer so that the structure becomes helicoidal. Cells with secondary cell walls can be rigid, as in the gritty sclereid cells in pear and quince fruit. Cell to cell communication is possible through pits in the secondary cell wall that allow plasmodesmata to connect cells through the secondary cell walls.
Fungal cell walls
There are several groups of organisms that have been called "fungi". Some of these groups (Oomycete and Myxogastria) have been transferred out of the Kingdom Fungi, in part because of fundamental biochemical differences in the composition of the cell wall. Most true fungi have a cell wall consisting largely of chitin and other polysaccharides. True fungi do not have cellulose in their cell walls.
In fungi, the cell wall is the outer-most layer, external to the plasma membrane. The fungal cell wall is a matrix of three main components:
chitin: polymers consisting mainly of unbranched chains of β-(1,4)-linked N-acetylglucosamine in the Ascomycota and Basidiomycota, or of the deacetylated derivative, poly-β-(1,4)-linked glucosamine (chitosan), in the Zygomycota. Both chitin and chitosan are synthesized and extruded at the plasma membrane.
glucans: glucose polymers that function to cross-link chitin or chitosan polymers. β-glucans are glucose molecules linked via β-(1,3)- or β-(1,6)- bonds and provide rigidity to the cell wall while α-glucans are defined by α-(1,3)- and/or α-(1,4) bonds and function as part of the matrix.
proteins: enzymes necessary for cell wall synthesis and lysis in addition to structural proteins are all present in the cell wall. Most of the structural proteins found in the cell wall are glycosylated and contain mannose, thus these proteins are called mannoproteins or mannans.
Other eukaryotic cell walls
Algae
Like plants, algae have cell walls. Algal cell walls contain either polysaccharides (such as cellulose, a glucan), or a variety of glycoproteins (as in the Volvocales), or both. The inclusion of additional polysaccharides in algal cell walls is used as a feature for algal taxonomy.
Mannans: They form microfibrils in the cell walls of a number of marine green algae, including those from the genera Codium, Dasycladus, and Acetabularia, as well as in the walls of some red algae, like Porphyra and Bangia.
Xylans:
Alginic acid: It is a common polysaccharide in the cell walls of brown algae.
Sulfated polysaccharides: They occur in the cell walls of most algae; those common in red algae include agarose, carrageenan, porphyran, furcelleran and funoran.
Other compounds that may accumulate in algal cell walls include sporopollenin and calcium ions.
The group of algae known as the diatoms synthesize their cell walls (also known as frustules or valves) from silicic acid. Significantly, relative to the organic cell walls produced by other groups, silica frustules require less energy to synthesize (approximately 8% of the energy needed for a comparable organic wall), potentially a major saving on the overall cell energy budget and a possible explanation for the higher growth rates observed in diatoms.
In brown algae, phlorotannins may be a constituent of the cell walls.
Water molds
The Oomycetes, also known as water molds, are saprotrophs and plant pathogens, much like fungi. Until recently they were widely believed to be fungi, but structural and molecular evidence has led to their reclassification as heterokonts, related to the autotrophic brown algae and diatoms. Unlike fungi, oomycetes typically possess cell walls of cellulose and glucans rather than chitin, although some genera (such as Achlya and Saprolegnia) do have chitin in their walls. The fraction of cellulose in the walls is no more than 4 to 20%, far less than the fraction of glucans. Oomycete cell walls also contain the amino acid hydroxyproline, which is not found in fungal cell walls.
Slime molds
The dictyostelids are another group formerly classified among the fungi. They are slime molds that feed as unicellular amoebae, but aggregate into a reproductive stalk and sporangium under certain conditions. Cells of the reproductive stalk, as well as the spores formed at the apex, possess a cellulose wall. The spore wall has three layers, the middle one composed primarily of cellulose, while the innermost is sensitive to cellulase and pronase.
Prokaryotic cell walls
Bacterial cell walls
Around the outside of the cell membrane is the bacterial cell wall. Bacterial cell walls are made of peptidoglycan (also called murein), which is made from polysaccharide chains cross-linked by unusual peptides containing D-amino acids. Bacterial cell walls are different from the cell walls of plants and fungi, which are made of cellulose and chitin, respectively. The cell wall of bacteria is also distinct from that of Archaea, which do not contain peptidoglycan. The cell wall is essential to the survival of many bacteria, although L-form bacteria that lack a cell wall can be produced in the laboratory. The antibiotic penicillin kills bacteria by preventing the cross-linking of peptidoglycan; this weakens the cell wall and causes the cell to lyse. The lysozyme enzyme can also damage bacterial cell walls.
There are broadly speaking two different types of cell wall in bacteria, called gram-positive and gram-negative. The names originate from the reaction of cells to the Gram stain, a test long-employed for the classification of bacterial species.
Gram-positive bacteria possess a thick cell wall containing many layers of peptidoglycan and teichoic acids.
Gram-negative bacteria have a relatively thin cell wall consisting of a few layers of peptidoglycan surrounded by a second lipid membrane containing lipopolysaccharides and lipoproteins. Most bacteria have the gram-negative cell wall and only the Bacillota and Actinomycetota (previously known as the low G+C and high G+C gram-positive bacteria, respectively) have the alternative gram-positive arrangement.
These differences in structure produce differences in antibiotic susceptibility. The glycopeptide antibiotics (e.g. vancomycin, teicoplanin, telavancin) act only against gram-positive pathogens such as Staphylococcus aureus, because they cannot cross the outer membrane of gram-negative cells. The beta-lactam antibiotics (e.g. penicillin, cephalosporins) block peptidoglycan cross-linking in both groups, but their effectiveness against gram-negative pathogens such as Haemophilus influenzae or Pseudomonas aeruginosa depends on their ability to penetrate the outer membrane.
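As a compact summary of the two wall architectures described above, the sketch below encodes their distinguishing features as a simple lookup table; the feature lists and example taxa are simplified for illustration and are not exhaustive.

```python
# Simplified comparison of the two bacterial cell-wall architectures described
# above. Entries are illustrative summaries, not an exhaustive description.

CELL_WALL_TYPES = {
    "gram-positive": {
        "peptidoglycan": "thick, many layers",
        "outer_membrane": False,
        "teichoic_acids": True,
        "example_taxa": ["Bacillota", "Actinomycetota"],
    },
    "gram-negative": {
        "peptidoglycan": "thin, few layers",
        "outer_membrane": True,   # carries lipopolysaccharides and lipoproteins
        "teichoic_acids": False,
        "example_taxa": ["Haemophilus influenzae", "Pseudomonas aeruginosa"],
    },
}

for name, features in CELL_WALL_TYPES.items():
    membrane = "with" if features["outer_membrane"] else "without"
    print(f"{name}: {features['peptidoglycan']} of peptidoglycan, "
          f"{membrane} an outer membrane")
```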
Archaeal cell walls
Although not truly unique, the cell walls of Archaea are unusual. Whereas peptidoglycan is a standard component of all bacterial cell walls, all archaeal cell walls lack peptidoglycan, though some methanogens have a cell wall made of a similar polymer called pseudopeptidoglycan. There are four types of cell wall currently known among the Archaea.
One type of archaeal cell wall is that composed of pseudopeptidoglycan (also called pseudomurein). This type of wall is found in some methanogens, such as Methanobacterium and Methanothermus. While the overall structure of archaeal pseudopeptidoglycan superficially resembles that of bacterial peptidoglycan, there are a number of significant chemical differences. Like the peptidoglycan found in bacterial cell walls, pseudopeptidoglycan consists of polymer chains of glycan cross-linked by short peptide connections. However, unlike peptidoglycan, the sugar N-acetylmuramic acid is replaced by N-acetyltalosaminuronic acid, and the two sugars are bonded with a β,1-3 glycosidic linkage instead of β,1-4. Additionally, the cross-linking peptides are L-amino acids rather than D-amino acids as they are in bacteria.
A second type of archaeal cell wall is found in Methanosarcina and Halococcus. This type of cell wall is composed entirely of a thick layer of polysaccharides, which may be sulfated in the case of Halococcus. Structure in this type of wall is complex and not fully investigated.
A third type of wall among the Archaea consists of glycoprotein, and occurs in the hyperthermophiles, Halobacterium, and some methanogens. In Halobacterium, the proteins in the wall have a high content of acidic amino acids, giving the wall an overall negative charge. The result is an unstable structure that is stabilized by the presence of large quantities of positive sodium ions that neutralize the charge. Consequently, Halobacterium thrives only under conditions with high salinity.
In other Archaea, such as Methanomicrobium and Desulfurococcus, the wall may be composed only of surface-layer proteins, known as an S-layer. S-layers are common in bacteria, where they serve as either the sole cell-wall component or an outer layer in conjunction with polysaccharides. Most Archaea are Gram-negative, though at least one Gram-positive member is known.
Other cell coverings
Many protists and bacteria produce other cell surface structures apart from cell walls, external (extracellular matrix) or internal. Many algae have a sheath or envelope of mucilage outside the cell made of exopolysaccharides. Diatoms build a frustule from silica extracted from the surrounding water; radiolarians, foraminiferans, testate amoebae and silicoflagellates also produce a skeleton from minerals, called test in some groups. Many green algae, such as Halimeda and the Dasycladales, and some red algae, the Corallinales, encase their cells in a secreted skeleton of calcium carbonate. In each case, the wall is rigid and essentially inorganic. It is the non-living component of cell. Some golden algae, ciliates and choanoflagellates produces a shell-like protective outer covering called lorica. Some dinoflagellates have a theca of cellulose plates, and coccolithophorids have coccoliths.
An extracellular matrix (ECM) is also present in metazoans. Its composition varies between cells, but collagens are the most abundant protein in the ECM.
| Biology and health sciences | Plant cells | null |
6329 | https://en.wikipedia.org/wiki/Chromatography | Chromatography | In chemical analysis, chromatography is a laboratory technique for the separation of a mixture into its components. The mixture is dissolved in a fluid solvent (gas or liquid) called the mobile phase, which carries it through a system (a column, a capillary tube, a plate, or a sheet) on which a material called the stationary phase is fixed. Because the different constituents of the mixture tend to have different affinities for the stationary phase and are retained for different lengths of time depending on their interactions with its surface sites, the constituents travel at different apparent velocities in the mobile fluid, causing them to separate. The separation is based on the differential partitioning between the mobile and the stationary phases. Subtle differences in a compound's partition coefficient result in differential retention on the stationary phase and thus affect the separation.
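To sketch how differential retention is usually quantified, the snippet below computes the retention factor k = (t_R − t_0)/t_0 for two analytes and their selectivity α = k_2/k_1; the retention and dead times are hypothetical values chosen purely for illustration, not data from any real separation.

```python
# Minimal sketch: how differences in retention are quantified.
# All times are hypothetical values (in minutes), chosen only for illustration.

def retention_factor(t_r: float, t_0: float) -> float:
    """Retention factor k = (t_R - t_0) / t_0, where t_0 is the dead time
    (the time an unretained compound needs to traverse the column)."""
    return (t_r - t_0) / t_0

t_0 = 1.0                  # dead time of the column, min (assumed)
t_r_a, t_r_b = 4.0, 5.2    # retention times of two analytes, min (assumed)

k_a = retention_factor(t_r_a, t_0)
k_b = retention_factor(t_r_b, t_0)
selectivity = k_b / k_a    # alpha > 1 means the stationary phase distinguishes the two

print(f"k(A) = {k_a:.2f}, k(B) = {k_b:.2f}, selectivity alpha = {selectivity:.2f}")
```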
Chromatography may be preparative or analytical. The purpose of preparative chromatography is to separate the components of a mixture for later use, and is thus a form of purification. This process is associated with higher costs due to its mode of production. Analytical chromatography is done normally with smaller amounts of material and is for establishing the presence or measuring the relative proportions of analytes in a mixture. The two types are not mutually exclusive.
Etymology and pronunciation
Chromatography is derived from the Greek χρῶμα chrōma, which means "color", and γράφειν gráphein, which means "to write". The combination of these two terms was directly inherited from the invention of the technique, which was first used to separate biological pigments.
History
The method was developed by botanist Mikhail Tsvet in 1901–1905 at the universities of Kazan and Warsaw. He developed the technique and coined the term chromatography in the first decade of the 20th century, primarily for the separation of plant pigments such as chlorophyll, carotenes, and xanthophylls. Since these components separate in bands of different colors (green, orange, and yellow, respectively), they directly inspired the name of the technique. New types of chromatography developed during the 1930s and 1940s made the technique useful for many separation processes.
Chromatography technique developed substantially as a result of the work of Archer John Porter Martin and Richard Laurence Millington Synge during the 1940s and 1950s, for which they won the 1952 Nobel Prize in Chemistry. They established the principles and basic techniques of partition chromatography, and their work encouraged the rapid development of several chromatographic methods: paper chromatography, gas chromatography, and what would become known as high-performance liquid chromatography. Since then, the technology has advanced rapidly. Researchers found that the main principles of Tsvet's chromatography could be applied in many different ways, resulting in the different varieties of chromatography described below. Advances are continually improving the technical performance of chromatography, allowing the separation of increasingly similar molecules.
Terms
Analyte – the substance to be separated during chromatography. It is also normally what is needed from the mixture.
Analytical chromatography – the use of chromatography to determine the existence and possibly also the concentration of analyte(s) in a sample.
Bonded phase – a stationary phase that is covalently bonded to the support particles or to the inside wall of the column tubing.
Chromatogram – the visual output of the chromatograph. In the case of an optimal separation, different peaks or patterns on the chromatogram correspond to different components of the separated mixture. Plotted on the x-axis is the retention time and plotted on the y-axis a signal (for example obtained by a spectrophotometer, mass spectrometer or a variety of other detectors) corresponding to the response created by the analytes exiting the system. In the case of an optimal system the signal is proportional to the concentration of the specific analyte separated.
Chromatograph – an instrument that enables a sophisticated separation, e.g. gas chromatographic or liquid chromatographic separation.
Chromatography – a physical method of separation in which the components to be separated are distributed between two phases, one stationary (the stationary phase), the other (the mobile phase) moving in a definite direction.
Eluent (sometimes spelled eluant) – the solvent or solvent mixture used in elution chromatography; it is synonymous with mobile phase.
Eluate – the mixture of solute (see Eluite) and solvent (see Eluent) exiting the column.
Effluent – the stream flowing out of a chromatographic column. In practice, it is used synonymously with eluate, but the term more precisely refers to the stream independent of separation taking place.
Eluite – a more precise term for solute or analyte. It is a sample component leaving the chromatographic column.
Eluotropic series – a list of solvents ranked according to their eluting power.
Immobilized phase – a stationary phase that is immobilized on the support particles, or on the inner wall of the column tubing.
Mobile phase – the phase that moves in a definite direction. It may be a liquid (LC and capillary electrochromatography, CEC), a gas (GC), or a supercritical fluid (supercritical-fluid chromatography, SFC). The mobile phase consists of the sample being separated/analyzed and the solvent that moves the sample through the column. In the case of HPLC the mobile phase consists of a non-polar solvent(s) such as hexane in normal phase or a polar solvent such as methanol in reverse phase chromatography and the sample being separated. The mobile phase moves through the chromatography column (the stationary phase) where the sample interacts with the stationary phase and is separated.
Preparative chromatography – the use of chromatography to purify sufficient quantities of a substance for further use, rather than analysis.
Retention time – the characteristic time it takes for a particular analyte to pass through the system (from the column inlet to the detector) under set conditions. | Physical sciences | Separation processes | null |
6335 | https://en.wikipedia.org/wiki/Cucurbitaceae | Cucurbitaceae | The Cucurbitaceae, also called cucurbits or the gourd family, are a plant family consisting of about 965 species in 101 genera. Those of most agricultural, commercial or nutritional value to humans include:
Cucurbita – squash, pumpkin, zucchini (courgette), some gourds.
Lagenaria – calabash (bottle gourd) and other, ornamental gourds.
Citrullus – watermelon (C. lanatus, C. colocynthis), plus several other species.
Cucumis – cucumber (C. sativus); various melons and vines.
Momordica – bitter melon.
Luffa – commonly called 'luffa' or 'luffa squash'; sometimes spelled loofah. Young fruits may be cooked; when fully ripened, they become fibrous and unpalatable, thus becoming the source of the loofah scrubbing sponge.
Cyclanthera – Caigua.
Gerrardanthus – the species G. macrorhizus has gained some popularity as an ornamental caudiciform plant.
Xerosicyos – the silver dollar vine (Xerosicyos danguyi) is popular amongst horticulturists and plant collectors.
The plants in this family are grown around the tropics and in temperate areas of the world, where those with edible fruits were among the earliest cultivated plants in both the Old and New Worlds. The family Cucurbitaceae ranks among the highest of plant families for number and percentage of species used as human food. The name Cucurbitaceae comes to international scientific vocabulary from Neo-Latin, from Cucurbita, the type genus, + -aceae, a standardized suffix for plant family names in modern taxonomy. The genus name comes from the Classical Latin word cucurbita, meaning "gourd".
Description
Most of the plants in this family are annual vines, but some are woody lianas, thorny shrubs, or trees (Dendrosicyos). Many species have large, yellow or white flowers. The stems are hairy and pentangular. Tendrils are present at 90° to the leaf petioles at nodes. Leaves are exstipulate, alternate, simple palmately lobed or palmately compound. The flowers are unisexual, with male and female flowers on different plants (dioecious) or on the same plant (monoecious). The female flowers have inferior ovaries. The fruit is often a kind of modified berry called a pepo.
Fossil history
One of the oldest fossil cucurbits so far is †Cucurbitaciphyllum lobatum from the Paleocene epoch, found at Shirley Canal, Montana. It was described for the first time in 1924 by the paleobotanist Frank Hall Knowlton. The fossil leaf is palmate, trilobed with rounded lobal sinuses and an entire or serrate margin. It has a leaf pattern similar to the members of the genera Kedrostis, Melothria and Zehneria.
Classification
Tribal classification
The most recent classification of Cucurbitaceae delineates 15 tribes:
Tribe Gomphogyneae Benth. & Hook.f.
Alsomitra (Blume) Spach (1 sp.)
Bayabusua (1 sp.)
Gomphogyne Griff. (2 spp.)
Gynostemma Blume (10 spp.)
Hemsleya Cogn. ex F.B.Forbes & Hemsl. (30 spp.)
Neoalsomitra Hutch. (12 spp.)
Tribe Triceratieae A.Rich.
Cyclantheropsis Harms (3 spp.)
Fevillea L. (8 spp.)
Pteropepon (Cogn.) Cogn. (5 spp.)
Sicydium Schltdl. (9 spp.)
Tribe Zanonieae Benth. & Hook.f.
Gerrardanthus Harvey in Hook.f. (3–5 spp.)
Siolmatra Baill. (1 sp.)
Xerosicyos Humbert (5 spp.)
Zanonia L. (1 sp.)
Tribe Actinostemmateae H.Schaef. & S.S.Renner
Actinostemma Griff. (3 spp.)
Bolbostemma (2 spp.)
Tribe Indofevilleeae H.Schaef. & S.S.Renner
Indofevillea Chatterjee (2 spp.)
Tribe Thladiantheae H.Schaef. & S.S.Renner
Baijiania A.M.Lu & J.Q.Li (30 spp.)
Sinobaijiania (5 spp.)
Thladiantha Bunge 1833 (5 spp.)
Tribe Siraitieae H. Schaef. & S.S. Renner
Siraitia Merr. (3–4 spp.)
Tribe Momordiceae H.Schaef. & S.S.Renner
Momordica L. (60 spp.)
Tribe Joliffieae Schrad.
Ampelosicyos Thouars (5 spp.)
Cogniauxia Baill. (2 spp.)
Telfairia Hook. (3 spp.)
Tribe Bryonieae Dumort.
Austrobryonia H.Schaef. (4 spp.)
Bryonia L. (10 spp.)
Ecballium A.Rich. (1 sp.)
Tribe Schizopeponeae C.Jeffrey
Herpetospermum Wall. ex Hook.f. (3 spp.)
Schizopepon Maxim. (6–8 spp.)
Tribe Sicyoeae Schrad.
Brandegea Cogn. (1 sp.)
Cyclanthera Schrad. (40 spp.)
Echinocystis Torr. & A.Gray (1 sp.)
Echinopepon Naudin (19 spp.)
Hanburia Seem. (7 spp.)
Hodgsonia Hook.f. & Thomson (2 spp.)
Linnaeosicyos H.Schaef. & Kocyan (1 sp.)
Luffa Mill. (5–7 spp.)
Marah Kellogg (7 spp.)
Microsechium (4 spp.)
Nothoalsomitra Hutch. (1 sp.)
Parasicyos (2 spp.)
Sechiopsis (5 spp.)
Sicyocaulis (1 sp.)
Sicyos L. (64 spp., including Frantzia Pittier and Sechium P.Browne)
Sicyosperma (1 sp.)
Trichosanthes L. (≤100 spp.)
Tribe Coniandreae Endl.
Apodanthera Arn. (16 spp.)
Bambekea Cogn. (1 sp.)
Ceratosanthes Adans. (4 spp.)
Corallocarpus Welw. ex Benth. & Hook.f. (17 spp.)
Cucurbitella Walp. (1 sp.)
Dendrosicyos Balf.f. (1 sp.)
Doyerea Grosourdy (1 sp.)
Eureiandra Hook.f. (8 spp.)
Gurania (Schltdl.) Cogn. (37 spp.)
Halosicyos Mart.Crov (1 sp.)
Helmontia Cogn. (2–4 spp.)
Ibervillea Greene (8 spp.)
Kedrostis Medik. (28 spp.)
Psiguria Neck. ex Arn. (6–12 spp.)
Seyrigia Keraudren (6 spp.)
Trochomeriopsis Cogn. (1 sp.)
Wilbrandia Silva Manso (5 spp.)
Tribe Benincaseae Ser.
Acanthosicyos Welw. ex Hook.f. (1 sp.)
Benincasa Savi (2 spp., including Praecitrullus Pangalo)
Blastania (3 spp., including Ctenolepis Hook.f.)
Borneosicyos (1–2 spp.)
Cephalopentandra Chiov. (1 sp.)
Citrullus Schrad. (4 spp.)
Coccinia Wight & Arn. (30 spp.)
Cucumis L. (65 spp.)
Dactyliandra Hook.f. (2 spp.)
Diplocyclos (Endl.) T.Post & Kuntze (4 spp.)
Indomelothria (2 spp.)
Khmeriosicyos (1 sp.)
Lagenaria Ser. (6 spp.)
Lemurosicyos Keraudren (1 sp.)
Melothria L. (12 spp., including M. scabra)
Muellerargia Cogn. (2 spp.)
Oreosyce (1 sp.)
Papuasicyos (8 spp.)
Peponium Engl. (20 spp.)
Raphidiocystis Hook.f. (5 spp.)
Ruthalicia C.Jeffrey (2 spp.)
Scopellaria W.J.de Wilde & Duyfjes (2 spp.)
Solena Lour. (3 spp.)
Trochomeria Hook.f. (8 spp.)
Zehneria Endl. (ca. 60 spp.)
Tribe Cucurbiteae Ser.
Abobra Naudin (1 sp.)
Calycophysum H.Karst. & Triana (5 spp.)
Cayaponia Silva Manso (74 spp.)
Cionosicys Griseb. (4–5 spp.)
Cucurbita L. (15 spp.)
Penelopeia Urb. (2 spp.)
Peponopsis Naudin (1 sp.)
Polyclathra Bertol. (6 spp.)
Schizocarpum Schrad. (11 spp.)
Selysia Cogn. (4 spp.)
Sicana Naudin (4 spp.)
Tecunumania Standl. & Steyerm. (1 sp.)
Systematics
Modern molecular phylogenetic analyses suggest the evolutionary relationships among these tribes.
Pests and diseases
The sweet potato whitefly is the vector of a number of cucurbit viruses that cause yellowing symptoms throughout the southern United States.
| Biology and health sciences | Cucurbitales | null |
6339 | https://en.wikipedia.org/wiki/Cell%20biology | Cell biology | Cell biology (also cellular biology or cytology) is a branch of biology that studies the structure, function, and behavior of cells. All living organisms are made of cells. A cell is the basic unit of life that is responsible for the living and functioning of organisms. Cell biology is the study of the structural and functional units of cells. Cell biology encompasses both prokaryotic and eukaryotic cells and has many subtopics which may include the study of cell metabolism, cell communication, cell cycle, biochemistry, and cell composition. The study of cells is performed using several microscopy techniques, cell culture, and cell fractionation. These have allowed for and are currently being used for discoveries and research pertaining to how cells function, ultimately giving insight into understanding larger organisms. Knowing the components of cells and how cells work is fundamental to all biological sciences while also being essential for research in biomedical fields such as cancer, and other diseases. Research in cell biology is interconnected to other fields such as genetics, molecular genetics, molecular biology, medical microbiology, immunology, and cytochemistry.
History
Cells were first seen in 17th-century Europe with the invention of the compound microscope. In 1665, Robert Hooke referred to the building blocks of all living organisms as "cells" (published in Micrographia) after looking at a piece of cork and observing a structure reminiscent of a monastic cell; however, the cells were dead and gave no indication of the actual overall components of a cell. A few years later, in 1674, Anton van Leeuwenhoek was the first to analyze live cells in his examination of algae. Many years later, in 1831, Robert Brown discovered the nucleus. All of this preceded the cell theory, which states that all living things are made up of cells and that cells are organisms' functional and structural units. This was ultimately concluded by plant scientist Matthias Schleiden and animal scientist Theodor Schwann in 1838, who viewed live cells in plant and animal tissue, respectively. Nineteen years later, Rudolf Virchow further contributed to the cell theory, adding that all cells come from the division of pre-existing cells. Viruses are not considered in cell biology – they lack the characteristics of a living cell and instead are studied in the microbiology subclass of virology.
Techniques
Cell biology research looks at different ways to culture and manipulate cells outside of a living body to further research in human anatomy and physiology, and to derive medications. The techniques by which cells are studied have evolved. Due to advancements in microscopy, techniques and technology have allowed scientists to hold a better understanding of the structure and function of cells. Many techniques commonly used to study cell biology are listed below:
Cell culture: Utilizes rapidly growing cells on media which allows for a large amount of a specific cell type and an efficient way to study cells. Cell culture is one of the major tools used in cellular and molecular biology, providing excellent model systems for studying the normal physiology and biochemistry of cells (e.g., metabolic studies, aging), the effects of drugs and toxic compounds on the cells, and mutagenesis and carcinogenesis. It is also used in drug screening and development, and large scale manufacturing of biological compounds (e.g., vaccines, therapeutic proteins).
Fluorescence microscopy: Fluorescent markers, such as GFP, are used to label a specific component of the cell. A specific wavelength of light is then used to excite the fluorescent marker, which can then be visualized.
Phase-contrast microscopy: Converts the phase shifts that light undergoes when passing through a transparent specimen into differences in brightness, allowing unstained, living cells to be observed.
Confocal microscopy: Combines fluorescence microscopy with point illumination through a pinhole that rejects out-of-focus light, so that a series of optical sections can be assembled into a 3-D image.
Transmission electron microscopy: Involves metal staining and the passing of electrons through the cells, which will be deflected upon interaction with metal. This ultimately forms an image of the components being studied.
Cytometry: Cells are passed in a fluid stream through a beam of light; the scattered light and fluorescence signals are measured, allowing the cells to be counted and sorted by size and content. Cells may also be tagged with GFP fluorescence and separated on that basis.
Cell fractionation: This process requires breaking up the cell, for example by sonication or homogenization, followed by centrifugation to separate the parts of the cell, allowing them to be studied separately.
Cell types
There are two fundamental classifications of cells: prokaryotic and eukaryotic. Prokaryotic cells are distinguished from eukaryotic cells by the absence of a cell nucleus or other membrane-bound organelle. Prokaryotic cells are much smaller than eukaryotic cells, making them the smallest form of life. Prokaryotic cells include Bacteria and Archaea, and lack an enclosed cell nucleus. Eukaryotic cells are found in plants, animals, fungi, and protists. They range from 10 to 100 μm in diameter, and their DNA is contained within a membrane-bound nucleus. Eukaryotes are organisms containing eukaryotic cells. The four eukaryotic kingdoms are Animalia, Plantae, Fungi, and Protista.
Prokaryotes (Bacteria and Archaea) both reproduce through binary fission. Bacteria, the most prominent type, have several different shapes, although most are spherical or rod-shaped. Bacteria can be classed as either gram-positive or gram-negative depending on the cell wall composition. Gram-positive bacteria have a thicker peptidoglycan layer than gram-negative bacteria. Bacterial structural features include a flagellum that helps the cell to move, ribosomes for the translation of RNA to protein, and a nucleoid that holds all the genetic material in a circular structure. There are many processes that occur in prokaryotic cells that allow them to survive. In prokaryotes, mRNA synthesis is initiated at a promoter sequence on the DNA template comprising two consensus sequences that recruit RNA polymerase. The prokaryotic polymerase consists of a core enzyme of four protein subunits and a σ protein that assists only with initiation. For instance, in a process termed conjugation, a fertility factor allows a bacterium to form a pilus through which it can transmit DNA to another bacterium lacking the F factor, passing on resistance genes that allow the recipient to survive in certain environments.
Structure and function
Structure of eukaryotic cells
Eukaryotic cells are composed of the following organelles:
Nucleus: The nucleus of the cell functions as the genome and genetic information storage for the cell, containing all the DNA organized in the form of chromosomes. It is surrounded by a nuclear envelope, which includes nuclear pores allowing for the transportation of proteins between the inside and outside of the nucleus. This is also the site for replication of DNA as well as transcription of DNA to RNA. Afterwards, the RNA is modified and transported out to the cytosol to be translated to protein.
Nucleolus: This structure is within the nucleus, usually dense and spherical. It is the site of ribosomal RNA (rRNA) synthesis, which is needed for ribosomal assembly.
Endoplasmic reticulum (ER): This functions to synthesize, store, and secrete proteins to the Golgi apparatus. Structurally, the endoplasmic reticulum is a network of membranes found throughout the cell and connected to the nucleus. The membranes are slightly different from cell to cell and a cell's function determines the size and structure of the ER.
Mitochondria: Commonly known as the powerhouse of the cell, the mitochondrion is a double-membrane-bound organelle. It functions in the production of energy, or ATP, within the cell. Specifically, it is the place where the Krebs cycle, or TCA cycle, produces NADH and FADH2. Afterwards, these products are used within the electron transport chain (ETC) and oxidative phosphorylation for the final production of ATP.
Golgi apparatus: This functions to further process, package, and secrete the proteins to their destination. The proteins contain a signal sequence that allows the Golgi apparatus to recognize and direct them to the correct place. The Golgi apparatus also produces glycoproteins and glycolipids.
Lysosome: The lysosome functions to degrade material brought in from the outside of the cell or old organelles. This contains many acid hydrolases, proteases, nucleases, and lipases, which break down the various molecules. Autophagy is the process of degradation through lysosomes, which occurs when a vesicle buds off from the ER, engulfs the material, and then attaches and fuses with the lysosome to allow the material to be degraded.
Ribosomes: Function to translate mRNA into protein and serve as the sites of protein synthesis.
Cytoskeleton: Cytoskeleton is a structure that helps to maintain the shape and general organization of the cytoplasm. It anchors organelles within the cells and makes up the structure and stability of the cell. The cytoskeleton is composed of three principal types of protein filaments: actin filaments, intermediate filaments, and microtubules, which are held together and linked to subcellular organelles and the plasma membrane by a variety of accessory proteins.
Cell membrane: The cell membrane can be described as a phospholipid bilayer that also contains other lipids and proteins. Because the inside of the bilayer is hydrophobic, molecules that are to participate in reactions within the cell must be able to cross this membrane layer to enter the cell, which they do via osmosis, diffusion down concentration gradients, and membrane channels.
Centrioles: Function to produce spindle fibers which are used to separate chromosomes during cell division.
Eukaryotic cells may also be composed of the following molecular components:
Chromatin: This makes up chromosomes and is a mixture of DNA with various proteins.
Cilia: They help to propel substances and can also be used for sensory purposes.
Cell metabolism
Cell metabolism is necessary for the production of energy for the cell and therefore its survival; it includes many pathways and also sustains the main cell organelles such as the nucleus, the mitochondria, and the cell membrane. For cellular respiration, once glucose is available, glycolysis occurs within the cytosol of the cell to produce pyruvate. Pyruvate undergoes decarboxylation by the pyruvate dehydrogenase multi-enzyme complex to form acetyl-CoA, which can readily be used in the TCA cycle to produce NADH and FADH2. These products are involved in the electron transport chain, ultimately forming a proton gradient across the inner mitochondrial membrane. This gradient then drives the production of ATP during oxidative phosphorylation. Metabolism in plant cells also includes photosynthesis, which is essentially the reverse of respiration in that it ultimately produces molecules of glucose.
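As a rough bookkeeping exercise for the pathway just described, the sketch below tallies an approximate ATP yield per glucose molecule; the conversion factors of about 2.5 ATP per NADH and 1.5 ATP per FADH2 are commonly cited textbook estimates, and actual yields vary with the cell type and the shuttle systems used.

```python
# Approximate ATP bookkeeping for the aerobic oxidation of one glucose molecule.
# The per-carrier conversion factors are rounded textbook estimates; real yields vary.

ATP_PER_NADH = 2.5    # ATP formed per NADH oxidized via the electron transport chain (approx.)
ATP_PER_FADH2 = 1.5   # ATP formed per FADH2 oxidized (approx.)

# Substrate-level ATP and reduced carriers produced per glucose at each stage:
stages = {
    "glycolysis":         {"atp": 2, "nadh": 2, "fadh2": 0},
    "pyruvate oxidation": {"atp": 0, "nadh": 2, "fadh2": 0},
    "TCA cycle":          {"atp": 2, "nadh": 6, "fadh2": 2},  # the 2 ATP are GTP-equivalents
}

total = sum(
    s["atp"] + s["nadh"] * ATP_PER_NADH + s["fadh2"] * ATP_PER_FADH2
    for s in stages.values()
)
print(f"approximate yield: {total:.0f} ATP per glucose")  # about 30-32 in practice
```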
Cell signaling
Cell signaling or cell communication is important for cell regulation and for cells to process information from the environment and respond accordingly. Signaling can occur through direct cell contact or through endocrine, paracrine, and autocrine signaling. Direct cell-cell contact is when a receptor on a cell binds a molecule that is attached to the membrane of another cell. Endocrine signaling occurs through molecules secreted into the bloodstream. Paracrine signaling uses molecules diffusing between two cells to communicate. Autocrine signaling is a cell sending a signal to itself by secreting a molecule that binds to a receptor on its own surface. Forms of communication can be through:
Ion channels: Can be of different types such as voltage or ligand gated ion channels. They allow for the outflow and inflow of molecules and ions.
G-protein coupled receptor (GPCR): Is widely recognized to contain seven transmembrane domains. The ligand binds on the extracellular domain and, once it binds, this signals a guanine nucleotide exchange factor to convert GDP to GTP and activate the G-α subunit. G-α can target other proteins such as adenylyl cyclase or phospholipase C, which ultimately produce secondary messengers such as cAMP, IP3, DAG, and calcium. These secondary messengers function to amplify signals and can target ion channels or other enzymes. One example of signal amplification is cAMP binding to and activating PKA by removing the regulatory subunits and releasing the catalytic subunit. The catalytic subunit has a nuclear localization sequence which prompts it to go into the nucleus and phosphorylate other proteins to either repress or activate gene activity.
Receptor tyrosine kinases: Bind growth factors, promoting cross-phosphorylation of tyrosines on the intracellular portion of the receptor. The phosphorylated tyrosines become a landing pad for proteins containing an SH2 domain, allowing for the activation of Ras and the involvement of the MAP kinase pathway.
Growth and development
Eukaryotic cell cycle
Cells are the foundation of all organisms and are the fundamental units of life. The growth and development of cells are essential for the maintenance of the host and survival of the organism. For this process, the cell goes through the steps of the cell cycle and development which involves cell growth, DNA replication, cell division, regeneration, and cell death.
The cell cycle is divided into four distinct phases: G1, S, G2, and M. The growth phases G1 and G2, together with the S phase, make up interphase, which accounts for approximately 95% of the cycle. The proliferation of cells is instigated by progenitors. All cells start out in an identical form and can essentially become any type of cell. Cell signaling such as induction can influence nearby cells to determine the type of cell they will become. Moreover, this allows cells of the same type to aggregate and form tissues, then organs, and ultimately systems. The G1, G2, and S phases (DNA replication, damage and repair) are considered the interphase portion of the cycle, while the M phase (mitosis) is the cell division portion. Mitosis is composed of several stages: prophase, metaphase, anaphase, and telophase, followed by cytokinesis. The ultimate result of mitosis is the formation of two identical daughter cells.
The cell cycle is regulated in cell cycle checkpoints, by a series of signaling factors and complexes such as cyclins, cyclin-dependent kinase, and p53. When the cell has completed its growth process and if it is found to be damaged or altered, it undergoes cell death, either by apoptosis or necrosis, to eliminate the threat it can cause to the organism's survival.
Cell mortality, cell lineage immortality
The ancestry of each present day cell presumably traces back, in an unbroken lineage for over 3 billion years to the origin of life. It is not actually cells that are immortal but multi-generational cell lineages. The immortality of a cell lineage depends on the maintenance of cell division potential. This potential may be lost in any particular lineage because of cell damage, terminal differentiation as occurs in nerve cells, or programmed cell death (apoptosis) during development. Maintenance of cell division potential over successive generations depends on the avoidance and the accurate repair of cellular damage, particularly DNA damage. In sexual organisms, continuity of the germline depends on the effectiveness of processes for avoiding DNA damage and repairing those DNA damages that do occur. Sexual processes in eukaryotes, as well as in prokaryotes, provide an opportunity for effective repair of DNA damages in the germ line by homologous recombination.
Cell cycle phases
The cell cycle is a four-stage process that a cell goes through as it develops and divides. It includes Gap 1 (G1), synthesis (S), Gap 2 (G2), and mitosis (M). After completing the cycle, the cell either restarts it from G1 or leaves it through G0; from G0, a cell can progress to terminal differentiation. Finally, the interphase refers to the phases of the cell cycle that occur between one mitosis and the next, and includes G1, S, and G2. Thus, the phases are as follows (a minimal state-machine sketch is given after the list):
G1 phase: the cell grows in size and its contents are replicated.
S phase: the cell replicates each of its chromosomes (46 in humans).
G2 phase: in preparation for cell division, new organelles and proteins form.
M phase: mitosis and cytokinesis occur, resulting in two identical daughter cells.
G0 phase: the two cells enter a resting stage where they do their job without actively preparing to divide.
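To make the fixed ordering of these phases concrete, here is a minimal state-machine sketch in which the cycle advances phase by phase and a checkpoint callback can arrest it at any transition; the checkpoint function is a placeholder standing in for the regulatory machinery (cyclins, Cdks, p53), not a biological model.

```python
# Minimal sketch of the cell cycle as a state machine. The checkpoint callback
# is a stand-in for the real regulatory machinery; it is not a biological model.

from typing import Callable

PHASE_ORDER = ["G1", "S", "G2", "M"]

def run_one_cycle(checkpoint_ok: Callable[[str], bool]) -> str:
    """Advance through one cell cycle, stopping at the first failed checkpoint."""
    for phase in PHASE_ORDER:
        if not checkpoint_ok(phase):
            return f"arrested at the {phase} checkpoint"
    return "division complete: two daughter cells (re-enter G1 or exit to G0)"

# Example: every checkpoint passes except a hypothetical DNA-damage arrest in G2.
print(run_one_cycle(lambda phase: phase != "G2"))   # -> arrested at the G2 checkpoint
```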
Pathology
The scientific branch that studies and diagnoses diseases on the cellular level is called cytopathology. Cytopathology is generally used on samples of free cells or tissue fragments, in contrast to the pathology branch of histopathology, which studies whole tissues. Cytopathology is commonly used to investigate diseases involving a wide range of body sites, often to aid in the diagnosis of cancer but also in the diagnosis of some infectious diseases and other inflammatory conditions. For example, a common application of cytopathology is the Pap smear, a screening test used to detect cervical cancer, and precancerous cervical lesions that may lead to cervical cancer.
Cell cycle checkpoints and DNA damage repair system
The cell cycle is composed of a number of well-ordered, consecutive stages that result in cellular division. The fact that cells do not begin the next stage until the last one is finished is a significant element of cell cycle regulation. Cell cycle checkpoints constitute the monitoring strategy that keeps the cell cycle and its divisions accurate. Cdks, their associated cyclin counterparts, protein kinases, and phosphatases regulate cell growth and division from one stage to another. The cell cycle is controlled by the temporal activation of Cdks, which is governed by cyclin partner interaction, phosphorylation by particular protein kinases, and de-phosphorylation by Cdc25 family phosphatases. In response to DNA damage, a cell's DNA repair reaction is a cascade of signaling pathways that leads to checkpoint engagement, regulation of the DNA repair machinery, cell cycle alterations, and apoptosis. Among the biochemical structures and processes that detect DNA damage are the kinases ATM and ATR, which induce the DNA damage checkpoints.
The cell cycle is a sequence of activities in which cell organelles are duplicated and subsequently separated into daughter cells with precision. There are major events that happen during a cell cycle. The processes that happen in the cell cycle include cell development, replication and segregation of chromosomes. The cell cycle checkpoints are surveillance systems that keep track of the cell cycle's integrity, accuracy, and chronology. Each checkpoint serves as an alternative cell cycle endpoint, wherein the cell's parameters are examined and only when desirable characteristics are fulfilled does the cell cycle advance through the distinct steps. The cell cycle's goal is to precisely copy each organism's DNA and afterwards equally split the cell and its components between the two new cells. Four main stages occur in eukaryotes. In G1, the cell is usually active and continues to grow rapidly, while in G2, the cell growth continues while protein molecules become ready for separation. These are not dormant times; they are when cells gain mass, integrate growth factor receptors, establish a replicated genome, and prepare for chromosome segregation. In eukaryotes, DNA replication is restricted to a separate synthesis phase, also known as the S-phase. During mitosis, which is also known as the M-phase, the segregation of the chromosomes occurs. DNA, like every other molecule, is capable of undergoing a wide range of chemical reactions. Modifications in DNA's sequence, on the other hand, have a considerably bigger impact than modifications in other cellular constituents like RNAs or proteins, because DNA acts as a permanent copy of the cell genome. When erroneous nucleotides are incorporated during DNA replication, mutations can occur. The majority of DNA damage is fixed by removing the defective bases and then re-synthesizing the excised area. On the other hand, some DNA lesions can be mended by directly reversing the damage, which may be a more effective method of coping with common types of DNA damage. Only a few forms of DNA damage are mended in this fashion, including pyrimidine dimers caused by ultraviolet (UV) light and bases changed by the insertion of methyl or ethyl groups at the purine ring's O6 position.
Mitochondrial membrane dynamics
Mitochondria are commonly referred to as the cell's "powerhouses" because of their capacity to effectively produce ATP, which is essential to maintain cellular homeostasis and metabolism. Moreover, researchers have gained a better knowledge of mitochondria's significance in cell biology because of the discovery of cell signaling pathways in which mitochondria are crucial platforms for the regulation of cell functions such as apoptosis. This physiological adaptability is strongly linked to the ongoing reconfiguration of the cell's mitochondrial network through a range of mechanisms known as mitochondrial membrane dynamics, including endomembrane fusion and fragmentation (separation) and ultrastructural membrane remodeling. As a result, mitochondrial dynamics regulate and frequently choreograph not only metabolic processes but also complicated cell signaling processes such as pluripotency, proliferation, maturation, aging, and cell death. Conversely, post-translational alterations of the mitochondrial apparatus and the development of transmembrane contact sites between mitochondria and other structures both have the potential to link signals from diverse routes that affect mitochondrial membrane dynamics substantially. Mitochondria are wrapped by two membranes: an inner mitochondrial membrane (IMM) and an outer mitochondrial membrane (OMM), each with a distinctive function and structure, which parallels their dual role as cellular powerhouses and signaling organelles. The inner mitochondrial membrane divides the mitochondrial lumen into two parts: the inner boundary membrane, which runs parallel to the OMM, and the cristae, which are deeply twisted invaginations that give room for surface area enlargement and house the mitochondrial respiration apparatus. The outer mitochondrial membrane, on the other hand, is soft and permeable. It therefore acts as a foundation for cell signaling pathways to congregate, be deciphered, and be transported into mitochondria. Furthermore, the OMM connects to other cellular organelles, such as the endoplasmic reticulum (ER), lysosomes, endosomes, and the plasma membrane. Mitochondria play a wide range of roles in cell biology, which is reflected in their morphological diversity. Ever since the beginning of mitochondrial research, it has been well documented that mitochondria can have a variety of forms, with both their general and ultra-structural morphology varying greatly among cells, during the cell cycle, and in response to metabolic or cellular cues. Mitochondria can exist as independent organelles or as part of larger systems; they can also be unequally distributed in the cytosol through regulated mitochondrial transport and placement to meet the cell's localized energy requirements. Mitochondrial dynamics refers to the adaptive and variable aspect of mitochondria, including their shape and subcellular distribution.
Autophagy
Autophagy is a self-degradative mechanism that regulates energy sources during growth and the reaction to dietary stress. Autophagy also cleans up after itself, clearing aggregated proteins, cleaning damaged structures including mitochondria and endoplasmic reticulum, and eradicating intracellular infections. Additionally, autophagy has antiviral and antibacterial roles within the cell, and it is involved at the beginning of innate and adaptive immune responses to viral and bacterial contamination. Some viruses include virulence proteins that prevent autophagy, while others utilize autophagy elements for intracellular development or cellular splitting. Macroautophagy, microautophagy, and chaperone-mediated autophagy are the three basic types of autophagy. When macroautophagy is triggered, an isolation membrane encloses a section of the cytoplasm, generating the autophagosome, a distinctive double-membraned organelle. The autophagosome then joins the lysosome to create an autolysosome, with lysosomal enzymes degrading the components. In microautophagy, the lysosome or vacuole engulfs a piece of the cytoplasm by invaginating or protruding the lysosomal membrane to enclose the cytosol or organelles. Chaperone-mediated autophagy (CMA) provides protein quality assurance by digesting oxidized and altered proteins under stressful circumstances and supplying amino acids through protein degradation. Autophagy is the primary intrinsic degradative system for peptides, fats, carbohydrates, and other cellular structures. In both physiologic and stressful situations, this cellular progression is vital for upholding the correct cellular balance. Autophagy instability leads to a variety of illness symptoms, including inflammation, biochemical disturbances, aging, and neurodegeneration, due to its involvement in controlling cell integrity. The modification of the autophagy-lysosomal networks is a typical hallmark of many neurological and muscular illnesses. As a result, autophagy has been identified as a potential strategy for the prevention and treatment of various disorders, and many of these disorders are prevented or improved by dietary polyphenols. Consequently, natural compounds with the ability to modify the autophagy mechanism are seen as a potential therapeutic option. The creation of the double membrane (the phagophore), a step known as nucleation, is the first stage in macroautophagy. The phagophore then sequesters dysregulated polypeptides or defective organelles, drawing membrane from the cell membrane, Golgi apparatus, endoplasmic reticulum, and mitochondria. The phagophore's enlargement comes to an end with the completion of the autophagosome, which then combines with lysosomal vesicles to form an autolysosome that degrades the encapsulated substances.
Notable cell biologists
Jean Baptiste Carnoy
Peter Agre
Günter Blobel
Robert Brown
Geoffrey M. Cooper
Christian de Duve
Henri Dutrochet
Robert Hooke
H. Robert Horvitz
Marc Kirschner
Anton van Leeuwenhoek
Ira Mellman
Marta Miączyńska
Peter D. Mitchell
Rudolf Virchow
Paul Nurse
George Emil Palade
Keith R. Porter
Ray Rappaport
Michael Swann
Roger Tsien
Edmund Beecher Wilson
Kenneth R. Miller
Matthias Jakob Schleiden
Theodor Schwann
Yoshinori Ohsumi
Jan Evangelista Purkyně
| Biology and health sciences | Cell biology: General | null |
6355 | https://en.wikipedia.org/wiki/Chloroplast | Chloroplast | A chloroplast () is a type of organelle known as a plastid that conducts photosynthesis mostly in plant and algal cells. Chloroplasts have a high concentration of chlorophyll pigments which capture the energy from sunlight and convert it to chemical energy and release oxygen. The chemical energy created is then used to make sugar and other organic molecules from carbon dioxide in a process called the Calvin cycle. Chloroplasts carry out a number of other functions, including fatty acid synthesis, amino acid synthesis, and the immune response in plants. The number of chloroplasts per cell varies from one, in some unicellular algae, up to 100 in plants like Arabidopsis and wheat.
Chloroplasts are highly dynamic—they circulate and are moved around within cells. Their behavior is strongly influenced by environmental factors like light color and intensity. Chloroplasts cannot be made anew by the plant cell and must be inherited by each daughter cell during cell division, a trait thought to derive from their ancestor, a photosynthetic cyanobacterium that was engulfed by an early eukaryotic cell.
Chloroplasts evolved from an ancient cyanobacterium that was engulfed by an early eukaryotic cell. Because of their endosymbiotic origins, chloroplasts, like mitochondria, contain their own DNA separate from the cell nucleus. With one exception (the amoeboid Paulinella chromatophora), all chloroplasts can be traced back to a single endosymbiotic event. Despite this, chloroplasts can be found in extremely diverse organisms that are not directly related to each other—a consequence of many secondary and even tertiary endosymbiotic events.
Discovery and etymology
The first definitive description of a chloroplast (Chlorophyllkörnen, "grain of chlorophyll") was given by Hugo von Mohl in 1837 as discrete bodies within the green plant cell. In 1883, Andreas Franz Wilhelm Schimper named these bodies as "chloroplastids" (Chloroplastiden). In 1884, Eduard Strasburger adopted the term "chloroplasts" (Chloroplasten).
The word chloroplast is derived from the Greek words chloros (χλωρός), which means green, and plastes (πλάστης), which means "the one who forms".
Endosymbiotic origin of chloroplasts
Chloroplasts are one of many types of organelles in photosynthetic eukaryotic cells. They evolved from cyanobacteria through a process called organellogenesis. Cyanobacteria are a diverse phylum of gram-negative bacteria capable of carrying out oxygenic photosynthesis. Like chloroplasts, they have thylakoids. The thylakoid membranes contain photosynthetic pigments, including chlorophyll a. This origin of chloroplasts was first suggested by the Russian biologist Konstantin Mereschkowski in 1905 after Andreas Franz Wilhelm Schimper observed in 1883 that chloroplasts closely resemble cyanobacteria. Chloroplasts are only found in plants, algae, and some species of the amoeboid Paulinella.
Mitochondria are thought to have come from a similar endosymbiosis event, where an aerobic prokaryote was engulfed.
Primary endosymbiosis
Approximately two billion years ago, a free-living cyanobacterium entered an early eukaryotic cell, either as food or as an internal parasite, but managed to escape the phagocytic vacuole it was contained in and persist inside the cell. This event is called endosymbiosis, or "cell living inside another cell with a mutual benefit for both". The external cell is commonly referred to as the host while the internal cell is called the endosymbiont. The engulfed cyanobacterium provided an advantage to the host by providing sugar from photosynthesis. Over time, the cyanobacterium was assimilated, and many of its genes were lost or transferred to the nucleus of the host. Some of the cyanobacterial proteins were then synthesized by the host cell and imported back into the chloroplast (formerly the cyanobacterium), allowing the host to control the chloroplast.
Chloroplasts which can be traced back directly to a cyanobacterial ancestor (i.e. without a subsequent endosymbiotic event) are known as primary plastids ("plastid" in this context means almost the same thing as chloroplast). Chloroplasts that can be traced back to another photosynthetic eukaryotic endosymbiont are called secondary plastids or tertiary plastids (discussed below).
Whether primary chloroplasts came from a single endosymbiotic event or multiple independent engulfments across various eukaryotic lineages was long debated. It is now generally held that, with one exception (the amoeboid Paulinella chromatophora), chloroplasts arose from a single endosymbiotic event around two billion years ago and that these chloroplasts all share a single ancestor. It has been proposed that the closest living relative of the ancestral engulfed cyanobacterium is Gloeomargarita lithophora. Separately, about 90–140 million years ago, this process happened again in the amoeboid Paulinella with a cyanobacterium in the genus Prochlorococcus. This independently evolved chloroplast is often called a chromatophore instead of a chloroplast.
Chloroplasts are believed to have arisen after mitochondria, since all eukaryotes contain mitochondria, but not all have chloroplasts. This is called serial endosymbiosis: an early eukaryote engulfed the mitochondrion ancestor, and descendants of it then engulfed the chloroplast ancestor, creating a cell with both chloroplasts and mitochondria.
Secondary and tertiary endosymbiosis
Many other organisms obtained chloroplasts from the primary chloroplast lineages through secondary endosymbiosis—engulfing a red or green alga with a primary chloroplast. These chloroplasts are known as secondary plastids.
As a result of the secondary endosymbiotic event, secondary chloroplasts have additional membranes outside of the original two found in primary chloroplasts. In secondary plastids, typically only the engulfed alga's chloroplast, and sometimes its cell membrane and nucleus, remain, forming a chloroplast with three or four membranes—the two cyanobacterial membranes, sometimes the eaten alga's cell membrane, and the phagosomal vacuole from the host's cell membrane.
The genes in the phagocytosed eukaryote's nucleus are often transferred to the secondary host's nucleus. Cryptomonads and chlorarachniophytes retain the phagocytosed eukaryote's nucleus, an object called a nucleomorph, located between the second and third membranes of the chloroplast.
All secondary chloroplasts come from green and red algae. No secondary chloroplasts from glaucophytes have been observed, probably because glaucophytes are relatively rare in nature, making them less likely to have been taken up by another eukaryote.
Still other organisms, including the dinoflagellates Karlodinium and Karenia, obtained chloroplasts by engulfing an organism with a secondary plastid. These are called tertiary plastids.
Primary chloroplast lineages
All primary chloroplasts belong to one of four chloroplast lineages—the glaucophyte chloroplast lineage, the rhodophyte ("red") chloroplast lineage, the chloroplastidan ("green") chloroplast lineage, and the amoeboid Paulinella chromatophora lineage. The glaucophyte, rhodophyte, and chloroplastidan lineages are all descended from the same ancestral endosymbiotic event and are all within the group Archaeplastida.
Glaucophyte chloroplasts
The glaucophyte chloroplast group is the smallest of the three primary chloroplast lineages as there are only 25 described glaucophyte species. Glaucophytes diverged first, before the red and green chloroplast lineages diverged. Because of this, they are sometimes considered intermediates between cyanobacteria and the red and green chloroplasts. This early divergence is supported by both phylogenetic studies and physical features present in glaucophyte chloroplasts and cyanobacteria, but not in the red and green chloroplasts. First, glaucophyte chloroplasts have a peptidoglycan wall, a type of cell wall otherwise found only in bacteria (including cyanobacteria). Second, glaucophyte chloroplasts contain concentric unstacked thylakoids which surround a carboxysome – an icosahedral structure that contains the enzyme RuBisCO responsible for carbon fixation. Third, starch created by the chloroplast is collected outside the chloroplast. Additionally, like cyanobacteria, both glaucophyte and rhodophyte thylakoids are studded with light-collecting structures called phycobilisomes.
Rhodophyta (red chloroplasts)
The rhodophyte, or red algae, group is a large and diverse lineage. Rhodophyte chloroplasts are also called rhodoplasts, literally "red chloroplasts". Rhodoplasts have a double membrane with an intermembrane space and phycobilin pigments organized into phycobilisomes on the thylakoid membranes, preventing their thylakoids from stacking. Some contain pyrenoids. Rhodoplasts have chlorophyll a and phycobilins for photosynthetic pigments; the phycobilin phycoerythrin is responsible for giving many red algae their distinctive red color. However, since they also contain the blue-green chlorophyll a and other pigments, many are reddish to purple from the combination. The red phycoerythrin pigment is an adaptation to help red algae catch more sunlight in deep water—as such, some red algae that live in shallow water have less phycoerythrin in their rhodoplasts, and can appear more greenish. Rhodoplasts synthesize a form of starch called floridean starch, which collects into granules outside the rhodoplast, in the cytoplasm of the red alga.
Chloroplastida (green chloroplasts)
The chloroplastida group is another large, highly diverse lineage that includes both green algae and land plants. This group is also called Viridiplantae, which includes two core clades—Chlorophyta and Streptophyta.
Most green chloroplasts are green in color, though some aren't due to accessory pigments that override the green from chlorophylls, such as in the resting cells of Haematococcus pluvialis. Green chloroplasts differ from glaucophyte and red algal chloroplasts in that they have lost their phycobilisomes, and contain chlorophyll b. They have also lost the peptidoglycan wall between their double membrane, leaving an intermembrane space. Some plants have kept some genes required for the synthesis of peptidoglycan, but have repurposed them for use in chloroplast division instead. Chloroplastida lineages also keep their starch inside their chloroplasts. In plants and some algae, the chloroplast thylakoids are arranged in grana stacks. Some green algal chloroplasts, as well as those of hornworts, contain a structure called a pyrenoid that concentrates RuBisCO and CO2 in the chloroplast, functionally similar to the glaucophyte carboxysome.
There are some lineages of non-photosynthetic parasitic green algae that have lost their chloroplasts entirely, such as Prototheca, or have no chloroplast while retaining the separate chloroplast genome, as in Helicosporidium. Morphological and physiological similarities, as well as phylogenetics, confirm that these are lineages that ancestrally had chloroplasts but have since lost them.
Paulinella chromatophora
The photosynthetic amoeboids in the genus Paulinella—P. chromatophora, P. micropora, and marine P. longichromatophora—have the only known independently evolved chloroplast, often called a chromatophore. While all other chloroplasts originate from a single ancient endosymbiotic event, Paulinella independently acquired an endosymbiotic cyanobacterium from the genus Synechococcus around 90–140 million years ago. Each Paulinella cell contains one or two sausage-shaped chloroplasts; they were first described in 1894 by German biologist Robert Lauterborn.
The chromatophore is highly reduced compared to its free-living cyanobacterial relatives and has limited functions. For example, it has a genome of about 1 million base pairs, one third the size of Synechococcus genomes, and only encodes around 850 proteins. However, this is still much larger than other chloroplast genomes, which are typically around 150,000 base pairs. Chromatophores have also transferred much less of their DNA to the nucleus of their hosts. About 0.3–0.8% of the nuclear DNA in Paulinella is from the chromatophore, compared with 11–14% from the chloroplast in plants. As with other chloroplasts, Paulinella provides proteins to the chromatophore using a specific targeting sequence. Because chromatophores are much younger than canonical chloroplasts, Paulinella chromatophora is studied to understand how early chloroplasts evolved.
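To put these figures in proportion, a rough comparison can be made from the numbers quoted above (simple arithmetic, not new measurements):

$$ \frac{1{,}000{,}000\ \text{bp (chromatophore)}}{150{,}000\ \text{bp (typical chloroplast)}} \approx 7, \qquad \frac{0.8\%}{11\%} \approx \frac{1}{14}, \quad \frac{0.3\%}{14\%} \approx \frac{1}{47} $$

That is, the chromatophore genome is still roughly seven times larger than a typical chloroplast genome, while the fraction of host nuclear DNA derived from it is more than an order of magnitude smaller than the corresponding fraction in plants.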
Secondary and tertiary chloroplast lineages
Green algal derived chloroplasts
Green algae have been taken up as endosymbionts by many groups in three or four separate events. Secondary chloroplasts derived from green algae are found primarily in the euglenids and chlorarachniophytes. They are also found in one lineage of dinoflagellates and possibly in the ancestor of the CASH lineage (cryptomonads, alveolates, stramenopiles, and haptophytes). Many green algal derived chloroplasts contain pyrenoids, but unlike the chloroplasts of their green algal ancestors, the storage product collects in granules outside the chloroplast.
Euglenophytes
The euglenophytes are a group of common flagellated protists that contain chloroplasts derived from a green alga. Euglenophytes are the only group outside Diaphoretickes that have chloroplasts without performing kleptoplasty. Euglenophyte chloroplasts have three membranes. It is thought that the membrane of the primary endosymbiont host (i.e. the green alga's cell membrane) was lost, leaving the two cyanobacterial membranes and the secondary host's phagosomal membrane. Euglenophyte chloroplasts have a pyrenoid and thylakoids stacked in groups of three. The carbon fixed through photosynthesis is stored in the form of paramylon, which is contained in membrane-bound granules in the cytoplasm of the euglenophyte.
Chlorarachniophytes
Chlorarachniophytes are a rare group of organisms that also contain chloroplasts derived from green algae, though their story is more complicated than that of the euglenophytes. The ancestor of chlorarachniophytes is thought to have been a eukaryote with a red algal derived chloroplast. It is then thought to have lost its first red algal chloroplast, and later engulfed a green alga, giving it its second, green algal derived chloroplast.
Chlorarachniophyte chloroplasts are bounded by four membranes, except near the cell membrane, where the chloroplast membranes fuse into a double membrane. Their thylakoids are arranged in loose stacks of three. Chlorarachniophytes have a form of polysaccharide called chrysolaminarin, which they store in the cytoplasm, often collected around the chloroplast pyrenoid, which bulges into the cytoplasm.
Chlorarachniophyte chloroplasts are notable because the green alga they are derived from has not been completely broken down—its nucleus still persists as a nucleomorph found between the second and third chloroplast membranes—the periplastid space, which corresponds to the green alga's cytoplasm.
Prasinophyte-derived chloroplast
Dinoflagellates in the genus Lepidodinium have lost their original peridinin chloroplast and replaced it with a green algal derived chloroplast (more specifically, a prasinophyte). Lepidodinium is the only dinoflagellate that has a chloroplast that's not from the rhodoplast lineage. The chloroplast is surrounded by two membranes and has no nucleomorph—all the nucleomorph genes have been transferred to the dinophyte nucleus. The endosymbiotic event that led to this chloroplast was serial secondary endosymbiosis rather than tertiary endosymbiosis—the endosymbiont was a green alga containing a primary chloroplast (making a secondary chloroplast).
Red algal derived chloroplasts
Secondary chloroplasts derived from red algae appear to have been taken up only once, in a lineage that then diversified into a large group called chromalveolates. Today they are found in the haptophytes, cryptomonads, heterokonts, dinoflagellates and apicomplexans (the CASH lineage). Red algal secondary chloroplasts usually contain chlorophyll c and are surrounded by four membranes.
Cryptophytes
Cryptophytes, or cryptomonads, are a group of algae that contain a red-algal derived chloroplast. Cryptophyte chloroplasts contain a nucleomorph that superficially resembles that of the chlorarachniophytes. Cryptophyte chloroplasts have four membranes. The outermost membrane is continuous with the rough endoplasmic reticulum. They synthesize ordinary starch, which is stored in granules found in the periplastid space—outside the original double membrane, in the place that corresponds to the ancestral red alga's cytoplasm. Inside cryptophyte chloroplasts is a pyrenoid and thylakoids in stacks of two. Cryptophyte chloroplasts do not have phycobilisomes, but they do have phycobilin pigments which they keep in the thylakoid space, rather than anchored on the outside of their thylakoid membranes.
Cryptophytes may have played a key role in the spreading of red algal based chloroplasts.
Haptophytes
Haptophytes are similar and closely related to cryptophytes and heterokontophytes. Their chloroplasts lack a nucleomorph, their thylakoids are in stacks of three, and they synthesize the sugar chrysolaminarin, which is stored in granules completely outside of the chloroplast, in the cytoplasm of the haptophyte.
Stramenopiles (heterokontophytes)
The stramenopiles, also known as heterokontophytes, are a very large and diverse group of eukaryotes. The group includes Ochrophyta—which contains the diatoms, brown algae (seaweeds), and golden algae (chrysophytes)—and Xanthophyceae (also called yellow-green algae).
Heterokont chloroplasts are very similar to haptophyte chloroplasts. They have a pyrenoid, triplet thylakoids, and, with some exceptions, a four-layered plastid envelope with the outermost membrane connected to the endoplasmic reticulum. Like haptophytes, stramenopiles store sugar in chrysolaminarin granules in the cytoplasm. Stramenopile chloroplasts contain chlorophyll a and, with a few exceptions, chlorophyll c. They also have carotenoids which give them their many colors.
Apicomplexans, chromerids, and dinophytes
The alveolates are a major clade of unicellular eukaryotes comprising both autotrophic and heterotrophic members. Many members contain a red algal derived plastid. One notable characteristic of this diverse group is the frequent loss of photosynthesis. However, a majority of these heterotrophs continue to possess a non-photosynthetic plastid.
Apicomplexans
Apicomplexans are a group of alveolates. Like the helicosporidia, they're parasitic, and have a nonphotosynthetic chloroplast. They were once thought to be related to the helicosporidia, but it is now known that the helicosporidia are green algae rather than part of the CASH lineage. The apicomplexans include Plasmodium, the malaria parasite. Many apicomplexans keep a vestigial red algal derived chloroplast called an apicoplast, which they inherited from their ancestors. Apicoplasts have lost all photosynthetic function, and contain no photosynthetic pigments or true thylakoids. They are bounded by four membranes, but the membranes are not connected to the endoplasmic reticulum. Other apicomplexans like Cryptosporidium have lost the chloroplast completely. Apicomplexans store their energy in amylopectin granules that are located in their cytoplasm, even though they are nonphotosynthetic.
The fact that apicomplexans still keep their nonphotosynthetic chloroplast around demonstrates how the chloroplast carries out important functions other than photosynthesis. Plant chloroplasts provide plant cells with many important things besides sugar, and apicoplasts are no different—they synthesize fatty acids, isopentenyl pyrophosphate, iron-sulfur clusters, and carry out part of the heme pathway. The most important apicoplast function is isopentenyl pyrophosphate synthesis—in fact, apicomplexans die when something interferes with this apicoplast function, and when apicomplexans are grown in an isopentenyl pyrophosphate-rich medium, they dump the organelle.
Chromerids
The Chromerida are a recently discovered group of algae from Australian corals comprising some close photosynthetic relatives of the apicomplexans. The first member, Chromera velia, was discovered and first isolated in 2001. The discovery of Chromera velia, which is structurally similar to the apicomplexans, provides an important link in the evolutionary history of the apicomplexans and dinophytes. Their plastids have four membranes, lack chlorophyll c, and use the type II form of RuBisCO obtained from a horizontal gene transfer event.
Dinoflagellates
The dinoflagellates are yet another very large and diverse group, around half of which are at least partially photosynthetic (i.e. mixotrophic). Dinoflagellate chloroplasts have a relatively complex history. Most dinoflagellate chloroplasts are secondary red algal derived chloroplasts. Many dinoflagellates have lost their chloroplast (becoming nonphotosynthetic), and some of these have replaced it through tertiary endosymbiosis. Others replaced their original chloroplast with a green algal derived chloroplast. The peridinin chloroplast is thought to be the dinophytes' "original" chloroplast, which has been lost, reduced, replaced, or accompanied by other plastids in several dinophyte lineages.
The most common dinophyte chloroplast is the peridinin-type chloroplast, characterized by the carotenoid pigment peridinin in their chloroplasts, along with chlorophyll a and chlorophyll c2. Peridinin is not found in any other group of chloroplasts. The peridinin chloroplast is bounded by three membranes (occasionally two), having lost the red algal endosymbiont's original cell membrane. The outermost membrane is not connected to the endoplasmic reticulum. They contain a pyrenoid, and have triplet-stacked thylakoids. Starch is found outside the chloroplast. Peridinin chloroplasts also have DNA that is highly reduced and fragmented into many small circles. Most of the genome has migrated to the nucleus, and only critical photosynthesis-related genes remain in the chloroplast.
Most dinophyte chloroplasts contain form II RuBisCO, at least the photosynthetic pigments chlorophyll a, chlorophyll c2, beta-carotene, and at least one dinophyte-unique xanthophyll (peridinin, dinoxanthin, or diadinoxanthin), giving many a golden-brown color. All dinophytes store starch in their cytoplasm, and most have chloroplasts with thylakoids arranged in stacks of three.
Tertiary chloroplasts (haptophyte-derived)
The fucoxanthin dinophyte lineages (including Karlodinium and Karenia) lost their original red algal derived chloroplast, and replaced it with a new chloroplast derived from a haptophyte endosymbiont, making these tertiary plastids. Karlodinium and Karenia probably took up different heterokontophytes. Because the haptophyte chloroplast has four membranes, tertiary endosymbiosis would be expected to create a six membraned chloroplast, adding the haptophyte's cell membrane and the dinophyte's phagosomal vacuole. However, the haptophyte was heavily reduced, stripped of a few membranes and its nucleus, leaving only its chloroplast (with its original double membrane), and possibly one or two additional membranes around it.
Fucoxanthin-containing chloroplasts are characterized by having the pigment fucoxanthin (actually 19′-hexanoyloxy-fucoxanthin and/or 19′-butanoyloxy-fucoxanthin) and no peridinin. Fucoxanthin is also found in haptophyte chloroplasts, providing evidence of ancestry.
"Dinotoms" diatom-derived dinophyte chloroplasts
Some dinophytes, like Kryptoperidinium and Durinskia, have a diatom (heterokontophyte)-derived chloroplast. These chloroplasts are bounded by up to five membranes (depending on whether the entire diatom endosymbiont is counted as the chloroplast, or just the red algal derived chloroplast inside it). The diatom endosymbiont has been reduced relatively little—it still retains its original mitochondria, and has endoplasmic reticulum, ribosomes, a nucleus, and of course, red algal derived chloroplasts—practically a complete cell, all inside the host's endoplasmic reticulum lumen. However, the diatom endosymbiont can't store its own food—its storage polysaccharide is found in granules in the dinophyte host's cytoplasm instead. The diatom endosymbiont's nucleus is present, but it probably can't be called a nucleomorph because it shows no sign of genome reduction, and might even have been expanded. Diatoms have been engulfed by dinoflagellates at least three times.
The diatom endosymbiont is bounded by a single membrane, inside it are chloroplasts with four membranes. Like the diatom endosymbiont's diatom ancestor, the chloroplasts have triplet thylakoids and pyrenoids.
In some of these genera, the diatom endosymbiont's chloroplasts aren't the only chloroplasts in the dinophyte. The original three-membraned peridinin chloroplast is still around, converted to an eyespot.
Kleptoplasty
In some groups of mixotrophic protists, like some dinoflagellates (e.g. Dinophysis), chloroplasts are separated from a captured alga and used temporarily. These klepto chloroplasts may only have a lifetime of a few days and are then replaced.
Cryptophyte-derived dinophyte chloroplast
Members of the genus Dinophysis have a phycobilin-containing chloroplast taken from a cryptophyte. However, the cryptophyte is not an endosymbiont—only the chloroplast seems to have been taken, and the chloroplast has been stripped of its nucleomorph and outermost two membranes, leaving just a two-membraned chloroplast. Cryptophyte chloroplasts require their nucleomorph to maintain themselves, and Dinophysis species grown in cell culture alone cannot survive, so it is possible (but not confirmed) that the Dinophysis chloroplast is a kleptoplast—if so, Dinophysis chloroplasts wear out and Dinophysis species must continually engulf cryptophytes to obtain new chloroplasts to replace the old ones.
Chloroplast DNA
Chloroplasts, like other endosymbiotic organelles, contain a genome separate from that in the cell nucleus. The existence of chloroplast DNA (cpDNA) was identified biochemically in 1959, and confirmed by electron microscopy in 1962. The discoveries that the chloroplast contains ribosomes and performs protein synthesis revealed that the chloroplast is genetically semi-autonomous. Chloroplast DNA was first sequenced in 1986. Since then, hundreds of chloroplast genomes from various species have been sequenced, but they are mostly those of land plants and green algae—glaucophytes, red algae, and other algal groups are extremely underrepresented, potentially introducing some bias in views of "typical" chloroplast DNA structure and content.
Molecular structure
With few exceptions, most chloroplasts have their entire chloroplast genome combined into a single large circular DNA molecule, typically 120,000–170,000 base pairs long and with a mass of about 80–130 million daltons. While chloroplast genomes can almost always be assembled into a circular map, the physical DNA molecules inside cells take on a variety of linear and branching forms. New chloroplasts may contain up to 100 copies of their genome, though the number of copies decreases to about 15–20 as the chloroplasts age.
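As a rough consistency check on these figures (assuming an average mass of about 650 daltons per base pair of double-stranded DNA, a standard textbook value not stated in this article):

$$ 120{,}000\ \text{bp} \times 650\ \tfrac{\text{Da}}{\text{bp}} \approx 7.8\times10^{7}\ \text{Da}, \qquad 170{,}000\ \text{bp} \times 650\ \tfrac{\text{Da}}{\text{bp}} \approx 1.1\times10^{8}\ \text{Da} $$

which falls within the quoted range of about 80–130 million daltons.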
Chloroplast DNA is usually condensed into nucleoids, which can contain multiple copies of the chloroplast genome. Many nucleoids can be found in each chloroplast. In primitive red algae, the chloroplast DNA nucleoids are clustered in the center of the chloroplast, while in green plants and green algae, the nucleoids are dispersed throughout the stroma. Chloroplast DNA is not associated with true histones, the proteins that are used to pack DNA molecules tightly in eukaryote nuclei. In red algae, however, similar proteins tightly pack each chloroplast DNA ring into a nucleoid.
Many chloroplast genomes contain two inverted repeats, which separate a long single copy section (LSC) from a short single copy section (SSC). A given pair of inverted repeats are rarely identical, but they are always very similar to each other, apparently resulting from concerted evolution. The inverted repeats vary wildly in length, ranging from 4,000 to 25,000 base pairs long each and containing as few as four or as many as over 150 genes. The inverted repeat regions are highly conserved in land plants, and accumulate few mutations.
Similar inverted repeats exist in the genomes of cyanobacteria and the other two chloroplast lineages (glaucophyta and rhodophyceae), suggesting that they predate the chloroplast. Some chloroplast genomes have since lost or flipped the inverted repeats (making them direct repeats). It is possible that the inverted repeats help stabilize the rest of the chloroplast genome, as chloroplast genomes which have lost some of the inverted repeat segments tend to get rearranged more.
DNA repair and replication
In chloroplasts of the moss Physcomitrella patens, the DNA mismatch repair protein Msh1 interacts with the recombinational repair proteins RecA and RecG to maintain chloroplast genome stability. In chloroplasts of the plant Arabidopsis thaliana the RecA protein maintains the integrity of the chloroplast's DNA by a process that likely involves the recombinational repair of DNA damage.
The mechanism for chloroplast DNA (cpDNA) replication has not been conclusively determined, but two main models have been proposed. Scientists have attempted to observe chloroplast replication via electron microscopy since the 1970s. The results of the microscopy experiments led to the idea that chloroplast DNA replicates using a double displacement loop (D-loop). As the D-loop moves through the circular DNA, it adopts a theta intermediary form, also known as a Cairns replication intermediate, and completes replication with a rolling circle mechanism. Replication starts at specific points of origin. Multiple replication forks open up, allowing the replication machinery to copy the DNA. As replication continues, the forks grow and eventually converge. The new cpDNA structures separate, creating daughter cpDNA chromosomes.
In addition to the early microscopy experiments, this model is also supported by the amounts of deamination seen in cpDNA. Deamination is a mutation in which an amino group is lost from a base, and it often results in a base change. When adenine is deaminated, it becomes hypoxanthine (X). Hypoxanthine can pair with cytosine, so when the XC base pair is replicated, it becomes a GC pair (thus, an A → G base change).
In cpDNA, there are several A → G deamination gradients. DNA becomes susceptible to deamination events when it is single stranded. When replication forks form, the strand not being copied is single stranded, and thus at risk for A → G deamination. Therefore, gradients in deamination indicate that replication forks were most likely present and the direction that they initially opened (the highest gradient is most likely nearest the start site because it was single stranded for the longest amount of time). This mechanism is still the leading theory today; however, a second theory suggests that most cpDNA is actually linear and replicates through homologous recombination. It further contends that only a minority of the genetic material is kept in circular chromosomes while the rest is in branched, linear, or other complex structures.
One competing model for cpDNA replication asserts that most cpDNA is linear and participates in homologous recombination and replication structures similar to the linear and circular DNA structures of bacteriophage T4. It has been established that some plants have linear cpDNA, such as maize, and that more species still contain complex structures that scientists do not yet understand. When the original experiments on cpDNA were performed, scientists did notice linear structures; however, they attributed these linear forms to broken circles. If the branched and complex structures seen in cpDNA experiments are real and not artifacts of concatenated circular DNA or broken circles, then a D-loop mechanism of replication is insufficient to explain how those structures would replicate. At the same time, homologous recombination does not explain the multiple A → G gradients seen in plastomes. Because of this failure to explain the deamination gradient, as well as the numerous plant species that have been shown to have circular cpDNA, the predominant theory continues to hold that most cpDNA is circular and most likely replicates via a D-loop mechanism.
Gene content and protein synthesis
The ancestral cyanobacterium that led to chloroplasts probably had a genome containing over 3,000 genes, but only approximately 100 genes remain in contemporary chloroplast genomes. These genes mostly code for components of the protein synthesis machinery and the photosynthetic apparatus. As in prokaryotes, genes in chloroplast DNA are organized into operons. Unlike prokaryotic DNA molecules, chloroplast DNA molecules contain introns (plant mitochondrial DNAs do too, but not human mtDNAs).
Among land plants, the contents of the chloroplast genome are fairly similar.
Chloroplast genome reduction and gene transfer
Over time, many parts of the chloroplast genome were transferred to the nuclear genome of the host, a process called endosymbiotic gene transfer. As a result, the chloroplast genome is heavily reduced compared to that of free-living cyanobacteria. Chloroplasts may contain 60–100 genes whereas cyanobacteria often have more than 1500 genes in their genome. Recently, a plastid without a genome was found, demonstrating that chloroplasts can lose their genome entirely during the endosymbiotic gene transfer process.
Endosymbiotic gene transfer is how we know about the lost chloroplasts in many CASH lineages. Even if a chloroplast is eventually lost, the genes it donated to the former host's nucleus persist, providing evidence for the lost chloroplast's existence. For example, while diatoms (a heterokontophyte) now have a red algal derived chloroplast, the presence of many green algal genes in the diatom nucleus provide evidence that the diatom ancestor had a green algal derived chloroplast at some point, which was subsequently replaced by the red chloroplast.
In land plants, some 11–14% of the DNA in their nuclei can be traced back to the chloroplast, up to 18% in Arabidopsis, corresponding to about 4,500 protein-coding genes. There have been a few recent transfers of genes from the chloroplast DNA to the nuclear genome in land plants.
Of the approximately 3000 proteins found in chloroplasts, some 95% of them are encoded by nuclear genes. Many of the chloroplast's protein complexes consist of subunits from both the chloroplast genome and the host's nuclear genome. As a result, protein synthesis must be coordinated between the chloroplast and the nucleus. The chloroplast is mostly under nuclear control, though chloroplasts can also give out signals regulating gene expression in the nucleus, called retrograde signaling. Recent research indicates that parts of the retrograde signaling network once considered characteristic of land plants had already emerged in an algal progenitor, integrating into co-expressed cohorts of genes in the closest algal relatives of land plants.
Protein synthesis
Protein synthesis within chloroplasts relies on two RNA polymerases. One is coded by the chloroplast DNA, the other is of nuclear origin. The two RNA polymerases may recognize and bind to different kinds of promoters within the chloroplast genome. The ribosomes in chloroplasts are similar to bacterial ribosomes.
Protein targeting and import
Because so many chloroplast genes have been moved to the nucleus, many proteins that would originally have been translated in the chloroplast are now synthesized in the cytoplasm of the plant cell. These proteins must be directed back to the chloroplast, and imported through at least two chloroplast membranes.
Curiously, around half of the protein products of transferred genes aren't even targeted back to the chloroplast. Many became exaptations, taking on new functions like participating in cell division, protein routing, and even disease resistance. A few chloroplast genes found new homes in the mitochondrial genome—most became nonfunctional pseudogenes, though a few tRNA genes still work in the mitochondrion. Some transferred chloroplast DNA protein products are directed to the secretory pathway. Many secondary plastids are bounded by an outermost membrane derived from the host's cell membrane and are therefore topologically outside of the cell: to reach the chloroplast from the cytosol, a protein must cross the cell membrane, which amounts to entering the extracellular space. In those cases, chloroplast-targeted proteins do initially travel along the secretory pathway.
Because the cell acquiring a chloroplast already had mitochondria (and peroxisomes, and a cell membrane for secretion), the new chloroplast host had to develop a unique protein targeting system to avoid having chloroplast proteins being sent to the wrong organelle.
In most, but not all cases, nuclear-encoded chloroplast proteins are translated with a cleavable transit peptide that's added to the N-terminus of the protein precursor. Sometimes the transit sequence is found on the C-terminus of the protein, or within the functional part of the protein.
Transport proteins and membrane translocons
After a chloroplast polypeptide is synthesized on a ribosome in the cytosol, an enzyme specific to chloroplast proteins phosphorylates, or adds a phosphate group to many (but not all) of them in their transit sequences.
Phosphorylation helps many proteins bind the polypeptide, keeping it from folding prematurely. This is important because it prevents chloroplast proteins from assuming their active form and carrying out their chloroplast functions in the wrong place—the cytosol. At the same time, they have to keep just enough shape so that they can be recognized by the chloroplast. These proteins also help the polypeptide get imported into the chloroplast.
From here, chloroplast proteins bound for the stroma must pass through two protein complexes—the TOC complex, or translocon on the outer chloroplast membrane, and the TIC complex, or translocon on the inner chloroplast membrane. Chloroplast polypeptide chains probably often travel through the two complexes at the same time, but the TIC complex can also retrieve preproteins lost in the intermembrane space.
Structure
In land plants, chloroplasts are generally lens-shaped, 3–10 μm in diameter and 1–3 μm thick. Corn seedling chloroplasts are ≈20 μm3 in volume. Greater diversity in chloroplast shapes exists among the algae, which often contain a single chloroplast that can be shaped like a net (e.g., Oedogonium), a cup (e.g., Chlamydomonas), a ribbon-like spiral around the edges of the cell (e.g., Spirogyra), or slightly twisted bands at the cell edges (e.g., Sirogonium). Some algae have two chloroplasts in each cell; they are star-shaped in Zygnema, or may follow the shape of half the cell in the order Desmidiales. In some algae, the chloroplast takes up most of the cell, with pockets for the nucleus and other organelles; for example, some species of Chlorella have a cup-shaped chloroplast that occupies much of the cell.
All chloroplasts have at least three membrane systems—the outer chloroplast membrane, the inner chloroplast membrane, and the thylakoid system. The two innermost lipid-bilayer membranes that surround all chloroplasts correspond to the outer and inner membranes of the ancestral cyanobacterium's gram negative cell wall, and not the phagosomal membrane from the host, which was probably lost. Chloroplasts that are the product of secondary endosymbiosis may have additional membranes surrounding these three. Inside the outer and inner chloroplast membranes is the chloroplast stroma, a semi-gel-like fluid that makes up much of a chloroplast's volume, and in which the thylakoid system floats.
There are some common misconceptions about the outer and inner chloroplast membranes. The fact that chloroplasts are surrounded by a double membrane is often cited as evidence that they are the descendants of endosymbiotic cyanobacteria. This is often interpreted as meaning the outer chloroplast membrane is the product of the host's cell membrane infolding to form a vesicle to surround the ancestral cyanobacterium. This is not the case—both chloroplast membranes are homologous to the cyanobacterium's original double membrane.
The chloroplast double membrane is also often compared to the mitochondrial double membrane. This is not a valid comparison—the inner mitochondrial membrane is used to run proton pumps and carry out oxidative phosphorylation to generate ATP energy. The only chloroplast structure that can be considered analogous to it is the internal thylakoid system. Even so, in terms of "in-out", the direction of chloroplast H+ ion flow is in the opposite direction compared to oxidative phosphorylation in mitochondria. In addition, in terms of function, the inner chloroplast membrane, which regulates metabolite passage and synthesizes some materials, has no counterpart in the mitochondrion.
Outer chloroplast membrane
The outer chloroplast membrane is a semi-porous membrane that small molecules and ions can easily diffuse across. However, it is not permeable to larger proteins, so chloroplast polypeptides being synthesized in the cell cytoplasm must be transported across the outer chloroplast membrane by the TOC complex, or translocon on the outer chloroplast membrane.
The chloroplast membranes sometimes protrude out into the cytoplasm, forming a stromule, or stroma-containing tubule. Stromules are very rare in chloroplasts, and are much more common in other plastids like chromoplasts and amyloplasts in petals and roots, respectively. They may exist to increase the chloroplast's surface area for cross-membrane transport, because they are often branched and tangled with the endoplasmic reticulum. When they were first observed in 1962, some plant biologists dismissed the structures as artifactual, claiming that stromules were just oddly shaped chloroplasts with constricted regions or dividing chloroplasts. However, there is a growing body of evidence that stromules are functional, integral features of plant cell plastids, not merely artifacts.
Intermembrane space and peptidoglycan wall
Usually, a thin intermembrane space about 10–20 nanometers thick exists between the outer and inner chloroplast membranes.
Glaucophyte algal chloroplasts have a peptidoglycan layer between the chloroplast membranes. It corresponds to the peptidoglycan cell wall of their cyanobacterial ancestors, which is located between their two cell membranes. These chloroplasts are called muroplasts (from Latin "mura", meaning "wall"). Other chloroplasts were long assumed to have lost the cyanobacterial wall, leaving an intermembrane space between the two chloroplast envelope membranes, but a peptidoglycan layer has since also been found in mosses, lycophytes, and ferns.
Inner chloroplast membrane
The inner chloroplast membrane borders the stroma and regulates passage of materials in and out of the chloroplast. After passing through the TOC complex in the outer chloroplast membrane, polypeptides must pass through the TIC complex (translocon on the inner chloroplast membrane) which is located in the inner chloroplast membrane.
In addition to regulating the passage of materials, the inner chloroplast membrane is where fatty acids, lipids, and carotenoids are synthesized.
Peripheral reticulum
Some chloroplasts contain a structure called the chloroplast peripheral reticulum. It is often found in the chloroplasts of C4 plants, though it has also been found in some C3 angiosperms, and even some gymnosperms. The chloroplast peripheral reticulum consists of a maze of membranous tubes and vesicles continuous with the inner chloroplast membrane that extends into the internal stromal fluid of the chloroplast. Its purpose is thought to be to increase the chloroplast's surface area for cross-membrane transport between its stroma and the cell cytoplasm. The small vesicles sometimes observed may serve as transport vesicles to shuttle material between the thylakoids and the intermembrane space.
Stroma
The protein-rich, alkaline, aqueous fluid within the inner chloroplast membrane and outside of the thylakoid space is called the stroma, which corresponds to the cytosol of the original cyanobacterium. Nucleoids of chloroplast DNA, chloroplast ribosomes, the thylakoid system with plastoglobuli, starch granules, and many proteins can be found floating around in it. The Calvin cycle, which fixes CO2 into G3P, takes place in the stroma.
Chloroplast ribosomes
Chloroplasts have their own ribosomes, which they use to synthesize a small fraction of their proteins. Chloroplast ribosomes are about two-thirds the size of cytoplasmic ribosomes (around 17 nm vs 25 nm). They take mRNAs transcribed from the chloroplast DNA and translate them into protein. While similar to bacterial ribosomes, chloroplast translation is more complex than in bacteria, so chloroplast ribosomes include some chloroplast-unique features.
Small subunit ribosomal RNAs in several Chlorophyta and euglenid chloroplasts lack motifs for Shine-Dalgarno sequence recognition, which is considered essential for translation initiation in most chloroplasts and prokaryotes. Such loss is also rarely observed in other plastids and prokaryotes. An additional 4.5S rRNA with homology to the 3' tail of 23S is found in "higher" plants.
Plastoglobuli
Plastoglobuli (singular plastoglobulus, sometimes spelled plastoglobule(s)), are spherical bubbles of lipids and proteins about 45–60 nanometers across. They are surrounded by a lipid monolayer. Plastoglobuli are found in all chloroplasts, but become more common when the chloroplast is under oxidative stress, or when it ages and transitions into a gerontoplast. Plastoglobuli also exhibit a greater size variation under these conditions. They are also common in etioplasts, but decrease in number as the etioplasts mature into chloroplasts.
Plastoglobuli contain both structural proteins and enzymes involved in lipid synthesis and metabolism. They contain many types of lipids including plastoquinone, vitamin E, carotenoids and chlorophylls.
Plastoglobuli were once thought to be free-floating in the stroma, but it is now thought that they are permanently attached either to a thylakoid or to another plastoglobulus attached to a thylakoid, a configuration that allows a plastoglobulus to exchange its contents with the thylakoid network. In normal green chloroplasts, the vast majority of plastoglobuli occur singly, attached directly to their parent thylakoid. In old or stressed chloroplasts, plastoglobuli tend to occur in linked groups or chains, still always anchored to a thylakoid.
Plastoglobuli form when a bubble appears between the layers of the lipid bilayer of the thylakoid membrane, or bud from existing plastoglobuli—though they never detach and float off into the stroma. Practically all plastoglobuli form on or near the highly curved edges of the thylakoid disks or sheets. They are also more common on stromal thylakoids than on granal ones.
Starch granules
Starch granules are very common in chloroplasts, typically taking up 15% of the organelle's volume, though in some other plastids like amyloplasts, they can be big enough to distort the shape of the organelle. Starch granules are simply accumulations of starch in the stroma, and are not bounded by a membrane.
Starch granules appear and grow throughout the day, as the chloroplast synthesizes sugars, and are consumed at night to fuel respiration and continue sugar export into the phloem, though in mature chloroplasts, it is rare for a starch granule to be completely consumed or for a new granule to accumulate.
Starch granules vary in composition and location across different chloroplast lineages. In red algae, starch granules are found in the cytoplasm rather than in the chloroplast. In C4 plants, mesophyll chloroplasts, which do not synthesize sugars, lack starch granules.
RuBisCO
The chloroplast stroma contains many proteins, though the most common and important is RuBisCO, which is probably also the most abundant protein on the planet. RuBisCO is the enzyme that fixes CO2 into sugar molecules. In C3 plants, RuBisCO is abundant in all chloroplasts, though in C4 plants, it is confined to the bundle sheath chloroplasts, where the Calvin cycle is carried out.
Pyrenoids
The chloroplasts of some hornworts and algae contain structures called pyrenoids. They are not found in higher plants. Pyrenoids are roughly spherical and highly refractive bodies which are a site of starch accumulation in plants that contain them. They consist of a matrix opaque to electrons, surrounded by two hemispherical starch plates. The starch is accumulated as the pyrenoids mature. In algae with carbon concentrating mechanisms, the enzyme RuBisCO is found in the pyrenoids. Starch can also accumulate around the pyrenoids when CO2 is scarce. Pyrenoids can divide to form new pyrenoids, or be produced "de novo".
Thylakoid system
Thylakoids (sometimes spelled thylakoïds) are small interconnected sacs which contain the membranes on which the light reactions of photosynthesis take place. The word thylakoid comes from the Greek word thylakos, which means "sack".
Suspended within the chloroplast stroma is the thylakoid system, a highly dynamic collection of membranous sacs called thylakoids where chlorophyll is found and the light reactions of photosynthesis happen.
In most vascular plant chloroplasts, the thylakoids are arranged in stacks called grana, though in certain C4 plant chloroplasts and some algal chloroplasts, the thylakoids are free floating.
Thylakoid structure
Using a light microscope, it is just barely possible to see tiny green granules—which were named grana. With electron microscopy, it became possible to see the thylakoid system in more detail, revealing it to consist of stacks of flat thylakoids which made up the grana, and long interconnecting stromal thylakoids which linked different grana.
In the transmission electron microscope, thylakoid membranes appear as alternating light-and-dark bands, 8.5 nanometers thick.
The three-dimensional structure of the thylakoid membrane system has been disputed. Many models have been proposed, the most prevalent being the helical model, in which granum stacks of thylakoids are wrapped by helical stromal thylakoids. Another model, known as the 'bifurcation model', which was based on the first electron tomography study of plant thylakoid membranes, depicts the stromal membranes as wide lamellar sheets perpendicular to the grana columns that bifurcate into multiple parallel discs forming the granum-stroma assembly. The helical model was supported by several additional works, but ultimately it was determined in 2019 that features from both the helical and bifurcation models are consolidated by newly discovered left-handed helical membrane junctions. Likely for ease, the thylakoid system is still commonly depicted by older "hub and spoke" models where the grana are connected to each other by tubes of stromal thylakoids.
Grana consist of stacks of flattened circular granal thylakoids that resemble pancakes. Each granum can contain anywhere from two to a hundred thylakoids, though grana with 10–20 thylakoids are most common. Wrapped around the grana are multiple parallel right-handed helical stromal thylakoids, also known as frets or lamellar thylakoids. The helices ascend at an angle of ~20°, connecting to each granal thylakoid at a bridge-like slit junction.
The stroma lamellae extend as large sheets perpendicular to the grana columns. These sheets are connected to the right-handed helices either directly or through bifurcations that form left-handed helical membrane surfaces. The left-handed helical surfaces have a similar tilt angle to the right-handed helices (~20°), but ¼ the pitch. Approximately four left-handed helical junctions are present per granum, resulting in a pitch-balanced array of right- and left-handed helical membrane surfaces of different radii and pitch that consolidate the network with minimal surface and bending energies. While different parts of the thylakoid system contain different membrane proteins, the thylakoid membranes are continuous and the thylakoid space they enclose forms a single continuous labyrinth.
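For a rough sense of how tilt and pitch relate, one can idealize each membrane surface as a regular helix (an idealization for illustration, not a claim from the studies described above). A helix of radius $r$ and tilt angle $\theta$ rises by

$$ p = 2\pi r \tan\theta $$

per turn, so at the same ~20° tilt the pitch is proportional to the radius. Left-handed surfaces with about one quarter the pitch of the right-handed helices would therefore correspond to roughly one quarter of the radius, consistent with the description of an array of helical surfaces of different radii and pitch.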
Thylakoid composition
Embedded in the thylakoid membranes are important protein complexes which carry out the light reactions of photosynthesis. Photosystem II and photosystem I contain light-harvesting complexes with chlorophyll and carotenoids that absorb light energy and use it to energize electrons. Molecules in the thylakoid membrane use the energized electrons to pump hydrogen ions into the thylakoid space, decreasing the pH and turning it acidic. ATP synthase is a large protein complex that harnesses the concentration gradient of the hydrogen ions in the thylakoid space to generate ATP energy as the hydrogen ions flow back out into the stroma—much like a dam turbine.
There are two types of thylakoids—granal thylakoids, which are arranged in grana, and stromal thylakoids, which are in contact with the stroma. Granal thylakoids are pancake-shaped circular disks about 300–600 nanometers in diameter. Stromal thylakoids are helicoid sheets that spiral around grana. The flat tops and bottoms of granal thylakoids contain only the relatively flat photosystem II protein complex. This allows them to stack tightly, forming grana with many layers of tightly appressed membrane, called granal membrane, increasing stability and surface area for light capture.
In contrast, photosystem I and ATP synthase are large protein complexes which jut out into the stroma. They can't fit in the appressed granal membranes, and so are found in the stromal thylakoid membrane—the edges of the granal thylakoid disks and the stromal thylakoids. These large protein complexes may act as spacers between the sheets of stromal thylakoids.
The number of thylakoids and the total thylakoid area of a chloroplast is influenced by light exposure. Shaded chloroplasts contain larger and more grana with more thylakoid membrane area than chloroplasts exposed to bright light, which have smaller and fewer grana and less thylakoid area. Thylakoid extent can change within minutes of light exposure or removal.
Pigments and chloroplast colors
Inside the photosystems embedded in chloroplast thylakoid membranes are various photosynthetic pigments, which absorb and transfer light energy. The types of pigments found are different in various groups of chloroplasts, and are responsible for a wide variety of chloroplast colorations. Other plastid types, such as the leucoplast and the chromoplast, contain little chlorophyll and do not carry out photosynthesis.
Paper chromatography of spinach leaf extract shows the various pigments present in chloroplasts, including xanthophylls, chlorophyll a, and chlorophyll b.
Chlorophylls
Chlorophyll a is found in all chloroplasts, as well as their cyanobacterial ancestors. Chlorophyll a is a blue-green pigment partially responsible for giving most cyanobacteria and chloroplasts their color. Other forms of chlorophyll exist, such as the accessory pigments chlorophyll b, chlorophyll c, chlorophyll d, and chlorophyll f.
Chlorophyll b is an olive green pigment found only in the chloroplasts of plants, green algae, any secondary chloroplasts obtained through the secondary endosymbiosis of a green alga, and a few cyanobacteria. It is the chlorophylls a and b together that make most plant and green algal chloroplasts green.
Chlorophyll c is mainly found in secondary endosymbiotic chloroplasts that originated from a red alga, although it is not found in chloroplasts of red algae themselves. Chlorophyll c is also found in some green algae and cyanobacteria.
Chlorophylls d and f are pigments found only in some cyanobacteria.
Carotenoids
In addition to chlorophylls, another group of yellow–orange pigments called carotenoids are also found in the photosystems. There are about thirty photosynthetic carotenoids. They help transfer and dissipate excess energy, and their bright colors sometimes override the chlorophyll green, like during the fall, when the leaves of some land plants change color. β-carotene is a bright red-orange carotenoid found in nearly all chloroplasts, like chlorophyll a. Xanthophylls, especially the orange-red zeaxanthin, are also common. Many other forms of carotenoids exist that are only found in certain groups of chloroplasts.
Phycobilins
Phycobilins are a third group of pigments found in cyanobacteria, and glaucophyte, red algal, and cryptophyte chloroplasts. Phycobilins come in all colors, though phycoerythrin is one of the pigments that makes many red algae red. Phycobilins often organize into relatively large protein complexes about 40 nanometers across called phycobilisomes. Like photosystem I and ATP synthase, phycobilisomes jut into the stroma, preventing thylakoid stacking in red algal chloroplasts. Cryptophyte chloroplasts and some cyanobacteria don't have their phycobilin pigments organized into phycobilisomes, and keep them in their thylakoid space instead.
Specialized chloroplasts in plants
To fix carbon dioxide into sugar molecules in the process of photosynthesis, chloroplasts use an enzyme called RuBisCO. RuBisCO has trouble distinguishing between carbon dioxide and oxygen, so at high oxygen concentrations, RuBisCO starts accidentally adding oxygen to sugar precursors. This has the result of ATP energy being wasted and CO2 being released, all with no sugar being produced. This is a big problem, since O2 is produced by the initial light reactions of photosynthesis, causing issues down the line in the Calvin cycle which uses RuBisCO.
C4 plants evolved a way to solve this—by spatially separating the light reactions and the Calvin cycle. The light reactions, which store light energy in ATP and NADPH, are done in the mesophyll cells of a leaf. The Calvin cycle, which uses the stored energy to make sugar using RuBisCO, is done in the bundle sheath cells, a layer of cells surrounding a vein in a leaf.
As a result, chloroplasts in C4 mesophyll cells and bundle sheath cells are specialized for each stage of photosynthesis. In mesophyll cells, chloroplasts are specialized for the light reactions, so they lack RuBisCO, and have normal grana and thylakoids, which they use to make ATP and NADPH, as well as oxygen. They store CO2 in a four-carbon compound, which is why the process is called C4 photosynthesis. The four-carbon compound is then transported to the bundle sheath chloroplasts, where it releases CO2 and returns to the mesophyll. Bundle sheath chloroplasts do not carry out the light reactions, preventing oxygen from building up in them and disrupting RuBisCO activity. Because of this, they lack thylakoids organized into grana stacks—though bundle sheath chloroplasts still have free-floating thylakoids in the stroma where they carry out cyclic electron flow, a light-driven method of synthesizing ATP to power the Calvin cycle without generating oxygen. They lack photosystem II, and only have photosystem I—the only protein complex needed for cyclic electron flow. Because the job of bundle sheath chloroplasts is to carry out the Calvin cycle and make sugar, they often contain large starch grains.
Both types of chloroplast contain large amounts of chloroplast peripheral reticulum, which they use to gain more surface area for transporting material in and out of them. Mesophyll chloroplasts have a little more peripheral reticulum than bundle sheath chloroplasts.
Function and chemistry
Guard cell chloroplasts
Unlike most epidermal cells, the guard cells of plant stomata contain relatively well-developed chloroplasts. However, exactly what they do is controversial.
Plant innate immunity
Plants lack specialized immune cells—all plant cells participate in the plant immune response. Chloroplasts, along with the nucleus, cell membrane, and endoplasmic reticulum, are key players in pathogen defense. Because of its role in the plant cell's immune response, the chloroplast is a frequent target of pathogens.
Plants have two main immune responses—the hypersensitive response, in which infected cells seal themselves off and undergo programmed cell death, and systemic acquired resistance, where infected cells release signals warning the rest of the plant of a pathogen's presence.
Chloroplasts stimulate both responses by purposely damaging their photosynthetic system, producing reactive oxygen species. High levels of reactive oxygen species will cause the hypersensitive response. The reactive oxygen species also directly kill any pathogens within the cell. Lower levels of reactive oxygen species initiate systemic acquired resistance, triggering defense-molecule production in the rest of the plant.
In some plants, chloroplasts are known to move closer to the infection site and the nucleus during an infection.
Chloroplasts can serve as cellular sensors. After detecting stress in a cell, which might be due to a pathogen, chloroplasts begin producing molecules like salicylic acid, jasmonic acid, nitric oxide and reactive oxygen species which can serve as defense-signals. As cellular signals, reactive oxygen species are unstable molecules, so they probably don't leave the chloroplast, but instead pass on their signal to an unknown second messenger molecule. All these molecules initiate retrograde signaling—signals from the chloroplast that regulate gene expression in the nucleus.
In addition to defense signaling, chloroplasts, with the help of the peroxisomes, help synthesize an important defense molecule, jasmonate. Chloroplasts synthesize all the fatty acids in a plant cell—linoleic acid, a fatty acid, is a precursor to jasmonate.
Photosynthesis
One of the main functions of the chloroplast is its role in photosynthesis, the process by which light is transformed into chemical energy, to subsequently produce food in the form of sugars. Water (H2O) and carbon dioxide (CO2) are used in photosynthesis, and sugar and oxygen (O2) are made, using light energy. Photosynthesis is divided into two stages—the light reactions, where water is split to produce oxygen, and the dark reactions, or Calvin cycle, which builds sugar molecules from carbon dioxide. The two phases are linked by the energy carriers adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide phosphate (NADP+).
Light reactions
The light reactions take place on the thylakoid membranes. They take light energy and store it in NADPH, the reduced form of NADP+, and in ATP to fuel the dark reactions.
Energy carriers
ATP is the phosphorylated version of adenosine diphosphate (ADP), which stores energy in a cell and powers most cellular activities. ATP is the energized form, while ADP is the (partially) depleted form. NADP+ is an electron carrier which ferries high energy electrons. In the light reactions, it gets reduced, meaning it picks up electrons, becoming NADPH.
Photophosphorylation
Like mitochondria, chloroplasts use the potential energy stored in an H+, or hydrogen ion, gradient to generate ATP energy. The two photosystems capture light energy to energize electrons taken from water, and release them down an electron transport chain. The molecules between the photosystems harness the electrons' energy to pump hydrogen ions into the thylakoid space, creating a concentration gradient, with more hydrogen ions (up to a thousand times as many) inside the thylakoid system than in the stroma. The hydrogen ions in the thylakoid space then diffuse back down their concentration gradient, flowing back out into the stroma through ATP synthase. ATP synthase uses the energy from the flowing hydrogen ions to phosphorylate adenosine diphosphate into adenosine triphosphate, or ATP. Because chloroplast ATP synthase projects out into the stroma, the ATP is synthesized there, in position to be used in the dark reactions.
NADP+ reduction
Electrons are often removed from the electron transport chains to charge NADP+ with electrons, reducing it to NADPH. Like ATP synthase, ferredoxin-NADP+ reductase, the enzyme that reduces NADP+, releases the NADPH it makes into the stroma, right where it is needed for the dark reactions.
Because NADP+ reduction removes electrons from the electron transport chains, they must be replaced—the job of photosystem II, which splits water molecules (H2O) to obtain the electrons from its hydrogen atoms.
Cyclic photophosphorylation
While photosystem II photolyzes water to obtain and energize new electrons, photosystem I simply reenergizes depleted electrons at the end of an electron transport chain. Normally, the reenergized electrons are taken by NADP+, though sometimes they can flow back down more H+-pumping electron transport chains to transport more hydrogen ions into the thylakoid space to generate more ATP. This is termed cyclic photophosphorylation because the electrons are recycled. Cyclic photophosphorylation is common in plants, which need more ATP than NADPH.
Dark reactions
The Calvin cycle, also known as the dark reactions, is a series of biochemical reactions that fixes CO2 into G3P sugar molecules and uses the energy and electrons from the ATP and NADPH made in the light reactions. The Calvin cycle takes place in the stroma of the chloroplast.
While named "the dark reactions", in most plants, they take place in the light, since the dark reactions are dependent on the products of the light reactions.
Carbon fixation and G3P synthesis
The Calvin cycle starts by using the enzyme RuBisCO to fix CO2 into five-carbon ribulose bisphosphate (RuBP) molecules. The result is unstable six-carbon molecules that immediately break down into three-carbon molecules called 3-phosphoglyceric acid, or 3-PGA.
The ATP and NADPH made in the light reactions are used to convert the 3-PGA into glyceraldehyde-3-phosphate, or G3P sugar molecules. Most of the G3P molecules are recycled back into RuBP using energy from more ATP, but one out of every six produced leaves the cycle—the end product of the dark reactions.
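The bookkeeping implied by this stoichiometry can be sketched in a few lines of Python. The per-turn costs used below (nine ATP and six NADPH per three CO2 fixed) are standard textbook values rather than figures stated in this article, and the function name is purely illustrative.

# Minimal sketch of Calvin-cycle bookkeeping (illustrative; assumes the textbook
# stoichiometry of 9 ATP and 6 NADPH per 3 CO2 fixed).
def calvin_cycle_budget(co2_fixed=3):
    assert co2_fixed % 3 == 0, "use multiples of 3 so whole G3P molecules are exported"
    blocks = co2_fixed // 3
    g3p_made = 6 * blocks                    # six G3P per three CO2 fixed
    g3p_exported = 1 * blocks                # one in six leaves the cycle
    g3p_recycled = g3p_made - g3p_exported   # five regenerate RuBP
    atp_used = 9 * blocks                    # 6 in the reduction phase + 3 regenerating RuBP
    nadph_used = 6 * blocks                  # all in the reduction phase
    # carbon balance: the 3 C of the CO2 fixed match the 3 C of the one exported G3P
    assert 3 * blocks == 3 * g3p_exported
    return {"G3P exported": g3p_exported, "G3P recycled": g3p_recycled,
            "ATP used": atp_used, "NADPH used": nadph_used}

print(calvin_cycle_budget(3))   # {'G3P exported': 1, 'G3P recycled': 5, 'ATP used': 9, 'NADPH used': 6}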
Sugars and starches
Glyceraldehyde-3-phosphate can double up to form larger sugar molecules like glucose and fructose. These molecules are processed, and from them, the still larger sucrose, a disaccharide commonly known as table sugar, is made, though this process takes place outside of the chloroplast, in the cytoplasm.
Alternatively, glucose monomers in the chloroplast can be linked together to make starch, which accumulates into the starch grains found in the chloroplast.
Under conditions such as high atmospheric CO2 concentrations, these starch grains may grow very large, distorting the grana and thylakoids. The starch granules displace the thylakoids, but leave them intact.
Waterlogged roots can also cause starch buildup in the chloroplasts, possibly due to less sucrose being exported out of the chloroplast (or more accurately, the plant cell). This depletes a plant's free phosphate supply, which indirectly stimulates chloroplast starch synthesis.
While linked to low photosynthesis rates, the starch grains themselves may not necessarily interfere significantly with the efficiency of photosynthesis, and might simply be a side effect of another photosynthesis-depressing factor.
Photorespiration
Photorespiration can occur when the oxygen concentration is too high. RuBisCO cannot distinguish between oxygen and carbon dioxide very well, so it can accidentally add O2 instead of CO2 to RuBP. This process reduces the efficiency of photosynthesis—it consumes ATP and oxygen, releases CO2, and produces no sugar. It can waste up to half the carbon fixed by the Calvin cycle. Several mechanisms have evolved in different lineages that raise the carbon dioxide concentration relative to oxygen within the chloroplast, increasing the efficiency of photosynthesis. These mechanisms are called carbon dioxide concentrating mechanisms, or CCMs. These include crassulacean acid metabolism, C4 carbon fixation, and pyrenoids. Chloroplasts in C4 plants are notable as they exhibit a distinct chloroplast dimorphism.
pH
Because of the H+ gradient across the thylakoid membrane, the interior of the thylakoid is acidic, with a pH around 4, while the stroma is slightly basic, with a pH of around 8.
The optimal stroma pH for the Calvin cycle is 8.1, with the reaction nearly stopping when the pH falls below 7.3.
CO2 in water can form carbonic acid, which can disturb the pH of isolated chloroplasts, interfering with photosynthesis, even though CO2 is used in photosynthesis. However, chloroplasts in living plant cells are not affected by this as much.
Chloroplasts can pump K+ and H+ ions in and out of themselves using a poorly understood light-driven transport system.
In the presence of light, the pH of the thylakoid lumen can drop by up to 1.5 pH units, while the pH of the stroma can rise by nearly one pH unit.
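As a rough illustration of what these pH values mean in concentration terms, the hydrogen-ion ratio across the thylakoid membrane follows directly from the definition of pH. The short Python sketch below uses the approximate values quoted above (lumen pH 4, stroma pH 8); real gradients vary with light conditions.

# [H+] = 10^-pH, so the concentration ratio across the membrane is 10^(delta pH).
lumen_pH, stroma_pH = 4.0, 8.0
ratio = 10 ** (stroma_pH - lumen_pH)
print(f"H+ is roughly {ratio:,.0f} times more concentrated in the lumen than in the stroma")
# -> roughly 10,000 times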
Amino acid synthesis
Chloroplasts alone make almost all of a plant cell's amino acids in their stroma, the exceptions being the sulfur-containing ones such as cysteine and methionine. Cysteine is made in the chloroplast (and in the proplastid) but it is also synthesized in the cytosol and mitochondria, probably because it has trouble crossing membranes to get to where it is needed. The chloroplast is known to make the precursors to methionine, but it is unclear whether the organelle carries out the last leg of the pathway or whether it happens in the cytosol.
Other nitrogen compounds
Chloroplasts make all of a cell's purines and pyrimidines—the nitrogenous bases found in DNA and RNA. They also convert nitrite (NO2−) into ammonia (NH3) which supplies the plant with nitrogen to make its amino acids and nucleotides.
Other chemical products
The plastid is the site of diverse and complex lipid synthesis in plants. The carbon used to form the majority of the lipid is from acetyl-CoA, which is the decarboxylation product of pyruvate. Pyruvate may enter the plastid from the cytosol by passive diffusion through the membrane after production in glycolysis. Pyruvate is also made in the plastid from phosphoenolpyruvate, a metabolite made in the cytosol from pyruvate or PGA. Acetate in the cytosol is unavailable for lipid biosynthesis in the plastid. The typical length of fatty acids produced in the plastid is 16 or 18 carbons, with 0–3 cis double bonds.
The biosynthesis of fatty acids from acetyl-CoA primarily requires two enzymes. Acetyl-CoA carboxylase creates malonyl-CoA, used in both the first step and the extension steps of synthesis. Fatty acid synthase (FAS) is a large complex of enzymes and cofactors including acyl carrier protein (ACP), which holds the acyl chain as it is synthesized. Synthesis begins with the condensation of malonyl-ACP with acetyl-CoA to produce ketobutyryl-ACP. Two reductions involving NADPH and one dehydration create butyryl-ACP. Extension of the fatty acid comes from repeated cycles of malonyl-ACP condensation, reduction, and dehydration.
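The chain-length arithmetic behind this cycle is easy to sketch: the initial condensation yields a four-carbon acyl-ACP, and each further condensation/reduction/dehydration cycle adds two carbons from malonyl-ACP. The Python sketch below is illustrative only, and the function name is made up.

# Count the condensation cycles needed to reach a given chain length
# (illustrative sketch; initiation gives a 4-carbon ketobutyryl-ACP, and each
# later cycle adds 2 carbons from malonyl-ACP, releasing one CO2).
def condensation_cycles(target_carbons):
    chain, cycles = 4, 1
    while chain < target_carbons:
        chain += 2
        cycles += 1
    return cycles

print(condensation_cycles(16), condensation_cycles(18))   # 7 and 8 cycles for C16 and C18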
Other lipids are derived from the methyl-erythritol phosphate (MEP) pathway and consist of gibberellins, sterols, abscisic acid, phytol, and innumerable secondary metabolites.
Location
Distribution in a plant
Not all cells in a multicellular plant contain chloroplasts. All green parts of a plant contain chloroplasts, since the green color comes from chlorophyll. The plant cells which contain chloroplasts are usually parenchyma cells, though chloroplasts can also be found in collenchyma tissue. A plant cell which contains chloroplasts is known as a chlorenchyma cell. A typical chlorenchyma cell of a land plant contains about 10 to 100 chloroplasts.
In some plants such as cacti, chloroplasts are found in the stems, though in most plants, chloroplasts are concentrated in the leaves. One square millimeter of leaf tissue can contain half a million chloroplasts. Within a leaf, chloroplasts are mainly found in the mesophyll layers and the guard cells of stomata. Palisade mesophyll cells can contain 30–70 chloroplasts per cell, while stomatal guard cells contain only around 8–15 per cell, as well as much less chlorophyll. Chloroplasts can also be found in the bundle sheath cells of a leaf, especially in C4 plants, which carry out the Calvin cycle in their bundle sheath cells. They are often absent from the epidermis of a leaf.
Cellular location
Chloroplast movement
The chloroplasts of plant and algal cells can orient themselves to best suit the available light. In low-light conditions, they will spread out in a sheet—maximizing the surface area to absorb light. Under intense light, they will seek shelter by aligning in vertical columns along the plant cell's cell wall or turning sideways so that light strikes them edge-on. This reduces exposure and protects them from photooxidative damage. This ability to distribute chloroplasts so that they can take shelter behind each other or spread out may be the reason why land plants evolved to have many small chloroplasts instead of a few big ones.
Chloroplast movement is considered one of the most closely regulated stimulus-response systems that can be found in plants. Mitochondria have also been observed to follow chloroplasts as they move.
In higher plants, chloroplast movement is run by phototropins, blue light photoreceptors also responsible for plant phototropism. In some algae, mosses, ferns, and flowering plants, chloroplast movement is influenced by red light in addition to blue light, though very long red wavelengths inhibit movement rather than speeding it up. Blue light generally causes chloroplasts to seek shelter, while red light draws them out to maximize light absorption.
Studies of Vallisneria gigantea, an aquatic flowering plant, have shown that chloroplasts can get moving within five minutes of light exposure, though they don't initially show any net directionality. They may move along microfilament tracks, and the fact that the microfilament mesh changes shape to form a honeycomb structure surrounding the chloroplasts after they have moved suggests that microfilaments may help to anchor chloroplasts in place.
Differentiation, replication, and inheritance
Chloroplasts are a special type of plant cell organelle called a plastid, though the two terms are sometimes used interchangeably. There are many other types of plastids, which carry out various functions. All chloroplasts in a plant are descended from undifferentiated proplastids found in the zygote, or fertilized egg. Proplastids are commonly found in an adult plant's apical meristems. Chloroplasts do not normally develop from proplastids in root tip meristems—instead, the formation of starch-storing amyloplasts is more common.
In shoots, proplastids from shoot apical meristems can gradually develop into chloroplasts in photosynthetic leaf tissues as the leaf matures, if exposed to the required light. This process involves invaginations of the inner plastid membrane, forming sheets of membrane that project into the internal stroma. These membrane sheets then fold to form thylakoids and grana.
If angiosperm shoots are not exposed to the required light for chloroplast formation, proplastids may develop into an etioplast stage before becoming chloroplasts. An etioplast is a plastid that lacks chlorophyll, and has inner membrane invaginations that form a lattice of tubes in their stroma, called a prolamellar body. While etioplasts lack chlorophyll, they have a yellow chlorophyll precursor stocked. Within a few minutes of light exposure, the prolamellar body begins to reorganize into stacks of thylakoids, and chlorophyll starts to be produced. This process, where the etioplast becomes a chloroplast, takes several hours. Gymnosperms do not require light to form chloroplasts.
Light, however, does not guarantee that a proplastid will develop into a chloroplast. Whether a proplastid develops into a chloroplast or some other kind of plastid is mostly controlled by the nucleus and is largely influenced by the kind of cell it resides in.
Plastid interconversion
Plastid differentiation is not permanent; in fact, many interconversions are possible. Chloroplasts may be converted to chromoplasts, which are pigment-filled plastids responsible for the bright colors seen in flowers and ripe fruit. Starch-storing amyloplasts can also be converted to chromoplasts, and it is possible for proplastids to develop straight into chromoplasts. Chromoplasts and amyloplasts can also become chloroplasts, as happens when a carrot or a potato is illuminated. If a plant is injured, or something else causes a plant cell to revert to a meristematic state, chloroplasts and other plastids can turn back into proplastids. The chloroplast, amyloplast, chromoplast, and proplastid states are not absolute—intermediate forms are common.
Division
Most chloroplasts in a photosynthetic cell do not develop directly from proplastids or etioplasts. In fact, a typical shoot meristematic plant cell contains only 7–20 proplastids. These proplastids differentiate into chloroplasts, which divide to create the 30–70 chloroplasts found in a mature photosynthetic plant cell. If the cell divides, chloroplast division provides the additional chloroplasts to partition between the two daughter cells.
In single-celled algae, chloroplast division is the only way new chloroplasts are formed. There is no proplastid differentiation—when an algal cell divides, its chloroplast divides along with it, and each daughter cell receives a mature chloroplast.
Almost all chloroplasts in a cell divide, rather than a small group of rapidly dividing chloroplasts. Chloroplasts have no definite S-phase—their DNA replication is not synchronized or limited to that of their host cells.
Much of what we know about chloroplast division comes from studying organisms like Arabidopsis and the red alga Cyanidioschyzon merolæ.
The division process starts when the proteins FtsZ1 and FtsZ2 assemble into filaments, and with the help of a protein ARC6, form a structure called a Z-ring within the chloroplast's stroma. The Min system manages the placement of the Z-ring, ensuring that the chloroplast is cleaved more or less evenly. The protein MinD prevents FtsZ from linking up and forming filaments. Another protein ARC3 may also be involved, but it is not very well understood. These proteins are active at the poles of the chloroplast, preventing Z-ring formation there, but near the center of the chloroplast, MinE inhibits them, allowing the Z-ring to form.
Next, the two plastid-dividing rings, or PD rings form. The inner plastid-dividing ring is located in the inner side of the chloroplast's inner membrane, and is formed first. The outer plastid-dividing ring is found wrapped around the outer chloroplast membrane. It consists of filaments about 5 nanometers across, arranged in rows 6.4 nanometers apart, and shrinks to squeeze the chloroplast. This is when chloroplast constriction begins. In a few species like Cyanidioschyzon merolæ, chloroplasts have a third plastid-dividing ring located in the chloroplast's intermembrane space.
Late into the constriction phase, dynamin proteins assemble around the outer plastid-dividing ring, helping provide force to squeeze the chloroplast. Meanwhile, the Z-ring and the inner plastid-dividing ring break down. During this stage, the many chloroplast DNA plasmids floating around in the stroma are partitioned and distributed to the two forming daughter chloroplasts.
Later, the dynamins migrate under the outer plastid dividing ring, into direct contact with the chloroplast's outer membrane, to cleave the chloroplast in two daughter chloroplasts.
A remnant of the outer plastid dividing ring remains floating between the two daughter chloroplasts, and a remnant of the dynamin ring remains attached to one of the daughter chloroplasts.
Of the five or six rings involved in chloroplast division, only the outer plastid-dividing ring is present for the entire constriction and division phase—while the Z-ring forms first, constriction does not begin until the outer plastid-dividing ring forms.
Regulation
In species of algae that contain a single chloroplast, regulation of chloroplast division is extremely important to ensure that each daughter cell receives a chloroplast—chloroplasts can't be made from scratch. In organisms like plants, whose cells contain multiple chloroplasts, coordination is looser and less important. It is likely that chloroplast and cell division are somewhat synchronized, though the mechanisms for it are mostly unknown.
Light has been shown to be a requirement for chloroplast division. Chloroplasts can grow and progress through some of the constriction stages under poor quality green light, but are slow to complete division—they require exposure to bright white light to complete division. Spinach leaves grown under green light have been observed to contain many large dumbbell-shaped chloroplasts. Exposure to white light can stimulate these chloroplasts to divide and reduce the population of dumbbell-shaped chloroplasts.
Chloroplast inheritance
Like mitochondria, chloroplasts are usually inherited from a single parent. Biparental chloroplast inheritance—where plastid genes are inherited from both parent plants—occurs in very low levels in some flowering plants.
Many mechanisms prevent biparental chloroplast DNA inheritance, including selective destruction of chloroplasts or their genes within the gamete or zygote, and chloroplasts from one parent being excluded from the embryo. Parental chloroplasts can be sorted so that only one type is present in each offspring.
Gymnosperms, such as pine trees, mostly pass on chloroplasts paternally, while flowering plants often inherit chloroplasts maternally. Flowering plants were once thought to only inherit chloroplasts maternally. However, there are now many documented cases of angiosperms inheriting chloroplasts paternally.
Angiosperms, which pass on chloroplasts maternally, have many ways to prevent paternal inheritance. Most of them produce sperm cells that do not contain any plastids. There are many other documented mechanisms that prevent paternal inheritance in these flowering plants, such as different rates of chloroplast replication within the embryo.
Among angiosperms, paternal chloroplast inheritance is observed more often in hybrids than in offspring from parents of the same species. This suggests that incompatible hybrid genes might interfere with the mechanisms that prevent paternal inheritance.
Transplastomic plants
Recently, chloroplasts have attracted the attention of developers of genetically modified crops. Since, in most flowering plants, chloroplasts are not inherited from the male parent, transgenes in these plastids cannot be disseminated by pollen. This makes plastid transformation a valuable tool for the creation and cultivation of genetically modified plants that are biologically contained, thus posing significantly lower environmental risks. This biological containment strategy is therefore suitable for establishing the coexistence of conventional and organic agriculture. While the reliability of this mechanism has not yet been studied for all relevant crop species, recent results in tobacco plants are promising, showing a failed containment rate of transplastomic plants at 3 in 1,000,000.
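For a sense of scale, the quoted tobacco figure (about 3 containment failures per 1,000,000 transplastomic plants) can be converted into a rough probability of seeing at least one failure in a planting of a given size. The Python sketch below assumes the events are independent, and the field sizes are arbitrary examples rather than figures from any study.

# Probability of at least one containment failure among n plants,
# assuming independence and the quoted per-plant rate.
p = 3 / 1_000_000
for n in (1_000, 100_000, 1_000_000):
    p_any = 1 - (1 - p) ** n
    print(f"{n:>9} plants: P(at least one failure) = {p_any:.4f}")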
| Biology and health sciences | Plant cells | null |
6359 | https://en.wikipedia.org/wiki/Crux | Crux | Crux () is a constellation of the southern sky that is centred on four bright stars in a cross-shaped asterism commonly known as the Southern Cross. It lies on the southern end of the Milky Way's visible band. The name Crux is Latin for cross. Even though it is the smallest of all 88 modern constellations, Crux is among the most easily distinguished as its four main stars each have an apparent visual magnitude brighter than +2.8. It has attained a high level of cultural significance in many Southern Hemisphere states and nations.
Blue-white α Crucis (Acrux) is the most southerly member of the constellation and, at magnitude 0.8, the brightest. The three other stars of the cross appear clockwise and in order of lessening magnitude: β Crucis (Mimosa), γ Crucis (Gacrux), and δ Crucis (Imai). ε Crucis (Ginan) also lies within the cross asterism. Many of these brighter stars are members of the Scorpius–Centaurus association, a large but loose group of hot blue-white stars that appear to share common origins and motion across the southern Milky Way.
Crux contains four Cepheid variables, each visible to the naked eye under optimum conditions. Crux also contains the bright and colourful open cluster known as the Jewel Box (NGC 4755) on its eastern border. Nearby to the southeast is a large dark nebula spanning 7° by 5° known as the Coalsack Nebula, portions of which are mapped in the neighbouring constellations of Centaurus and Musca.
History
The bright stars in Crux were known to the Ancient Greeks, where Ptolemy regarded them as part of the constellation Centaurus. They were entirely visible as far north as Britain in the fourth millennium BC. However, the precession of the equinoxes gradually lowered the stars below the European horizon, and they were eventually forgotten by the inhabitants of northern latitudes. By 400 AD, the stars in the constellation now called Crux never rose above the horizon throughout most of Europe. Dante may have known about the constellation in the 14th century, as he describes an asterism of four bright stars in the southern sky in his Divine Comedy. His description, however, may be allegorical, and the similarity to the constellation a coincidence.
The 15th century Venetian navigator Alvise Cadamosto made note of what was probably the Southern Cross on exiting the Gambia River in 1455, calling it the carro dell'ostro ("southern chariot"). However, Cadamosto's accompanying diagram was inaccurate. Historians generally credit João Faras for being the first European to depict it correctly. Faras sketched and described the constellation (calling it "las guardas") in a letter written on the beaches of Brazil on 1 May 1500 to the Portuguese monarch.
Explorer Amerigo Vespucci seems to have observed not only the Southern Cross but also the neighboring Coalsack Nebula on his second voyage in 1501–1502.
Another early modern description clearly describing Crux as a separate constellation is attributed to Andrea Corsali, an Italian navigator who from 1515 to 1517 sailed to China and the East Indies in an expedition sponsored by King Manuel I. In 1516, Corsali wrote a letter to the monarch describing his observations of the southern sky, which included a rather crude map of the stars around the south celestial pole including the Southern Cross and the two Magellanic Clouds seen in an external orientation, as on a globe.
Emery Molyneux and Petrus Plancius have also been cited as the first uranographers (sky mappers) to distinguish Crux as a separate constellation; their representations date from 1592, the former depicting it on his celestial globe and the latter in one of the small celestial maps on his large wall map. Both authors, however, depended on unreliable sources and placed Crux in the wrong position. Crux was first shown in its correct position on the celestial globes of Petrus Plancius and Jodocus Hondius in 1598 and 1600. Its stars were first catalogued separately from Centaurus by Frederick de Houtman in 1603. The constellation was later adopted by Jakob Bartsch in 1624 and Augustin Royer in 1679. Royer is sometimes wrongly cited as initially distinguishing Crux.
Characteristics
Crux is bordered by the constellations Centaurus (which surrounds it on three sides) on the east, north and west, and Musca to the south. Covering 68 square degrees and 0.165% of the night sky, it is the smallest of the 88 constellations. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Cru". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of four segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −55.68° and −64.70°. The whole of the constellation is visible for at least part of the year to observers south of the 25th parallel north.
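The quoted area share can be checked with a short calculation, since the whole celestial sphere covers 4π steradians, or about 41,253 square degrees.

# Fraction of the sky covered by Crux (illustrative check of the figure above).
import math
whole_sky_deg2 = 4 * math.pi * (180 / math.pi) ** 2   # about 41,253 square degrees
crux_deg2 = 68
print(f"{crux_deg2 / whole_sky_deg2:.3%}")             # about 0.165%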
In tropical regions Crux can be seen in the sky from April to June. Crux is exactly opposite to Cassiopeia on the celestial sphere, and therefore it cannot appear in the sky with the latter at the same time. In this era, south of Cape Town, Adelaide, and Buenos Aires (the 34th parallel south), Crux is circumpolar and thus always appears in the sky.
Crux is sometimes confused with the nearby False Cross asterism by stargazers. The False Cross consists of stars in Carina and Vela, is larger and dimmer, does not have a fifth star, and lacks the two prominent nearby "Pointer Stars". Between the two is the even larger and dimmer Diamond Cross.
Visibility
Crux is easily visible from the southern hemisphere; south of the 35th parallel south it is circumpolar and can be seen at practically any time of year. It is also visible near the horizon from tropical latitudes of the northern hemisphere for a few hours every night during the northern winter and spring. For instance, it is visible from Cancun or any other place at latitude 25° N or less at around 10 pm at the end of April. The asterism has five main stars.
Due to precession, Crux will move closer to the South Pole in the next millennia, up to 67 degrees south declination for the middle of the constellation. However, by the year 14,000, Crux will be visible for most parts of Europe and the continental United States. Its visibility will extend to North Europe by the year 18,000 when it will be less than 30 degrees south declination.
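These visibility claims follow from a simple horizon rule: ignoring atmospheric refraction and extinction, a star of declination dec (negative for southern declinations) just reaches the horizon for an observer at latitude 90° + dec. The Python sketch below applies the rule to approximate declinations; the function name and the sample values are illustrative.

# Northernmost latitude from which a star of a given declination can rise
# (rule of thumb; ignores refraction and extinction).
def northern_visibility_limit(dec_deg):
    return 90 + dec_deg

print(northern_visibility_limit(-63))   # about 27 deg N, roughly Acrux today
print(northern_visibility_limit(-30))   # 60 deg N, matching the year-18,000 figure above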
Use in navigation
In the Southern Hemisphere, the Southern Cross is frequently used for navigation in much the same way that Polaris is used in the Northern Hemisphere. Projecting a line from γ to α Crucis (the foot of the crucifix) approximately times beyond gives a point close to the Southern Celestial Pole, which is also, coincidentally, where that line intersects a perpendicular drawn southwards from the east–west axis of Alpha Centauri to Beta Centauri, stars at a similar declination to Crux, spanning a similar width to the cross, but brighter. Argentine gauchos are documented as using Crux for night orientation in the Pampas and Patagonia.
Alpha and Beta Centauri are of similar declinations (thus distance from the pole) and are often referred as the "Southern Pointers" or just "The Pointers", allowing people to easily identify the Southern Cross, the constellation of Crux. Very few bright stars lie between Crux and the pole itself, although the constellation Musca is fairly easily recognised immediately south of Crux.
Bright stars
There are 92 stars brighter than apparent magnitude +2.5 as viewed from Earth. Three of these lie in Crux, giving it the highest density of such bright stars of any constellation: they make up 3.26% of the 92, roughly 19.2 times the 0.17% expected from a homogeneous distribution of bright stars across the sky, since Crux covers only about 0.17% of the sky.
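The arithmetic behind these figures is straightforward; the short Python sketch below reproduces them using the rounded 0.17% sky-area share quoted above.

# Over-representation of bright stars in Crux (illustrative check).
bright_stars_total, bright_stars_in_crux = 92, 3
share = bright_stars_in_crux / bright_stars_total   # about 3.26%
expected_share = 0.0017                              # Crux's rounded share of the sky
print(f"{share:.2%} of the bright stars, {share / expected_share:.1f}x the expected share")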
Features
Stars
Within the constellation's borders, there are 49 stars brighter than or equal to apparent magnitude 6.5. The four main stars that form the asterism are Alpha, Beta, Gamma, and Delta Crucis.
α Crucis or Acrux is a triple star 321 light-years from Earth. A rich blue in colour, with a visual magnitude 0.8 to the unaided eye, it has two close components of a similar magnitude, 1.3 and 1.8 respectively, plus another much wider component of the 5th magnitude. The two close components are resolved in a small amateur telescope and the wide component is readily visible in a pair of binoculars.
β Crucis or Mimosa is a blue-hued giant star of magnitude 1.3, and lies 353 light-years from Earth. It is a Beta Cephei-type variable star with a variation of less than 0.1 magnitudes.
γ Crucis or Gacrux is an optical double star. The primary is a red-hued giant star of magnitude 1.6, 88 light-years from Earth, and is one of the closest red giants to Earth. Its secondary component is magnitude 6.5, 264 light-years from Earth.
δ Crucis (Imai) is a magnitude 2.8 blue-white hued star about 345 light-years from Earth. Like Mimosa, it is a Beta Cephei variable.
There is also a fifth star that is often included with the Southern Cross.
ε Crucis (Ginan) is an orange-hued giant star of magnitude 3.6, 228 light-years from Earth.
There are several other naked-eye stars within the borders of Crux, especially:
Iota Crucis is a visual double star 125 light-years from Earth. The primary is an orange-hued giant of magnitude 4.6 and the secondary at magnitude 9.5.
Mu Crucis or Mu1,2 Crucis is a wide double star where the components are about 370 light-years from Earth. Equally blue-white in colour, the components are magnitude 4.0 and 5.1 respectively, and are easily divisible in small amateur telescopes or large binoculars.
Scorpius–Centaurus association
Unusually, a total of 15 of the 23 brightest stars in Crux are spectrally blue-white B-type stars. Among the five main bright stars, Delta, and probably Alpha and Beta, are likely co-moving B-type members of the Scorpius–Centaurus association, the nearest OB association to the Sun. They are among the highest-mass stellar members of the Lower Centaurus–Crux subgroup of the association, with ages of roughly 10 to 20 million years. Other members include the blue-white stars Zeta, Lambda and both the components of the visual double star, Mu.
Variable stars
Crux contains many variable stars. It boasts four Cepheid variables that may all reach naked eye visibility.
BG Crucis ranges from magnitude 5.34 to 5.58 over 3.3428 days,
T Crucis ranges from 6.32 to 6.83 over 6.73331 days,
S Crucis ranges from 6.22 to 6.92 over 4.68997 days,
R Crucis ranges from 6.4 to 7.23 over 5.82575 days.
Other well-studied variable stars include:
Lambda Crucis and Theta2 Crucis, which are both Beta Cephei-type variable stars.
BH Crucis, also known as Welch's Red Variable, is a Mira variable that ranges from magnitude 6.6 to 9.8 over 530 days. Discovered in October 1969, it has become redder and brighter (mean magnitude changing from 8.047 to 7.762) and its period lengthened by 25% in the first thirty years since its discovery.
Exoplanet host stars in Crux
The star HD 106906 has been found to have a planet—HD 106906 b—that has one of the widest orbits of any currently known planetary-mass companions.
Objects beyond the Local Arm
Crux is backlit by the multitude of stars of the Scutum-Crux Arm (more commonly called the Scutum-Centaurus Arm) of the Milky Way. This is the main inner arm in the local radial quarter of the galaxy. Part-obscuring this is:
The Coalsack Nebula lies partially within Crux and partly in the neighboring constellations of Musca and Centaurus. It is the most prominent dark nebula in the skies, and is easily visible to the naked eye as a prominent dark patch in the southern Milky Way. It can be found 6.5° southeast from the centre of Crux or 3° east from α Crucis. Its large area covers about 7° by 5°, and is away from Earth.
A key feature of the Scutum-Crux Arm is:
The Jewel Box, κ Crucis Cluster or NGC 4755, is a small but bright open cluster that appears as a fuzzy star to the naked eye and is very close to the easternmost boundary of Crux: about 1° southeast of Beta Crucis. The combined or total magnitude is 4.2 and it lies at a distance of from Earth. The cluster was given its name by John Herschel, based on the range of colours visible throughout the star cluster in his telescope. About seven million years old, it is one of the youngest open clusters in the Milky Way, and it appears to have the shape of a letter 'A'. The Jewel Box Cluster is classified as Shapley class 'g' and Trumpler class 'I 3 r -' cluster; it is a very rich, centrally-concentrated cluster detached from the surrounding star field. It has more than 100 stars that range significantly in brightness. The brightest cluster stars are mostly blue supergiants, though the cluster contains at least one red supergiant. Kappa Crucis is a true member of the cluster that bears its name, and is one of the brighter stars at magnitude 5.9.
Cultural significance
The most prominent feature of Crux is the distinctive asterism known as the Southern Cross. It has great significance in the cultures of the southern hemisphere, particularly of Australia, Brazil, Chile and New Zealand.
Flags and symbols
Several southern countries and organisations have traditionally used Crux as a national or distinctive symbol. The four or five brightest stars of Crux appear, heraldically standardised in various ways, on the flags of Australia, Brazil, New Zealand, Papua New Guinea and Samoa. They also appear on the flags of the Australian state of Victoria, the Australian Capital Territory, the Northern Territory, as well as the flag of Magallanes Region of Chile, the flag of Londrina (Brazil) and several Argentine provincial flags and emblems (for example, Tierra del Fuego and Santa Cruz). The flag of the Mercosur trading zone displays the four brightest stars. Crux also appears on the Brazilian coat of arms and on the cover of Brazilian passports.
Five stars appear in the logo of the Brazilian football team Cruzeiro Esporte Clube and in the insignia of the Order of the Southern Cross, and the cross has featured as name of the Brazilian currency (the cruzeiro from 1942 to 1986 and again from 1990 to 1994). All coins of the (1998) series of the Brazilian real display the constellation.
Songs and literature reference the Southern Cross, including the Argentine epic poem Martín Fierro. The Argentinian singer Charly García says that he is "from the Southern Cross" in the song "No voy en tren".
The Cross gets a mention in the lyrics of the Brazilian National Anthem (1909): "A imagem do Cruzeiro resplandece" ("the image of the Cross shines").
The Southern Cross is mentioned in the Australian National Anthem, "Beneath our radiant Southern Cross we'll toil with hearts and hands"
The Southern Cross features in the coat of arms of William Birdwood, 1st Baron Birdwood, the British officer who commanded the Australian and New Zealand Army Corps during the Gallipoli Campaign of the First World War.
The Southern Cross is also mentioned in the Samoan National Anthem.
"Vaai 'i na fetu o lo'u a agiagia ai: Le faailoga lea o Iesu, na maliu ai mo Samoa." ("Look at those stars that are waving on it: This is the symbol of Jesus, who died on it for Samoa.")
The 1952-53 NBC Television Series Victory At Sea contained a musical number entitled "Beneath the Southern Cross".
"Southern Cross" is a single released by Crosby, Stills and Nash in 1981. It reached #18 on Billboard Hot 100 in late 1982.
"The Sign of the Southern Cross" is a song released by Black Sabbath in 1981. The song was released on the album "Mob Rules".
The Order of the Southern Cross is a Brazilian order of chivalry awarded to "those who have rendered significant service to the Brazilian nation".
In "O Sweet Saint Martin's Land", the lyrics mention the Southern Cross: Thy Southern Cross the night.
A stylized version of Crux appears on the Australian Eureka Flag. The constellation was also used on the dark blue, shield-like patch worn by personnel of the U.S. Army's Americal Division, which was organized in the Southern Hemisphere, on the island of New Caledonia, and also on the blue diamond of the U.S. 1st Marine Division, which fought on the Southern Hemisphere islands of Guadalcanal and New Britain.
The Petersflagge flag of the German East Africa Company of 1885–1920, which included a constellation of five white five-pointed Crux "stars" on a red ground, later served as the model for symbolism associated with generic German colonial-oriented organisations: the Reichskolonialbund of 1936–1943 and the (1956/1983 to the present).
Southern Cross station is a major rail terminal in Melbourne, Australia.
The Personal Ordinariate of Our Lady of the Southern Cross is a personal ordinariate of the Roman Catholic Church primarily within the territory of the Australian Catholic Bishops Conference for groups of Anglicans who desire full communion with the Catholic Church in Australia and Asia.
The Knights of the Southern Cross (KSC) is a Catholic fraternal order throughout Australia.
Various cultures
In India, there is a story related to the creation of Trishanku Swarga (त्रिशंकु), meaning Cross (Crux), created by Sage Vishwamitra.
In Chinese, (), meaning Cross, refers to an asterism consisting of γ Crucis, α Crucis, β Crucis and δ Crucis.
In Australian Aboriginal astronomy, Crux and the Coalsack mark the head of the 'Emu in the Sky' (which is seen in the dark spaces rather than in the patterns of stars) in several Aboriginal cultures, while Crux itself is said to be a possum sitting in a tree (Boorong people of the Wimmera region of northwestern Victoria), a representation of the sky deity Mirrabooka (Quandamooka people of Stradbroke Island), a stingray (Yolngu people of Arnhem Land), or an eagle (Kaurna people of the Adelaide Plains). Two Pacific constellations also included Gamma Centauri. Torres Strait Islanders in modern-day Australia saw Gamma Centauri as the handle and the four stars as the left hand of Tagai, and the stars of Musca as the trident of the fishing spear he is holding. In Aranda traditions of central Australia, the four Cross stars are the talon of an eagle, and Gamma Centauri is its leg.
Various peoples in the East Indies and Brazil viewed the four main stars as the body of a ray. In both Indonesia and Malaysia, it is known as Bintang Pari and Buruj Pari, respectively ("ray stars"). This aquatic theme is also shared by an archaic name of the constellation in Vietnam, where it was once known as sao Cá Liệt (the ponyfish star).
Among Filipino people, the Southern Cross has various names pertaining to tops, including kasing (Visayan languages), paglong (Bikol), and pasil (Tagalog). It is also called butiti (puffer fish) in Waray.
The Javanese people of Indonesia called this constellation Gubug pèncèng ("raking hut") or lumbung ("the granary"), because the shape of the constellation was like that of a raking hut.
The Southern Cross (α, β, γ and δ Crucis) together with μ Crucis is one of the asterisms used by Bugis sailors for navigation, called bintoéng bola képpang, meaning "incomplete house star".
The Māori name for the Southern Cross is Māhutonga and it is thought of as the anchor (Te Punga) of Tama-rereti's waka (the Milky Way), while the Pointers are its rope. In Tonga it is known as Toloa ("duck"); it is depicted as a duck flying south, with one of his wings (δ Crucis) wounded because Ongo tangata ("two men", α and β Centauri) threw a stone at it. The Coalsack is known as Humu (the "triggerfish"), because of its shape. In Samoa the constellation is called Sumu ("triggerfish") because of its rhomboid shape, while α and β Centauri are called Luatagata (Two Men), just as they are in Tonga. The peoples of the Solomon Islands saw several figures in the Southern Cross. These included a knee protector and a net used to catch Palolo worms. Neighboring peoples in the Marshall Islands saw these stars as a fish. Peninsular Malays also see the likeness of a fish in the Crux, particularly the Scomberomorus or its local name Tohok.
In Mapudungun, the language of Patagonian Mapuches, the name of the Southern Cross is Melipal, which means "four stars". In Quechua, the language of the Inca civilization, Crux is known as "Chakana", which means literally "stair" (chaka, bridge, link; hanan, high, above), but carries a deep symbolism within Quechua mysticism. Alpha and Beta Crucis make up one foot of the Great Rhea, a constellation encompassing Centaurus and Circinus along with the two bright stars. The Great Rhea was a constellation of the Bororo of Brazil. The Mocoví people of Argentina also saw a rhea including the stars of Crux. Their rhea is attacked by two dogs, represented by bright stars in Centaurus and Circinus. The dogs' heads are marked by Alpha and Beta Centauri. The rhea's body is marked by the four main stars of Crux, while its head is Gamma Centauri and its feet are the bright stars of Musca. The Bakairi people of Brazil had a sprawling constellation representing a bird snare. It included the bright stars of Crux, the southern part of Centaurus, Circinus, at least one star in Lupus, the bright stars of Musca, Beta and the optical double star Delta1,2 Chamaeleontis, and some of the stars of Volans and Mensa. The Kalapalo people of Mato Grosso state in Brazil saw the stars of Crux as Aganagi angry bees having emerged from the Coalsack, which they saw as the beehive.
Among Tuaregs, the four most visible stars of Crux are considered iggaren, i.e. four Maerua crassifolia trees. The Tswana people of Botswana saw the constellation as Dithutlwa, two giraffes – Alpha and Beta Crucis forming a male, and Gamma and Delta forming the female.
| Physical sciences | Constellations | null |
6362 | https://en.wikipedia.org/wiki/Cetus | Cetus | Cetus () is a constellation, sometimes called 'the whale' in English. The Cetus was a sea monster in Greek mythology which both Perseus and Heracles needed to slay. Cetus is in the region of the sky that contains other water-related constellations: Aquarius, Pisces and Eridanus.
Features
Ecliptic
Cetus is not among the 12 true zodiac constellations in the J2000 epoch, nor part of the classical 12-part zodiac. However, the ecliptic passes less than 0.25° from one of its corners. Thus the moon and planets briefly enter Cetus (occulting any of its stars as foreground objects) in 50% of their successive orbits, and the southern part of the sun appears in Cetus for about one day each year. Many belt asteroids, which have a slightly greater inclination to the ecliptic than the moon and planets, spend longer phases occulting the north-western part of Cetus.
As seen from Mars, the ecliptic (the apparent plane of the sun, which is almost the same as the average plane of the planets) passes into Cetus.
Stars
Mira ("wonderful", named by Bayer: Omicron Ceti, a star of the neck of the asterism) was the first variable star to be discovered and the prototype of its class, Mira variables. Over a period of 332 days, it reaches a maximum apparent magnitude of 3 - visible to the naked eye - and dips to a minimum magnitude of 10, invisible to the unaided eye. Its seeming appearance and disappearance gave it its name. Mira pulsates with a minimum size of 400 solar diameters and a maximum size of 500 solar diameters. 420 light-years from Earth, it was discovered by David Fabricius in 1596.
α Ceti, traditionally called Menkar ("the nose"), is a red-hued giant star of magnitude 2.5, 220 light-years from Earth. It is a wide double star; the secondary is 93 Ceti, a blue-white hued star of magnitude 5.6, 440 light-years away. β Ceti, also called Deneb Kaitos and Diphda is the brightest star in Cetus. It is an orange-hued giant star of magnitude 2.0, 96 light-years from Earth. The traditional name "Deneb Kaitos" means "the whale's tail". γ Ceti, Kaffaljidhma ("head of the whale") is a very close double star. The primary is a blue-hued star of magnitude 3.5, 82 light-years from Earth, and the secondary is an F-type star of magnitude 6.6. Tau Ceti is noted for being a near Sun-like star at a distance of 11.9 light-years. It is a yellow-hued main-sequence star of magnitude 3.5.
AA Ceti is a triple star system; the brightest member has a magnitude of 6.2. The primary and secondary are separated by 8.4 arcseconds at an angle of 304 degrees. The tertiary is not visible in telescopes. AA Ceti is an eclipsing variable star; the tertiary star passes in front of the primary and causes the system's apparent magnitude to decrease by 0.5 magnitudes. UV Ceti is an unusual binary variable star. At 8.7 light-years from Earth, the system consists of two red dwarfs, both of magnitude 13. One of the stars is a flare star; flare stars are prone to sudden, random outbursts that last several minutes, and these increase the pair's apparent brightness significantly, to as bright as magnitude 7.
Deep-sky objects
Cetus lies far from the galactic plane, so that many distant galaxies are visible, unobscured by dust from the Milky Way. Of these, the brightest is Messier 77 (NGC 1068), a 9th magnitude spiral galaxy near Delta Ceti. It appears face-on and has a clearly visible nucleus of magnitude 10. About 50 million light-years from Earth, M77 is also a Seyfert galaxy and thus a bright object in the radio spectrum. Recently, the galaxy cluster JKCS 041 was confirmed to be the most distant cluster of galaxies yet discovered. The Pisces–Cetus Supercluster Complex is a galaxy filament that is one of the largest known structures in the observable Universe; it contains the Virgo Supercluster, which in turn contains the Local Group, the galaxy group that includes the Milky Way.
The massive cD galaxy Holmberg 15A is also found in Cetus; as are the spiral galaxy NGC 1042, the elliptical galaxy NGC 1052 and the ultra-diffuse galaxy NGC 1052-DF2.
IC 1613 (Caldwell 51) is an irregular dwarf galaxy near the star 26 Ceti and is a member of the Local Group.
NGC 246 (Caldwell 56), also called the "Cetus Ring", is a planetary nebula with a magnitude of 8.0 at 1600 light-years from Earth. Among some amateur astronomers, NGC 246 has garnered the nickname "Pac-Man Nebula" because of the arrangement of its central stars and the surrounding star field.
The Wolf–Lundmark–Melotte (WLM) is a barred irregular galaxy discovered in 1909 by Max Wolf, located on the outer edges of the Local Group. The discovery of the nature of the galaxy was accredited to Knut Lundmark and Philibert Jacques Melotte in 1926.
UGC 1646, a spiral galaxy, also lies within the borders of the constellation. It is about 150 million light-years away and can be seen near the star TYC 43-234-1.
History and mythology
Cetus may have originally been associated with a whale, which would have had mythic status amongst Mesopotamian cultures. It is often now called the Whale, though it is most strongly associated with Cetus the sea-monster, who was slain by Perseus as he saved the princess Andromeda from Poseidon's wrath. It is in the middle of "The Sea" recognised by mythologists, a set of water-associated constellations, its other members being Eridanus, Pisces, Piscis Austrinus and Aquarius.
Cetus has been depicted in many ways throughout its history. In the 17th century, Cetus was depicted as a "dragon fish" by Johann Bayer. Both Willem Blaeu and Andreas Cellarius depicted Cetus as a whale-like creature in the same century. However, Cetus has also been variously depicted with animal heads attached to a piscine body.
In global astronomy
In Chinese astronomy, the stars of Cetus are found among two areas: the Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ) and the White Tiger of the West (西方白虎, Xī Fāng Bái Hǔ).
The Tukano and Kobeua people of the Amazon used the stars of Cetus to create a jaguar, representing the god of hurricanes and other violent storms. Lambda, Mu, Xi, Nu, Gamma, and Alpha Ceti represented its head; Omicron, Zeta, and Chi Ceti represented its body; Eta Eri, Tau Cet, and Upsilon Cet marked its legs and feet; and Theta, Eta, and Beta Ceti delineated its tail.
In Hawaii, the constellation was called Na Kuhi, and Mira (Omicron Ceti) may have been called Kane.
Namesakes
USS Cetus (AK-77) was a United States Navy Crater class cargo ship named after the constellation.
| Physical sciences | Other | Astronomy |
6363 | https://en.wikipedia.org/wiki/Carina%20%28constellation%29 | Carina (constellation) | Carina ( ) is a constellation in the southern sky. Its name is Latin for the keel of a ship, and it was the southern foundation of the larger constellation of Argo Navis (the ship Argo) until it was divided into three pieces, the other two being Puppis (the poop deck), and Vela (the sails of the ship).
History and mythology
Carina was once a part of Argo Navis, the great ship of the mythical Jason and the Argonauts who searched for the Golden Fleece. The constellation of Argo was introduced in ancient Greece. However, due to the massive size of Argo Navis and the sheer number of stars that required separate designation, Nicolas-Louis de Lacaille divided Argo into three sections in 1763, including Carina (the hull or keel). In the 19th century, these three became established as separate constellations, and were formally included in the list of 88 modern IAU constellations in 1930. Lacaille kept a single set of Greek letters for the whole of Argo, and separate sets of Latin letter designations for each of the three sections. Therefore, Carina has the α, β and ε, Vela has γ and δ, Puppis has ζ, and so on.
Notable features
Stars
Carina contains Canopus, a white-hued supergiant that is the second-brightest star in the night sky at magnitude −0.72. Alpha Carinae, as Canopus is formally designated, is 313 light-years from Earth. Its traditional name comes from the mythological Canopus, who was a navigator for Menelaus, king of Sparta.
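Given the apparent magnitude and distance quoted above, the distance-modulus relation M = m - 5*log10(d / 10 pc) gives a rough absolute magnitude for Canopus. The Python sketch below ignores interstellar extinction and uses the round figures from the text.

# Rough absolute magnitude of Canopus from apparent magnitude and distance.
import math
m, d_ly = -0.72, 313
d_pc = d_ly / 3.2616                       # light-years to parsecs
M = m - 5 * math.log10(d_pc / 10)
print(f"Canopus absolute magnitude = {M:.1f}")   # about -5.6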
There are several other stars above magnitude 3 in Carina. Beta Carinae, traditionally called Miaplacidus, is a blue-white-hued star of magnitude 1.7, 111 light-years from Earth. Epsilon Carinae is an orange-hued giant star similarly bright to Miaplacidus at magnitude 1.9; it is 630 light-years from Earth. Another fairly bright star is the blue-white-hued Theta Carinae; it is a magnitude 2.7 star 440 light-years from Earth. Theta Carinae is also the most prominent member of the cluster IC 2602. Iota Carinae is a white-hued supergiant star of magnitude 2.2, 690 light-years from Earth.
Eta Carinae is the most prominent variable star in Carina, with a mass of approximately 100 solar masses and 4 million times as bright as the Sun. It was first discovered to be unusual in 1677, when its magnitude suddenly rose to 4, attracting the attention of Edmond Halley. Eta Carinae is inside NGC 3372, commonly called the Carina Nebula. It had a long outburst in 1827, when it brightened to magnitude 1, only fading to magnitude 1.5 in 1828. Its most prominent outburst made Eta Carinae the equal of Sirius; it brightened to magnitude −1.5 in 1843. In the decades following 1843 it appeared relatively placid, having a magnitude between 6.5 and 7.9. However, in 1998, it brightened again, though only to magnitude 5.0, a far less drastic outburst. Eta Carinae is a binary star, with a companion that has a period of 5.5 years; the two stars are surrounded by the Homunculus Nebula, which is composed of gas that was ejected in 1843.
There are several less prominent variable stars in Carina. l Carinae is a Cepheid variable noted for its brightness; it is the brightest Cepheid that is variable to the unaided eye. It is a yellow-hued supergiant star with a minimum magnitude of 4.2 and a maximum magnitude of 3.3; it has a period of 35.5 days.
V382 Carinae is a yellow hypergiant, one of the rarest types of stars. It is a slow irregular variable, with a minimum magnitude of 4.05 and a maximum magnitude of 3.77. As a hypergiant, V382 Carinae is a luminous star, with 212,000 times more luminosity than the Sun and over 480 times the Sun's size.
Two bright Mira variable stars are in Carina: R Carinae and S Carinae; both stars are red giants. R Carinae has a minimum magnitude of 10.0 and a maximum magnitude of 4.0. Its period is 309 days and it is 416 light-years from Earth. S Carinae is similar, with a minimum magnitude of 10.0 and a maximum magnitude of 5.0. However, S Carinae has a shorter period—150 days, though it is much more distant at 1,300 light-years from Earth.
Carina is home to several double stars and binary stars. Upsilon Carinae is a binary star with two blue-white-hued giant components, 1,600 light-years from Earth. The primary is of magnitude 3.0 and the secondary is of magnitude 6.0; the two components are distinguishable in a small amateur telescope.
Two asterisms are prominent in Carina. The 'Diamond Cross' is composed of the stars Beta, Theta, Upsilon and Omega Carinae. The Diamond Cross is visible south of 20°N latitude, and is larger but fainter than the Southern Cross in Crux. Flanking the Diamond Cross is the False Cross, composed of four stars - two stars in Carina, Iota Carinae and Epsilon Carinae, and two stars in Vela, Kappa Velorum and Delta Velorum - and is often mistaken for the Southern Cross, causing errors in astronavigation.
Deep-sky objects
Carina is known for its namesake nebula, NGC 3372, discovered by French astronomer Nicolas-Louis de Lacaille in 1751, which contains several nebulae. The Carina Nebula overall is an extended emission nebula approximately 8,000 light-years away and 300 light-years wide that includes vast star-forming regions. It has an overall magnitude of 8.0 and an apparent diameter of over 2 degrees. Its central region is called the Keyhole, or the Keyhole Nebula. This was described in 1847 by John Herschel, and likened to a keyhole by Emma Converse in 1873. The Keyhole is about seven light-years wide and is composed mostly of ionized hydrogen, with two major star-forming regions. The Homunculus Nebula is a planetary nebula visible to the naked eye that is being ejected by the erratic luminous blue variable star Eta Carinae, the most massive visible star known. Eta Carinae is so massive that it has reached the theoretical upper limit for the mass of a star and is therefore unstable. It is known for its outbursts; in 1840 it briefly became one of the brightest stars in the sky due to a particularly massive outburst, which largely created the Homunculus Nebula. Because of this instability and history of outbursts, Eta Carinae is considered a prime supernova candidate for the next several hundred thousand years because it has reached the end of its estimated million-year life span.
NGC 2516 is an open cluster that is both quite large (approximately half a degree square) and bright, visible to the unaided eye. It is located 1,100 light-years from Earth and has approximately 80 stars, the brightest of which is a red giant star of magnitude 5.2. NGC 3114 is another open cluster approximately of the same size, though it is more distant at 3,000 light-years from Earth. It is more loose and dim than NGC 2516, as its brightest stars are only 6th magnitude. The most prominent open cluster in Carina is IC 2602, also called the "Southern Pleiades". It contains Theta Carinae, along with several other stars visible to the unaided eye. In total, the cluster possesses approximately 60 stars. The Southern Pleiades is particularly large for an open cluster, with a diameter of approximately one degree. Like IC 2602, NGC 3532 is visible to the unaided eye and is of comparable size. It possesses approximately 150 stars that are arranged in an unusual shape, approximating an ellipse with a dark central area. Several prominent orange giants are among the cluster's bright stars, of the 7th magnitude. Superimposed on the cluster is Chi Carinae, a yellow-white-hued star of magnitude 3.9, far more distant than NGC 3532.
Carina also contains the naked-eye globular cluster NGC 2808. Epsilon Carinae and Upsilon Carinae are double stars visible in small telescopes.
One noted galaxy cluster is 1E 0657-56, the Bullet Cluster. At a distance of 4 billion light-years (redshift 0.296), this galaxy cluster is named for the shock wave seen in the intracluster medium, which resembles the shock wave of a supersonic bullet. The visible bow shock is thought to be due to the smaller galaxy cluster moving through the intracluster medium at a speed of 3,000–4,000 kilometers per second relative to the larger cluster. Because this gravitational interaction has been ongoing for hundreds of millions of years, the smaller cluster is being destroyed and will eventually merge with the larger cluster.
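As a rough consistency check, the quoted distance of about 4 billion light-years follows from the redshift via the low-redshift Hubble relation d ≈ cz/H0. The sketch below assumes a Hubble constant of about 70 km/s/Mpc (not stated above) and ignores relativistic and cosmological corrections, which begin to matter at this redshift.

```python
# Rough Hubble-law distance for the Bullet Cluster from its redshift.
# Assumes H0 ~ 70 km/s/Mpc; the linear relation d ~ c*z/H0 is only approximate at z ~ 0.3.
C_KM_S = 299_792.458      # speed of light in km/s
H0 = 70.0                 # Hubble constant in km/s per Mpc (assumed value)
MPC_TO_MLY = 3.2616       # one megaparsec in millions of light-years

z = 0.296
distance_mpc = C_KM_S * z / H0
distance_gly = distance_mpc * MPC_TO_MLY / 1000.0

print(f"Distance ~ {distance_gly:.1f} billion light-years")  # ~4.1, close to the quoted 4 billion
```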
Meteors
Carina contains the radiant of the Eta Carinids meteor shower, which peaks around January 21 each year.
Equivalents
From China (especially northern China), the stars of Carina can barely be seen. The star Canopus (the south polar star in Chinese astronomy) was located by Chinese astronomers in the Vermilion Bird of the South (南方朱雀, Nán Fāng Zhū Què). The rest of the stars were first classified by Xu Guangqi during the Ming dynasty, based on the knowledge acquired from western star charts, and placed among The Southern Asterisms (近南極星區, Jìnnánjíxīngōu).
Polynesian peoples had no name for the constellation in particular, though they had many names for Canopus.
The Māori name Ariki ("High-born") and the Hawaiian Ke Alii-o-kona-i-ka-lewa ("The Chief of the southern expanse") both attest to the star's prominence in the southern sky, while the Māori Atutahi ("First-light" or "Single-light") and the Tuamotu Te Tau-rari and Marere-te-tavahi ("He who stands alone") refer to the star's solitary nature.
It was also called Kapae-poto ("Short horizon"), because it rarely sets from the vantage point of New Zealand, and Kauanga ("Solitary"), when it was the last star visible before sunrise.
Future
Carina lies in the southern sky near the south celestial pole, so it never sets (it is circumpolar) for most of the southern hemisphere. Due to the precession of Earth's axis, by the year 4700 the south celestial pole will lie in Carina. Three bright stars in Carina will come within 1 degree of the southern celestial pole and take turns as the southern pole star: Omega Carinae (magnitude 3.29) around the year 5600, Upsilon Carinae (magnitude 2.97) around 6700, and Iota Carinae (magnitude 2.21) around 7900. About 13,860 CE, the bright star Canopus (magnitude −0.7) will have a declination greater than −82°.
Namesakes
A United States Navy Crater-class cargo ship was named after the constellation.
The Toyota Carina was also named after it.
| Physical sciences | Other | Astronomy |
6364 | https://en.wikipedia.org/wiki/Camelopardalis | Camelopardalis | Camelopardalis is a large but faint constellation of the northern sky representing a giraffe. The constellation was introduced in 1612 or 1613 by Petrus Plancius. Some older astronomy books give Camelopardalus or Camelopardus as alternative forms of the name, but the version recognized by the International Astronomical Union matches the genitive form, seen suffixed to most of its key stars.
Etymology
First attested in English in 1785, the word camelopardalis comes from Latin, and it is the romanization of the Greek "καμηλοπάρδαλις" meaning "giraffe", from "κάμηλος" (kamēlos), "camel" + "πάρδαλις" (pardalis), "spotted", because it has a long neck like a camel and spots like a leopard.
Features
Stars
Although Camelopardalis is the 18th largest constellation, it is not a particularly bright one, as its brightest stars are only of the fourth magnitude. In fact, it contains only four stars brighter than magnitude 5.0.
α Cam is a blue-hued supergiant star of magnitude 4.3, over 6,000 light-years from Earth. It is one of the most distant stars easily visible with the naked eye.
β Cam is the brightest star in Camelopardalis with an apparent magnitude of 4.03. This star is a double star, with components of magnitudes 4.0 and 8.6. The primary is a yellow-hued supergiant 1000 light-years from Earth.
11 Cam is a star of magnitude 5.2, 650 light-years from Earth. Even at low magnification it appears very close to 12 Cam, a magnitude 6.1 star at about the same distance, but the two are not a true double star; their physical separation is considerable.
Σ 1694 (Struve 1694, 32H Cam) is a binary star 300 light-years from Earth. Both components have a blue-white hue; the primary is of magnitude 5.4 and the secondary is of magnitude 5.9.
CS Cam is the constellation's second-brightest star, though it has neither a Bayer nor a Flamsteed designation. It is of magnitude 4.21 and is slightly variable.
Z Cam, which varies from visibility in amateur telescopes to extreme faintness, is frequently observed as part of a program of the AAVSO (American Association of Variable Star Observers). It is the prototype of the Z Camelopardalis class of variable stars.
Other variable stars are U Camelopardalis, VZ Camelopardalis, and Mira variables T Camelopardalis, X Camelopardalis, and R Camelopardalis. RU Camelopardalis is one of the brighter Type II Cepheids visible in the night sky.
In 2011 a supernova was discovered in the constellation.
Deep-sky objects
Camelopardalis is in the part of the celestial sphere facing away from the galactic plane. Accordingly, many distant galaxies are visible within its borders.
NGC 2403 is a galaxy in the M81 group of galaxies, located approximately 12 million light-years from Earth with a redshift of 0.00043. It is classified as being between an elliptical and a spiral galaxy because it has faint arms and a large central bulge. NGC 2403 was first discovered by the 18th century astronomer William Herschel, who was working in England at the time. It has an integrated magnitude of 8.0 and is approximately 0.25° long.
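A magnitude 8.0 galaxy at roughly 12 million light-years implies a high intrinsic luminosity. The distance-modulus sketch below converts the figures quoted above into an approximate absolute magnitude; interstellar extinction is neglected, so the result is only indicative.

```python
import math

# Approximate absolute magnitude of NGC 2403 from the quoted figures (extinction neglected)
apparent_mag = 8.0
distance_ly = 12e6
LY_PER_PC = 3.2616            # light-years per parsec

distance_pc = distance_ly / LY_PER_PC
# Distance modulus: m - M = 5 * log10(d / 10 pc)
absolute_mag = apparent_mag - 5 * math.log10(distance_pc / 10.0)

print(f"Absolute magnitude ~ {absolute_mag:.1f}")  # roughly -19.8
```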
NGC 1502 is a magnitude 6.9 open cluster about 3,000 light-years from Earth. It has about 45 bright members, and also features a double star of magnitude 7.0 at its center. NGC 1502 is associated with Kemble's Cascade, a simple but beautiful asterism appearing in the sky as a chain of stars 2.5° long that runs parallel to the Milky Way and points towards Cassiopeia. NGC 1501 is a planetary nebula located roughly 1.4° south of NGC 1502.
Stock 23 is an open star cluster at the southern part of the border between Camelopardalis and Cassiopeia. It is also known as Pazmino's Cluster. It could be categorized as an asterism because of the small number of stars in it (a small telescopic constellation).
IC 342 is one of the brightest two galaxies in the IC 342/Maffei Group of galaxies.
The dwarf irregular galaxy NGC 1569 is a magnitude 11.9 starburst galaxy, about 11 million light years away.
NGC 2655 is a large lenticular galaxy with visual magnitude 10.1.
UGC 3697 is known as the Integral Sign Galaxy (its location is 7:11:4 / +71°50').
MS0735.6+7421 is a galaxy cluster with a redshift of 0.216, located 2.6 billion light-years from Earth. It is unique for its intracluster medium, which emits X-rays at a very high rate. This galaxy cluster features two cavities 600,000 light-years in diameter, caused by its central supermassive black hole, which emits jets of matter. MS0735.6+7421 is one of the largest and most distant examples of this phenomenon.
Tombaugh 5 is a fairly dim open cluster in Camelopardalis. It has an overall magnitude of 8.4 and is located 5,800 light-years from Earth. It is a Shapley class c and Trumpler class III 1 r cluster, meaning that it is irregularly shaped and appears loose. Though it is detached from the star field, it is not concentrated at its center at all. It has more than 100 stars which do not vary widely in brightness, mostly being of the 15th and 16th magnitude.
NGC 2146 is an 11th magnitude barred spiral starburst galaxy conspicuously warped by interaction with a neighbour.
MACS0647-JD, one of the possible candidates for the farthest known galaxies in the universe (z= 10.7), is also in Camelopardalis.
Meteor showers
The annual May meteor shower, the Camelopardalids, arising from comet 209P/LINEAR, has its radiant in Camelopardalis.
History
Camelopardalis is not one of Ptolemy's 48 constellations in the Almagest. It was created by Petrus Plancius in 1613 and first appeared in a globe designed by him and produced by Pieter van den Keere. One year later, Jakob Bartsch featured it in his atlas. Johannes Hevelius depicted the constellation in his works, which were so influential that it came to be referred to as Camelopardali Hevelii, abbreviated as Camelopard. Hevel.
Part of the constellation was hived off to form the constellation Sciurus Volans, the Flying Squirrel, by William Croswell in 1810. However, this was not taken up by later cartographers.
Equivalents
In Chinese astronomy, the stars of Camelopardalis are located within a group of circumpolar stars called the Purple Forbidden Enclosure (紫微垣 Zǐ Wēi Yuán).
| Physical sciences | Other | Astronomy |
6366 | https://en.wikipedia.org/wiki/Canis%20Major | Canis Major | Canis Major is a constellation in the southern celestial hemisphere. In the second century, it was included in Ptolemy's 48 constellations, and is counted among the 88 modern constellations. Its name is Latin for "greater dog" in contrast to Canis Minor, the "lesser dog"; both figures are commonly represented as following the constellation of Orion the hunter through the sky. The Milky Way passes through Canis Major and several open clusters lie within its borders, most notably M41.
Canis Major contains Sirius, the brightest star in the night sky, known as the "dog star". It is bright because of its proximity to the Solar System and its intrinsic brightness. In contrast, the other bright stars of the constellation are stars of great distance and high luminosity. At magnitude 1.5, Epsilon Canis Majoris (Adhara) is the second-brightest star of the constellation and the brightest source of extreme ultraviolet radiation in the night sky. Next in brightness are the yellow-white supergiant Delta (Wezen) at 1.8, the blue-white giant Beta (Mirzam) at 2.0, blue-white supergiants Eta (Aludra) at 2.4 and Omicron2 at 3.0, and white spectroscopic binary Zeta (Furud), also at 3.0. The red hypergiant VY CMa is one of the largest stars known, while the neutron star RX J0720.4-3125 has a radius of a mere 5 km.
History and mythology
In western astronomy
In ancient Mesopotamia, Sirius, named KAK.SI.SA2 by the Babylonians, was seen as an arrow aiming towards Orion, while the southern stars of Canis Major and a part of Puppis were viewed as a bow, named BAN in the Three Stars Each tablets, dating to around 1100 BC. In the later compendium of Babylonian astronomy and astrology titled MUL.APIN, the arrow, Sirius, was also linked with the warrior Ninurta, and the bow with Ishtar, daughter of Enlil. Ninurta was linked to the later deity Marduk, who was said to have slain the ocean goddess Tiamat with a great bow, and worshipped as the principal deity in Babylon. The Ancient Greeks replaced the bow and arrow depiction with that of a dog.
In Greek mythology, Canis Major represented the dog Laelaps, a gift from Zeus to Europa; or sometimes the hound of Procris, Diana's nymph; or the one given by Aurora to Cephalus, so famed for its speed that Zeus elevated it to the sky. It was also considered to represent one of Orion's hunting dogs, pursuing Lepus the Hare or helping Orion fight Taurus the Bull; and is referred to in this way by Aratos, Homer and Hesiod. The ancient Greeks referred to only one dog, but by Roman times, Canis Minor appears as Orion's second dog. Alternative names include Canis Sequens and Canis Alter. Canis Syrius was the name used in the 1521 Alfonsine tables.
The Roman myth refers to Canis Major as Custos Europae, the dog guarding Europa but failing to prevent her abduction by Jupiter in the form of a bull, and as Janitor Lethaeus, "the watchdog". In medieval Arab astronomy, the constellation became al-Kalb al-Akbar, "the Greater Dog", transcribed as Alcheleb Alachbar by 17th century writer Edmund Chilmead. Islamic scholar Abū Rayḥān al-Bīrūnī referred to Orion as Kalb al-Jabbār, "the Dog of the Giant". Among the Merazig of Tunisia, shepherds note six constellations that mark the passage of the dry, hot season. One of them, called Merzem, includes the stars of Canis Major and Canis Minor and is the herald of two weeks of hot weather.
In non-western astronomy
In Chinese astronomy, the modern constellation of Canis Major is located in the Vermilion Bird, where its stars were classified in several separate asterisms. The Military Market was a circular pattern of stars containing Nu3, Beta, Xi1 and Xi2, and some stars from Lepus. The Wild Cockerel was at the centre of the Military Market, although it is uncertain which stars depicted what. Schlegel reported that the stars Omicron and Pi Canis Majoris might have been them, while Beta or Nu2 have also been proposed. Sirius was the Celestial Wolf, denoting invasion and plunder. Southeast of the Wolf was the celestial Bow and Arrow, an asterism interpreted as containing Delta, Epsilon, Eta and Kappa Canis Majoris and Delta Velorum. Alternatively, the arrow was depicted by Omicron2 and Eta and aimed at Sirius (the Wolf), while the bow comprised Kappa, Epsilon, Sigma, Delta and 164 Canis Majoris, along with Pi and Omicron Puppis.
Both the Māori people and the people of the Tuamotus recognized the figure of Canis Major as a distinct entity, though it was sometimes absorbed into other constellations. One Māori constellation, whose name has been translated as "The Assembly of Sirius", included both Canis Minor and Canis Major, along with some surrounding stars. Related to it was another grouping, a "mirror" formed from an undefined group of stars in Canis Major. The Māori had two names for Sirius, corresponding to two of the names for the constellation, though one of them was also applied to other stars in various Māori groups and other Polynesian cosmologies. The Tuamotu name for Canis Major has been translated as "the abiding assemblage".
The Tharumba people of the Shoalhaven River saw three stars of Canis Major as a bat and his two wives, Mrs Brown Snake and Mrs Black Snake; bored of following their husband around, the women try to bury him while he is hunting a wombat down its hole. He spears them, and all three are placed in the sky as a constellation. To the Boorong people of Victoria, Sigma Canis Majoris was Unurgunite (which has become the official name of this star), and its flanking stars Delta and Epsilon were his two wives. The moon ("native cat") sought to lure the further wife (Epsilon) away, but Unurgunite assaulted him and he has been wandering the sky ever since.
Characteristics
Canis Major is a constellation in the Southern Hemisphere's summer (or northern hemisphere's winter) sky, bordered by Monoceros (which lies between it and Canis Minor) to the north, Puppis to the east and southeast, Columba to the southwest, and Lepus to the west. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CMa". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a quadrilateral; in the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −11.03° and −33.25°. Covering 380 square degrees or 0.921% of the sky, it ranks 43rd of the 88 currently-recognized constellations in size.
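The 0.921% figure is simply the constellation's area divided by the area of the whole celestial sphere, about 41,253 square degrees (a standard value, not stated above); a quick check:

```python
import math

# Total area of the celestial sphere in square degrees: 4*pi steradians converted to deg^2
total_sq_deg = 4 * math.pi * (180 / math.pi) ** 2   # ~41,253

canis_major_sq_deg = 380                             # figure quoted above
fraction = canis_major_sq_deg / total_sq_deg

print(f"{fraction:.3%} of the sky")  # ~0.921%
```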
Features
Stars
Canis Major is a prominent constellation because of its many bright stars. These include Sirius (Alpha Canis Majoris), the brightest star in the night sky, as well as three other stars above magnitude 2.0. Furthermore, two other stars are thought to have previously outshone all others in the night sky—Adhara (Epsilon Canis Majoris) shone at −3.99 around 4.7 million years ago, and Mirzam (Beta Canis Majoris) peaked at −3.65 around 4.42 million years ago. Another, NR Canis Majoris, will be brightest at magnitude −0.88 in about 2.87 million years' time.
The German cartographer Johann Bayer used the Greek letters Alpha through Omicron to label the most prominent stars in the constellation, including three adjacent stars as Nu and two further pairs as Xi and Omicron, while subsequent observers designated further stars in the southern parts of the constellation that were hard to discern from Central Europe. Bayer's countryman Johann Elert Bode later added Sigma, Tau and Omega; the French astronomer Nicolas Louis de Lacaille added lettered stars a to k (though none are in use today). John Flamsteed numbered 31 stars, with 3 Canis Majoris being placed by Lacaille into Columba as Delta Columbae (Flamsteed had not recognised Columba as a distinct constellation). He also labelled two stars—his 10 and 13 Canis Majoris—as Kappa1 and Kappa2 respectively, but subsequent cartographers such as Francis Baily and John Bevis dropped the fainter former star, leaving Kappa2 as the sole Kappa. Flamsteed's listing of Nu1, Nu2, Nu3, Xi1, Xi2, Omicron1 and Omicron2 have all remained in use.
Sirius is the brightest star in the night sky at apparent magnitude −1.46 and one of the closest stars to Earth at a distance of 8.6 light-years. Its name comes from the Greek word for "scorching" or "searing". Sirius is also a binary star; its companion Sirius B is a white dwarf of magnitude 8.4, roughly 10,000 times fainter than Sirius A to observers on Earth. The two orbit each other every 50 years. Their closest approach last occurred in 1993 and they will be at their greatest separation between 2020 and 2025. Sirius was the basis for the ancient Egyptian calendar. The star marked the Great Dog's mouth on Bayer's star atlas.
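The roughly 10,000-fold brightness difference follows directly from the magnitude scale, in which a difference of 5 magnitudes corresponds to a factor of 100 in flux. A short check using the magnitudes quoted above:

```python
# Brightness ratio of Sirius A to Sirius B from their apparent magnitudes
mag_a = -1.46   # Sirius A
mag_b = 8.4     # Sirius B

# Each 5-magnitude step is a factor of 100 in flux
ratio = 100 ** ((mag_b - mag_a) / 5)

print(f"Sirius A is ~{ratio:,.0f} times brighter than Sirius B")  # ~9,000, i.e. of order 10,000
```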
Flanking Sirius are Beta and Gamma Canis Majoris. Also called Mirzam or Murzim, Beta is a blue-white Beta Cephei variable star of magnitude 2.0, which varies by a few hundredths of a magnitude over a period of six hours. Mirzam is 500 light-years from Earth, and its traditional name means "the announcer", referring to its position as the "announcer" of Sirius, as it rises a few minutes before Sirius does. Gamma, also known as Muliphein, is a fainter star of magnitude 4.12, in reality a blue-white bright giant of spectral type B8IIe located 441 light-years from Earth. Iota Canis Majoris, lying between Sirius and Gamma, is another star that has been classified as a Beta Cephei variable, varying from magnitude 4.36 to 4.40 over a period of 1.92 hours. It is a remote blue-white supergiant star of spectral type B3Ib, around 46,000 times as luminous as the Sun and, at 2500 light-years distant, 300 times further away than Sirius.
Epsilon, Omicron2, Delta, and Eta Canis Majoris were called Al Adzari "the virgins" in medieval Arabic tradition. Marking the dog's right thigh on Bayer's atlas is Epsilon Canis Majoris, also known as Adhara. At magnitude 1.5, it is the second-brightest star in Canis Major and the 23rd-brightest star in the sky. It is a blue-white supergiant of spectral type B2Iab, around 404 light-years from Earth. This star is one of the brightest known extreme ultraviolet sources in the sky. It is a binary star; the secondary is of magnitude 7.4. Its traditional name means "the virgins", having been transferred from the group of stars to Epsilon alone. Nearby is Delta Canis Majoris, also called Wezen. It is a yellow-white supergiant of spectral type F8Iab and magnitude 1.84, around 1605 light-years from Earth. With a traditional name meaning "the weight", Wezen is 17 times as massive and 50,000 times as luminous as the Sun. If located in the centre of the Solar System, it would extend out to Earth as its diameter is 200 times that of the Sun. Only around 10 million years old, Wezen has stopped fusing hydrogen in its core. Its outer envelope is beginning to expand and cool, and in the next 100,000 years it will become a red supergiant as its core fuses heavier and heavier elements. Once it has a core of iron, it will collapse and explode as a supernova. Nestled between Adhara and Wezen lies Sigma Canis Majoris, known as Unurgunite to the Boorong and Wotjobaluk people, a red supergiant of spectral type K7Ib that varies irregularly between magnitudes 3.43 and 3.51.
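The claim that Wezen would extend out to roughly Earth's orbit follows from its quoted diameter of 200 solar diameters; the arithmetic below uses standard values for the solar radius and the astronomical unit, which are not given in the text.

```python
# Would Wezen, placed at the centre of the Solar System, reach Earth's orbit?
SOLAR_RADIUS_KM = 6.957e5   # standard solar radius (assumed value)
AU_KM = 1.496e8             # astronomical unit in km (assumed value)

# A diameter of 200 solar diameters implies a radius of 200 solar radii
radius_km = 200 * SOLAR_RADIUS_KM

print(f"Radius ~ {radius_km / AU_KM:.2f} AU")  # ~0.93 AU, just inside Earth's orbit
```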
Also called Aludra, Eta Canis Majoris is a blue-white supergiant of spectral type B5Ia with a luminosity 176,000 times and diameter around 80 times that of the Sun. Classified as an Alpha Cygni type variable star, Aludra varies in brightness from magnitude 2.38 to 2.48 over a period of 4.7 days. It is located 1120 light-years away. To the west of Adhara lies 3.0-magnitude Zeta Canis Majoris or Furud, around 362 light-years distant from Earth. It is a spectroscopic binary, whose components orbit each other every 1.85 years, the combined spectrum indicating a main star of spectral type B2.5V.
Between these stars and Sirius lie Omicron1, Omicron2, and Pi Canis Majoris. Omicron2 is a massive supergiant star about 21 times as massive as the Sun. Only 7 million years old, it has exhausted the supply of hydrogen at its core and is now processing helium. It is an Alpha Cygni variable that undergoes periodic non-radial pulsations, which cause its brightness to cycle from magnitude 2.93 to 3.08 over a 24.44-day interval. Omicron1 is an orange K-type supergiant of spectral type K2.5Iab that is an irregular variable star, varying between apparent magnitudes 3.78 and 3.99. Around 18 times as massive as the Sun, it shines with 65,000 times its luminosity.
North of Sirius lie Theta and Mu Canis Majoris, Theta being the most northerly star with a Bayer designation in the constellation. Around 8 billion years old, it is an orange giant of spectral type K4III that is around as massive as the Sun but has expanded to 30 times the Sun's diameter. Mu is a multiple star system located around 1244 light-years distant, its components discernible in a small telescope as a 5.3-magnitude yellow-hued and 7.1-magnitude bluish star. The brighter star is a giant of spectral type K2III, while the companion is a main sequence star of spectral type B9.5V. Nu1 Canis Majoris is a yellow-hued giant star of magnitude 5.7, 278 light-years away; it is at the threshold of naked-eye visibility. It has a companion of magnitude 8.1.
At the southern limits of the constellation lie Kappa and Lambda Canis Majoris. Although of similar spectra and nearby each other as viewed from Earth, they are unrelated. Kappa is a Gamma Cassiopeiae variable of spectral type B2Vne, which brightened by 50% between 1963 and 1978, from magnitude 3.96 or so to 3.52. It is around 659 light-years distant. Lambda is a blue-white B-type main sequence dwarf with an apparent magnitude of 4.48 located around 423 light-years from Earth. It is 3.7 times as wide as and 5.5 times as massive as the Sun, and shines with 940 times its luminosity.
Canis Major is also home to many variable stars. EZ Canis Majoris is a Wolf–Rayet star of spectral type WN4 that varies between magnitudes 6.71 and 6.95 over a period of 3.766 days; the cause of its variability is unknown but thought to be related to its stellar wind and rotation. VY Canis Majoris is a remote red hypergiant located approximately 3,800 light-years away from Earth. It is one of the largest stars known (sometimes described as the largest known) and is also one of the most luminous, with a radius varying from 1,420 to 2,200 times the Sun's radius and a luminosity around 300,000 times greater than the Sun's. Its current mass is about 17 ± 8 solar masses, having shed material from an initial mass of 25–32 solar masses. VY CMa is also surrounded by a red reflection nebula that has been made by the material expelled by the strong stellar winds of its central star. W Canis Majoris is a type of red giant known as a carbon star—a semiregular variable, it ranges between magnitudes 6.27 and 7.09 over a period of 160 days. A cool star, it has a surface temperature of around 2,900 K and a radius 234 times that of the Sun, its distance estimated at 1,444–1,450 light-years from Earth. At the other extreme in size is RX J0720.4-3125, a neutron star with a radius of around 5 km. Exceedingly faint, it has an apparent magnitude of 26.6. Its spectrum and temperature appear to be mysteriously changing over several years. The nature of the changes is unclear, but it is possible they were caused by an event such as the star's absorption of an accretion disc.
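The quoted radius and luminosity of VY Canis Majoris imply a cool surface, as the Stefan–Boltzmann law shows. The sketch below takes the lower quoted radius (1,420 solar radii), 300,000 solar luminosities, and an assumed solar effective temperature of 5,772 K, so the result is only an illustrative estimate.

```python
# Effective temperature implied by the quoted radius and luminosity of VY CMa,
# using the Stefan-Boltzmann law L = 4*pi*R^2*sigma*T^4 expressed in solar units.
T_SUN_K = 5772.0            # solar effective temperature (assumed, not from the text)

luminosity_solar = 3.0e5    # ~300,000 times the Sun's luminosity (quoted above)
radius_solar = 1420         # lower end of the quoted radius range

# T / T_sun = (L / L_sun)^(1/4) / (R / R_sun)^(1/2)
temp_k = T_SUN_K * luminosity_solar ** 0.25 / radius_solar ** 0.5

print(f"Effective temperature ~ {temp_k:.0f} K")  # ~3,600 K, very cool for so luminous a star
```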
Tau Canis Majoris is a Beta Lyrae-type eclipsing multiple star system that varies from magnitude 4.32 to 4.37 over 1.28 days. Its four main component stars are hot O-type stars, with a combined mass 80 times that of the Sun and shining with 500,000 times its luminosity, but little is known of their individual properties. A fifth component, a magnitude 10 star, lies at a distance of . The system is only 5 million years old. UW Canis Majoris is another Beta Lyrae-type star 3000 light-years from Earth; it is an eclipsing binary that ranges in magnitude from a minimum of 5.3 to a maximum of 4.8. It has a period of 4.4 days; its components are two massive hot blue stars, one a blue supergiant of spectral type O7.5–8 Iab, while its companion is a slightly cooler, less evolved and less luminous supergiant of spectral type O9.7Ib. The stars are 200,000 and 63,000 times as luminous as the Sun. However the fainter star is the more massive at 19 solar masses to the primary's 16. R Canis Majoris is another eclipsing binary that varies from magnitude 5.7 to 6.34 over 1.13 days, with a third star orbiting these two every 93 years. The shortness of the orbital period and the low ratio between the two main components make this an unusual Algol-type system.
Seven star systems have been found to have planets. Nu2 Canis Majoris is an ageing orange giant of spectral type K1III of apparent magnitude 3.91 located around 64 light-years distant. Around 1.5 times as massive and 11 times as luminous as the Sun, it is orbited over a period of 763 days by a planet 2.6 times as massive as Jupiter. HD 47536 is likewise an ageing orange giant found to have a planetary system—echoing the fate of the Solar System in a few billion years as the Sun ages and becomes a giant. Conversely, HD 45364 is a star 107 light-years distant that is a little smaller and cooler than the Sun, of spectral type G8V, which has two planets discovered in 2008. With orbital periods of 228 and 342 days, the planets have a 3:2 orbital resonance, which helps stabilise the system. HD 47186 is another sunlike star with two planets; the inner—HD 47186 b—takes four days to complete an orbit and has been classified as a Hot Neptune, while the outer—HD 47186 c—has an eccentric 3.7-year period orbit and has a similar mass to Saturn. HD 43197 is a sunlike star around 183 light-years distant that has two planets: the inner is a hot Jupiter-sized planet with an eccentric orbit, while the outer, HD 43197 c, is another massive Jovian planet with a slightly oblong orbit outside the habitable zone.
Z Canis Majoris is a star system a mere 300,000 years old composed of two pre-main-sequence stars—a FU Orionis star and a Herbig Ae/Be star, which has brightened episodically by two magnitudes to magnitude 8 in 1987, 2000, 2004 and 2008. The more massive Herbig Ae/Be star is enveloped in an irregular roughly spherical cocoon of dust that has an inner diameter of and outer diameter of . The cocoon has a hole in it through which light shines that covers an angle of 5 to 10 degrees of its circumference. Both stars are surrounded by a large envelope of in-falling material left over from the original cloud that formed the system. Both stars are emitting jets of material, that of the Herbig Ae/Be star being much larger—11.7 light-years long. Meanwhile, FS Canis Majoris is another star with infra-red emissions indicating a compact shell of dust, but it appears to be a main-sequence star that has absorbed material from a companion. These stars are thought to be significant contributors to interstellar dust.
Deep-sky objects
The band of the Milky Way goes through Canis Major, with only patchy obscurement by interstellar dust clouds. It is bright in the northeastern corner of the constellation, as well as in a triangular area between Adhara, Wezen and Aludra, with many stars visible in binoculars. Canis Major boasts several open clusters. The only Messier object is M41 (NGC 2287), an open cluster with a combined visual magnitude of 4.5, around 2300 light-years from Earth. Located 4 degrees south of Sirius, it contains contrasting blue, yellow and orange stars and covers an area the apparent size of the full moon—in reality around 25 light-years in diameter. Its most luminous stars have already evolved into giants. The brightest is a 6.3-magnitude star of spectral type K3. Located in the field is 12 Canis Majoris, though this star is only 670 light-years distant. NGC 2360, known as Caroline's Cluster after its discoverer Caroline Herschel, is an open cluster located 3.5 degrees west of Muliphein and has a combined apparent magnitude of 7.2. Around 15 light-years in diameter, it is located 3700 light-years away from Earth, and has been dated to around 2.2 billion years old. NGC 2362 is a small, compact open cluster, 5200 light-years from Earth. It contains about 60 stars, of which Tau Canis Majoris is the brightest member. Located around 3 degrees northeast of Wezen, it covers an area around 12 light-years in diameter, though the stars appear huddled around Tau when seen through binoculars. It is a very young open cluster as its member stars are only a few million years old. Lying 2 degrees southwest of NGC 2362 is NGC 2354, a fainter open cluster of magnitude 6.5, with around 15 member stars visible with binoculars. Located around 30' northeast of NGC 2360, NGC 2359 (Thor's Helmet or the Duck Nebula) is a relatively bright emission nebula in Canis Major, with an approximate magnitude of 10, which is 10,000 light-years from Earth. The nebula is shaped by HD 56925, an unstable Wolf–Rayet star embedded within it.
In 2003, an overdensity of stars in the region was announced to be the Canis Major Dwarf, the closest satellite galaxy to Earth. However, there remains debate over whether it represents a disrupted dwarf galaxy or in fact a variation in the thin and thick disk and spiral arm populations of the Milky Way. Investigation of the area yielded only ten RR Lyrae variables—consistent with the Milky Way's halo and thick disk populations rather than a separate dwarf spheroidal galaxy. On the other hand, a globular cluster in Puppis, NGC 2298—which appears to be part of the Canis Major dwarf system—is extremely metal-poor, suggesting it did not arise from the Milky Way's thick disk, and instead is of extragalactic origin.
NGC 2207 and IC 2163 are a pair of face-on interacting spiral galaxies located 125 million light-years from Earth. About 40 million years ago, the two galaxies had a close encounter and are now moving farther apart; nevertheless, the smaller IC 2163 will eventually be incorporated into NGC 2207. As the interaction continues, gas and dust will be perturbed, sparking extensive star formation in both galaxies. Supernovae have been observed in NGC 2207 in 1975 (type Ia SN 1975a), 1999 (the type Ib SN 1999ec), 2003 (type 1b supernova SN 2003H), and 2013 (type II supernova SN 2013ai). Located 16 million light-years distant, ESO 489-056 is an irregular dwarf- and low-surface-brightness galaxy that has one of the lowest metallicities known.
| Physical sciences | Constellations | null |
6367 | https://en.wikipedia.org/wiki/Canis%20Minor | Canis Minor | Canis Minor is a small constellation in the northern celestial hemisphere. In the second century, it was included as an asterism, or pattern, of two stars in Ptolemy's 48 constellations, and it is counted among the 88 modern constellations. Its name is Latin for "lesser dog", in contrast to Canis Major, the "greater dog"; both figures are commonly represented as following the constellation of Orion the hunter.
Canis Minor contains only two stars brighter than the fourth magnitude, Procyon (Alpha Canis Minoris), with a magnitude of 0.34, and Gomeisa (Beta Canis Minoris), with a magnitude of 2.9. The constellation's dimmer stars were noted by Johann Bayer, who named eight stars including Alpha and Beta, and John Flamsteed, who numbered fourteen. Procyon is the eighth-brightest star in the night sky, as well as one of the closest. A yellow-white main-sequence star, it has a white dwarf companion. Gomeisa is a blue-white main-sequence star. Luyten's Star is a ninth-magnitude red dwarf and the Solar System's next closest stellar neighbour in the constellation after Procyon. Additionally, Procyon and Luyten's Star are only 1.12 light-years away from each other, and Procyon would be the brightest star in Luyten's Star's sky. The fourth-magnitude HD 66141, which has evolved into an orange giant towards the end of its life cycle, was discovered to have a planet in 2012. There are two faint deep-sky objects within the constellation's borders. The 11 Canis-Minorids are a meteor shower that can be seen in early December.
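The statement that Procyon would be the brightest star in Luyten's Star's sky can be checked with the inverse-square law. The sketch below uses Procyon's apparent magnitude of 0.34 from above, the quoted 1.12 light-year separation, and an assumed Earth–Procyon distance of about 11.5 light-years (a value not stated in this section).

```python
import math

# How bright would Procyon appear from Luyten's Star?
procyon_mag_from_earth = 0.34    # quoted above
procyon_distance_ly = 11.5       # assumed Earth-Procyon distance (not stated here)
separation_ly = 1.12             # Procyon-Luyten's Star separation, quoted above

# Inverse-square law in magnitude form: m2 = m1 + 5 * log10(d2 / d1)
mag_from_luytens = procyon_mag_from_earth + 5 * math.log10(separation_ly / procyon_distance_ly)

print(f"Procyon seen from Luyten's Star: magnitude ~ {mag_from_luytens:.1f}")
# ~ -4.7, far brighter than any star appears from Earth
```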
History and mythology
Though strongly associated with the Classical Greek uranographic tradition, Canis Minor originates from ancient Mesopotamia. Procyon and Gomeisa were called MASH.TAB.BA or "twins" in the Three Stars Each tablets, dating to around 1100 BC. In the later MUL.APIN, this name was also applied to the pairs of Pi3 and Pi4 Orionis and Zeta and Xi Orionis. The meaning of MASH.TAB.BA evolved as well, becoming the twin deities Lulal and Latarak, who are on the opposite side of the sky from Papsukkal, the True Shepherd of Heaven in Babylonian mythology. Canis Minor was also given the name DAR.LUGAL, its position defined as "the star which stands behind it [Orion]", in the MUL.APIN; the constellation represents a rooster. This name may have also referred to the constellation Lepus. DAR.LUGAL was also denoted DAR.MUŠEN and DAR.LUGAL.MUŠEN in Babylonia. Canis Minor was then called tarlugallu in Akkadian astronomy.
Canis Minor was one of the original 48 constellations formulated by Ptolemy in his second-century Almagest, in which it was defined as a specific pattern (asterism) of stars; Ptolemy identified only two stars and hence no depiction was possible. The Ancient Greeks called the constellation προκυων/Procyon, "coming before the dog", transliterated into Latin as Antecanis, Praecanis, or variations thereof, by Cicero and others. Roman writers also appended the descriptors parvus, minor or minusculus ("small" or "lesser", for its faintness), septentrionalis ("northerly", for its position in relation to Canis Major), primus (rising "first") or sinister (rising to the "left") to its name Canis.
In Greek mythology, Canis Minor was sometimes connected with the Teumessian Fox, a beast turned into stone with its hunter, Laelaps, by Zeus, who placed them in heaven as Canis Major (Laelaps) and Canis Minor (Teumessian Fox). Eratosthenes accompanied the Little Dog with Orion, while Hyginus linked the constellation with Maera, a dog owned by Icarius of Athens. On discovering the latter's death, the dog and Icarius' daughter Erigone took their lives and all three were placed in the sky—Erigone as Virgo and Icarius as Boötes. As a reward for his faithfulness, the dog was placed along the "banks" of the Milky Way, which the ancients believed to be a heavenly river, where he would never suffer from thirst.
The medieval Arabic astronomers maintained the depiction of Canis Minor (al-Kalb al-Asghar in Arabic) as a dog; in his Book of the Fixed Stars, Abd al-Rahman al-Sufi included a diagram of the constellation with a canine figure superimposed. There was one slight difference between the Ptolemaic vision of Canis Minor and the Arabic; al-Sufi claims Mirzam, now assigned to Orion, as part of both Canis Minor—the collar of the dog—and its modern home. The Arabic names for both Procyon and Gomeisa alluded to their proximity and resemblance to Sirius, though they were not direct translations of the Greek; Procyon was called ash-Shi'ra ash-Shamiya, the "Syrian Sirius" and Gomeisa was called ash-Shira al-Ghamisa, the Sirius with bleary eyes. Among the Merazig of Tunisia, shepherds note six constellations that mark the passage of the dry, hot season. One of them, called Merzem, includes the stars of Canis Minor and Canis Major and is the herald of two weeks of hot weather.
The ancient Egyptians thought of this constellation as Anubis, the jackal god.
Alternative names have been proposed: Johann Bayer in the early 17th century termed the constellation Fovea "The Pit", and Morus "Sycamine Tree". Seventeenth-century German poet and author Philippus Caesius linked it to the dog of Tobias from the Apocrypha. Richard A. Proctor gave the constellation the name Felis "the Cat" in 1870 (contrasting with Canis Major, which he had abbreviated to Canis "the Dog"), explaining that he sought to shorten the constellation names to make them more manageable on celestial charts. Occasionally, Canis Minor is confused with Canis Major and given the name Canis Orionis ("Orion's Dog").
In non-Western astronomy
In Chinese astronomy, the stars corresponding to Canis Minor lie in the Vermilion Bird of the South (南方朱雀, Nán Fāng Zhū Què). Procyon, Gomeisa and Eta Canis Minoris form an asterism known as Nánhé, the Southern River. With its counterpart, the Northern River Beihe (Castor and Pollux), Nánhé was also associated with a gate or sentry. Along with Zeta and 8 Cancri, 6 Canis Minoris and 11 Canis Minoris formed the asterism Shuiwei, which literally means "water level". Combined with additional stars in Gemini, Shuiwei represented an official who managed floodwaters or a marker of the water level. Neighboring Korea recognized four stars in Canis Minor as part of a different constellation, "the position of the water". This constellation was located in the Red Bird, the southern portion of the sky.
Polynesian peoples often did not recognize Canis Minor as a constellation, but they saw Procyon as significant and often named it; in the Tuamotu Archipelago it was known as Hiro, meaning "twist as a thread of coconut fiber", and Kopu-nui-o-Hiro ("great paunch of Hiro"), which was either a name for the modern figure of Canis Minor or an alternative name for Procyon. Other names included Vena (after a goddess), on Mangaia and Puanga-hori (false Puanga, the name for Rigel), in New Zealand. In the Society Islands, Procyon was called Ana-tahua-vahine-o-toa-te-manava, literally "Aster the priestess of brave heart", figuratively the "pillar for elocution". The Wardaman people of the Northern Territory in Australia gave Procyon and Gomeisa the names Magum and Gurumana, describing them as humans who were transformed into gum trees in the dreamtime. Although their skin had turned to bark, they were able to speak with a human voice by rustling their leaves.
The Aztec calendar was related to their cosmology. The stars of Canis Minor were incorporated along with some stars of Orion and Gemini into an asterism associated with the day called "Water".
Characteristics
Lying directly south of Gemini's bright stars Castor and Pollux, Canis Minor is a small constellation bordered by Monoceros to the south, Gemini to the north, Cancer to the northeast, and Hydra to the east. It does not border Canis Major; Monoceros is in between the two. Covering 183 square degrees, Canis Minor ranks seventy-first of the 88 constellations in size. It appears prominently in the southern sky during the Northern Hemisphere's winter. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 14 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between and . Most visible in the evening sky from January to March, Canis Minor is most prominent at 10 p.m. during mid-February. It is then seen earlier in the evening until July, when it is only visible after sunset before setting itself, and rising in the morning sky before dawn. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "CMi".
Features
Stars
Canis Minor contains only two stars brighter than fourth magnitude. At magnitude 0.34, Procyon, or Alpha Canis Minoris, is the eighth-brightest star in the night sky, as well as one of the closest. Its name means "before the dog" or "preceding the dog" in Greek, as it rises an hour before the "Dog Star", Sirius, of Canis Major. It is a binary star system, consisting of a yellow-white main-sequence star of spectral type F5 IV-V, named Procyon A, and a faint white dwarf companion of spectral type DA, named Procyon B. Procyon B, which orbits the more massive star every 41 years, is of magnitude 10.7. Procyon A is 1.4 times the Sun's mass, while its smaller companion is 0.6 times as massive as the Sun. The system is from Earth, the shortest distance to a northern-hemisphere star of the first magnitude. Gomeisa, or Beta Canis Minoris, with a magnitude of 2.89, is the second-brightest star in Canis Minor. Lying from the Solar System, it is a blue-white main-sequence star of spectral class B8 Ve. Although fainter to Earth observers, it is much brighter than Procyon, and is 250 times as luminous and three times as massive as the Sun. Although its variations are slight, Gomeisa is classified as a shell star (Gamma Cassiopeiae variable), with a maximum magnitude of 2.84 and a minimum magnitude of 2.92. It is surrounded by a disk of gas which it heats and causes to emit radiation.
Johann Bayer used the Greek letters Alpha to Eta to label the most prominent eight stars in the constellation, designating two stars as Delta (named Delta1 and Delta2). John Flamsteed numbered fourteen stars, discerning a third star he named Delta3; his star 12 Canis Minoris was not found subsequently. In Bayer's 1603 work Uranometria, Procyon is located on the dog's belly, and Gomeisa on its neck. Gamma, Epsilon and Eta Canis Minoris lie nearby, marking the dog's neck, crown and chest, respectively. Although it has an apparent magnitude of 4.34, Gamma Canis Minoris is an orange K-type giant of spectral class K3-III C, which lies away. Its colour is obvious when seen through binoculars. It is a multiple system, consisting of the spectroscopic binary Gamma A and three optical companions, Gamma B, magnitude 13; Gamma C, magnitude 12; and Gamma D, magnitude 10. The two components of Gamma A orbit each other every 389.2 days, with an eccentric orbit that takes their separation between 2.3 and 1.4 astronomical units (AU). Epsilon Canis Minoris is a yellow bright giant of spectral class G6.5IIb of magnitude of 4.99. It lies from Earth, with 13 times the diameter and 750 times the luminosity of the Sun. Eta Canis Minoris is a giant of spectral class F0III of magnitude 5.24, which has a yellowish hue when viewed through binoculars as well as a faint companion of magnitude 11.1. Located 4 arcseconds from the primary, the companion star is actually around 440 AU from the main star and takes around 5,000 years to orbit it.
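Taking the quoted extremes of separation (2.3 and 1.4 AU) as apastron and periastron gives a semi-major axis of about 1.85 AU, and Kepler's third law then converts the 389.2-day period into a rough total mass for the Gamma A pair. This is purely an illustration built on the figures above, not a measured value.

```python
# Rough total mass of the Gamma Canis Minoris A pair from Kepler's third law,
# treating the quoted 2.3 AU and 1.4 AU separations as apastron and periastron.
period_years = 389.2 / 365.25
semi_major_axis_au = (2.3 + 1.4) / 2

# Kepler's third law in solar units: M_total [M_sun] = a^3 / P^2
total_mass_solar = semi_major_axis_au ** 3 / period_years ** 2

print(f"Total mass ~ {total_mass_solar:.1f} solar masses")  # ~5.6 from the quoted figures
```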
Near Procyon, three stars share the name Delta Canis Minoris. Delta1 is a yellow-white F-type giant of magnitude 5.25 located around from Earth. About 360 times as luminous and 3.75 times as massive as the Sun, it is expanding and cooling as it ages, having spent much of its life as a main sequence star of spectrum B6V. Also known as 8 Canis Minoris, Delta2 is an F-type main-sequence star of spectral type F2V and magnitude 5.59 which is distant. The last of the trio, Delta3 (also known as 9 Canis Minoris), is a white main sequence star of spectral type A0Vnn and magnitude 5.83 which is distant. These stars mark the paws of the Lesser Dog's left hind leg, while magnitude 5.13 Zeta marks the right. A blue-white bright giant of spectral type B8II, Zeta lies around away from the Solar System.
Lying 222 ± 7 light-years away with an apparent magnitude of 4.39, HD 66141 is 6.8 billion years old and has evolved into an orange giant of spectral type K2III with a diameter around 22 times that of the Sun, and weighing 1.1 solar masses. It is 174 times as luminous as the Sun, with an absolute magnitude of −0.15. HD 66141 was mistakenly named 13 Puppis, as its celestial coordinates were recorded incorrectly when catalogued and hence mistakenly thought to be in the constellation of Puppis; Bode gave it the name Lambda Canis Minoris, which is now obsolete. The orange giant is orbited by a planet, HD 66141b, which was detected in 2012 by measuring the star's radial velocity. The planet has a mass around 6 times that of Jupiter and a period of 480 days.
A red giant of spectral type M4III, BC Canis Minoris lies around distant from the Solar System. It is a semiregular variable star that varies between a maximum magnitude of 6.14 and minimum magnitude of 6.42. Periods of 27.7, 143.3 and 208.3 days have been recorded in its pulsations. AZ, AD and BI Canis Minoris are Delta Scuti variables—short period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study astroseismology. AZ is of spectral type A5IV, and ranges between magnitudes 6.44 and 6.51 over a period of 2.3 hours. AD has a spectral type of F2III, and has a maximum magnitude of 9.21 and minimum of 9.51, with a period of approximately 2.95 hours. BI is of spectral type F2 with an apparent magnitude varying around 9.19 and a period of approximately 2.91 hours.
At least three red giants are Mira variables in Canis Minor. S Canis Minoris, of spectral type M7e, is the brightest, ranging from magnitude 6.6 to 13.2 over a period of 332.94 days. V Canis Minoris ranges from magnitude 7.4 to 15.1 over a period of 366.1 days. Similar in magnitude is R Canis Minoris, which has a maximum of 7.3, but a significantly brighter minimum of 11.6. An S-type star, it has a period of 337.8 days.
YZ Canis Minoris is a red dwarf of spectral type M4.5V and magnitude 11.2, roughly three times the size of Jupiter and from Earth. It is a flare star, emitting unpredictable outbursts of energy for mere minutes, which might be much more powerful analogues of solar flares. Luyten's Star (GJ 273) is a red dwarf star of spectral type M3.5V and close neighbour of the Solar System. Its visual magnitude of 9.9 renders it too faint to be seen with the naked eye, even though it is only away. Fainter still is PSS 544-7, an eighteenth-magnitude red dwarf around 20 per cent the mass of the Sun, located from Earth. First noticed in 1991, it is thought to be a cannonball star, shot out of a star cluster and now moving rapidly through space directly away from the galactic disc.
The WZ Sagittae-type dwarf nova DY Canis Minoris (also known as VSX J074727.6+065050) flared up to magnitude 11.4 over January and February 2008 before dropping eight magnitudes to around 19.5 over approximately 80 days. It is a remote binary star system where a white dwarf and low-mass star orbit each other close enough for the former star to draw material off the latter and form an accretion disc. This material builds up until it erupts dramatically.
Deep-sky objects
The Milky Way passes through much of Canis Minor, yet it has few deep-sky objects. William Herschel recorded four objects in his 1786 work Catalogue of Nebulae and Clusters of Stars, including two he mistakenly believed were star clusters. NGC 2459 is a group of five thirteenth- and fourteenth-magnitude stars that appear to lie close together in the sky but are not related. A similar situation has occurred with NGC 2394, also in Canis Minor. This is a collection of fifteen unrelated stars of ninth magnitude and fainter.
Herschel also observed three faint galaxies, two of which are interacting with each other. NGC 2508 is a lenticular galaxy of thirteenth magnitude, estimated at 205 million light-years' distance (63 million parsecs) with a diameter of . Named as a single object by Herschel, NGC 2402 is actually a pair of near-adjacent galaxies that appear to be interacting with each other. Only of fourteenth and fifteenth magnitudes, respectively, the elliptical and spiral galaxy are thought to be approximately 245 million light-years distant, and each measure 55,000 light-years in diameter.
Meteor showers
The 11 Canis-Minorids, also called the Beta Canis Minorids, are a meteor shower that arise near the fifth-magnitude star 11 Canis Minoris and were discovered in 1964 by Keith Hindley, who investigated their trajectory and proposed a common origin with the comet D/1917 F1 Mellish. However, this conclusion has been refuted subsequently as the number of orbits analysed was low and their trajectories too disparate to confirm a link. They last from 4 to 15 December, peaking over 10 and 11 December.
| Physical sciences | Other | Astronomy |
6371 | https://en.wikipedia.org/wiki/Centaurus | Centaurus | Centaurus is a bright constellation in the southern sky. One of the largest constellations, Centaurus was included among the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations. In Greek mythology, Centaurus represents a centaur; a creature that is half human, half horse (another constellation named after a centaur is one from the zodiac: Sagittarius). Notable stars include Alpha Centauri, the nearest star system to the Solar System, its neighbour in the sky Beta Centauri, and HR 5171, one of the largest stars yet discovered. The constellation also contains Omega Centauri, the brightest globular cluster as visible from Earth and the largest identified in the Milky Way, possibly a remnant of a dwarf galaxy.
Notable features
Stars
Centaurus contains several very bright stars. Its alpha and beta stars are used as "pointer stars" to help observers find the constellation Crux. Centaurus has 281 stars above magnitude 6.5, meaning that they are visible to the unaided eye, the most of any constellation. Alpha Centauri, the closest star system to the Sun, has a high proper motion; it will be a mere half-degree from Beta Centauri in approximately 4000 years.
Alpha Centauri is a triple star system composed of a binary system orbited by Proxima Centauri, currently the nearest star to the Sun. Traditionally called Rigil Kentaurus (from Arabic رجل قنطورس, meaning "foot of the centaur") or Toliman (from Arabic الظليمين meaning "two male ostriches"), the system has an overall magnitude of −0.28 and is 4.4 light-years from Earth. The primary and secondary are both yellow-hued stars; the first is of magnitude −0.01 and the second of magnitude 1.35. Proxima, the tertiary star, is a red dwarf of magnitude 11.0; it appears almost 2 degrees away from the close pairing of Alpha and has an orbital period of approximately one million years. Also a flare star, Proxima has minutes-long outbursts during which it brightens by over a magnitude. The Alpha pair revolve around each other with a period of about 80 years and, as seen through telescopes from Earth, will next appear closest in 2037 and 2038; to the naked eye they appear together as the third-brightest "star" in the night sky.
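The system's overall magnitude of −0.28 is what one gets by combining the fluxes of the two bright components (−0.01 and 1.35); because magnitudes are logarithmic, they are added via fluxes rather than directly, as this short sketch shows.

```python
import math

# Combined apparent magnitude of Alpha Centauri A and B from the component magnitudes
mag_a = -0.01
mag_b = 1.35

# Convert each magnitude to a relative flux, sum them, and convert back to a magnitude
total_flux = 10 ** (-mag_a / 2.5) + 10 ** (-mag_b / 2.5)
combined_mag = -2.5 * math.log10(total_flux)

print(f"Combined magnitude ~ {combined_mag:.2f}")  # ~ -0.28, matching the quoted overall value
```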
One other first-magnitude star, Beta Centauri, lies in the constellation beyond Proxima and toward the narrow axis of Crux; together with Alpha it forms the far-southern limb of the constellation. Also called Hadar and Agena, it is a double star; the primary is a blue-hued giant star of magnitude 0.6, 525 light-years from Earth. The secondary is of magnitude 4.0 and, because of the system's distance, its small separation from the primary can be resolved only under high magnification.
The northerly star Theta Centauri, officially named Menkent, is an orange giant star of magnitude 2.06. It is the only bright star of Centaurus that is easily visible from mid-northern latitudes.
The next bright object is Gamma Centauri, a binary star which appears to the naked eye at magnitude 2.2. The primary and secondary are both blue-white hued stars of magnitude 2.9; their period is 84 years.
Centaurus also has many dimmer double stars and binary stars. 3 Centauri is a double star with a blue-white hued primary of magnitude 4.5 and a secondary of magnitude 6.0. The primary is 344 light-years away.
Centaurus is home to many variable stars. R Centauri is a Mira variable star with a minimum magnitude of 11.8 and a maximum magnitude of 5.3; it is about 1,250 light-years from Earth and has a period of 18 months. V810 Centauri is a semiregular variable.
BPM 37093 is a white dwarf star whose carbon atoms are thought to have formed a crystalline structure. Since diamond also consists of carbon arranged in a crystalline lattice (though of a different configuration), scientists have nicknamed this star "Lucy" after the Beatles song "Lucy in the Sky with Diamonds."
PDS 70 (V1032 Centauri), a low-mass T Tauri star, is found in the constellation Centaurus. In July 2018, astronomers captured the first conclusive image of a protoplanetary disk containing a nascent exoplanet, named PDS 70b.
Deep-sky objects
ω Centauri (NGC 5139), despite being listed as the constellation's "omega" star, is in fact a naked-eye globular cluster, 17,000 light-years away with a diameter of 150 light-years. It is the largest and brightest globular cluster in the Milky Way; at ten times the size of the next-largest cluster, it has a magnitude of 3.7. It is also the most luminous globular cluster in the Milky Way, at over one million solar luminosities. Omega Centauri is classified as a Shapley class VIII cluster, which means that its center is loosely concentrated. It is also one of only two globular clusters to be designated with a Bayer letter; the globular cluster 47 Tucanae (Xi Tucanae) is the only one designated with a Flamsteed number. It contains several million stars, most of which are yellow dwarf stars, but also possesses red giants and blue-white stars; the stars have an average age of 12 billion years. This has prompted suspicion that Omega Centauri was the core of a dwarf galaxy that had been absorbed by the Milky Way. Omega Centauri was determined to be nonstellar in 1677 by the English astronomer Edmond Halley, though it was visible as a star to the ancients. Its status as a globular cluster was determined by James Dunlop in 1827. To the unaided eye, Omega Centauri appears fuzzy and is obviously non-circular; it is approximately half a degree in diameter, the same size as the full Moon.
Centaurus is also home to open clusters. NGC 3766 is an open cluster 6,300 light-years from Earth that is visible to the unaided eye. It contains approximately 100 stars, the brightest of which are 7th magnitude. NGC 5460 is another naked-eye open cluster, 2,300 light-years from Earth, that has an overall magnitude of 6 and contains approximately 40 stars.
There is one bright planetary nebula in Centaurus, NGC 3918, also known as the Blue Planetary. It has an overall magnitude of 8.0 and a central star of magnitude 11.0; it is 2600 light-years from Earth. The Blue Planetary was discovered by John Herschel and named for its color's similarity to Uranus, though the nebula is apparently three times larger than the planet.
Centaurus is rich in galaxies as well. NGC 4622 is a face-on spiral galaxy located 200 million light-years from Earth (redshift 0.0146). Its spiral arms wind in both directions, which makes it nearly impossible for astronomers to determine the rotation of the galaxy. Astronomers theorize that a collision with a smaller companion galaxy near the core of the main galaxy could have led to the unusual spiral structure. NGC 5253, a peculiar irregular galaxy, is located near the border with Hydra and M83, with which it likely had a close gravitational interaction 1–2 billion years ago. This may have sparked the galaxy's high rate of star formation, which continues today and contributes to its high surface brightness. NGC 5253 includes a large nebula and at least 12 large star clusters. In the eyepiece, it is a small galaxy of magnitude 10 with dimensions of 5 arcminutes by 2 arcminutes and a bright nucleus. NGC 4945 is a spiral galaxy seen edge-on from Earth, 13 million light-years away. It is visible with any amateur telescope, as well as binoculars under good conditions; it has been described as "shaped like a candle flame", being long and thin (16' by 3'). In the eyepiece of a large telescope, its southeastern dust lane becomes visible. Another galaxy is NGC 5102, found by star-hopping from Iota Centauri. In the eyepiece, it appears as an elliptical object 9 arcminutes by 2.5 arcminutes tilted on a southwest–northeast axis.
One of the closest active galaxies to Earth is the Centaurus A galaxy, NGC 5128, at 11 million light-years away (redshift 0.00183). It has a supermassive black hole at its core, which expels massive jets of matter that emit radio waves due to synchrotron radiation. Astronomers posit that its dust lanes, not common in elliptical galaxies, are due to a previous merger with another galaxy, probably a spiral galaxy. NGC 5128 appears in the optical spectrum as a fairly large elliptical galaxy with a prominent dust lane. Its overall magnitude is 7.0 and it has been seen under perfect conditions with the naked eye, making it one of the most distant objects visible to the unaided observer. In equatorial and southern latitudes, it is easily found by star hopping from Omega Centauri. In small telescopes, the dust lane is not visible; it begins to appear with about 4 inches of aperture under good conditions. In large amateur instruments, above about 12 inches in aperture, the dust lane's west-northwest to east-southeast direction is easily discerned. Another dim dust lane on the east side of the 12-arcminute-by-15-arcminute galaxy is also visible. ESO 270-17, also called the Fourcade-Figueroa Object, is a low-surface brightness object believed to be the remnants of a galaxy; it does not have a core and is very difficult to observe with an amateur telescope. It measures 7 arcminutes by 1 arcminute. It likely originated as a spiral galaxy and underwent a catastrophic gravitational interaction with Centaurus A around 500 million years ago, stopping its rotation and destroying its structure.
NGC 4650A is a polar-ring galaxy 136 million light-years from Earth (redshift 0.01). It has a central core made of older stars that resembles an elliptical galaxy, and an outer ring of young stars that orbits around the core. The plane of the outer ring is distorted, which suggests that NGC 4650A is the result of a galaxy collision about a billion years ago. This galaxy has also been cited in studies of dark matter, because the stars in the outer ring orbit too quickly for their collective mass. This suggests that the galaxy is surrounded by a dark matter halo, which provides the necessary mass.
One of the closest galaxy clusters to Earth is the Centaurus Cluster at 160 million light-years away, having redshift 0.0114. It has a cooler, denser central region of gas and a hotter, more diffuse outer region. The intracluster medium in the Centaurus Cluster has a high concentration of metals (elements heavier than helium) due to a large number of supernovae. This cluster also possesses a plume of gas whose origin is unknown.
History
While Centaurus now has a high southern latitude, at the dawn of civilization it was an equatorial constellation. Precession has been slowly shifting it southward for millennia, and it is now close to its maximal southern declination. In a little over 7000 years it will again be at maximum visibility for observers in the northern hemisphere, visible at some times of the year from quite high northern latitudes.
The figure of Centaurus can be traced back to a Babylonian constellation known as the Bison-man (MUL.GUD.ALIM). This being was depicted in two major forms: firstly, as a 4-legged bison with a human head, and secondly, as a being with a man's head and torso attached to the rear legs and tail of a bull or bison. It has been closely associated with the Sun god Utu-Shamash from very early times.
The Greeks depicted the constellation as a centaur and gave it its current name. It was mentioned by Eudoxus in the 4th century BC and Aratus in the 3rd century BC. In the 2nd century AD, Claudius Ptolemy catalogued 37 stars in Centaurus, including Alpha Centauri. Large as it is now, in earlier times it was even larger, as the constellation Lupus was treated as an asterism within Centaurus, portrayed in illustrations as an unspecified animal either in the centaur's grasp or impaled on its spear. The Southern Cross, which is now regarded as a separate constellation, was treated by the ancients as a mere asterism formed of the stars composing the centaur's legs. Additionally, what is now the minor constellation Circinus was treated as undefined stars under the centaur's front hooves.
According to the Roman poet Ovid (Fasti v.379), the constellation honors the centaur Chiron, who was tutor to many of the earlier Greek heroes including Heracles (Hercules), Theseus, and Jason, the leader of the Argonauts. It is not to be confused with the more warlike centaur represented by the zodiacal constellation Sagittarius. The legend associated with Chiron says that he was accidentally poisoned with an arrow shot by Hercules, and was subsequently placed in the heavens.
Equivalents
In Chinese astronomy, the stars of Centaurus are found in three areas: the Azure Dragon of the East (東方青龍, Dōng Fāng Qīng Lóng), the Vermillion Bird of the South (南方朱雀, Nán Fāng Zhū Què), and the Southern Asterisms (近南極星區, Jìnnánjíxīngōu). Not all of the stars of Centaurus can be seen from China, and the unseen stars were classified among the Southern Asterisms by Xu Guangqi, based on his study of western star charts. However, most of the brightest stars of Centaurus, including α Centauri, θ Centauri (or Menkent), ε Centauri and η Centauri, can be seen in the Chinese sky.
Some Polynesian peoples considered the stars of Centaurus to be a constellation as well. On Pukapuka, Centaurus had two names: Na Mata-o-te-tokolua and Na Lua-mata-o-Wua-ma-Velo. In Tonga, the constellation was called by four names: O-nga-tangata, Tautanga-ufi, Mamangi-Halahu, and Mau-kuo-mau. Alpha and Beta Centauri were not named specifically by the people of Pukapuka or Tonga, but they were named by the people of Hawaii and the Tuamotus. In Hawaii, the name for Alpha Centauri was either Melemele or Ka Maile-hope and the name for Beta Centauri was either Polapola or Ka Maile-mua. In the Tuamotu islands, Alpha was called Na Kuhi and Beta was called Tere.
The Pointer (α Centauri and β Centauri) is one of the asterisms used by Bugis sailors for navigation, called bintoéng balué, meaning "the widowed-before-marriage". It is also called bintoéng sallatang meaning "southern star".
Namesakes
Two United States Navy ships, and , were named after Centaurus, the constellation.
| Physical sciences | Constellations | null |
6416 | https://en.wikipedia.org/wiki/Impact%20crater | Impact crater | An impact crater is a depression in the surface of a solid astronomical body formed by the hypervelocity impact of a smaller object. In contrast to volcanic craters, which result from explosion or internal collapse, impact craters typically have raised rims and floors that are lower in elevation than the surrounding terrain. Impact craters are typically circular, though they can be elliptical in shape or even irregular due to events such as landslides. Impact craters range in size from microscopic craters seen on lunar rocks returned by the Apollo Program to simple bowl-shaped depressions and vast, complex, multi-ringed impact basins. Meteor Crater is a well-known example of a small impact crater on Earth.
Impact craters are the dominant geographic features on many solid Solar System objects including the Moon, Mercury, Callisto, Ganymede, and most small moons and asteroids. On other planets and moons that experience more active surface geological processes, such as Earth, Venus, Europa, Io, Titan, and Triton, visible impact craters are less common because they become eroded, buried, or transformed by tectonic and volcanic processes over time. Where such processes have destroyed most of the original crater topography, the terms impact structure or astrobleme are more commonly used. In early literature, before the significance of impact cratering was widely recognised, the terms cryptoexplosion or cryptovolcanic structure were often used to describe what are now recognised as impact-related features on Earth.
The cratering records of very old surfaces, such as Mercury, the Moon, and the southern highlands of Mars, record a period of intense early bombardment in the inner Solar System around 3.9 billion years ago. The rate of crater production on Earth has since been considerably lower, but it is appreciable nonetheless. Earth experiences, on average, from one to three impacts large enough to produce a crater every million years. This indicates that there should be far more relatively young craters on the planet than have been discovered so far. The cratering rate in the inner solar system fluctuates as a consequence of collisions in the asteroid belt that create a family of fragments that are often sent cascading into the inner solar system. Formed in a collision 80 million years ago, the Baptistina family of asteroids is thought to have caused a large spike in the impact rate. The rate of impact cratering in the outer Solar System could be different from the inner Solar System.
Although Earth's active surface processes quickly destroy the impact record, about 190 terrestrial impact craters have been identified. These range in diameter from a few tens of meters up to about , and they range in age from recent times (e.g. the Sikhote-Alin craters in Russia whose creation was witnessed in 1947) to more than two billion years, though most are less than 500 million years old because geological processes tend to obliterate older craters. They are also selectively found in the stable interior regions of continents. Few undersea craters have been discovered because of the difficulty of surveying the sea floor, the rapid rate of change of the ocean bottom, and the subduction of the ocean floor into Earth's interior by processes of plate tectonics.
History
Daniel M. Barringer, a mining engineer, was convinced already in 1903 that the crater he owned, Meteor Crater, was of cosmic origin. Most geologists at the time assumed it formed as the result of a volcanic steam eruption.
In the 1920s, the American geologist Walter H. Bucher studied a number of sites now recognized as impact craters in the United States. He concluded they had been created by some great explosive event, but believed that this force was probably volcanic in origin. However, in 1936, the geologists John D. Boon and Claude C. Albritton Jr. revisited Bucher's studies and concluded that the craters that he studied were probably formed by impacts.
Grove Karl Gilbert suggested in 1893 that the Moon's craters were formed by large asteroid impacts. Ralph Baldwin in 1949 wrote that the Moon's craters were mostly of impact origin. Around 1960, Gene Shoemaker revived the idea. According to David H. Levy, Shoemaker "saw the craters on the Moon as logical impact sites that were formed not gradually, in eons, but explosively, in seconds." For his PhD degree at Princeton University (1960), under the guidance of Harry Hammond Hess, Shoemaker studied the impact dynamics of Meteor Crater. Shoemaker noted that Meteor Crater had the same form and structure as two explosion craters created from atomic bomb tests at the Nevada Test Site, notably Jangle U in 1951 and Teapot Ess in 1955. In 1960, Edward C. T. Chao and Shoemaker identified coesite (a form of silicon dioxide) at Meteor Crater, proving the crater was formed from an impact generating extremely high temperatures and pressures. They followed this discovery with the identification of coesite within suevite at Nördlinger Ries, proving its impact origin.
Armed with the knowledge of shock-metamorphic features, Carlyle S. Beals and colleagues at the Dominion Astrophysical Observatory in Victoria, British Columbia, Canada and Wolf von Engelhardt of the University of Tübingen in Germany began a methodical search for impact craters. By 1970, they had tentatively identified more than 50. Although their work was controversial, the American Apollo Moon landings, which were in progress at the time, provided supportive evidence by recognizing the rate of impact cratering on the Moon. Because the processes of erosion on the Moon are minimal, craters persist. Since the Earth could be expected to have roughly the same cratering rate as the Moon, it became clear that the Earth had suffered far more impacts than could be seen by counting evident craters.
Crater formation
Impact cratering involves high-velocity collisions between solid objects, at speeds typically much greater than the speed of sound in those objects. Such hyper-velocity impacts produce physical effects such as melting and vaporization that do not occur in familiar sub-sonic collisions. On Earth, ignoring the slowing effects of travel through the atmosphere, the lowest impact velocity with an object from space is equal to the gravitational escape velocity of about 11 km/s. The fastest impacts occur at about 72 km/s in the "worst case" scenario in which an object in a retrograde near-parabolic orbit hits Earth. The median impact velocity on Earth is about 20 km/s.
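These velocity figures can be checked with standard orbital mechanics (a rough sketch using textbook constants, not values taken from this article). The minimum impact speed is Earth's escape velocity,
\[
v_{\mathrm{esc}} = \sqrt{\frac{2GM_\oplus}{R_\oplus}}
= \sqrt{\frac{2\,(6.67\times10^{-11})(5.97\times10^{24})}{6.37\times10^{6}}}\ \mathrm{m/s}
\approx 11.2\ \mathrm{km/s},
\]
while the 72 km/s upper bound follows, to a good approximation, from adding Earth's orbital speed (about 30 km/s) to the speed of a body on a parabolic solar orbit at 1 AU (about 42 km/s) in a head-on, retrograde encounter.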
However, the slowing effects of travel through the atmosphere rapidly decelerate any potential impactor, especially in the lowest 12 kilometres, where 90% of the Earth's atmospheric mass lies. Meteors of up to 7,000 kg lose all their cosmic velocity due to atmospheric drag at a certain altitude (retardation point), and start to accelerate again due to Earth's gravity until the body reaches its terminal velocity of 0.09 to 0.16 km/s. The larger the meteoroid (i.e. asteroids and comets), the more of its initial cosmic velocity it preserves. While an object of 9,000 kg maintains about 6% of its original velocity, one of 900,000 kg already preserves about 70%. Extremely large bodies (about 100,000 tonnes) are not slowed by the atmosphere at all, and impact with their initial cosmic velocity if no prior disintegration occurs.
Impacts at these high speeds produce shock waves in solid materials, and both impactor and the material impacted are rapidly compressed to high density. Following initial compression, the high-density, over-compressed region rapidly depressurizes, exploding violently, to set in train the sequence of events that produces the impact crater. Impact-crater formation is therefore more closely analogous to cratering by high explosives than by mechanical displacement. Indeed, the energy density of some material involved in the formation of impact craters is many times higher than that generated by high explosives. Since craters are caused by explosions, they are nearly always circular – only very low-angle impacts cause significantly elliptical craters.
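To illustrate the energies involved (the impactor size, density and speed below are assumed purely for illustration and do not come from the article), the kinetic energy of a 50 m stony body of density 3000 kg/m3 arriving at 20 km/s is
\[
E = \tfrac{1}{2}\,\rho\,\tfrac{4}{3}\pi r^{3}\,v^{2}
= \tfrac{1}{2}(3000)\left(\tfrac{4}{3}\pi (25)^{3}\right)(2\times10^{4})^{2}\ \mathrm{J}
\approx 4\times10^{16}\ \mathrm{J},
\]
roughly ten megatons of TNT equivalent, deposited initially in a volume comparable to that of the impactor itself.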
This describes impacts on solid surfaces. Impacts on porous surfaces, such as that of Hyperion, may produce internal compression without ejecta, punching a hole in the surface without filling in nearby craters. This may explain the 'sponge-like' appearance of that moon.
It is convenient to divide the impact process conceptually into three distinct stages: (1) initial contact and compression, (2) excavation, (3) modification and collapse. In practice, there is overlap between the three processes with, for example, the excavation of the crater continuing in some regions while modification and collapse is already underway in others.
Contact and compression
In the absence of atmosphere, the impact process begins when the impactor first touches the target surface. This contact accelerates the target and decelerates the impactor. Because the impactor is moving so rapidly, the rear of the object moves a significant distance during the short-but-finite time taken for the deceleration to propagate across the impactor. As a result, the impactor is compressed, its density rises, and the pressure within it increases dramatically. Peak pressures in large impacts exceed 1 TPa, reaching values more usually found deep in the interiors of planets, or generated artificially in nuclear explosions.
In physical terms, a shock wave originates from the point of contact. As this shock wave expands, it decelerates and compresses the impactor, and it accelerates and compresses the target. Stress levels within the shock wave far exceed the strength of solid materials; consequently, both the impactor and the target close to the impact site are irreversibly damaged. Many crystalline minerals can be transformed into higher-density phases by shock waves; for example, the common mineral quartz can be transformed into the higher-pressure forms coesite and stishovite. Many other shock-related changes take place within both impactor and target as the shock wave passes through, and some of these changes can be used as diagnostic tools to determine whether particular geological features were produced by impact cratering.
As the shock wave decays, the shocked region decompresses towards more usual pressures and densities. The damage produced by the shock wave raises the temperature of the material. In all but the smallest impacts this increase in temperature is sufficient to melt the impactor, and in larger impacts to vaporize most of it and to melt large volumes of the target. As well as being heated, the target near the impact is accelerated by the shock wave, and it continues moving away from the impact behind the decaying shock wave.
Excavation
Contact, compression, decompression, and the passage of the shock wave all occur within a few tenths of a second for a large impact. The subsequent excavation of the crater occurs more slowly, and during this stage the flow of material is largely subsonic. During excavation, the crater grows as the accelerated target material moves away from the point of impact. The target's motion is initially downwards and outwards, but it becomes outwards and upwards. The flow initially produces an approximately hemispherical cavity that continues to grow, eventually producing a paraboloid (bowl-shaped) crater in which the centre has been pushed down, a significant volume of material has been ejected, and a topographically elevated crater rim has been pushed up. When this cavity has reached its maximum size, it is called the transient cavity.
The depth of the transient cavity is typically a quarter to a third of its diameter. Ejecta thrown out of the crater do not include material excavated from the full depth of the transient cavity; typically the depth of maximum excavation is only about a third of the total depth. As a result, about one third of the volume of the transient crater is formed by the ejection of material, and the remaining two thirds is formed by the displacement of material downwards, outwards and upwards, to form the elevated rim. For impacts into highly porous materials, a significant crater volume may also be formed by the permanent compaction of the pore space. Such compaction craters may be important on many asteroids, comets and small moons.
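As a rough geometric illustration of these proportions (an approximation treating the transient cavity as a paraboloid of revolution, not a calculation from the article), a cavity of rim diameter D and depth d encloses a volume
\[
V = \frac{\pi}{8}\,D^{2} d,
\]
so with d ≈ D/3 the transient cavity holds roughly πD³/24 ≈ 0.13 D³, of which about one third (≈ 0.04 D³) would correspond to ejected material and the remaining two thirds to material displaced downwards, outwards and upwards into the rim.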
In large impacts, as well as material displaced and ejected to form the crater, significant volumes of target material may be melted and vaporized together with the original impactor. Some of this impact melt rock may be ejected, but most of it remains within the transient crater, initially forming a layer of impact melt coating the interior of the transient cavity. In contrast, the hot dense vaporized material expands rapidly out of the growing cavity, carrying some solid and molten material within it as it does so. As this hot vapor cloud expands, it rises and cools much like the archetypal mushroom cloud generated by large nuclear explosions. In large impacts, the expanding vapor cloud may rise to many times the scale height of the atmosphere, effectively expanding into free space.
Most material ejected from the crater is deposited within a few crater radii, but a small fraction may travel large distances at high velocity, and in large impacts it may exceed escape velocity and leave the impacted planet or moon entirely. The majority of the fastest material is ejected from close to the center of impact, and the slowest material is ejected close to the rim at low velocities to form an overturned coherent flap of ejecta immediately outside the rim. As ejecta escapes from the growing crater, it forms an expanding curtain in the shape of an inverted cone. The trajectory of individual particles within the curtain is thought to be largely ballistic.
Small volumes of un-melted and relatively un-shocked material may be spalled at very high relative velocities from the surface of the target and from the rear of the impactor. Spalling provides a potential mechanism whereby material may be ejected into inter-planetary space largely undamaged, and whereby small volumes of the impactor may be preserved undamaged even in large impacts. Small volumes of high-speed material may also be generated early in the impact by jetting. This occurs when two surfaces converge rapidly and obliquely at a small angle, and high-temperature highly shocked material is expelled from the convergence zone with velocities that may be several times larger than the impact velocity.
Modification and collapse
In most circumstances, the transient cavity is not stable and collapses under gravity. In small craters, less than about 4 km diameter on Earth, there is some limited collapse of the crater rim coupled with debris sliding down the crater walls and drainage of impact melts into the deeper cavity. The resultant structure is called a simple crater, and it remains bowl-shaped and superficially similar to the transient crater. In simple craters, the original excavation cavity is overlain by a lens of collapse breccia, ejecta and melt rock, and a portion of the central crater floor may sometimes be flat.
Above a certain threshold size, which varies with planetary gravity, the collapse and modification of the transient cavity is much more extensive, and the resulting structure is called a complex crater. The collapse of the transient cavity is driven by gravity, and involves both the uplift of the central region and the inward collapse of the rim. The central uplift is not the result of elastic rebound, which is a process in which a material with elastic strength attempts to return to its original geometry; rather the collapse is a process in which a material with little or no strength attempts to return to a state of gravitational equilibrium.
Complex craters have uplifted centers, and they typically have broad, flat, shallow crater floors and terraced walls. At the largest sizes, one or more exterior or interior rings may appear, and the structure may be labeled an impact basin rather than an impact crater. Complex-crater morphology on rocky planets appears to follow a regular sequence with increasing size: small complex craters with a central topographic peak are called central peak craters, for example Tycho; intermediate-sized craters, in which the central peak is replaced by a ring of peaks, are called peak-ring craters, for example Schrödinger; and the largest craters contain multiple concentric topographic rings, and are called multi-ringed basins, for example Orientale. On icy (as opposed to rocky) bodies, other morphological forms appear that may have central pits rather than central peaks, and at the largest sizes may contain many concentric rings. Valhalla on Callisto is an example of this type.
Subsequent modification
Long after an impact event, a crater may be further modified by erosion, mass wasting processes, viscous relaxation, or erased entirely. These effects are most prominent on geologically and meteorologically active bodies such as Earth, Titan, Triton, and Io. However, heavily modified craters may be found on more primordial bodies such as Callisto, where many ancient craters flatten into bright ghost craters, or palimpsests.
Identifying impact craters
Non-explosive volcanic craters can usually be distinguished from impact craters by their irregular shape and the association of volcanic flows and other volcanic materials. Impact craters produce melted rocks as well, but usually in smaller volumes with different characteristics.
The distinctive mark of an impact crater is the presence of rock that has undergone shock-metamorphic effects, such as shatter cones, melted rocks, and crystal deformations. The problem is that these materials tend to be deeply buried, at least for simple craters. They tend to be revealed in the uplifted center of a complex crater, however.
Impacts produce distinctive shock-metamorphic effects that allow impact sites to be distinctively identified. Such shock-metamorphic effects can include:
A layer of shattered or "brecciated" rock under the floor of the crater. This layer is called a "breccia lens".
Shatter cones, which are chevron-shaped impressions in rocks. Such cones are formed most easily in fine-grained rocks.
High-temperature rock types, including laminated and welded blocks of sand, spherulites and tektites, or glassy spatters of molten rock. The impact origin of tektites has been questioned by some researchers; they have observed some volcanic features in tektites not found in impactites. Tektites are also drier (contain less water) than typical impactites. While rocks melted by the impact resemble volcanic rocks, they incorporate unmelted fragments of bedrock, form unusually large and unbroken fields, and have a much more mixed chemical composition than volcanic materials spewed up from within the Earth. They also may have relatively large amounts of trace elements that are associated with meteorites, such as nickel, platinum, iridium, and cobalt. Note: scientific literature has reported that some "shock" features, such as small shatter cones, which are often associated only with impact events, have been found also in terrestrial volcanic ejecta.
Microscopic pressure deformations of minerals. These include fracture patterns in crystals of quartz and feldspar, and formation of high-pressure materials such as diamond, derived from graphite and other carbon compounds, or stishovite and coesite, varieties of shocked quartz.
Buried craters, such as the Decorah crater, can be identified through drill coring, aerial electromagnetic resistivity imaging, and airborne gravity gradiometry.
Economic importance
On Earth, impact craters have resulted in useful minerals. Some of the ores produced from impact-related effects on Earth include ores of iron, uranium, gold, copper, and nickel. It is estimated that the value of materials mined from impact structures is five billion dollars per year just for North America. The eventual usefulness of impact craters depends on several factors, especially the nature of the materials that were impacted and when the materials were affected. In some cases, the deposits were already in place and the impact brought them to the surface. These are called "progenetic economic deposits." Others were created during the actual impact. The great energy involved caused melting. Useful minerals formed as a result of this energy are classified as "syngenetic deposits." The third type, called "epigenetic deposits," is caused by the creation of a basin from the impact. Many of the minerals that our modern lives depend on are associated with impacts in the past. The Vredefort Dome in the center of the Witwatersrand Basin is the largest goldfield in the world, which has supplied about 40% of all the gold ever mined in an impact structure (though the gold did not come from the bolide). The asteroid that struck the region was wide. The Sudbury Basin was caused by an impacting body over in diameter. This basin is famous for its deposits of nickel, copper, and platinum group elements. An impact was involved in making the Carswell structure in Saskatchewan, Canada; it contains uranium deposits.
Hydrocarbons are common around impact structures. Fifty percent of the impact structures in North America that lie in hydrocarbon-bearing sedimentary basins contain oil or gas fields.
Lists of craters
Impact craters on Earth
On Earth, the recognition of impact craters is a branch of geology, and is related to planetary geology in the study of other worlds. Out of many proposed craters, relatively few are confirmed. The following twenty are a sample of articles of confirmed and well-documented impact sites.
See the Earth Impact Database, a website concerned with 190 () scientifically confirmed impact craters on Earth.
Some extraterrestrial craters
Caloris Basin (Mercury)
Hellas Basin (Mars)
Herschel crater (Mimas)
Mare Orientale (Moon)
Petrarch crater (Mercury)
South Pole – Aitken basin (Moon)
Largest named craters in the Solar System
North Polar Basin/Borealis Basin (disputed) – Mars – Diameter: 10,600 km
South Pole-Aitken basin – Moon – Diameter: 2,500 km
Hellas Basin – Mars – Diameter: 2,100 km
Caloris Basin – Mercury – Diameter: 1,550 km
Sputnik Planitia – Pluto – Diameter: 1,300 km
Imbrium Basin – Moon – Diameter: 1,100 km
Isidis Planitia – Mars – Diameter: 1,100 km
Mare Tranquillitatis – Moon – Diameter: 870 km
Argyre Planitia – Mars – Diameter: 800 km
Rembrandt – Mercury – Diameter: 715 km
Serenitatis Basin – Moon – Diameter: 700 km
Mare Nubium – Moon – Diameter: 700 km
Beethoven – Mercury – Diameter: 625 km
Valhalla – Callisto – Diameter: 600 km, with rings to 4,000 km diameter
Hertzsprung – Moon – Diameter: 590 km
Turgis – Iapetus – Diameter: 580 km
Apollo – Moon – Diameter: 540 km
Engelier – Iapetus – Diameter: 504 km
Mamaldi – Rhea – Diameter: 480 km
Huygens – Mars – Diameter: 470 km
Schiaparelli – Mars – Diameter: 470 km
Rheasilvia – 4 Vesta – Diameter: 460 km
Gerin – Iapetus – Diameter: 445 km
Odysseus – Tethys – Diameter: 445 km
Korolev – Moon – Diameter: 430 km
Falsaron – Iapetus – Diameter: 424 km
Dostoevskij – Mercury – Diameter: 400 km
Menrva – Titan – Diameter: 392 km
Tolstoj – Mercury – Diameter: 390 km
Goethe – Mercury – Diameter: 380 km
Malprimis – Iapetus – Diameter: 377 km
Tirawa – Rhea – Diameter: 360 km
Orientale Basin – Moon – Diameter: 350 km, with rings to 930 km diameter
Evander – Dione – Diameter: 350 km
Epigeus – Ganymede – Diameter: 343 km
Gertrude – Titania – Diameter: 326 km
Telemus – Tethys – Diameter: 320 km
Asgard – Callisto – Diameter: 300 km, with rings to 1,400 km diameter
Vredefort impact structure – Earth – Diameter: 300 km
Burney – Pluto – Diameter: 296 km
There are approximately twelve more impact craters/basins larger than 300 km on the Moon, five on Mercury, and four on Mars. Large basins, some unnamed but mostly smaller than 300 km, can also be found on Saturn's moons Dione, Rhea and Iapetus.
| Physical sciences | Geology | null |
6420 | https://en.wikipedia.org/wiki/Corona%20Borealis | Corona Borealis | Corona Borealis is a small constellation in the Northern Celestial Hemisphere. It is one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and remains one of the 88 modern constellations. Its brightest stars form a semicircular arc. Its Latin name, inspired by its shape, means "northern crown". In classical mythology Corona Borealis generally represented the crown given by the god Dionysus to the Cretan princess Ariadne and set by her in the heavens. Other cultures likened the pattern to a circle of elders, an eagle's nest, a bear's den or a smokehole. Ptolemy also listed a southern counterpart, Corona Australis, with a similar pattern.
The brightest star is the magnitude 2.2 Alpha Coronae Borealis. The yellow supergiant R Coronae Borealis is the prototype of a rare class of giant stars—the R Coronae Borealis variables—that are extremely hydrogen deficient, and thought to result from the merger of two white dwarfs. T Coronae Borealis, also known as the Blaze Star, is another unusual type of variable star known as a recurrent nova. Normally of magnitude 10, it last flared up to magnitude 2 in 1946, and is predicted to do the same in 2025. ADS 9731 and Sigma Coronae Borealis are multiple star systems with six and five components respectively. Five stars in the constellation host Jupiter-sized exoplanets. Abell 2065 is a highly concentrated galaxy cluster one billion light-years from the Solar System containing more than 400 members, and is itself part of the larger Corona Borealis Supercluster.
Characteristics
Covering 179 square degrees and hence 0.433% of the sky, Corona Borealis ranks 73rd of the IAU designated constellations by area. Its position in the Northern Celestial Hemisphere means that the whole constellation is visible to observers north of 50°S. It is bordered by Boötes to the north and west, Serpens Caput to the south, and Hercules to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CrB". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of eight segments (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between 39.71° and 25.54°. It has a counterpart—Corona Australis—in the Southern Celestial Hemisphere.
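The quoted fraction follows directly from the constellation's area (a simple check, using the standard total sky area of about 41,253 square degrees):
\[
\frac{179}{41\,253} \approx 0.434\%,
\]
consistent with the 0.433% given above; the small difference comes from rounding the area to 179 square degrees.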
Features
Stars
The seven stars that make up the constellation's distinctive crown-shaped pattern are all 4th-magnitude stars except for the brightest of them, Alpha Coronae Borealis. The other six stars are Theta, Beta, Gamma, Delta, Epsilon and Iota Coronae Borealis. The German cartographer Johann Bayer gave twenty stars in Corona Borealis Bayer designations from Alpha to Upsilon in his 1603 star atlas Uranometria. Zeta Coronae Borealis was noted to be a double star by later astronomers and its components designated Zeta1 and Zeta2. John Flamsteed did likewise with Nu Coronae Borealis; classed by Bayer as a single star, it was noted to be two close stars by Flamsteed. He named them 20 and 21 Coronae Borealis in his catalogue, alongside the designations Nu1 and Nu2 respectively. Chinese astronomers deemed nine stars to make up the asterism, adding Pi and Rho Coronae Borealis. Within the constellation's borders, there are 37 stars brighter than or equal to apparent magnitude 6.5.
Alpha Coronae Borealis (officially named Alphecca by the IAU, but sometimes also known as Gemma) appears as a blue-white star of magnitude 2.2. In fact, it is an Algol-type eclipsing binary that varies by 0.1 magnitude with a period of 17.4 days. The primary is a white main-sequence star of spectral type A0V that is 2.91 times the mass of the Sun () and 57 times as luminous (), and is surrounded by a debris disk out to a radius of around 60 astronomical units (AU). The secondary companion is a yellow main-sequence star of spectral type G5V that is a little smaller than the Sun (0.9 times its diameter). Lying 75±0.5 light-years from Earth, Alphecca is believed to be a member of the Ursa Major Moving Group of stars that have a common motion through space.
Located 112±3 light-years away, Beta Coronae Borealis or Nusakan is a spectroscopic binary system whose two components are separated by 10 AU and orbit each other every 10.5 years. The brighter component is a rapidly oscillating Ap star, pulsating with a period of 16.2 minutes. Of spectral type A5V with a surface temperature of around 7980 K, it has around , 2.6 solar radii (), and . The smaller star is of spectral type F2V with a surface temperature of around 6750 K, and has around , , and between 4 and . Near Nusakan is Theta Coronae Borealis, a binary system that shines with a combined magnitude of 4.13 located 380±20 light-years distant. The brighter component, Theta Coronae Borealis A, is a blue-white star that spins extremely rapidly—at a rate of around 393 km per second. A Be star, it is surrounded by a debris disk.
Flanking Alpha to the east is Gamma Coronae Borealis, yet another binary star system, whose components orbit each other every 92.94 years and are roughly as far apart from each other as the Sun and Neptune. The brighter component has been classed as a Delta Scuti variable star, though this view is not universal. The components are main sequence stars of spectral types B9V and A3V. Located 170±2 light-years away, 4.06-magnitude Delta Coronae Borealis is a yellow giant star of spectral type G3.5III that is around and has swollen to . It has a surface temperature of 5180 K. For most of its existence, Delta Coronae Borealis was a blue-white main-sequence star of spectral type B before it ran out of hydrogen fuel in its core. Its luminosity and spectrum suggest it has just crossed the Hertzsprung gap, having finished burning core hydrogen and just begun burning hydrogen in a shell that surrounds the core.
Zeta Coronae Borealis is a double star with two blue-white components 6.3 arcseconds apart that can be readily separated at 100x magnification. The primary is of magnitude 5.1 and the secondary is of magnitude 6.0. Nu Coronae Borealis is an optical double, whose components are a similar distance from Earth but have different radial velocities, hence are assumed to be unrelated. The primary, Nu1 Coronae Borealis, is a red giant of spectral type M2III and magnitude 5.2, lying 640±30 light-years distant, and the secondary, Nu2 Coronae Borealis, is an orange-hued giant star of spectral type K5III and magnitude 5.4, estimated to be 590±30 light-years away. Sigma Coronae Borealis, on the other hand, is a true multiple star system divisible by small amateur telescopes. It is actually a complex system composed of two stars around as massive as the Sun that orbit each other every 1.14 days, orbited by a third Sun-like star every 726 years. The fourth and fifth components are a binary red dwarf system that is 14,000 AU distant from the other three stars. ADS 9731 is an even rarer multiple system in the constellation, composed of six stars, two of which are spectroscopic binaries.
Corona Borealis is home to two remarkable variable stars. T Coronae Borealis is a cataclysmic variable star also known as the Blaze Star. Normally placid around magnitude 10—it has a minimum of 10.2 and maximum of 9.9—it brightens to magnitude 2 in a period of hours, caused by a thermonuclear runaway on the surface of its white dwarf and the subsequent explosion. T Coronae Borealis is one of a handful of stars called recurrent novae, which include T Pyxidis and U Scorpii. An outburst of T Coronae Borealis was first recorded in 1866; its second recorded outburst was in February 1946. T Coronae Borealis started dimming in March 2023, and it is known that before it goes nova it dims for about a year; for this reason it is expected to go nova at any time between March and September 2024. T Coronae Borealis is a binary star with a red-hued giant primary and a white dwarf secondary, the two stars orbiting each other over a period of approximately 8 months. R Coronae Borealis is a yellow-hued variable supergiant star, over 7000 light-years from Earth, and prototype of a class of stars known as R Coronae Borealis variables. Normally of magnitude 6, its brightness periodically drops as low as magnitude 15 and then slowly increases over the next several months. These declines in magnitude come about as dust that has been ejected from the star obscures it. Direct imaging with the Hubble Space Telescope shows extensive dust clouds out to a radius of around 2000 AU from the star, corresponding with a stream of fine dust (composed of grains 5 nm in diameter) associated with the star's stellar wind and coarser dust (composed of grains with a diameter of around 0.14 μm) ejected periodically.
There are several other variables of reasonable brightness for amateur astronomers to observe, including three Mira-type long period variables: S Coronae Borealis ranges between magnitudes 5.8 and 14.1 over a period of 360 days. Located around 1946 light-years distant, it shines with a luminosity 16,643 times that of the Sun and has a surface temperature of 3033 K. One of the reddest stars in the sky, V Coronae Borealis is a cool star with a surface temperature of 2877 K that shines with a luminosity 102,831 times that of the Sun and is a remote 8810 light-years distant from Earth. Varying between magnitudes 6.9 and 12.6 over a period of 357 days, it is located near the junction of the border of Corona Borealis with Hercules and Boötes. Located 1.5° northeast of Tau Coronae Borealis, W Coronae Borealis ranges between magnitudes 7.8 and 14.3 over a period of 238 days. Another red giant, RR Coronae Borealis, is an M3-type semiregular variable star that varies between magnitudes 7.3 and 8.2 over 60.8 days. RS Coronae Borealis is yet another semiregular variable red giant, which ranges between magnitudes 8.7 and 11.6 over 332 days. It is unusual in that it is a red star with a high proper motion (greater than 50 milliarcseconds a year). Meanwhile, U Coronae Borealis is an Algol-type eclipsing binary star system whose magnitude varies between 7.66 and 8.79 over a period of 3.45 days.
TY Coronae Borealis is a pulsating white dwarf (of ZZ Ceti type), which is around 70% as massive as the Sun, yet has only 1.1% of its diameter. Discovered in 1990, UW Coronae Borealis is a low-mass X-ray binary system composed of a star less massive than the Sun and a neutron star surrounded by an accretion disk that draws material from the companion star. It varies in brightness in an unusually complex manner: the two stars orbit each other every 111 minutes, yet there is another cycle of 112.6 minutes, which corresponds to the orbit of the disk around the degenerate star. The beat period of 5.5 days indicates the time the accretion disk—which is asymmetrical—takes to precess around the star.
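The quoted beat period can be recovered from the two cycles given above (a quick consistency check, not an additional measurement):
\[
\frac{1}{P_{\mathrm{beat}}} = \frac{1}{111\ \mathrm{min}} - \frac{1}{112.6\ \mathrm{min}}
\quad\Rightarrow\quad
P_{\mathrm{beat}} \approx 7.8\times10^{3}\ \mathrm{min} \approx 5.4\ \mathrm{days},
\]
in agreement with the roughly 5.5-day precession period of the disk.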
Extrasolar planetary systems
Extrasolar planets have been confirmed in five star systems, four of which were found by the radial velocity method. The spectrum of Epsilon Coronae Borealis was analysed for seven years from 2005 to 2012, revealing a planet around 6.7 times as massive as Jupiter () orbiting every 418 days at an average distance of around 1.3 AU. Epsilon itself is an orange giant of spectral type K2III that has swollen to and . Kappa Coronae Borealis is a spectral type K1IV orange subgiant nearly twice as massive as the Sun; around it lies a dust debris disk, and one planet with a period of 3.4 years. This planet's mass is estimated at . The dimensions of the debris disk indicate it is likely there is a second substellar companion. Omicron Coronae Borealis is a K-type clump giant with one confirmed planet with a mass of that orbits every 187 days—one of the two least massive planets known around clump giants. HD 145457 is an orange giant of spectral type K0III found to have one planet of . Discovered by the Doppler method in 2010, it takes 176 days to complete an orbit. XO-1 is a magnitude 11 yellow main-sequence star located approximately light-years away, of spectral type G1V with a mass and radius similar to the Sun. In 2006 the hot Jupiter exoplanet XO-1b was discovered orbiting XO-1 by the transit method using the XO Telescope. Roughly the size of Jupiter, it completes an orbit around its star every three days.
The discovery of a Jupiter-sized planetary companion was announced in 1997 via analysis of the radial velocity of Rho Coronae Borealis, a yellow main sequence star and Solar analog of spectral type G0V, around 57 light-years distant from Earth. More accurate measurement of data from the Hipparcos satellite subsequently showed it instead to be a low-mass star somewhere between 100 and 200 times the mass of Jupiter. Possible stable planetary orbits in the habitable zone were calculated for the binary star Eta Coronae Borealis, which is composed of two stars—yellow main sequence stars of spectral type G1V and G3V respectively—similar in mass and spectrum to the Sun. No planet has been found, but a brown dwarf companion about 63 times as massive as Jupiter with a spectral type of L8 was discovered at a distance of 3640 AU from the pair in 2001.
Deep-sky objects
Corona Borealis contains few galaxies observable with amateur telescopes. NGC 6085 and 6086 are a faint spiral and elliptical galaxy respectively close enough to each other to be seen in the same visual field through a telescope. Abell 2142 is a huge (six million light-year diameter), X-ray luminous galaxy cluster that is the result of an ongoing merger between two galaxy clusters. It has a redshift of 0.0909 (meaning it is moving away from us at 27,250 km/s) and a visual magnitude of 16.0. It is about 1.2 billion light-years away. Another galaxy cluster in the constellation, RX J1532.9+3021, is approximately 3.9 billion light-years from Earth. At the cluster's center is a large elliptical galaxy containing one of the most massive and most powerful supermassive black holes yet discovered. Abell 2065 is a highly concentrated galaxy cluster containing more than 400 members, the brightest of which are 16th magnitude; the cluster is more than one billion light-years from Earth. On a larger scale still, Abell 2065, along with Abell 2061, Abell 2067, Abell 2079, Abell 2089, and Abell 2092, make up the Corona Borealis Supercluster. Another galaxy cluster, Abell 2162, is a member of the Hercules Superclusters.
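The recession velocity quoted for Abell 2142 is simply the low-redshift approximation v ≈ cz (a back-of-the-envelope check, valid only for small redshifts):
\[
v \approx cz = (299\,792\ \mathrm{km/s}) \times 0.0909 \approx 2.7\times10^{4}\ \mathrm{km/s},
\]
matching the figure of about 27,250 km/s given above.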
Mythology
In Greek mythology, Corona Borealis was linked to the legend of Theseus and the minotaur. It was generally considered to represent a crown given by Dionysus to Ariadne, the daughter of Minos of Crete, after she had been abandoned by the Athenian prince Theseus. When she wore the crown at her marriage to Dionysus, he placed it in the heavens to commemorate their wedding. An alternative version has the besotted Dionysus give the crown to Ariadne, who in turn gives it to Theseus after he arrives in Crete to kill the minotaur that the Cretans have demanded tribute from Athens to feed. The hero uses the crown's light to escape the labyrinth after disposing of the creature, and Dionysus later sets it in the heavens. A work attributed to Hyginus linked it to a crown or wreath worn by Bacchus (Dionysus) to disguise his appearance when first approaching Mount Olympus and revealing himself to the gods, having been previously hidden as yet another child of Jupiter's trysts with a mortal, in this case Semele. Its proximity to the constellations Hercules (which some sources report was once attributed to Theseus, among others) and Lyra (Theseus' lyre in one account) could indicate that the three constellations were invented as a group. Corona Borealis was one of the 48 constellations mentioned in the Almagest of classical astronomer Ptolemy.
In Mesopotamia, Corona Borealis was associated with the goddess Nanaya.
In Welsh mythology, it was called Caer Arianrhod, "the Castle of the Silver Circle", and was the heavenly abode of the Lady Arianrhod. To the ancient Balts, Corona Borealis was known as Darželis, the "flower garden".
The Arabs called the constellation Alphecca (a name later given to Alpha Coronae Borealis), which means "separated" or "broken up" ( ), a reference to the resemblance of the stars of Corona Borealis to a loose string of jewels. This was also interpreted as a broken dish. Among the Bedouins, the constellation was known as (), or "the dish/bowl of the poor people".
The Skidi people, a Native American group, saw the stars of Corona Borealis as representing a council of stars whose chief was Polaris. The constellation also symbolised the smokehole over a fireplace, which conveyed their messages to the gods, as well as how chiefs should come together to consider matters of importance. The Shawnee people saw the stars as the Heavenly Sisters, who descended from the sky every night to dance on earth. Alphecca signifies the youngest and most comely sister, who was seized by a hunter who transformed into a field mouse to get close to her. They married, though she later returned to the sky, with her heartbroken husband and son following later. The Mi'kmaq of eastern Canada saw Corona Borealis as Mskegwǒm, the den of the celestial bear (Alpha, Beta, Gamma and Delta Ursae Majoris).
Polynesian peoples often recognized Corona Borealis; the people of the Tuamotus named it Na Kaua-ki-tokerau and probably Te Hetu. The constellation was likely called Kaua-mea in Hawaii, Rangawhenua in New Zealand, and Te Wale-o-Awitu in the Cook Islands atoll of Pukapuka. Its name in Tonga was uncertain; it was either called Ao-o-Uvea or Kau-kupenga.
In Australian Aboriginal astronomy, the constellation is called womera ("the boomerang") due to the shape of the stars. The Wailwun people of northwestern New South Wales saw Corona Borealis as mullion wollai, "eagle's nest", with Altair and Vega—each called mullion—the pair of eagles accompanying it. The Wardaman people of northern Australia held the constellation to be a gathering point where Men's Law, Women's Law and the Law of both sexes come together to consider matters of existence.
Later references
Corona Borealis was renamed Corona Firmiana in honour of the Archbishop of Salzburg in the 1730 Atlas Mercurii Philosophicii Firmamentum Firminianum Descriptionem by Corbinianus Thomas, but this was not taken up by subsequent cartographers. The constellation was featured as a main plot ingredient in the short story "Hypnos" by H. P. Lovecraft, published in 1923; it is the object of fear of one of the protagonists in the short story. Finnish band Cadacross released an album titled Corona Borealis in 2002.
| Physical sciences | Other | Astronomy |
6421 | https://en.wikipedia.org/wiki/Cygnus%20%28constellation%29 | Cygnus (constellation) | Cygnus is a northern constellation on the plane of the Milky Way, deriving its name from the Latinized Greek word for swan. Cygnus is one of the most recognizable constellations of the northern summer and autumn, and it features a prominent asterism known as the Northern Cross (in contrast to the Southern Cross). Cygnus was among the 48 constellations listed by the 2nd century astronomer Ptolemy, and it remains one of the 88 modern constellations.
Cygnus contains Deneb (ذنب, translit. ḏanab, tail), one of the brightest stars in the night sky and the most distant first-magnitude star, as its "tail star" and one corner of the Summer Triangle, with the constellation forming an east-pointing altitude of the triangle. It also has some notable X-ray sources and the giant stellar association of Cygnus OB2. One of the stars of this association, NML Cygni, is one of the largest stars currently known. The constellation is also home to Cygnus X-1, a distant X-ray binary containing a supergiant and unseen massive companion that was the first object widely held to be a black hole.
Many star systems in Cygnus have known planets as a result of the Kepler Mission observing one patch of the sky, an area around Cygnus.
The eastern part of the constellation contains, in the deep sky, part of the Hercules–Corona Borealis Great Wall, a giant galaxy filament that is the largest known structure in the observable universe and covers much of the northern sky.
History and mythology
In Eastern and World astronomy
In Polynesia, Cygnus was often recognized as a separate constellation. In Tonga it was called Tuula-lupe, and in the Tuamotus it was called Fanui-tai. In New Zealand it was called Mara-tea, in the Society Islands it was called Pirae-tea or Taurua-i-te-haapa-raa-manu, and in the Tuamotus it was called Fanui-raro. Beta Cygni was named in New Zealand; it was likely called Whetu-kaupo. Gamma Cygni was called Fanui-runga in the Tuamotus.
Deneb was also a name often given to stars in Islamic astronomy. The name Deneb comes from the Arabic name dhaneb, meaning "tail", from the phrase Dhanab ad-Dajājah, which means "the tail of the hen".
In Western astronomy
In Greek mythology, Cygnus has been identified with several different legendary swans. Zeus disguised himself as a swan to seduce Leda, Spartan king Tyndareus's wife, who gave birth to the Gemini, Helen of Troy, and Clytemnestra; Orpheus was transformed into a swan after his murder, and was said to have been placed in the sky next to his lyre (Lyra); and a man named Cygnus (Greek for swan) was transformed into his namesake.
Later Romans also associated this constellation with the tragic story of Phaethon, the son of Helios the sun god, who demanded to ride his father's sun chariot for a day. Phaethon, however, was unable to control the reins, forcing Zeus to destroy the chariot (and Phaethon) with a thunderbolt, causing it to plummet to the earth into the river Eridanus. According to the myth, Phaethon's close friend or lover, Cygnus of Liguria, grieved bitterly and spent many days diving into the river to collect Phaethon's bones to give him a proper burial. The gods were so touched by Cygnus's devotion that they turned him into a swan and placed him among the stars.
In Ovid's Metamorphoses, there are three people named Cygnus, all of whom are transformed into swans. Alongside Cygnus, noted above, he mentions a boy from Aetolia who throws himself off a cliff when his companion Phyllius refuses to give him a tamed bull that he demands, but he is transformed into a swan and flies away. He also mentions a son of Poseidon, an invulnerable warrior in the Trojan War who is eventually killed by Achilles, but Poseidon saves him by transforming him into a swan.
Together with other avian constellations near the summer solstice, Vultur cadens and Aquila, Cygnus may be a significant part of the origin of the myth of the Stymphalian Birds, one of The Twelve Labours of Hercules.
Characteristics
A very large constellation, Cygnus is bordered by Cepheus to the north and east, Draco to the north and west, Lyra to the west, Vulpecula to the south, Pegasus to the southeast and Lacerta to the east. The three-letter abbreviation for the constellation, as adopted by the IAU in 1922, is "Cyg". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined as a polygon of 28 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between 27.73° and 61.36°. Covering 804 square degrees and around 1.9% of the night sky, Cygnus ranks 16th of the 88 constellations in size.
Cygnus culminates at midnight on 29 June, and is most visible in the evening from the early summer to mid-autumn in the Northern Hemisphere.
Normally, Cygnus is depicted with Delta and Epsilon Cygni as its wings. Deneb, the brightest star in the constellation, is at its tail, and Albireo is at the tip of its beak.
There are several asterisms in Cygnus. In the 17th-century German celestial cartographer Johann Bayer's star atlas the Uranometria, Alpha, Beta and Gamma Cygni form the pole of a cross, while Delta and Epsilon form the cross beam. The nova P Cygni was then considered to be the body of Christ.
Features
There is an abundance of deep-sky objects, with many open clusters, nebulae of various types and supernova remnants found in Cygnus due to its position on the Milky Way.
Its molecular clouds form the Cygnus Rift, a dark nebula comprising one end of the Great Rift along the Milky Way's galactic plane. The rift begins around the Northern Coalsack and partially obscures the larger Cygnus molecular cloud complex behind it, of which the North America Nebula is part.
Stars
Bayer catalogued many stars in the constellation, giving them the Bayer designations from Alpha to Omega and then using lowercase Roman letters to g. John Flamsteed added the Roman letters h, i, k, l and m (these stars were considered informes by Bayer as they lay outside the asterism of Cygnus), but these designations were later dropped by Francis Baily.
There are several bright stars in Cygnus. α Cygni, called Deneb, is the brightest star in Cygnus. It is a white supergiant star of spectral type A2Iae that varies between magnitudes 1.21 and 1.29, one of the largest and most luminous A-class stars known. It is located about 2600 light-years away. Its traditional name means "tail" and refers to its position in the constellation. Albireo, designated β Cygni, is a celebrated binary star among amateur astronomers for its contrasting hues. The primary is an orange-hued giant star of magnitude 3.1 and the secondary is a blue-green hued star of magnitude 5.1. The system is 430 light-years away and is visible in large binoculars and all amateur telescopes. γ Cygni, traditionally named Sadr, is a yellow-tinged supergiant star of magnitude 2.2, 1800 light-years away. Its traditional name means "breast" and refers to its position in the constellation. δ Cygni (the proper name is Fawaris) is another bright binary star in Cygnus, 166 light-years away, with an orbital period of 800 years. The primary is a blue-white hued giant star of magnitude 2.9, and the secondary is a star of magnitude 6.6. The two components are visible in a medium-sized amateur telescope. The fifth star in Cygnus above magnitude 3 is Aljanah, designated ε Cygni. It is an orange-hued giant star of magnitude 2.5, 72 light-years from Earth.
There are several other dimmer double and binary stars in Cygnus. μ Cygni is a binary star with an optical tertiary component. The binary system has a period of 790 years and is 73 light-years from Earth. The primary and secondary, both white stars, are of magnitude 4.8 and 6.2, respectively. The unrelated tertiary component is of magnitude 6.9. Though the tertiary component is visible in binoculars, the primary and secondary currently require a medium-sized amateur telescope to split, as they will through the year 2020. The two stars will be closest between 2043 and 2050, when they will require a telescope with larger aperture to split. The stars 30 and 31 Cygni form a contrasting double star similar to the brighter Albireo. The two are visible in binoculars. The primary, 31 Cygni, is an orange-hued star of magnitude 3.8, 1400 light-years from Earth. The secondary, 30 Cygni, appears blue-green. It is of spectral type A5IIIn and magnitude 4.83, and is around 610 light-years from Earth. 31 Cygni itself is a binary star; the tertiary component is a blue star of magnitude 7.0. ψ Cygni is a binary star visible in small amateur telescopes, with two white components. The primary is of magnitude 5.0 and the secondary is of magnitude 7.5. 61 Cygni is a binary star visible in large binoculars or a small amateur telescope. It is 11.4 light-years from Earth and has a period of 750 years. Both components are orange-hued dwarf (main sequence) stars; the primary is of magnitude 5.2 and the secondary is of magnitude 6.1. 61 Cygni is significant because Friedrich Wilhelm Bessel determined its parallax in 1838, making it the first star to have a measured parallax.
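The relation between the quoted distance and the star's annual parallax follows from the standard definition of the parsec (a simple check using the distance given above, not Bessel's original figure):
\[
d = \frac{11.4\ \mathrm{ly}}{3.26\ \mathrm{ly/pc}} \approx 3.5\ \mathrm{pc}
\quad\Rightarrow\quad
p = \frac{1}{d/\mathrm{pc}} \approx 0.29'',
\]
i.e. an annual parallax of roughly 0.29 arcseconds.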
Located near η Cygni is the X-ray source Cygnus X-1, which is now thought to be caused by a black hole accreting matter in a binary star system. This was the first X-ray source widely believed to be a black hole. It is located approximately 2.2 kiloparsecs from the Sun. There is also a supergiant variable star in the system, known as HDE 226868.
Cygnus also contains several other noteworthy X-ray sources. Cygnus X-3 is a microquasar containing a Wolf–Rayet star in orbit around a very compact object, with a period of only 4.8 hours. The system is one of the most intrinsically luminous X-ray sources observed. The system undergoes periodic outbursts of unknown nature, and during one such outburst, the system was found to be emitting muons, likely caused by neutrinos. While the compact object is thought to be a neutron star or possibly a black hole, it is possible that the object is instead a more exotic stellar remnant, possibly the first discovered quark star, hypothesized due to its production of cosmic rays that cannot be explained if the object is a normal neutron star. The system also emits cosmic rays and gamma rays, and has helped shed light on the formation of such rays. Cygnus X-2 is another X-ray binary, containing an A-type giant in orbit around a neutron star with a 9.8-day period. The system is interesting due to the rather small mass of the companion star, as most millisecond pulsars have much more massive companions. Another black hole in Cygnus is V404 Cygni, which consists of a K-type star orbiting around a black hole of around 12 solar masses. The black hole, similar to that of Cygnus X-3, has been hypothesized to be a quark star. 4U 2129+47 is another X-ray binary containing a neutron star which undergoes outbursts, as is EXO 2030+375.
Cygnus is also home to several variable stars. SS Cygni is a dwarf nova which undergoes outbursts every 7–8 weeks. The system's total magnitude varies from 12th magnitude at its dimmest to 8th magnitude at its brightest. The two objects in the system are incredibly close together, with an orbital period of less than 0.28 days. χ Cygni is a red giant and the second-brightest Mira variable star at its maximum. It ranges between magnitudes 3.3 and 14.2, and spectral types S6,2e to S10,4e (MSe) over a period of 408 days; it has a diameter of 300 solar diameters and is 350 light-years from Earth. P Cygni is a luminous blue variable that brightened suddenly to 3rd magnitude in 1600 AD. Since 1715, the star has been of 5th magnitude, despite being more than 5000 light-years from Earth. The star's spectrum is unusual in that it contains very strong emission lines resulting from surrounding nebulosity. W Cygni is a semi-regular variable red giant star, 618 light-years from Earth. It has a maximum magnitude of 5.10 and a minimum magnitude of 6.83, with a period of 131 days; it is a red giant ranging between spectral types M4e and M6e(Tc:)III. NML Cygni is a red hypergiant semi-regular variable star located about 5,300 light-years from Earth. It is one of the largest stars currently known in the galaxy, with a radius exceeding 1,000 solar radii. Its magnitude is around 16.6 and its period is about 940 days.
The star KIC 8462852 (Tabby's Star) has received widespread press coverage because of unusual light fluctuations.
Exoplanets
Cygnus is one of the constellations that the Kepler satellite surveyed in its search for exoplanets, and as a result, there are about a hundred stars in Cygnus with known planets, the most of any constellation. One of the most notable systems is the Kepler-11 system, containing six transiting planets, all within a plane of approximately one degree. It was the first system with six transiting exoplanets to be discovered. With a spectral type of G6V, the star is somewhat cooler than the Sun. The planets are very close to the star; all but the last planet are closer to Kepler-11 than Mercury is to the Sun, and all the planets are more massive than Earth yet have low densities. The naked-eye star 16 Cygni, a triple star approximately 70 light-years from Earth composed of two Sun-like stars and a red dwarf, contains a planet orbiting one of the Sun-like stars, found due to variations in the star's radial velocity. Gliese 777, another naked-eye multiple star system containing a yellow star and a red dwarf, also contains a planet. The planet is somewhat similar to Jupiter, but with slightly more mass and a more eccentric orbit. The Kepler-22 system is also notable for having the most Earth-like exoplanet known when it was discovered in 2011.
Star clusters
The rich background of stars of Cygnus can make it difficult to make out open clusters.
M39 (NGC 7092) is an open cluster 950 light-years from Earth that is visible to the unaided eye under dark skies. It is loose, with about 30 stars arranged over a wide area; their conformation appears triangular. The brightest stars of M39 are of the 7th magnitude. Another open cluster in Cygnus is NGC 6910, also called the Rocking Horse Cluster, possessing 16 stars with a diameter of 5 arcminutes visible in a small amateur instrument; it is of magnitude 7.4. The brightest of these are two gold-hued stars, which represent the bottom of the toy it is named for. A larger amateur instrument reveals 8 more stars, nebulosity to the east and west of the cluster, and a diameter of 9 arcminutes. The nebulosity in this region is part of the Gamma Cygni Nebula. The other stars, approximately 3700 light-years from Earth, are mostly blue-white and very hot.
Other open clusters in Cygnus include Dolidze 9, Collinder 421, Dolidze 11, and Berkeley 90. Dolidze 9, 2800 light-years from Earth and relatively young at 20 million years old, is a faint open cluster with up to 22 stars visible in small and medium-sized amateur telescopes. Nebulosity is visible to the north and east of the cluster, which is 7 arcminutes in diameter. The brightest star appears in the eastern part of the cluster and is of the 7th magnitude; another bright star has a yellow hue. Dolidze 11 is an open cluster 400 million years old, farthest away of the three at 3700 light-years. More than 10 stars are visible in an amateur instrument in this cluster, of similar size to Dolidze 9 at 7 arcminutes in diameter, whose brightest star is of magnitude 7.5. It, too, has nebulosity in the east. Collinder 421 is a particularly old open cluster at an age of approximately 1 billion years; it is of magnitude 10.1. 3100 light-years from Earth, more than 30 stars are visible in a diameter of 8 arcminutes. The prominent star in the north of the cluster has a golden color, whereas the stars in the south of the cluster appear orange. Collinder 421 appears to be embedded in nebulosity, which extends past the cluster's borders to its west. Berkeley 90 is a smaller open cluster, with a diameter of 5 arcminutes. More than 16 members appear in an amateur telescope.
Molecular clouds
NGC 6826, the Blinking Planetary Nebula, is a planetary nebula with a magnitude of 8.5, 3200 light-years from Earth. It appears to "blink" in the eyepiece of a telescope because its central star is unusually bright (10th magnitude). When an observer focuses on the star, the nebula appears to fade away. Less than one degree from the Blinking Planetary is the double star 16 Cygni.
The North America Nebula (NGC 7000) is one of the most well-known nebulae in Cygnus, because it is visible to the unaided eye under dark skies, as a bright patch in the Milky Way. However, its characteristic shape is only visible in long-exposure photographs – it is difficult to observe in telescopes because of its low surface brightness. It has low surface brightness because it is so large; at its widest, the North America Nebula is 2 degrees across. Illuminated by a hot embedded star of magnitude 6, NGC 7000 is 1500 light-years from Earth.
To the south of Epsilon Cygni is the Veil Nebula (NGC 6960, 6979, 6992, and 6995), a 5,000-year-old supernova remnant covering approximately 3 degrees of the sky - it is over 50 light-years long. Because of its appearance, it is also called the Cygnus Loop. The Loop is only visible in long-exposure astrophotographs. However, the brightest portion, NGC 6992, is faintly visible in binoculars, and a dimmer portion, NGC 6960, is visible in wide-angle telescopes.
The DR 6 cluster is also nicknamed the "Galactic Ghoul" because of the nebula's resemblance to a human face.
The Gamma Cygni Nebula (IC 1318) includes both bright and dark nebulae in an area of over 4 degrees. DWB 87 is another of the many bright emission nebulae in Cygnus, 7.8 by 4.3 arcminutes. It is in the Gamma Cygni area. Two other emission nebulae include Sharpless 2-112 and Sharpless 2-115. When viewed in an amateur telescope, Sharpless 2–112 appears to be in a teardrop shape. More of the nebula's eastern portion is visible with an O III (doubly ionized oxygen) filter. There is an orange star of magnitude 10 nearby and a star of magnitude 9 near the nebula's northwest edge. Further to the northwest, there is a dark rift and another bright patch. The whole nebula measures 15 arcminutes in diameter. Sharpless 2–115 is another emission nebula with a complex pattern of light and dark patches. Two pairs of stars appear in the nebula; it is larger near the southwestern pair. The open cluster Berkeley 90 is embedded in this large nebula, which measures 30 by 20 arcminutes.
Also of note is the Crescent Nebula (NGC 6888), located between Gamma and Eta Cygni, which was formed by the Wolf–Rayet star HD 192163.
In recent years, amateur astronomers have made some notable Cygnus discoveries. The "Soap bubble nebula" (PN G75.5+1.7), near the Crescent nebula, was discovered on a digital image by Dave Jurasevich in 2007. In 2011, Austrian amateur Matthias Kronberger discovered a planetary nebula (Kronberger 61, now nicknamed "The Soccer Ball") on old survey photos, confirmed recently in images by the Gemini Observatory; both of these are likely too faint to be detected by eye in a small amateur scope.
But a much more obscure and relatively 'tiny' object—one which is readily seen in dark skies by amateur telescopes, under good conditions—is the newly discovered nebula (likely reflection type) associated with the star 4 Cygni (HD 183056): an approximately fan-shaped glowing region of several arcminutes' diameter, to the south and west of the fifth-magnitude star. It was first discovered visually near San Jose, California and publicly reported by amateur astronomer Stephen Waldee in 2007, and was confirmed photographically by Al Howard in 2010. California amateur astronomer Dana Patchick also says he detected it on the Palomar Observatory survey photos in 2005 but had not published it for others to confirm and analyze at the time of Waldee's first official notices and later 2010 paper.
Cygnus X is the largest star-forming region in the solar neighborhood and includes not only some of the brightest and most massive stars known (such as Cygnus OB2-12), but also Cygnus OB2, a massive stellar association classified by some authors as a young globular cluster.
Deep space objects
Cygnus A is the first radio galaxy discovered; at a distance of 730 million light-years from Earth, it is the closest powerful radio galaxy. In the visible spectrum, it appears as an elliptical galaxy in a small cluster. It is classified as an active galaxy because the supermassive black hole at its nucleus is accreting matter, which produces two jets of matter from the poles. The jets' interaction with the interstellar medium creates radio lobes, one source of radio emissions.
Other features
Cygnus is also the apparent source of the WIMP-wind due to the orientation of the solar system's rotation through the galactic halo.
The local Orion-Cygnus Arm and the distant Cygnus Arm are two minor galactic arms named after Cygnus for lying in its background.
| Physical sciences | Other | Astronomy |
6423 | https://en.wikipedia.org/wiki/Calorie | Calorie | The calorie is a unit of energy that originated from the caloric theory of heat. The large calorie, food calorie, dietary calorie, kilocalorie, or kilogram calorie is defined as the amount of heat needed to raise the temperature of one liter of water by one degree Celsius (or one kelvin). The small calorie or gram calorie is defined as the amount of heat needed to cause the same increase in one milliliter of water. Thus, 1 large calorie is equal to 1,000 small calories.
In nutrition and food science, the term calorie and the symbol cal may refer to the large unit or to the small unit in different regions of the world. It is generally used in publications and package labels to express the energy value of foods in per serving or per weight, recommended dietary caloric intake, metabolic rates, etc. Some authors recommend the spelling Calorie and the symbol Cal (both with a capital C) if the large calorie is meant, to avoid confusion; however, this convention is often ignored.
In physics and chemistry, the word calorie and its symbol usually refer to the small unit, the large one being called kilocalorie (kcal). However, the kcal is not officially part of the International System of Units (SI), and is regarded as obsolete, having been replaced in many uses by the SI derived unit of energy, the joule (J), or the kilojoule (kJ) for 1000 joules.
The precise equivalence between calories and joules has varied over the years, but in thermochemistry and nutrition it is now generally assumed that one (small) calorie (thermochemical calorie) is equal to exactly 4.184 J, and therefore one kilocalorie (one large calorie) is 4184 J or 4.184 kJ.
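To make the conversion concrete, here is a minimal sketch in Python applying the exact thermochemical factor quoted above; the constant and function names are illustrative, not from any standard library.

```python
CAL_TO_J = 4.184    # one thermochemical (small) calorie in joules, exact by definition
KCAL_TO_KJ = 4.184  # the same factor links kilocalories and kilojoules

def kcal_to_kj(kcal: float) -> float:
    """Convert large calories (kcal) to kilojoules."""
    return kcal * KCAL_TO_KJ

def kj_to_kcal(kj: float) -> float:
    """Convert kilojoules to large calories (kcal)."""
    return kj / KCAL_TO_KJ

print(kcal_to_kj(2000))  # a 2000 kcal daily intake is 8368.0 kJ
print(5 * CAL_TO_J)      # 5 small calories are 20.92 J
```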
History
The term "calorie" comes . It was first introduced by Nicolas Clément, as a unit of heat energy, in lectures on experimental calorimetry during the years 1819–1824. This was the "large" calorie. The term (written with lowercase "c") entered French and English dictionaries between 1841 and 1867.
The same term was used for the "small" unit by Pierre Antoine Favre (chemist) and Johann T. Silbermann (physicist) in 1852.
In 1879, Marcellin Berthelot distinguished between gram-calorie and kilogram-calorie, and proposed using "Calorie", with capital "C", for the large unit. This usage was adopted by Wilbur Olin Atwater, a professor at Wesleyan University, in 1887, in an influential article on the energy content of food.
The smaller unit was used by U.S. physician Joseph Howard Raymond, in his classic 1894 textbook A Manual of Human Physiology. He proposed calling the "large" unit "kilocalorie", but the term did not catch on until some years later.
The small calorie (cal) was recognized as a unit of the CGS system in 1896, alongside the already-existing CGS unit of energy, the erg (first suggested by Clausius in 1864, under the name ergon, and officially adopted in 1882).
In 1928, there were already serious complaints about the possible confusion arising from the two main definitions of the calorie and whether the notion of using the capital letter to distinguish them was sound.
The joule was officially adopted as the SI unit of energy at the ninth General Conference on Weights and Measures in 1948. The calorie was mentioned in the 7th edition of the SI brochure as an example of a non-SI unit.
The alternate spelling calory is a less-common, non-standard variant.
Definitions
The "small" calorie is broadly defined as the amount of energy needed to increase the temperature of 1 gram of water by 1 °C (or 1 K, which is the same increment, a gradation of one percent of the interval between the melting point and the boiling point of water). The actual amount of energy required to accomplish this temperature increase depends on the atmospheric pressure and the starting temperature; different choices of these parameters have resulted in several different precise definitions of the unit.
The two definitions most common in older literature appear to be the 15 °C calorie and the thermochemical calorie. Until 1948, the latter was defined as 4.1833 international joules; the current standard of 4.184 J was chosen to have the new thermochemical calorie represent the same quantity of energy as before.
Usage
Nutrition
In the United States, in a nutritional context, the "large" unit is used almost exclusively. It is generally written "calorie" with lowercase "c" and symbol "cal", even in government publications. The SI unit kilojoule (kJ) may be used instead, in legal or scientific contexts. Most American nutritionists prefer the unit kilocalorie to the unit kilojoule, whereas most physiologists prefer to use kilojoules. In the majority of other countries, nutritionists prefer the kilojoule to the kilocalorie.
In the European Union, on nutrition facts labels, energy is expressed in both kilojoules and kilocalories, abbreviated as "kJ" and "kcal" respectively.
In China, only kilojoules are given.
Food energy
The unit is most commonly used to express food energy, namely the specific energy (energy per mass) of metabolizing different types of food. For example, fat (triglyceride lipids) contains 9 kilocalories per gram (kcal/g), while carbohydrates (sugar and starch) and protein contain approximately 4 kcal/g. Alcohol in food contains 7 kcal/g. The "large" unit is also used to express recommended nutritional intake or consumption, as in "calories per day".
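As an illustration of how these per-gram figures are used, the following sketch totals the energy of a food item from its macronutrient masses; the function and the example snack are hypothetical, and the 9/4/4/7 kcal/g values are the ones quoted above.

```python
# Approximate energy densities quoted in the text (kcal per gram).
KCAL_PER_GRAM = {"fat": 9, "carbohydrate": 4, "protein": 4, "alcohol": 7}

def food_energy_kcal(grams: dict) -> float:
    """Estimate the energy of a food item from its macronutrient masses in grams."""
    return sum(KCAL_PER_GRAM[name] * mass for name, mass in grams.items())

# Hypothetical snack: 10 g fat, 30 g carbohydrate, 5 g protein.
snack = {"fat": 10, "carbohydrate": 30, "protein": 5}
print(food_energy_kcal(snack))          # 230 kcal ("Calories" on a US label)
print(food_energy_kcal(snack) * 4.184)  # roughly 962 kJ
```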
Dieting is the practice of eating food in a regulated way to decrease, maintain, or increase body weight, or to prevent and treat diseases such as diabetes and obesity. As weight loss depends on reducing caloric intake, different kinds of calorie-reduced diets have been shown to be generally effective.
Chemistry and physics
In other scientific contexts, the term "calorie" and the symbol "cal" almost always refer to the small unit, the "large" unit generally being called "kilocalorie" with symbol "kcal". It is mostly used to express the amount of energy released in a chemical reaction or phase change, typically per mole of substance, as in kilocalories per mole. It is also occasionally used to specify other energy quantities that relate to reaction energy, such as enthalpy of formation and the size of activation barriers. However, it is increasingly being superseded by the SI unit, the joule (J), and metric multiples thereof, such as the kilojoule (kJ).
The lingering use in chemistry is largely due to the fact that the energy released by a reaction in aqueous solution, expressed in kilocalories per mole of reagent, is numerically close to the change in the temperature of the solution in kelvins or degrees Celsius divided by the concentration of the reagent in moles per liter. However, this estimate assumes that the volumetric heat capacity of the solution is 1 kcal/(L⋅K), which is not exact even for pure water.
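A minimal sketch of this back-of-the-envelope estimate, assuming the approximate volumetric heat capacity of 1 kcal/(L⋅K) mentioned above; the function name and the worked numbers are illustrative only.

```python
def molar_energy_kcal_per_mol(delta_t_kelvin: float, molarity: float) -> float:
    """Estimate reaction energy per mole of reagent from the temperature rise,
    assuming a dilute aqueous solution with a heat capacity of ~1 kcal/(L*K)."""
    heat_per_liter_kcal = 1.0 * delta_t_kelvin   # kcal released per liter of solution
    return heat_per_liter_kcal / molarity        # divide by moles of reagent per liter

# Hypothetical neutralization: 0.5 mol/L of reagent reacts and the solution warms
# by 6.9 K; the estimate lands near the textbook ~13.7 kcal/mol (~57 kJ/mol).
print(molar_energy_kcal_per_mol(6.9, 0.5))   # 13.8 kcal/mol
```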
| Physical sciences | Energy, power, force and pressure | null |
6424 | https://en.wikipedia.org/wiki/Corona%20Australis | Corona Australis | Corona Australis is a constellation in the Southern Celestial Hemisphere. Its Latin name means "southern crown", and it is the southern counterpart of Corona Borealis, the northern crown. It is one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations. The Ancient Greeks saw Corona Australis as a wreath rather than a crown and associated it with Sagittarius or Centaurus. Other cultures have likened the pattern to a turtle, ostrich nest, a tent, or even a hut belonging to a rock hyrax.
Although fainter than its northern counterpart, the oval- or horseshoe-shaped pattern of its brighter stars renders it distinctive. Alpha and Beta Coronae Australis are the two brightest stars with an apparent magnitude of around 4.1. Epsilon Coronae Australis is the brightest example of a W Ursae Majoris variable in the southern sky. Lying alongside the Milky Way, Corona Australis contains one of the closest star-forming regions to the Solar System—a dusty dark nebula known as the Corona Australis Molecular Cloud, lying about 430 light years away. Within it are stars at the earliest stages of their lifespan. The variable stars R and TY Coronae Australis light up parts of the nebula, which varies in brightness accordingly.
Name
The name of the constellation was entered as "Corona Australis" when the International Astronomical Union (IAU) established the 88 modern constellations in 1922.
In 1932, the name was instead recorded as "Corona Austrina" when the IAU's commission on notation approved a list of four-letter abbreviations for the constellations.
The four-letter abbreviations were repealed in 1955. The IAU presently uses "Corona Australis" exclusively.
Characteristics
Corona Australis is a small constellation bordered by Sagittarius to the north, Scorpius to the west, Telescopium to the south, and Ara to the southwest. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CrA". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of four segments (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −36.77° and −45.52°. Covering 128 square degrees, Corona Australis culminates at midnight around the 30th of June and ranks 80th in area. Only visible at latitudes south of 53° north, Corona Australis cannot be seen from the British Isles as it lies too far south, but it can be seen from southern Europe and readily from the southern United States.
Features
While not a bright constellation, Corona Australis is nonetheless distinctive due to its easily identifiable pattern of stars, which has been described as horseshoe- or oval-shaped. Though it has no stars brighter than 4th magnitude, it still has 21 stars visible to the unaided eye (brighter than magnitude 5.5). Nicolas Louis de Lacaille used the Greek letters Alpha through to Lambda to label the most prominent eleven stars in the constellation, designating two stars as Eta and omitting Iota altogether. Mu Coronae Australis, a yellow star of spectral type G5.5III and apparent magnitude 5.21, was labelled by Johann Elert Bode and retained by Benjamin Gould, who deemed it bright enough to warrant naming.
Stars
The only star in the constellation to have received a name is Alfecca Meridiana or Alpha CrA. The name combines the Arabic name of the constellation with the Latin for "southern". In Arabic, Alfecca means "break", and refers to the shape of both Corona Australis and Corona Borealis. Also called simply "Meridiana", it is a white main sequence star located 125 light years away from Earth, with an apparent magnitude of 4.10 and spectral type A2Va. A rapidly rotating star, it spins at almost 200 km per second at its equator, making a complete revolution in around 14 hours. Like the star Vega, it has excess infrared radiation, which indicates it may be ringed by a disk of dust. It is currently a main-sequence star, but will eventually evolve into a white dwarf; currently, it has a luminosity 31 times greater, and a radius and mass of 2.3 times that of the Sun. Beta Coronae Australis is an orange giant 474 light years from Earth. Its spectral type is K0II, and it is of apparent magnitude 4.11. Since its formation, it has evolved from a B-type star to a K-type star. Its luminosity class places it as a bright giant; its luminosity is 730 times that of the Sun, designating it one of the highest-luminosity K0-type stars visible to the naked eye. 100 million years old, it has a radius of 43 solar radii () and a mass of between 4.5 and 5 solar masses (). Alpha and Beta are so similar as to be indistinguishable in brightness to the naked eye.
Some of the more prominent double stars include Gamma Coronae Australis—a pair of yellowish white stars 58 light years away from Earth, which orbit each other every 122 years. Widening since 1990, the two stars can be seen as separate with a 100 mm aperture telescope; they are separated by 1.3 arcseconds at an angle of 61 degrees. They have a combined visual magnitude of 4.2; each component is an F8V dwarf star with a magnitude of 5.01. Epsilon Coronae Australis is an eclipsing binary belonging to a class of stars known as W Ursae Majoris variables. These star systems are known as contact binaries as the component stars are so close together they touch. Varying by a quarter of a magnitude around an average apparent magnitude of 4.83 every seven hours, the star system lies 98 light years away. Its spectral type is F4VFe-0.8+. At the southern end of the crown asterism are the stars Eta1 and Eta2 CrA, which form an optical double. Of magnitude 5.1 and 5.5, they are separable with the naked eye and are both white. Kappa Coronae Australis is an easily resolved optical double—the components are of apparent magnitudes 6.3 and 5.6 and are about 1000 and 150 light years away respectively. They appear at an angle of 359 degrees, separated by 21.6 arcseconds. Kappa2 is actually the brighter of the pair and is more bluish white, with a spectral type of B9V, while Kappa1 is of spectral type A0III. Lying 202 light years away, Lambda Coronae Australis is a double splittable in small telescopes. The primary is a white star of spectral type A2Vn and magnitude of 5.1, while the companion star has a magnitude of 9.7. The two components are separated by 29.2 arcseconds at an angle of 214 degrees.
Zeta Coronae Australis is a rapidly rotating main sequence star with an apparent magnitude of 4.8, 221.7 light years from Earth. The star has blurred lines in its hydrogen spectrum due to its rotation. Its spectral type is B9V. Theta Coronae Australis lies further to the west, a yellow giant of spectral type G8III and apparent magnitude 4.62. Corona Australis harbours RX J1856.5-3754, an isolated neutron star that is thought to lie 140 (±40) parsecs, or 460 (±130) light years, away, with a diameter of 14 km. It was once suspected to be a strange star, but this has been discounted.
Corona Australis Molecular Cloud
The Corona Australis Molecular Cloud is a dark molecular cloud just north of Beta Coronae Australis. Illuminated by a number of embedded reflection nebulae, the cloud fans out from Epsilon Coronae Australis eastward along the constellation border with Sagittarius. It contains Herbig–Haro objects (protostars) and some very young stars, and it is one of the closest star-forming regions to the Solar System, at 430 light years (130 parsecs), lying at the surface of the Local Bubble. The first nebulae of the cloud were recorded in 1865 by Johann Friedrich Julius Schmidt.
Between Epsilon and Gamma Coronae Australis the cloud consists of the particular dark nebula and star forming region Bernes 157. It is 55 by 18 arcminutes wide and possesses several stars around magnitude 13. These stars are dimmed by up to 8 magnitudes because of the obscuring dust clouds. At the center of the active star-forming region lies the Coronet cluster (also called R CrA Cluster), which is used in studying star and protoplanetary disk formation. R Coronae Australis (R CrA) is an irregular variable star ranging from magnitudes 9.7 to 13.9. Blue-white, it is of spectral type B5IIIpe. A very young star, it is still accumulating interstellar material. It is obscured by, and illuminates, the surrounding nebula, NGC 6729, which brightens and darkens with it. The nebula is often compared to a comet for its appearance in a telescope, as its length is five times its width. Other stars of the cluster include S Coronae Australis, a G-class dwarf and T Tauri star.
Nearby to the north, another young variable star, TY Coronae Australis, illuminates another nebula: the reflection nebula NGC 6726/NGC 6727. TY Coronae Australis ranges irregularly between magnitudes 8.7 and 12.4, and the brightness of the nebula varies with it. Blue-white, it is of spectral type B8e. The largest young stars in the region, R, S, T, TY and VV Coronae Australis, are all ejecting jets of material which cause surrounding dust and gas to coalesce and form Herbig–Haro objects, many of which have been identified nearby.
The globular cluster NGC 6723, which can be seen adjacent to the nebulosity, is not part of the cloud; it lies in the neighbouring constellation of Sagittarius and is much further away.
Deep sky objects
IC 1297 is a planetary nebula of apparent magnitude 10.7, which appears as a green-hued roundish object in higher-powered amateur instruments. The nebula surrounds the variable star RU Coronae Australis, which has an average apparent magnitude of 12.9 and is a WC class Wolf–Rayet star. IC 1297 is small, at only 7 arcseconds in diameter; it has been described as "a square with rounded edges" in the eyepiece, elongated in the north–south direction. Descriptions of its color encompass blue, blue-tinged green, and green-tinged blue.
Corona Australis' location near the Milky Way means that galaxies are uncommonly seen. NGC 6768 is a magnitude 11.2 object 35′ south of IC 1297. It is made up of two galaxies merging, one of which is an elongated elliptical galaxy of classification E4 and the other a lenticular galaxy of classification S0. IC 4808 is a galaxy of apparent magnitude 12.9 located on the border of Corona Australis with the neighbouring constellation of Telescopium and 3.9 degrees west-southwest of Beta Sagittarii. However, amateur telescopes will only show a suggestion of its spiral structure. It is 1.9 arcminutes by 0.8 arcminutes. The central area of the galaxy does appear brighter in an amateur instrument, which shows it to be tilted northeast–southwest.
Southeast of Theta and southwest of Eta lies the open cluster ESO 281-SC24, which is composed of the yellow 9th magnitude star GSC 7914 178 1 and five 10th to 11th magnitude stars. Halfway between Theta Coronae Australis and Theta Scorpii is the dense globular cluster NGC 6541. Described as between magnitude 6.3 and magnitude 6.6, it is visible in binoculars and small telescopes. Around 22000 light years away, it is around 100 light years in diameter. It is estimated to be around 14 billion years old. NGC 6541 appears 13.1 arcminutes in diameter and is somewhat resolvable in large amateur instruments; a 12-inch telescope reveals approximately 100 stars but the core remains unresolved.
Meteor showers
The Corona Australids are a meteor shower that takes place between 14 and 18 March each year, peaking around 16 March. This meteor shower does not have a high peak hourly rate. In 1953 and 1956, observers noted a maximum of 6 meteors per hour and 4 meteors per hour respectively; in 1955 the shower was "barely resolved". However, in 1992, astronomers detected a peak rate of 45 meteors per hour. The Corona Australids' rate varies from year to year. At only six days, the shower's duration is particularly short, and its meteoroids are small; the stream is devoid of large meteoroids. The Corona Australids were first seen with the unaided eye in 1935 and first observed with radar in 1955. Corona Australid meteors have an entry velocity of 45 kilometers per second. In 2006, a shower originating near Beta Coronae Australis was designated as the Beta Coronae Australids. They appear in May, the same month as a nearby shower known as the May Microscopids, but the two showers have different trajectories and are unlikely to be related.
History
Corona Australis may have been recorded by ancient Mesopotamians in the MUL.APIN, as a constellation called MA.GUR ("The Bark"). However, this constellation, adjacent to SUHUR.MASH ("The Goat-Fish", modern Capricornus), may instead have been modern Epsilon Sagittarii. As a part of the southern sky, MA.GUR was one of the fifteen "stars of Ea".
In the 3rd century BC, the Greek didactic poet Aratus wrote of, but did not name the constellation, instead calling the two crowns Στεφάνοι (Stephanoi). The Greek astronomer Ptolemy described the constellation in the 2nd century AD, though with the inclusion of Alpha Telescopii, since transferred to Telescopium. Ascribing 13 stars to the constellation, he named it Στεφάνος νοτιος (Stephanos notios), "Southern Wreath", while other authors associated it with either Sagittarius (having fallen off his head) or Centaurus; with the former, it was called Corona Sagittarii. Similarly, the Romans called Corona Australis the "Golden Crown of Sagittarius". It was known as Parvum Coelum ("Canopy", "Little Sky") in the 5th century. The 18th-century French astronomer Jérôme Lalande gave it the names Sertum Australe ("Southern Garland") and Orbiculus Capitis, while German poet and author Philippus Caesius called it Corolla ("Little Crown") or Spira Australis ("Southern Coil"), and linked it with the Crown of Eternal Life from the New Testament. Seventeenth-century celestial cartographer Julius Schiller linked it to the Diadem of Solomon. Sometimes, Corona Australis was not the wreath of Sagittarius but arrows held in his hand.
Corona Australis has been associated with the myth of Bacchus and Stimula. Jupiter had impregnated Stimula, causing Juno to become jealous. Juno convinced Stimula to ask Jupiter to appear in his full splendor, which the mortal woman could not handle, causing her to burn. After Bacchus, Stimula's unborn child, became an adult and the god of wine, he honored his deceased mother by placing a wreath in the sky.
In Chinese astronomy, the stars of Corona Australis are located within the Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ). The constellation itself was known as ti'en pieh ("Heavenly Turtle") and during the Western Zhou period, marked the beginning of winter. However, precession over time has meant that the "Heavenly River" (Milky Way) became the more accurate marker to the ancient Chinese and hence supplanted the turtle in this role. Arabic names for Corona Australis include Al Ķubbah "the Tortoise", Al Ĥibā "the Tent" or Al Udḥā al Na'ām "the Ostrich Nest". It was later given the name Al Iklīl al Janūbiyyah, which the European authors Chilmead, Riccioli and Caesius transliterated as Alachil Elgenubi, Elkleil Elgenubi and Aladil Algenubi respectively.
The ǀXam speaking San people of South Africa knew the constellation as ≠nabbe ta !nu "house of branches"—owned originally by the Dassie (rock hyrax), and the star pattern depicting people sitting in a semicircle around a fire.
The indigenous Boorong people of northwestern Victoria saw it as Won, a boomerang thrown by Totyarguil (Altair). The Aranda people of Central Australia saw Corona Australis as a coolamon carrying a baby, which was accidentally dropped to earth by a group of sky-women dancing in the Milky Way. The impact of the coolamon created Gosses Bluff crater, 175 km west of Alice Springs. The Torres Strait Islanders saw Corona Australis as part of a larger constellation encompassing part of Sagittarius and the tip of Scorpius's tail; the Pleiades and Orion were also associated. This constellation was Tagai's canoe, crewed by the Pleiades, called the Usiam, and Orion, called the Seg. The myth of Tagai says that he was in charge of this canoe, but his crewmen consumed all of the supplies onboard without asking permission. Enraged, Tagai bound the Usiam with a rope and tied them to the side of the boat, then threw them overboard. Scorpius's tail represents a suckerfish, while Eta Sagittarii and Theta Corona Australis mark the bottom of the canoe. On the island of Futuna, the figure of Corona Australis was called Tanuma and in the Tuamotus, it was called Na Kaua-ki-Tonga.
| Physical sciences | Other | Astronomy |
6429 | https://en.wikipedia.org/wiki/Compact%20disc | Compact disc | The compact disc (CD) is a digital optical disc data storage format that was co-developed by Philips and Sony to store and play digital audio recordings. It uses the Compact Disc Digital Audio format which typically provides 74 minutes of audio on a disc. In later years, the compact disc was adapted for non-audio computer data storage purposes as CD-ROM and its derivatives. First released in Japan in October 1982, the CD was the second optical disc technology to be invented, after the much larger LaserDisc (LD). By 2007, 200 billion CDs (including audio CDs, CD-ROMs and CD-Rs) had been sold worldwide.
Standard CDs have a diameter of 120 millimetres (4.7 in) and are designed to hold up to 74 minutes of uncompressed stereo digital audio or about 650 MiB of data. Capacity is routinely extended to 80 minutes and 700 MiB, 90 minutes and 800 MiB, or 99 minutes and 870 MiB by arranging data more closely on the same-sized disc. The Mini CD has various diameters ranging from 60 to 80 millimetres; they have been used for CD singles or delivering device drivers.
The CD gained rapid popularity in the 1990s, quickly outselling all other audio formats in the United States by 1991, ending the market dominance of the phonograph record and the cassette tape. By 2000, the CD accounted for 92.3% of the entire market share in regard to US music sales. The CD is considered the last dominant audio format of the album era, as the rise of MP3, iTunes, cellular ringtones, and other downloadable music formats in the mid-2000s ended the decade-long dominance of the CD.
The format was later adapted (as CD-ROM) for general purpose data storage and initially could hold much more data than a personal computer hard disk drive. Several other formats were further derived, both pre-pressed and blank user writable, including write-once audio and data storage (CD-R), rewritable media (CD-RW), Video CD (VCD), Super Video CD (SVCD), Photo CD, Picture CD, Compact Disc-Interactive (CD-i), Enhanced Music CD, and Super Audio CD (SACD) which may have a CD-DA layer.
History
Physical details
A CD is made from 1.2-millimetre-thick polycarbonate plastic and weighs 14–33 grams. From the center outward, components are: the center spindle hole (15 mm), the first-transition area (clamping ring), the clamping area (stacking ring), the second-transition area (mirror band), the program (data) area, and the rim. The inner program area occupies a radius from 25 to 58 mm.
A thin layer of aluminum or, more rarely, gold is applied to the surface, making it reflective. The metal is protected by a film of lacquer normally spin coated directly on the reflective layer. The label is printed on the lacquer layer, usually by screen printing or offset printing.
CD data is represented as tiny indentations known as pits, encoded in a spiral track molded into the top of the polycarbonate layer. The areas between pits are known as lands. Each pit is approximately 100 nm deep by 500 nm wide, and varies from 850 nm to 3.5 μm in length. The distance between the windings (the pitch) is 1.6 μm (measured center-to-center, not between the edges).
When playing an audio CD, a motor within the CD player spins the disc to a scanning velocity of 1.2–1.4 m/s (constant linear velocity, CLV)—equivalent to approximately 500 RPM at the inside of the disc, and approximately 200 RPM at the outside edge. The track on the CD begins at the inside and spirals outward so a disc played from beginning to end slows its rotation rate during playback.
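A short sketch, using the constant-linear-velocity figures above and the 25–58 mm program-area radii given earlier, shows where the roughly 500 and 200 RPM figures come from; the helper function is illustrative.

```python
import math

def rpm_from_clv(linear_speed_m_s: float, radius_mm: float) -> float:
    """Rotation rate (rev/min) needed to keep a constant linear velocity at a given radius."""
    circumference_m = 2 * math.pi * (radius_mm / 1000)
    return linear_speed_m_s / circumference_m * 60

# Program area spans roughly radii 25 mm (inner edge) to 58 mm (outer edge).
print(rpm_from_clv(1.3, 25))   # ~497 rpm near the start of the disc
print(rpm_from_clv(1.3, 58))   # ~214 rpm near the outer edge
```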
The program area is 86.05 cm2 and the length of the recordable spiral is about 5.38 km. With a scanning speed of 1.2 m/s, the playing time is 74 minutes, or 650 MiB of data on a CD-ROM. A disc with data packed slightly more densely is tolerated by most players (though some old ones fail). Using a linear velocity of 1.2 m/s and a narrower track pitch of 1.5 μm increases the playing time to 80 minutes, and data capacity to 700 MiB. Even denser tracks are possible, with semi-standard 90 minute/800 MiB discs having 1.33 μm, and 99 minute/870 MiB having 1.26 μm, but compatibility suffers as density increases.
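The playing-time figures follow from the geometry quoted above: the spiral length is roughly the program area divided by the track pitch, and dividing by the scanning speed gives the duration. A small illustrative sketch, assuming those quoted values:

```python
def spiral_length_m(program_area_cm2: float, track_pitch_um: float) -> float:
    """Approximate spiral length: program area divided by track pitch."""
    return (program_area_cm2 * 1e-4) / (track_pitch_um * 1e-6)

def playing_time_min(spiral_m: float, scan_speed_m_s: float) -> float:
    """Playback duration at a constant linear scanning speed."""
    return spiral_m / scan_speed_m_s / 60

standard = spiral_length_m(86.05, 1.6)   # ~5378 m, about 5.4 km
print(playing_time_min(standard, 1.2))   # ~74.7 minutes

dense = spiral_length_m(86.05, 1.5)      # narrower pitch gives ~5737 m
print(playing_time_min(dense, 1.2))      # ~79.7 minutes, the "80 minute" disc
```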
A CD is read by focusing a 780 nm wavelength (near infrared) semiconductor laser through the bottom of the polycarbonate layer. The change in height between pits and lands results in a difference in the way the light is reflected. Because the pits are indented into the top layer of the disc and are read through the transparent polycarbonate base, the pits form bumps when read. The laser hits the disc, casting a circle of light wider than the modulated spiral track reflecting partially from the lands and partially from the top of any bumps where they are present. As the laser passes over a pit (bump), its height means that the round trip path of the light reflected from its peak is 1/2 wavelength out of phase with the light reflected from the land around it. This is because the height of a bump is around 1/4 of the wavelength of the light used, so the light falls 1/4 out of phase before reflection and another 1/4 wavelength out of phase after reflection. This causes partial cancellation of the laser's reflection from the surface. By measuring the reflected intensity change with a photodiode, a modulated signal is read back from the disc.
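A rough numerical check of this quarter-wave argument, assuming a typical refractive index of about 1.55 for polycarbonate (a value not stated in the text):

```python
# Quarter-wave check for destructive interference between pit and land reflections.
laser_nm = 780
n_polycarbonate = 1.55                            # assumption: typical refractive index

wavelength_in_disc = laser_nm / n_polycarbonate   # ~503 nm inside the plastic
quarter_wave = wavelength_in_disc / 4             # ~126 nm

print(round(quarter_wave))  # ~126 nm, the same order as the ~100 nm pit depth, so the
                            # round-trip path difference is ~half a wave and the
                            # reflections partially cancel
```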
To accommodate the spiral pattern of data, the laser is placed on a mobile mechanism within the disc tray of any CD player. This mechanism typically takes the form of a sled that moves along a rail. The sled can be driven by a worm gear or linear motor. Where a worm gear is used, a second shorter-throw linear motor, in the form of a coil and magnet, makes fine position adjustments to track eccentricities in the disk at high speed. Some CD drives (particularly those manufactured by Philips during the 1980s and early 1990s) use a swing arm similar to that seen on a gramophone.
The pits and lands do not directly represent the 0s and 1s of binary data. Instead, non-return-to-zero, inverted encoding is used: a change from either pit to land or land to pit indicates a 1, while no change indicates a series of 0s. There must be at least two, and no more than ten 0s between each 1, which is defined by the length of the pit. This, in turn, is decoded by reversing the eight-to-fourteen modulation used in mastering the disc, and then reversing the cross-interleaved Reed–Solomon coding, finally revealing the raw data stored on the disc. These encoding techniques (defined in the Red Book) were originally designed for CD Digital Audio, but they later became a standard for almost all CD formats (such as CD-ROM).
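A toy sketch of the first decoding stage described above: it turns measured pit/land run lengths into channel bits under the rule that every transition is a 1 and each additional clock period of the same level is a 0. This covers only the NRZI step, before EFM demodulation and Reed–Solomon decoding, and the function is hypothetical.

```python
def runs_to_channel_bits(run_lengths):
    """Decode pit/land run lengths (in channel-bit periods) into channel bits.

    Each transition between pit and land reads as a 1; holding the same level
    for n periods contributes (n - 1) zeros after that 1. The EFM rules require
    every run to be 3 to 11 periods long (two to ten zeros between ones).
    """
    bits = []
    for n in run_lengths:
        if not 3 <= n <= 11:
            raise ValueError(f"run length {n} violates the two-to-ten zero rule")
        bits.append(1)
        bits.extend([0] * (n - 1))
    return bits

# Hypothetical run lengths as they might be measured from the photodiode signal:
print(runs_to_channel_bits([3, 5, 4]))
# [1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0]
```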
Integrity
CDs are susceptible to damage during handling and from environmental exposure. Pits are much closer to the label side of a disc, enabling defects and contaminants on the clear side to be out of focus during playback. Consequently, CDs are more likely to suffer damage on the label side of the disc. Scratches on the clear side can be repaired by refilling them with similar refractive plastic or by careful polishing. The edges of CDs are sometimes incompletely sealed, allowing gases and liquids to enter the CD and corrode the metal reflective layer and/or interfere with the focus of the laser on the pits, a condition known as disc rot. The fungus Geotrichum candidum has been found—under conditions of high heat and humidity—to consume the polycarbonate plastic and aluminium found in CDs.
The data integrity of compact discs can be measured using surface error scanning, which can measure the rates of different types of data errors, known as C1, C2 and CU, as well as extended (finer-grain) error measurements known as E11, E12, E21, E22, E31 and E32. Higher rates indicate a possibly damaged or unclean data surface, low media quality, deteriorating media, or recordable media written to by a malfunctioning CD writer.
Error scanning can reliably predict data losses caused by media deterioration. Support of error scanning differs between vendors and models of optical disc drives, and extended error scanning (known as "advanced error scanning" in Nero DiscSpeed) has only been available on Plextor and some BenQ optical drives so far, as of 2020.
Disc shapes and diameters
The digital data on a CD begins at the center of the disc and proceeds toward the edge, which allows adaptation to the different sizes available. Standard CDs are available in two sizes. By far, the most common is 120 millimetres (4.7 in) in diameter, with a 74-, 80-, 90-, or 99-minute audio capacity and a 650, 700, 800, or 870 MiB (737,280,000-byte) data capacity. Discs are 1.2 mm thick, with a 15 mm center hole. The size of the hole was chosen by Joop Sinjou and based on a Dutch 10-cent coin: a dubbeltje. Philips/Sony patented the physical dimensions.
The official Philips history says the capacity was specified by Sony executive Norio Ohga to be able to contain the entirety of Beethoven's Ninth Symphony on one disc.
This is a myth according to Kees Immink, as the EFM code format had not yet been decided in December 1979, when the 120 mm size was adopted. The adoption of EFM in June 1980 allowed 30 percent more playing time, which would have resulted in 97 minutes for the 120 mm diameter or 74 minutes for a disc as small as 100 mm. Instead, the information density was lowered by 30 percent to keep the playing time at 74 minutes. The 120 mm diameter has been adopted by subsequent formats, including Super Audio CD, DVD, HD DVD, and Blu-ray Disc. The 80 mm diameter discs ("Mini CDs") can hold up to 24 minutes of music or 210 MiB.
SHM-CD
SHM-CD (short for Super High Material Compact Disc) is a variant of the compact disc that replaces the polycarbonate base with a proprietary material. This material was created during joint research by Universal Music Japan and JVC into manufacturing high-clarity liquid-crystal displays.
SHM-CDs are fully compatible with all CD players since the difference in light refraction is not detected as an error. JVC claims that the greater fluidity and clarity of the material used for SHM-CDs results in a higher reading accuracy and improved sound quality. However, since the CD-Audio format contains inherent error correction, it is unclear whether a reduction in read errors would be great enough to produce an improved output.
Logical format
Audio CD
The logical format of an audio CD (officially Compact Disc Digital Audio or CD-DA) is described in a document produced in 1980 by the format's joint creators, Sony and Philips. The document is known colloquially as the Red Book CD-DA after the color of its cover. The format is a two-channel 16-bit PCM encoding at a 44.1 kHz sampling rate per channel. Four-channel sound was to be an allowable option within the Red Book format, but has never been implemented. Monaural audio has no existing standard on a Red Book CD; thus, mono source material is usually presented as two identical channels in a standard Red Book stereo track (i.e., mirrored mono). An MP3 CD, by contrast, can contain audio files with mono sound.
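The Red Book parameters above fix the audio data rate, and a short calculation (illustrative only) shows how they relate to the disc capacities mentioned elsewhere in this article.

```python
# Red Book audio stream rate, from the parameters quoted above.
sample_rate_hz = 44_100
channels = 2
bits_per_sample = 16

bytes_per_second = sample_rate_hz * channels * bits_per_sample // 8
print(bytes_per_second)        # 176,400 bytes of audio per second

seconds_74_min = 74 * 60
audio_bytes = bytes_per_second * seconds_74_min
print(audio_bytes / 2**20)     # ~747 MiB of raw samples in 74 minutes

# A CD-ROM of the same length holds only about 650 MiB of user data, because each
# 2352-byte audio-sized sector keeps just 2048 bytes after the data format's extra
# error-detection and correction layers.
```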
CD-Text is an extension of the Red Book specification for an audio CD that allows for the storage of additional text information (e.g., album name, song name, artist) on a standards-compliant audio CD. The information is stored either in the lead-in area of the CD, where there are roughly five kilobytes of space available or in the subcode channels R to W on the disc, which can store about 31 megabytes.
Compact Disc + Graphics is a special audio compact disc that contains graphics data in addition to the audio data on the disc. The disc can be played on a regular audio CD player, but when played on a special CD+G player, it can output a graphics signal (typically, the CD+G player is hooked up to a television set or a computer monitor); these graphics are almost exclusively used to display lyrics on a television set for karaoke performers to sing along with. The CD+G format takes advantage of the channels R through W. These six bits store the graphics information.
CD + Extended Graphics (CD+EG, also known as CD+XG) is an improved variant of the Compact Disc + Graphics (CD+G) format. Like CD+G, CD+EG uses basic CD-ROM features to display text and video information in addition to the music being played. This extra data is stored in subcode channels R-W. Very few CD+EG discs have been published.
Super Audio CD
Super Audio CD (SACD) is a high-resolution, read-only optical audio disc format that was designed to provide higher-fidelity digital audio reproduction than the Red Book. Introduced in 1999, it was developed by Sony and Philips, the same companies that created the Red Book. SACD was in a format war with DVD-Audio, but neither has replaced audio CDs. The SACD standard is referred to as the Scarlet Book standard.
Titles in the SACD format can be issued as hybrid discs; these discs contain the SACD audio stream as well as a standard audio CD layer which is playable in standard CD players, thus making them backward compatible.
CD-MIDI
CD-MIDI is a format used to store music-performance data, which upon playback is performed by electronic instruments that synthesize the audio. Hence, unlike the original Red Book CD-DA, these recordings are not digitally sampled audio recordings. The CD-MIDI format is defined as an extension of the original Red Book.
CD-ROM
For the first few years of its existence, the CD was a medium used purely for audio. In 1988, the Yellow Book CD-ROM standard was established by Sony and Philips, which defined a non-volatile optical data computer data storage medium using the same physical format as audio compact discs, readable by a computer with a CD-ROM drive.
Video CD
Video CD (VCD, View CD, and Compact Disc digital video) is a standard digital format for storing video media on a CD. VCDs are playable in dedicated VCD players, most modern DVD-Video players, personal computers, and some video game consoles. The VCD standard was created in 1993 by Sony, Philips, Matsushita, and JVC and is referred to as the White Book standard.
Overall picture quality is intended to be comparable to VHS video. Poorly compressed VCD video can sometimes be of lower quality than VHS video, but VCD exhibits block artifacts rather than analog noise and does not deteriorate further with each use. 352×240 (or SIF) resolution was chosen because it is half the vertical and half the horizontal resolution of NTSC video. 352×288 is similarly one quarter of the PAL/SECAM resolution. This approximates the (overall) resolution of an analog VHS tape, which, although it has double the number of (vertical) scan lines, has a much lower horizontal resolution.
Super Video CD
Super Video CD (Super Video Compact Disc or SVCD) is a format used for storing video media on standard compact discs. SVCD was intended as a successor to VCD and an alternative to DVD-Video and falls somewhere between both in terms of technical capability and picture quality.
SVCD has two-thirds the resolution of DVD, and over 2.7 times the resolution of VCD. One CD-R disc can hold up to 60 minutes of standard-quality SVCD-format video. While no specific limit on SVCD video length is mandated by the specification, one must lower the video bit rate, and therefore quality, to accommodate very long videos. It is usually difficult to fit much more than 100 minutes of video onto one SVCD without incurring a significant quality loss, and many hardware players are unable to play a video with an instantaneous bit rate lower than 300 to 600 kilobits per second.
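A rough budget calculation, assuming a 700 MiB disc (the function name and figures are illustrative), shows why longer SVCDs force the average bit rate, and hence picture quality, down.

```python
def max_avg_bitrate_kbps(disc_mib: float, minutes: float) -> float:
    """Average total bit rate (video plus audio) that fits in the given capacity."""
    bits = disc_mib * 2**20 * 8
    return bits / (minutes * 60) / 1000

print(round(max_avg_bitrate_kbps(700, 60)))    # ~1631 kbit/s budget for 60 minutes
print(round(max_avg_bitrate_kbps(700, 100)))   # ~979 kbit/s budget for 100 minutes
# Dropping much further runs into the several-hundred-kbit/s floor that many
# hardware players can handle, which is why very long SVCDs lose quality.
```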
Photo CD
Photo CD is a system designed by Kodak for digitizing and storing photos on a CD. Launched in 1992, the discs were designed to hold nearly 100 high-quality images, scanned prints, and slides using special proprietary encoding. Photo CDs are defined in the Beige Book and conform to the CD-ROM XA and CD-i Bridge specifications as well. They are intended to play on CD-i players, Photo CD players, and any computer with suitable software (irrespective of operating system). The images can also be printed out on photographic paper with a special Kodak machine. This format is not to be confused with Kodak Picture CD, which is a consumer product in CD-ROM format.
CD-i
The Philips Green Book specifies a standard for interactive multimedia compact discs designed for CD-i players (1993). CD-i discs can contain audio tracks that can be played on regular CD players, but CD-i discs are not compatible with most CD-ROM drives and software. The CD-i Ready specification was later created to improve compatibility with audio CD players, and the CD-i Bridge specification was added to create CD-i-compatible discs that can be accessed by regular CD-ROM drives.
CD-i Ready
Philips defined a format similar to CD-i called CD-i Ready, which puts CD-i software and data into the pregap of track 1. This format was supposed to be more compatible with older audio CD players.
Enhanced Music CD (CD+)
Enhanced Music CD, also known as CD Extra or CD Plus, is a format that combines audio tracks and data tracks on the same disc by putting audio tracks in a first session and data in a second session. It was developed by Philips and Sony, and it is defined in the Blue Book.
VinylDisc
VinylDisc is the hybrid of a standard audio CD and the vinyl record. The vinyl layer on the disc's label side can hold approximately three minutes of music.
Manufacture, cost, and pricing
In 1995, material costs were 30 cents for the jewel case and 10 to 15 cents for the CD. The wholesale cost of CDs was $0.75 to $1.15, while the typical retail price of a prerecorded music CD was $16.98. On average, the store received 35 percent of the retail price, the record company 27 percent, the artist 16 percent, the manufacturer 13 percent, and the distributor 9 percent. When 8-track cartridges, compact cassettes, and CDs were introduced, each was marketed at a higher price than the format they succeeded, even though the cost to produce the media was reduced. This was done because the perceived value increased. This continued from phonograph records to CDs, but was broken when Apple marketed MP3s for $0.99, and albums for $9.99. The incremental cost, though, to produce an MP3 is negligible.
Writable compact discs
Recordable CD
Recordable Compact Discs, CD-Rs, are injection-molded with a blank data spiral. A photosensitive dye is then applied, after which the discs are metalized and lacquer-coated. The write laser of the CD recorder changes the color of the dye to allow the read laser of a standard CD player to see the data, just as it would with a standard stamped disc. The resulting discs can be read by most CD-ROM drives and played in most audio CD players. CD-Rs follow the Orange Book standard.
CD-R recordings are designed to be permanent. Over time, the dye's physical characteristics may change causing read errors and data loss until the reading device cannot recover with error correction methods. Errors can be predicted using surface error scanning. The design life is from 20 to 100 years, depending on the quality of the discs, the quality of the writing drive, and storage conditions. Testing has demonstrated such degradation of some discs in as little as 18 months under normal storage conditions. This failure is known as disc rot, for which there are several, mostly environmental, reasons.
The recordable audio CD is designed to be used in a consumer audio CD recorder. These consumer audio CD recorders use SCMS (Serial Copy Management System), an early form of digital rights management (DRM), to conform to the AHRA (Audio Home Recording Act). The Recordable Audio CD is typically somewhat more expensive than CD-R due to lower production volume and a 3 percent AHRA royalty used to compensate the music industry for the making of a copy.
High-capacity recordable CD is a higher-density recording format that can hold 20% more data than conventional discs. The higher capacity is incompatible with some recorders and recording software.
ReWritable CD
CD-RW is a re-recordable medium that uses a metallic alloy instead of a dye. The write laser, in this case, is used to heat and alter the properties (amorphous vs. crystalline) of the alloy, and hence change its reflectivity. A CD-RW does not have as great a difference in reflectivity as a pressed CD or a CD-R, and so many earlier CD audio players cannot read CD-RW discs, although most later CD audio players and stand-alone DVD players can. CD-RWs follow the Orange Book standard.
The ReWritable Audio CD is designed to be used in a consumer audio CD recorder, which will not (without modification) accept standard CD-RW discs. These consumer audio CD recorders use the Serial Copy Management System (SCMS), an early form of digital rights management (DRM), to conform to the United States' Audio Home Recording Act (AHRA). The ReWritable Audio CD is typically somewhat more expensive than CD-R due to (a) lower volume and (b) a 3 percent AHRA royalty used to compensate the music industry for the making of a copy.
Copy protection
The Red Book audio specification, except for a simple anti-copy statement in the subcode, does not include any copy protection mechanism. Known at least as early as 2001, attempts were made by record companies to market copy-protected non-standard compact discs, which cannot be ripped, or copied, to hard drives or easily converted to other formats (like FLAC, MP3 or Vorbis). One major drawback to these copy-protected discs is that most will not play on either computer CD-ROM drives or some standalone CD players that use CD-ROM mechanisms. Philips has stated that such discs are not permitted to bear the trademarked Compact Disc Digital Audio logo because they violate the Red Book specifications. Numerous copy-protection systems have been countered by readily available, often free, software, or even by simply turning off automatic AutoPlay to prevent the running of the DRM executable program.
| Technology | Non-volatile memory | null |