id: string (length 2–8)
url: string (length 31–117)
title: string (length 1–71)
text: string (length 153–118k)
topic: string (4 classes)
section: string (length 4–49)
sublist: string (9 classes)
598434
https://en.wikipedia.org/wiki/Post-glacial%20rebound
Post-glacial rebound
Post-glacial rebound (also called isostatic rebound or crustal rebound) is the rise of land masses after the removal of the huge weight of ice sheets during the last glacial period, which had caused isostatic depression. Post-glacial rebound and isostatic depression are phases of glacial isostasy (glacial isostatic adjustment, glacioisostasy), the deformation of the Earth's crust in response to changes in ice mass distribution. The direct raising effects of post-glacial rebound are readily apparent in parts of Northern Eurasia, Northern America, Patagonia, and Antarctica. However, through the processes of ocean siphoning and continental levering, the effects of post-glacial rebound on sea level are felt globally far from the locations of current and former ice sheets. Overview During the last glacial period, much of northern Europe, Asia, North America, Greenland and Antarctica were covered by ice sheets, which reached up to three kilometres thick during the glacial maximum about 20,000 years ago. The enormous weight of this ice caused the surface of the Earth's crust to deform and warp downward, forcing the viscoelastic mantle material to flow away from the loaded region. At the end of each glacial period when the glaciers retreated, the removal of this weight led to slow (and still ongoing) uplift or rebound of the land and the return flow of mantle material back under the deglaciated area. Due to the extreme viscosity of the mantle, it will take many thousands of years for the land to reach an equilibrium level. The uplift has taken place in two distinct stages. The initial uplift following deglaciation was almost immediate due to the elastic response of the crust as the ice load was removed. After this elastic phase, uplift proceeded by slow viscous flow at an exponentially decreasing rate. Today, typical uplift rates are of the order of 1 cm/year or less. In northern Europe, this is clearly shown by the GPS data obtained by the BIFROST GPS network; for example in Finland, the total area of the country is growing by about seven square kilometers per year. Studies suggest that rebound will continue for at least another 10,000 years. The total uplift from the end of deglaciation depends on the local ice load and could be several hundred metres near the centre of rebound. Recently, the term "post-glacial rebound" is gradually being replaced by the term "glacial isostatic adjustment". This is in recognition that the response of the Earth to glacial loading and unloading is not limited to the upward rebound movement, but also involves downward land movement, horizontal crustal motion, changes in global sea levels and the Earth's gravity field, induced earthquakes, and changes in the Earth's rotation. Another alternate term is "glacial isostasy", because the uplift near the centre of rebound is due to the tendency towards the restoration of isostatic equilibrium (as in the case of isostasy of mountains). Unfortunately, that term gives the wrong impression that isostatic equilibrium is somehow reached, so by appending "adjustment" at the end, the motion of restoration is emphasized. Effects Post-glacial rebound produces measurable effects on vertical crustal motion, global sea levels, horizontal crustal motion, gravity field, Earth's rotation, crustal stress, and earthquakes. Studies of glacial rebound give us information about the flow law of mantle rocks, which is important to the study of mantle convection, plate tectonics and the thermal evolution of the Earth. 
It also gives insight into past ice sheet history, which is important to glaciology, paleoclimate, and changes in global sea level. Understanding postglacial rebound is also important to our ability to monitor recent global change. Vertical crustal motion Erratic boulders, U-shaped valleys, drumlins, eskers, kettle lakes, and bedrock striations are among the common signatures of the Ice Age. In addition, post-glacial rebound has caused numerous significant changes to coastlines and landscapes over the last several thousand years, and the effects continue to be significant. In Sweden, Lake Mälaren was formerly an arm of the Baltic Sea, but uplift eventually cut it off and led to its becoming a freshwater lake in about the 12th century, at the time when Stockholm was founded at its outlet. Marine seashells found in Lake Ontario sediments imply a similar event in prehistoric times. Other pronounced effects can be seen on the island of Öland, Sweden, which has little topographic relief due to the presence of the very level Stora Alvaret. The rising land has caused the Iron Age settlement area to recede from the Baltic Sea, leaving the present-day villages on the west coast set unexpectedly far back from the shore. These effects are quite dramatic at the village of Alby, for example, where the Iron Age inhabitants were known to subsist on substantial coastal fishing. As a result of post-glacial rebound, the Gulf of Bothnia is predicted to eventually close up at Kvarken in more than 2,000 years. The Kvarken is a UNESCO World Natural Heritage Site, selected as a "type area" illustrating the effects of post-glacial rebound and the Holocene glacial retreat. In several other Nordic ports, like Tornio and Pori (formerly at Ulvila), the harbour has had to be relocated several times. Place names in the coastal regions also illustrate the rising land: there are inland places named 'island', 'skerry', 'rock', 'point' and 'sound'. For example, Oulunsalo "island of Oulujoki" is a peninsula, with inland names such as Koivukari "Birch Rock", Santaniemi "Sandy Cape", and Salmioja "the brook of the Sound". In Great Britain, glaciation affected Scotland but not southern England, and the post-glacial rebound of northern Great Britain (up to 10 cm per century) is causing a corresponding downward movement of the southern half of the island (up to 5 cm per century). This will eventually lead to an increased risk of floods in southern England and south-western Ireland. Since the glacial isostatic adjustment process causes the land to move relative to the sea, ancient shorelines are found to lie above present day sea level in areas that were once glaciated. On the other hand, places in the peripheral bulge area, which was uplifted during glaciation, now begin to subside. Therefore, ancient beaches are found below present day sea level in the bulge area. The "relative sea level data", which consists of height and age measurements of the ancient beaches around the world, tells us that glacial isostatic adjustment proceeded at a higher rate near the end of deglaciation than today. The present-day uplift motion in northern Europe is also monitored by a GPS network called BIFROST. Results of GPS data show a peak rate of about 11 mm/year in the north part of the Gulf of Bothnia, but this uplift rate decreases away from the centre of rebound and becomes negative outside the former ice margin. In the near field outside the former ice margin, the land sinks relative to the sea.
This is the case along the east coast of the United States, where ancient beaches are found submerged below present day sea level and Florida is expected to be submerged in the future. GPS data in North America also confirms that land uplift becomes subsidence outside the former ice margin. Global sea levels To form the ice sheets of the last Ice Age, water from the oceans evaporated, condensed as snow and was deposited as ice in high latitudes. Thus global sea level fell during glaciation. The ice sheets at the last glacial maximum were so massive that global sea level fell by about 120 metres. Thus continental shelves were exposed and many islands became connected with the continents through dry land. This was the case between the British Isles and Europe (Doggerland), or between Taiwan, the Indonesian islands and Asia (Sundaland). A land bridge also existed between Siberia and Alaska that allowed the migration of people and animals during the last glacial maximum. The fall in sea level also affected the circulation of ocean currents and thus had an important impact on climate during the glacial maximum. During deglaciation, the melted ice water returns to the oceans, and sea level rises again. However, geological records of sea level changes show that the redistribution of the melted ice water is not the same everywhere in the oceans. In other words, depending upon the location, the rise in sea level at a certain site may be more than that at another site. This is due to the gravitational attraction between the mass of the melted water and the other masses, such as remaining ice sheets, glaciers, water masses and mantle rocks, and the changes in centrifugal potential due to Earth's variable rotation. Horizontal crustal motion Accompanying vertical motion is the horizontal motion of the crust. The BIFROST GPS network shows that the motion diverges from the centre of rebound. However, the largest horizontal velocity is found near the former ice margin. The situation in North America is less certain; this is due to the sparse distribution of GPS stations in northern Canada, which is rather inaccessible. Tilt The combination of horizontal and vertical motion changes the tilt of the surface. That is, locations farther north rise faster, an effect that becomes apparent in lakes. The bottoms of the lakes gradually tilt away from the direction of the former ice maximum, such that lake shores on the side of the maximum (typically north) recede and the opposite (southern) shores sink. This causes the formation of new rapids and rivers. For example, Lake Pielinen in Finland, which is large (90 × 30 km) and oriented perpendicularly to the former ice margin, originally drained through an outlet in the middle of the lake near Nunnanlahti to Lake Höytiäinen. The change of tilt caused Pielinen to burst through the Uimaharju esker at the southwestern end of the lake, creating a new river (Pielisjoki) that runs to the sea via Lake Pyhäselkä to Lake Saimaa. The effects are similar to those on seashores, but occur above sea level. Tilting of land will also affect the flow of water in lakes and rivers in the future, and thus is important for water resource management planning. In Sweden, Lake Sommen's outlet in the northwest has a rebound of 2.36 mm/a while in the eastern Svanaviken it is 2.05 mm/a. This means the lake is being slowly tilted and the southeastern shores are being drowned.
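To put the Lake Sommen figures into perspective (a simple arithmetic aside using only the numbers quoted above), the differential uplift between the outlet and the southeastern shore is

$$ 2.36~\mathrm{mm/a} - 2.05~\mathrm{mm/a} = 0.31~\mathrm{mm/a} \approx 3~\mathrm{cm~per~century}, $$

so the southeastern shore is effectively sinking relative to the outlet by roughly three centimetres every hundred years.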
Gravity field Ice, water, and mantle rocks have mass, and as they move around, they exert a gravitational pull on other masses towards them. Thus, the gravity field, which is sensitive to all mass on the surface and within the Earth, is affected by the redistribution of ice/melted water on the surface of the Earth and the flow of mantle rocks within. Today, more than 6000 years after the last deglaciation terminated, the flow of mantle material back to the glaciated area causes the overall shape of the Earth to become less oblate. This change in the topography of Earth's surface affects the long-wavelength components of the gravity field. The changing gravity field can be detected by repeated land measurements with absolute gravimeters and recently by the GRACE satellite mission. The change in long-wavelength components of Earth's gravity field also perturbs the orbital motion of satellites and has been detected by LAGEOS satellite motion. Vertical datum The vertical datum is a reference surface for altitude measurement and plays vital roles in many human activities, including land surveying and construction of buildings and bridges. Since postglacial rebound continuously deforms the crustal surface and the gravitational field, the vertical datum needs to be redefined repeatedly through time. State of stress, intraplate earthquakes and volcanism According to the theory of plate tectonics, plate-plate interaction results in earthquakes near plate boundaries. However, large earthquakes are found in intraplate environments like eastern Canada (up to M7) and northern Europe (up to M5) which are far away from present-day plate boundaries. An important intraplate earthquake was the magnitude 8 New Madrid earthquake that occurred in the mid-continental US in 1811. Glacial loads provided more than 30 MPa of vertical stress in northern Canada and more than 20 MPa in northern Europe during glacial maximum. This vertical stress is supported by the mantle and the flexure of the lithosphere. Since the mantle and the lithosphere continuously respond to the changing ice and water loads, the state of stress at any location continuously changes in time. The changes in the orientation of the state of stress are recorded in the postglacial faults in southeastern Canada. When the postglacial faults formed at the end of deglaciation 9000 years ago, the horizontal principal stress orientation was almost perpendicular to the former ice margin, but today the orientation is northeast–southwest, along the direction of seafloor spreading at the Mid-Atlantic Ridge. This shows that the stress due to postglacial rebound had played an important role at deglacial time, but has gradually relaxed so that tectonic stress has become more dominant today. According to the Mohr–Coulomb theory of rock failure, large glacial loads generally suppress earthquakes, but rapid deglaciation promotes earthquakes. According to Wu & Hasegawa, the rebound stress that is available to trigger earthquakes today is of the order of 1 MPa. This stress level is not large enough to rupture intact rocks but is large enough to reactivate pre-existing faults that are close to failure. Thus, both postglacial rebound and past tectonics play important roles in today's intraplate earthquakes in eastern Canada and the southeastern US.
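As a rough check on these load figures (a back-of-the-envelope estimate with assumed round values, not a calculation from the article), the vertical stress imposed by an ice sheet is simply the ice overburden:

$$ \sigma_v = \rho_{\mathrm{ice}}\, g\, h \approx 917~\mathrm{kg\,m^{-3}} \times 9.81~\mathrm{m\,s^{-2}} \times 3000~\mathrm{m} \approx 27~\mathrm{MPa}, $$

so ice thicknesses somewhat above the roughly three kilometres cited for the glacial maximum are consistent with vertical stresses exceeding 30 MPa.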
Generally, postglacial rebound stress could have triggered the intraplate earthquakes in eastern Canada and may have played some role in triggering earthquakes in the eastern US including the New Madrid earthquakes of 1811. The situation in northern Europe today is complicated by the current tectonic activities nearby and by coastal loading and weakening. Increasing pressure due to the weight of the ice during glaciation may have suppressed melt generation and volcanic activities below Iceland and Greenland. On the other hand, decreasing pressure due to deglaciation can increase the melt production and volcanic activities by 20–30 times. Recent global warming Recent global warming has caused mountain glaciers and the ice sheets in Greenland and Antarctica to melt and global sea level to rise. Therefore, monitoring sea level rise and the mass balance of ice sheets and glaciers allows people to understand more about global warming. Recent rise in sea levels has been monitored by tide gauges and satellite altimetry (e.g. TOPEX/Poseidon). As well as the addition of melted ice water from glaciers and ice sheets, recent sea level changes are affected by the thermal expansion of sea water due to global warming, sea level change due to deglaciation of the last glacial maximum (postglacial sea level change), deformation of the land and ocean floor and other factors. Thus, to understand global warming from sea level change, one must be able to separate all these factors, especially postglacial rebound, since it is one of the leading factors. Mass changes of ice sheets can be monitored by measuring changes in the ice surface height, the deformation of the ground below and the changes in the gravity field over the ice sheet. Thus the ICESat, GPS and GRACE satellite missions are useful for this purpose. However, glacial isostatic adjustment of the ice sheets affects ground deformation and the gravity field today. Thus understanding glacial isostatic adjustment is important in monitoring recent global warming. One of the possible impacts of global warming-triggered rebound may be more volcanic activity in previously ice-capped areas such as Iceland and Greenland. It may also trigger intraplate earthquakes near the ice margins of Greenland and Antarctica. Unusually rapid (up to 4.1 cm/year) present glacial isostatic rebound due to recent ice mass losses in the Amundsen Sea embayment region of Antarctica coupled with low regional mantle viscosity is predicted to provide a modest stabilizing influence on marine ice sheet instability in West Antarctica, but likely not to a sufficient degree to arrest it. Applications The speed and amount of postglacial rebound are determined by two factors: the viscosity or rheology (i.e., the flow) of the mantle, and the ice loading and unloading histories on the surface of Earth. The viscosity of the mantle is important in understanding mantle convection, plate tectonics, the dynamical processes in Earth, and the thermal state and thermal evolution of Earth. However, viscosity is difficult to measure because creep experiments on mantle rocks at natural strain rates would take thousands of years, and the ambient temperature and pressure conditions are not easy to attain for a long enough time. Thus, the observations of postglacial rebound provide a natural experiment to measure mantle rheology.
Modelling of glacial isostatic adjustment addresses the question of how viscosity changes in the radial and lateral directions and whether the flow law is linear, nonlinear, or a composite rheology. Mantle viscosity may additionally be estimated using seismic tomography, where seismic velocity is used as a proxy observable. Ice thickness histories are useful in the study of paleoclimatology, glaciology and paleo-oceanography. Ice thickness histories are traditionally deduced from three types of information: First, the sea level data at stable sites far away from the centers of deglaciation give an estimate of how much water entered the oceans or equivalently how much ice was locked up at glacial maximum. Secondly, the location and dates of terminal moraines tell us the areal extent and retreat of past ice sheets. The physics of glaciers gives the theoretical profile of ice sheets at equilibrium; it also indicates that the thickness and horizontal extent of equilibrium ice sheets are closely related to the basal condition of the ice sheets. Thus the volume of ice locked up is proportional to their instantaneous area. Finally, the heights of ancient beaches in the sea level data and observed land uplift rates (e.g. from GPS or VLBI) can be used to constrain local ice thickness. A popular ice model deduced this way is the ICE-5G model. Because the response of the Earth to changes in ice height is slow, it cannot record rapid fluctuation or surges of ice sheets, thus the ice sheet profiles deduced this way only give the "average height" over a thousand years or so. Glacial isostatic adjustment also plays an important role in understanding recent global warming and climate change. Discovery Before the eighteenth century it was thought in Sweden that sea levels were falling. On the initiative of Anders Celsius a number of marks were made in rock at different locations along the Swedish coast. In 1765 it was possible to conclude that it was not a lowering of sea levels but an uneven rise of land. In 1865 Thomas Jamieson came up with a theory that the rise of land was connected with the ice age, the existence of which had first been proposed in 1837. The theory was accepted after investigations by Gerard De Geer of old shorelines in Scandinavia published in 1890. Legal implications In areas where the rising of land is seen, it is necessary to define the exact limits of property. In Finland, the "new land" is legally the property of the owner of the water area, not of any landowners on the shore. Therefore, if the owner of the land wishes to build a pier over the "new land", they need the permission of the owner of the (former) water area. The landowner of the shore may redeem the new land at market price. Usually the owner of the water area is the partition unit of the landowners of the shores, a collective holding corporation. Formulation: sea-level equation The sea-level equation (SLE) is a linear integral equation that describes the sea-level variations associated with the PGR. The basic idea of the SLE dates back to 1888, when Woodward published his pioneering work on the form and position of mean sea level, and it was only later refined by Platzman and Farrell in the context of the study of the ocean tides. In the words of Wu and Peltier, the solution of the SLE yields the space- and time-dependent change of ocean bathymetry which is required to keep the gravitational potential of the sea surface constant for a specific deglaciation chronology and viscoelastic earth model.
The SLE theory was then developed by other authors such as Mitrovica & Peltier, Mitrovica et al. and Spada & Stocchi. In its simplest form, the SLE reads

$$ S = N - U, $$

where $S$ is the sea-level change, $N$ is the sea surface variation as seen from Earth's center of mass, and $U$ is the vertical displacement. In a more explicit form the SLE can be written as follows:

$$ S(\theta,\lambda,t) = \frac{\rho_i}{\gamma}\, G_s \otimes_i I \;+\; \frac{\rho_w}{\gamma}\, G_s \otimes_o S \;-\; \frac{\rho_i}{\gamma}\, \overline{G_s \otimes_i I} \;-\; \frac{\rho_w}{\gamma}\, \overline{G_s \otimes_o S} \;+\; S^E, $$

where $\theta$ is colatitude and $\lambda$ is longitude, $t$ is time, $\rho_i$ and $\rho_w$ are the densities of ice and water, respectively, $\gamma$ is the reference surface gravity, $G_s$ is the sea-level Green's function (dependent upon the $h$ and $k$ viscoelastic load-deformation coefficients - LDCs), $I$ is the ice thickness variation, $S^E$ represents the eustatic term (i.e. the ocean-averaged value of $S$), $\otimes_i$ and $\otimes_o$ denote spatio-temporal convolutions over the ice- and ocean-covered regions, and the overbar indicates an average over the surface of the oceans that ensures mass conservation.
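Because the unknown sea-level change S appears on both sides of the explicit form above (inside the ocean-load convolution and in the ocean averages), the SLE is in practice solved iteratively. The Python sketch below is only a schematic illustration of that fixed-point structure, using assumed placeholder callables for the Green's-function convolutions; it is not taken from the article or from any particular GIA code.

```python
import numpy as np

def solve_sle(ice_conv, ocean_conv, ocean_mean, s_eustatic, tol=1e-6, max_iter=50):
    """Schematic fixed-point iteration for the sea-level change field S.

    ice_conv()    -> (rho_i/gamma) * (G_s convolved with the ice load I), as an array
    ocean_conv(S) -> (rho_w/gamma) * (G_s convolved with the ocean load S), same shape
    ocean_mean(f) -> ocean-averaged value of a field f (the overbar terms)
    s_eustatic    -> eustatic term S^E (a scalar), also used as the starting guess
    """
    ice_term = ice_conv()                      # does not depend on S
    S = np.full_like(ice_term, s_eustatic)     # zeroth-order (eustatic) guess
    for _ in range(max_iter):
        ocean_term = ocean_conv(S)
        S_new = (ice_term + ocean_term
                 - ocean_mean(ice_term) - ocean_mean(ocean_term)
                 + s_eustatic)
        if np.max(np.abs(S_new - S)) < tol:    # stop once the field has converged
            S = S_new
            break
        S = S_new
    return S
```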
Physical sciences
Glacial landforms
Earth science
599009
https://en.wikipedia.org/wiki/Brass%20knuckles
Brass knuckles
Brass knuckles (also referred to as brass knucks, knuckledusters, iron fist and paperweight, among other names) are a melee weapon used primarily in hand-to-hand combat. They are fitted and designed to be worn around the knuckles of the human hand. Despite their name, they are often made from other metals, plastics or carbon fibers and not necessarily brass. Designed to preserve and concentrate a punch's force by directing it toward a harder and smaller contact area, they result in increased tissue disruption, including an increased likelihood of fracturing the intended target's bones on impact. The extended and rounded palm grip also spreads the counter-force across the attacker's palm, which would otherwise have been absorbed primarily by the attacker's fingers. This reduces the likelihood of damage to the attacker's fingers. The weapon has been controversial for its easy concealability and is illegal to own and use in a number of countries. History and variations Cast iron, brass, lead, and wood knuckles were made in the United States during the American Civil War (1861–1865). Soldiers would often buy cast iron or brass knuckles. If they could not buy them, they would carve their own from wood, or cast them at camp by melting lead bullets and using a mold in the dirt. Some brass knuckles have rounded rings, which can increase the damage inflicted by a blow from moderate to severe. Other instruments (not generally considered to be "brass knuckles" or "metal knuckles" per se) may have spikes, sharp points and cutting edges. These devices come in many variations and are called by a variety of names, including "knuckle knives." By the late 19th century, knuckledusters were incorporated into various kinds of pistols such as the Apache revolver used by criminals in France in the late 19th to early 20th centuries. During World War I the US Army issued two different knuckle knives, the US model 1917 and US model 1918 Mark I trench knives. Knuckles and knuckle knives were also being made in England at the time and purchased privately by British soldiers. It was advised not to polish brass knuckles, as allowing the brass to darken would act as camouflage on the battlefield. By World War II, knuckles and knuckle knives were quite popular with both American and British soldiers. The Model 1918 trench knives were reissued to American paratroopers. A notable knuckle knife still in use is the Cuchillo de Paracaidista, issued to Argentinian paratroopers. Current-issue models have an emergency blade in the crossguard. Legality and distribution Brass knuckles are illegal in several countries, including: Hong Kong, Austria, Belgium, Canada, Denmark, Bosnia, Croatia, Estonia, Cyprus, Finland, France, Germany, Greece, Hungary, Israel, Ireland, Malaysia, the Netherlands, Norway, Poland, Portugal, Russia, Spain, Turkey, Sweden, Singapore, Taiwan, Ukraine, the United Arab Emirates and the United Kingdom. Import of brass knuckles into Australia is illegal unless a government permit is obtained; permits are available for only limited purposes, such as police and government use, or use in film productions. They are prohibited weapons in the state of New South Wales. In Brazil, brass knuckles are legal and freely sold. They are known by Portuguese names meaning 'English punch' and 'puncher'.
In Canada, brass knuckles (whose Canadian French name literally means 'American fist'), or any similar devices made of metal, are listed as prohibited weapons; possession of such a weapon is a criminal offence under the Criminal Code. Plastic knuckles have been determined to be legal in Canada. In France, brass knuckles are illegal. They can be bought as a "collectable" (provided one is over 18), but it is forbidden to carry or use one, whatever the circumstance, including self-defense. The French term literally means 'American punch'. In Russia, brass knuckles were illegal to purchase or own during Imperial times and are still forbidden according to Article 6 of the 1996 Federal Law on Weapons. Their Russian name derives from a French term meaning literally 'head breaker'. In Serbia, brass knuckles are legal to purchase and own (for people over 16 years old) but are not legal to carry in public. Their Serbian name literally means 'boxer'. In Taiwan, according to the Law of the Republic of China, possession and sales of brass knuckles are illegal. Under the regulation, brass knuckles are considered weapons. Without the permission of the central regulatory agency, it is against the law to manufacture, sell, transport, transfer, rent, or have them in any collection or on display. In China, brass knuckles are completely legal. According to Article 32 of the "Public Security Administration Punishment Law of the People's Republic of China", citizens can legally own them for self-defense, but they are prohibited items in certain places. For example, brass knuckles are not allowed to be carried when travelling on the subway, buses, trains, or other public transport. In ancient China, brass knuckles were popular, and were used regularly as a concealed weapon or self-defense tool. In the United States, brass knuckles are not prohibited at the federal level, but various state, county and city laws, and the District of Columbia, regulate or prohibit their purchase and/or possession. Brass knuckles are prohibited in 21 states. Some state laws require purchasers to be 18 or older. Most states have statutes regulating the carrying of weapons, and some specifically prohibit brass knuckles or "metal knuckles". Brass knuckles can readily be purchased online or, where legal, at flea markets, swap meets, gun shows, and at specialty stores. Some companies manufacture belt buckles or novelty paper weights that function as brass knuckles. Brass knuckles made of plastic, rather than metal, have been marketed as "undetectable by airport metal detectors". Some states that ban metal knuckles also ban plastic knuckles. For example, New York's criminal statutes list both "metal knuckles" and "plastic knuckles" as prohibited weapons, but do not define either.
Technology
Melee weapons
null
599215
https://en.wikipedia.org/wiki/Synchrotron
Synchrotron
A synchrotron is a particular type of cyclic particle accelerator, descended from the cyclotron, in which the accelerating particle beam travels around a fixed closed-loop path. The strength of the magnetic field which bends the particle beam into its closed path increases with time during the accelerating process, being synchronized to the increasing kinetic energy of the particles. The synchrotron is one of the first accelerator concepts to enable the construction of large-scale facilities, since bending, beam focusing and acceleration can be separated into different components. The most powerful modern particle accelerators use versions of the synchrotron design. The largest synchrotron-type accelerator, also the largest particle accelerator in the world, is the Large Hadron Collider (LHC) near Geneva, Switzerland, built in 2008 by the European Organization for Nuclear Research (CERN). It can accelerate beams of protons to an energy of 7 teraelectronvolts (TeV or 10¹² eV). The synchrotron principle was invented by Vladimir Veksler in 1944. Edwin McMillan constructed the first electron synchrotron in 1945, arriving at the idea independently, having missed Veksler's publication (which was only available in a Soviet journal, although in English). The first proton synchrotron was designed by Sir Marcus Oliphant and built in 1952. Types Large synchrotrons usually have a linear accelerator (linac) to give the particles an initial acceleration, and a lower energy synchrotron which is sometimes called a booster to increase the energy of the particles before they are injected into the high energy synchrotron ring. Several specialized types of synchrotron machines are used today: A collider is a type in which, instead of the particles striking a stationary target, particles traveling in two countercirculating rings collide head-on, making higher-energy collisions possible. A storage ring is a special type of synchrotron in which the kinetic energy of the particles is kept constant. A synchrotron light source is a combination of different electron accelerator types, including a storage ring in which the desired electromagnetic radiation is generated. This radiation is then used in experimental stations located on different beamlines. Synchrotron light sources in their entirety are sometimes called "synchrotrons", although this is technically incorrect. Principle of operation The synchrotron evolved from the cyclotron, the first cyclic particle accelerator. While a classical cyclotron uses both a constant guiding magnetic field and a constant-frequency electromagnetic field (and works in the classical approximation), its successor, the isochronous cyclotron, works by local variations of the guiding magnetic field, adapting to the increasing relativistic mass of particles during acceleration. In a synchrotron, this adaptation is done by variation of the magnetic field strength in time, rather than in space. For particles that are not close to the speed of light, the frequency of the applied electromagnetic field may also change to follow their non-constant circulation time. By increasing these parameters accordingly as the particles gain energy, their circulation path can be held constant as they are accelerated. This allows the vacuum chamber for the particles to be a large thin torus, rather than a disk as in previous, compact accelerator designs.
Also, the thin profile of the vacuum chamber allowed for a more efficient use of magnetic fields than in a cyclotron, enabling the cost-effective construction of larger synchrotrons. While the first synchrotrons and storage rings like the Cosmotron and ADA strictly used the toroid shape, the strong focusing principle independently discovered by Ernest Courant et al. and Nicholas Christofilos allowed the complete separation of the accelerator into components with specialized functions along the particle path, shaping the path into a round-cornered polygon. Some important components are given by radio frequency cavities for direct acceleration, dipole magnets (bending magnets) for deflection of particles (to close the path), and quadrupole / sextupole magnets for beam focusing. The combination of time-dependent guiding magnetic fields and the strong focusing principle enabled the design and operation of modern large-scale accelerator facilities like colliders and synchrotron light sources. The straight sections along the closed path in such facilities are not only required for radio frequency cavities, but also for particle detectors (in colliders) and photon generation devices such as wigglers and undulators (in third generation synchrotron light sources). The maximum energy that a cyclic accelerator can impart is typically limited by the maximum strength of the magnetic fields and the minimum radius (maximum curvature) of the particle path. Thus one method for increasing the energy limit is to use superconducting magnets, these not being limited by magnetic saturation. Electron/positron accelerators may also be limited by the emission of synchrotron radiation, resulting in a partial loss of the particle beam's kinetic energy. The limiting beam energy is reached when the energy lost to the lateral acceleration required to maintain the beam path in a circle equals the energy added each cycle. More powerful accelerators are built by using large radius paths and by using more numerous and more powerful microwave cavities. Lighter particles (such as electrons) lose a larger fraction of their energy when deflected. Practically speaking, the energy of electron/positron accelerators is limited by this radiation loss, while this does not play a significant role in the dynamics of proton or ion accelerators. The energy of such accelerators is limited strictly by the strength of magnets and by the cost. Injection procedure Unlike in a cyclotron, synchrotrons are unable to accelerate particles from zero kinetic energy; one of the obvious reasons for this is that its closed particle path would be cut by a device that emits particles. Thus, schemes were developed to inject pre-accelerated particle beams into a synchrotron. The pre-acceleration can be realized by a chain of other accelerator structures like a linac, a microtron or another synchrotron; all of these in turn need to be fed by a particle source comprising a simple high voltage power supply, typically a Cockcroft-Walton generator. Starting from an appropriate initial value determined by the injection energy, the field strength of the dipole magnets is then increased. If the high energy particles are emitted at the end of the acceleration procedure, e.g. to a target or to another accelerator, the field strength is again decreased to injection level, starting a new injection cycle. Depending on the method of magnet control used, the time interval for one cycle can vary substantially between different installations. 
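To make the scale of the required bending fields concrete, the sketch below uses the relation B = p/(qr) for a charged particle on a circular orbit to estimate the dipole field needed to hold a 7 TeV proton on a fixed bending radius. The bending radius of roughly 2.8 km is an assumed round value for the LHC dipoles, not a figure taken from the article.

```python
# Back-of-the-envelope estimate (assumed round numbers) of the dipole field
# needed to keep a 7 TeV proton on a fixed circular path: B = p / (q * r).
E_eV = 7.0e12          # beam energy, eV (7 TeV)
e    = 1.602e-19       # elementary charge, C
c    = 2.998e8         # speed of light, m/s
r    = 2.8e3           # assumed dipole bending radius, m

p = E_eV * e / c       # momentum in kg*m/s (ultrarelativistic limit, E ~ p*c)
B = p / (e * r)        # required magnetic flux density, tesla
print(f"required dipole field: {B:.1f} T")   # roughly 8 T, hence superconducting magnets
```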
In large-scale facilities One of the early large synchrotrons, now retired, is the Bevatron, constructed in 1950 at the Lawrence Berkeley Laboratory. The name of this proton accelerator comes from its power, in the range of 6.3 GeV (then called BeV for billion electron volts; the name predates the adoption of the SI prefix giga-). A number of transuranium elements, unseen in the natural world, were first created with this machine. This site is also the location of one of the first large bubble chambers used to examine the results of the atomic collisions produced here. Another early large synchrotron is the Cosmotron built at Brookhaven National Laboratory which reached 3.3 GeV in 1953. Among the few synchrotrons around the world, 16 are located in the United States. Many of them belong to national laboratories; few are located in universities. As part of colliders Until August 2008, the highest energy collider in the world was the Tevatron, at the Fermi National Accelerator Laboratory, in the United States. It accelerated protons and antiprotons to slightly less than 1 TeV of kinetic energy and collided them together. The Large Hadron Collider (LHC), which has been built at the European Laboratory for High Energy Physics (CERN), has roughly seven times this energy (so proton-proton collisions occur at roughly 14 TeV). It is housed in the 27 km tunnel which formerly housed the Large Electron Positron (LEP) collider, so it will maintain the claim as the largest scientific device ever built. The LHC will also accelerate heavy ions (such as lead) up to an energy of 1.15 PeV. The largest device of this type seriously proposed was the Superconducting Super Collider (SSC), which was to be built in the United States. This design, like others, used superconducting magnets which allow more intense magnetic fields to be created without the limitations of core saturation. While construction was begun, the project was cancelled in 1994, citing excessive budget overruns — this was due to naïve cost estimation and economic management issues rather than any basic engineering flaws. It can also be argued that the end of the Cold War resulted in a change of scientific funding priorities that contributed to its ultimate cancellation. However, the tunnel built for its placement still remains, although empty. While there is still potential for yet more powerful proton and heavy particle cyclic accelerators, it appears that the next step up in electron beam energy must avoid losses due to synchrotron radiation. This will require a return to the linear accelerator, but with devices significantly longer than those currently in use. There is at present a major effort to design and build the International Linear Collider (ILC), which will consist of two opposing linear accelerators, one for electrons and one for positrons. These will collide at a total center of mass energy of 0.5 TeV. As part of synchrotron light sources Synchrotron radiation also has a wide range of applications (see synchrotron light) and many 2nd and 3rd generation synchrotrons have been built especially to harness it. The largest of those 3rd generation synchrotron light sources are the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, the Advanced Photon Source (APS) near Chicago, United States, and SPring-8 in Japan, accelerating electrons up to 6, 7 and 8 GeV, respectively. 
Synchrotrons which are useful for cutting edge research are large machines, costing tens or hundreds of millions of dollars to construct, and each beamline (there may be 20 to 50 at a large synchrotron) costs another two or three million dollars on average. These installations are mostly built by the science funding agencies of governments of developed countries, or by collaborations between several countries in a region, and operated as infrastructure facilities available to scientists from universities and research organisations throughout the country, region, or world. More compact models, however, have been developed, such as the Compact Light Source. Applications
Life sciences: protein and large-molecule crystallography
LIGA based microfabrication
Drug discovery and research
X-ray lithography
X-ray microtomography
Analysing chemicals to determine their composition
Observing the reaction of living cells to drugs
Inorganic material crystallography and microanalysis
Fluorescence studies
Semiconductor material analysis and structural studies
Geological material analysis
Medical imaging
Particle therapy to treat some forms of cancer
Radiometry: calibration of detectors and radiometric standards
Physical sciences
Devices
Physics
599305
https://en.wikipedia.org/wiki/Vitriol
Vitriol
Vitriol is the general chemical name encompassing a class of chemical compounds comprising sulfates of certain metals, originally iron or copper. Those mineral substances were distinguished by their color, such as green vitriol for hydrated iron(II) sulfate and blue vitriol for hydrated copper(II) sulfate. These materials were found originally as crystals formed by evaporation of groundwater that percolated through sulfide minerals and collected in pools on the floors of old mines. The word vitriol comes from the Latin word vitriolus, meaning "small glass", as those crystals resembled small pieces of colored glass. Oil of vitriol was an old name for concentrated sulfuric acid, which was historically obtained through the dry distillation (pyrolysis) of vitriols. The name, abbreviated to vitriol, continued to be used for this viscous liquid long after the minerals came to be termed "sulfates". The figurative term vitriolic in the sense of "harshly condemnatory" is derived from the corrosive nature of this substance. History The study of vitriol began during ancient times. The Sumerians had a list of types of vitriol that they classified according to the substances' color. Some of the earliest discussions of the origin and properties of vitriol are in the works of the Greek physician Dioscorides (first century AD) and the Roman naturalist Pliny the Elder (23–79 AD). Galen also discussed its medical use. Metallurgical uses for vitriolic substances were recorded in the Hellenistic alchemical works of Zosimos of Panopolis, in the treatise Phisica et Mystica, and the Leyden papyrus X. Medieval Islamic chemists like Jābir ibn Ḥayyān (died c. 806–816 AD, known in Latin as Geber), Abū Bakr al-Rāzī (865–925 AD, known in Latin as Rhazes), Ibn Sina (980–1037 AD, known in Latin as Avicenna), and Muḥammad ibn Ibrāhīm al-Watwat (1234–1318 AD) included vitriol in their mineral classification lists. Sulfuric acid was termed "oil of vitriol" by medieval European alchemists because it was prepared by roasting "green vitriol" (iron(II) sulfate) in an iron retort. The first vague allusions to it appear in the works of Vincent of Beauvais, in the Compositum de Compositis ascribed to Saint Albertus Magnus, and in pseudo-Geber's Summa perfectionis (all thirteenth century AD).
Physical sciences
Sulfuric oxyanions
Chemistry
599674
https://en.wikipedia.org/wiki/Antheridium
Antheridium
An antheridium is a haploid structure or organ producing and containing male gametes (called antherozoids or sperm). The plural form is antheridia, and a structure containing one or more antheridia is called an androecium. The androecium is also the collective term for the stamens of flowering plants. Antheridia are present in the gametophyte phase of cryptogams like bryophytes and ferns. Many algae and some fungi, for example, ascomycetes and water moulds, also have antheridia during their reproductive stages. In gymnosperms and angiosperms, the male gametophytes have been reduced to pollen grains, and in most of these, the antheridia have been reduced to a single generative cell within the pollen grain. During pollination, this generative cell divides and gives rise to sperm cells. The female counterpart to the antheridium in cryptogams is the archegonium, and in flowering plants is the gynoecium. An antheridium typically consists of sterile cells and spermatogenous tissue. The sterile cells may form a central support structure or surround the spermatogenous tissue as a protective jacket. The spermatogenous cells give rise to spermatids via mitotic cell division. In some bryophytes, the antheridium is borne on an antheridiophore, a stalk-like structure that carries the antheridium at its apex.
Biology and health sciences
Plant reproduction
Biology
13439463
https://en.wikipedia.org/wiki/Cleavage%20%28geology%29
Cleavage (geology)
Cleavage, in structural geology and petrology, describes a type of planar rock feature that develops as a result of deformation and metamorphism. The degree of deformation and metamorphism along with rock type determines the kind of cleavage feature that develops. Generally, these structures are formed in fine grained rocks composed of minerals affected by pressure solution. Cleavage is a type of rock foliation, a fabric element that describes the way planar features develop in a rock. Foliation is separated into two groups: primary and secondary. Primary deals with igneous and sedimentary rocks, while secondary deals with rocks that undergo metamorphism as a result of deformation. Cleavage is a type of secondary foliation associated with fine grained rocks. For coarser grained rocks, schistosity is used to describe secondary foliation. There are a variety of definitions for cleavage, which may cause confusion and debate. The terminology used in this article is based largely on Passchier and Trouw (2005). They state that cleavage is a type of secondary foliation in fine grained rocks characterized by planar fabric elements that form in a preferred orientation. Some authors choose to use cleavage when describing any form of secondary foliation. Types of cleavage The presence of fabric elements such as preferred orientation of platy or elongate minerals, compositional layering, grain size variations, etc. determines what type of cleavage forms. Cleavage is categorized as either continuous or spaced. Continuous cleavage Continuous or penetrative cleavage describes fine grained rocks consisting of platy minerals evenly distributed in a preferred orientation. The type of continuous cleavage that forms depends on the minerals present. Undeformed platy minerals such as micas and amphiboles align in a preferred orientation, and minerals such as quartz or calcite deform into a grain shape preferred orientation. Continuous cleavage is scale dependent, so a rock with a continuous cleavage on a microscopic level could show signs of spaced cleavage when observed on a macroscopic level. Slaty cleavage Since the nature of cleavage is dependent on scale, slaty cleavage is defined as having 0.01 mm or less of space occurring between layers. Slaty cleavage often occurs after diagenesis and is the first cleavage feature to form after deformation begins. The tectonic strain must be enough to allow a new strong foliation to form, i.e. slaty cleavage. Spaced cleavage Spaced cleavage occurs in rocks with minerals that are not evenly distributed, and as a result the rock forms discontinuous layers or lenses of different types of minerals. Spaced cleavage contains two types of domains; cleavage domains and microlithons. Cleavage domains are planar boundaries subparallel to the trend of the domain, and microlithons are bounded by the cleavage domains. Spaced cleavages can be categorized based on whether the grains inside the microlithons are randomly oriented or contain microfolds from a previous foliation fabric. Other descriptions for spaced cleavages include the spacing size, the shape and percentage of cleavage domains, and the transition between cleavage domains and microlithons. Crenulation cleavage Crenulation cleavage contains microlithons that were warped by a previous foliation. Folding occurs when there are multiple phases of deformation, the latter one causes symmetric or asymmetric microfolds that deform previous foliations. 
The type of crenulation cleavage pattern that forms depends on lithology and degree of deformation and metamorphism. Disjunctive cleavage Disjunctive cleavage describes a type of spaced cleavage where the microlithons are not deformed into microfolds, and formation is independent of any previous foliation present in the rock. A common outdated term for disjunctive cleavage is fracture cleavage. It is recommended that this term be avoided because of the tendency to misinterpret the formation of a cleavage feature. Transposition cleavage Transposition cleavage forms when an older cleavage foliation is erased and replaced by a younger foliation due to stronger deformation; it is evidence for multiple deformation events. Formation The development of cleavage foliation involves a combination of various mechanisms dependent on the rock's composition, tectonic processes, and metamorphic conditions. The magnitude and orientation of stress coupled with pressure and temperature conditions determine how a mineral is deformed. Cleavages form approximately parallel to the X-Y plane of tectonic strain and are categorized based on the type of strain. The mechanisms currently believed to control cleavage formation are rotation of mineral grains, solution transfer, dynamic recrystallization, and static recrystallization. Mechanical rotation of grains During ductile deformation, mineral grains with a high aspect ratio are likely to rotate so that their mean orientation is in the same direction as the XY plane of finite strain. Mineral grains may fold if oriented perpendicular to the shortening direction. Solution transfer Cleavage foliations may result due to stress-induced solution transfer by the redistribution of inequant mineral grains by pressure solution and recrystallization. This would also help to increase rotation of elongate and tabular mineral grains. Mica grains undergoing solution transfer will align in a preferred orientation. If the mineral grains affected by pressure solution are deformed through crystal-plastic processes, the grains will be extended along the XY-plane of finite strain. This process shapes grains into a preferred orientation. Dynamic recrystallization Dynamic recrystallization occurs when a rock undergoes metamorphic conditions and the chemical composition of its minerals re-equilibrates. This happens when there is a decrease in free energy stored in deformed grains. Deformed micas can store a sufficient amount of strain energy that can allow recrystallization to occur. This process allows oriented regrowth of both old and new minerals into the damaged crystal lattice during cleavage development. Static recrystallization This process occurs either after deformation or in the absence of dynamic deformation. Depending on the intensity of heat during recrystallization, the foliation will either be strengthened or weakened. If the heat is too intense, foliation will be weakened due to the nucleation and growth of new randomly oriented crystals and the rock will become a hornfels. If minimal heat is applied to a rock with a preexisting foliation and without a change in mineral assemblage, the cleavage will be strengthened by growth of micas parallel to foliation. Relationship to folds Cleavages display a measurable geometric relationship with the axial plane of folds developed during deformation and are referred to as axial planar foliations. The foliations are symmetrically arranged with respect to the axial plane, depending on the composition and competency of a rock.
For example, when mixed sandstone and mudstone sequences are folded during very-low to low grade metamorphism, cleavage forms parallel to the fold axial plane, particularly in the clay-rich parts of the sequence. In folded alternations of sandstone and mudstone the cleavage has a fan-like arrangement, divergent in the mudstone layers and convergent in the sandstones. This is thought to be because the folding is controlled by buckling of the stronger sandstone beds with the weaker mudstones deforming to fill the intervening gaps. The result is a feature referred to as foliation fanning. Engineering considerations In geotechnical engineering a cleavage plane forms a discontinuity that may have a large influence on the mechanical behavior (strength, deformation, etc.) of rock masses in, for example, tunnel, foundation, or slope construction.
Physical sciences
Structural geology
Earth science
13440591
https://en.wikipedia.org/wiki/Onium
Onium
An onium (plural: onia) is a bound state of a particle and its antiparticle. These states are usually named by adding the suffix -onium to the name of one of the constituent particles (replacing an -on suffix when present), with one exception: because "muonium" already refers by older convention to an antimuon–electron bound state, a muon–antimuon bound pair is instead called "true muonium" to avoid confusion. Examples Positronium is an onium which consists of an electron and a positron bound together as a long-lived metastable state. Positronium has been studied since the 1950s to understand bound states in quantum field theory. A recent development called non-relativistic quantum electrodynamics (NRQED) used this system as a proving ground. Pionium, a bound state of two oppositely charged pions, is interesting for exploring the strong interaction. This should also be true of protonium. The true analogs of positronium in the theory of strong interactions are the quarkonium states: they are mesons made of a heavy quark and antiquark (namely, charmonium and bottomonium). Exploration of these states through non-relativistic quantum chromodynamics (NRQCD) and lattice QCD provides increasingly important tests of quantum chromodynamics. Understanding bound states of hadrons such as pionium and protonium is also important in order to clarify notions related to exotic hadrons such as mesonic molecules and pentaquark states.
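As a concrete illustration of the simplest onium (a standard textbook result, not stated in the article), the gross energy levels of positronium follow the hydrogen-like Bohr formula with the reduced mass of the electron–positron pair, $\mu = m_e/2$:

$$ E_n = -\frac{\mu}{m_e}\,\frac{13.6~\mathrm{eV}}{n^2} = -\frac{6.8~\mathrm{eV}}{n^2}, $$

so the positronium ground state is bound by about 6.8 eV, half the binding energy of hydrogen, and its Bohr radius is correspondingly about twice as large.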
Physical sciences
Atomic physics
Physics
5571489
https://en.wikipedia.org/wiki/Instability%20strip
Instability strip
The unqualified term instability strip usually refers to a region of the Hertzsprung–Russell diagram largely occupied by several related classes of pulsating variable stars: Delta Scuti variables, SX Phoenicis variables, and rapidly oscillating Ap stars (roAps) near the main sequence; RR Lyrae variables where it intersects the horizontal branch; and the Cepheid variables where it crosses the supergiants. RV Tauri variables are also often considered to lie on the instability strip, occupying the area to the right of the brighter Cepheids (at lower temperatures), since their stellar pulsations are attributed to the same mechanism. Position on the HR diagram The Hertzsprung–Russell diagram plots the real luminosity of stars against their effective temperature (their color, given by the temperature of their photosphere). The instability strip intersects the main sequence (the prominent diagonal band that runs from the upper left to the lower right) in the region of A and F stars (1–2 solar masses) and extends to G and early K bright supergiants (early M if RV Tauri stars at minimum are included). Above the main sequence, the vast majority of stars in the instability strip are variable. Where the instability strip intersects the main sequence, the vast majority of stars are stable, but there are some variables, including the roAp stars and the Delta Scuti variables. Pulsations Stars in the instability strip pulsate due to He III (doubly ionized helium), in a process based on the Kappa–mechanism. In normal A-F-G class stars, He in the stellar photosphere is neutral. Deeper below the photosphere, where the temperature reaches roughly 25,000 K, the He II layer (first He ionization) begins. Second ionization of helium (He III) starts at depths where the temperature is roughly 35,000 K. When the star contracts, the density and temperature of the He II layer increase. The increased energy is sufficient to remove the lone remaining electron in the He II, transforming it into He III (second ionization). This causes the opacity of the He layer to increase and the energy flux from the interior of the star is effectively absorbed. The temperature and pressure beneath the opaque layer increase, which causes the star to expand. After expansion, the He III cools and begins to recombine with free electrons to form He II and the opacity of the star decreases. This allows the trapped heat to propagate to the surface of the star. When sufficient energy has been radiated away, the weight of the overlying stellar material once again causes the He II layer to contract, and the cycle starts from the beginning. This results in the observed increase and decrease in the surface temperature of the star. In some stars, the pulsations are instead driven by an opacity peak of metal ions. The phase shift between a star's radial pulsations and brightness variations depends on the distance of the He II zone from the stellar surface in the stellar atmosphere. For most Cepheids, this creates a distinctly asymmetrical observed light curve, increasing rapidly to maximum and slowly decreasing back down to minimum. Other pulsating stars There are several types of pulsating star not found on the instability strip and with pulsations driven by different mechanisms. At cooler temperatures are the long-period variable AGB stars. At hotter temperatures are the Beta Cephei and PV Telescopii variables. Right at the edge of the instability strip near the main sequence are Gamma Doradus variables.
The band of white dwarfs has three separate regions and types of variable: DOV, DBV, and DAV (= ZZ Ceti variables) white dwarfs. Each of these types of pulsating variable has an associated instability strip created by variable-opacity partial ionisation regions of elements other than helium. Most high luminosity supergiants are somewhat variable, including the Alpha Cygni variables. In the specific region of more luminous stars above the instability strip are found the yellow hypergiants, which have irregular pulsations and eruptions. The hotter luminous blue variables may be related and show similar short- and long-term spectral and brightness variations with irregular eruptions.
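A useful rule of thumb that ties these classes together (a standard result of stellar pulsation theory, not stated above) is the period–mean density relation,

$$ \Pi \sqrt{\frac{\bar\rho}{\bar\rho_\odot}} \approx Q, $$

where $\Pi$ is the pulsation period, $\bar\rho$ the mean stellar density, and $Q$ a pulsation "constant" of order a few hundredths of a day for fundamental-mode radial pulsators. Dense stars near the main sequence, such as Delta Scuti variables, therefore pulsate with periods of hours, while the far less dense Cepheid supergiants have periods of days to tens of days.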
Physical sciences
Stellar astronomy
Astronomy
5575197
https://en.wikipedia.org/wiki/Gallicolumba
Gallicolumba
Gallicolumba is a mid-sized genus of ground-dwelling doves (family Columbidae) which occur in rainforests in the Philippines. The local name 'punay' is a general term for pigeons and doves. They are not closely related to the American ground dove genera (Columbina and related genera). Rather, the present genus is closest to the thick-billed ground pigeon. This genus includes the bleeding-hearts known from the Philippines. Most are named for the vivid red patch on the breast, which looks startlingly like a bleeding wound in some species and has reminded naturalists of a dagger stab. The diet of doves of this genus consists of fruits and seeds. Systematics and extinctions Gallicolumba might be ranked as a (very small) subfamily, but the available data suggest that they are better considered part of a quite basal radiation of Columbidae which consists of many small and often bizarre lineages (e.g. Goura and Otidiphaps, which are ecologically convergent with Galliformes, and maybe even the famous didines, the Raphinae). The genus contains seven species: Sulawesi ground dove, Gallicolumba tristigmata; Cinnamon ground dove, Gallicolumba rufigula; Luzon bleeding-heart, Gallicolumba luzonica; Mindanao bleeding-heart, Gallicolumba crinigera; Mindoro bleeding-heart, Gallicolumba platenae; Negros bleeding-heart, Gallicolumba keayi; Sulu bleeding-heart, Gallicolumba menagei – possibly extinct (late 1990s?). Many of the Pacific ground doves were removed from Gallicolumba (which was non-monophyletic) and reassigned to the genus Alopecoenas, which was later renamed Pampusana.
Biology and health sciences
Columbimorphae
Animals
4167941
https://en.wikipedia.org/wiki/Pigeonite
Pigeonite
Pigeonite is a mineral in the clinopyroxene subgroup of the pyroxene group. It has a general formula of (Ca,Mg,Fe)(Mg,Fe)Si2O6. The calcium cation fraction can vary from 5% to 25%, with iron and magnesium making up the rest of the cations. Pigeonite crystallizes in the monoclinic system, as does augite, and a miscibility gap exists between the two minerals. At lower temperatures, pigeonite is unstable relative to augite plus orthopyroxene. The low-temperature limit of pigeonite stability depends upon the Fe/Mg ratio in the mineral and is higher for more Mg-rich compositions; for an Fe/Mg ratio of about 1, the temperature is about 900 °C. The presence of pigeonite in an igneous rock thus provides evidence for the crystallization temperature of the magma, and hence indirectly for the water content of that magma. Pigeonite is found as phenocrysts in volcanic rocks on Earth and as crystals in meteorites from Mars and the Moon. In slowly cooled intrusive igneous rocks, pigeonite is rarely preserved. Slow cooling gives the calcium the necessary time to separate itself from the structure to form exsolution lamellae of calcic clinopyroxene, leaving no pigeonite present. Textural evidence of its breakdown to orthopyroxene plus augite may be present, as shown in the accompanying microscopic image. Pigeonite is named for its type locality on the shores of Lake Superior at Pigeon Point, Minnesota, United States. It was first described in 1900.
Physical sciences
Silicate minerals
Earth science
4169991
https://en.wikipedia.org/wiki/Jasminum%20sambac
Jasminum sambac
Jasminum sambac (Arabian jasmine or Sambac jasmine) is a species of jasmine with a native range from Bhutan to India. It is cultivated in many places, especially West Asia, South Asia and Southeast Asia. It is naturalised in many scattered locales: Mauritius, Madagascar, the Maldives, Christmas Island, Chiapas, Central America, southern Florida, the Bahamas, Cuba, Hispaniola, Jamaica, Puerto Rico, and the Lesser Antilles. Jasminum sambac is a small shrub or vine growing up to in height. It is widely cultivated for its attractive and sweetly fragrant flowers. The flowers may be used as a fragrant ingredient in perfumes and jasmine tea. It is the national flower of the Philippines, where it is known as sampaguita, as well as being one of the three national flowers of Indonesia, where it is known as melati putih. Description Jasminum sambac is an evergreen vine or shrub reaching up to tall. The species is highly variable, possibly as a result of spontaneous mutation, natural hybridization, and autopolyploidy. Cultivated Jasminum sambac generally does not bear seeds, and the plant is reproduced solely by cuttings, layering, marcotting, and other methods of asexual propagation. The leaves are ovate, long and wide. The phyllotaxy is opposite or in whorls of three; the leaves are simple (not pinnate, like those of most other jasmines). They are smooth (glabrous) except for a few hairs at the venation on the base of the leaf. The flowers bloom throughout the year and are produced in clusters of 3 to 12 together at the ends of branches. They are strongly scented, with a white corolla in diameter with 5 to 9 lobes. The flowers open at night (usually around 6 to 8 in the evening), and close in the morning, a span of 12 to 20 hours. The fruit is a purple to black berry in diameter. Taxonomy and nomenclature Jasminum sambac is classified under the genus Jasminum in the tribe Jasmineae. It belongs to the olive family, Oleaceae. Jasminum sambac acquired its English common name, "Arabian jasmine", from being widely cultivated in the Arabian Peninsula. Early Chinese records of the plant point to it having originated in Southeast Asia. Jasminum sambac (and nine other species of the genus) were spread into Arabia and Persia by humans, where they were cultivated in gardens. From there, they were introduced to Europe, where they were grown as ornamentals and were known under the common name "sambac" in the 18th century. The Medieval Arabic term "zanbaq" denoted jasmine flower-oil from the flowers of any species of jasmine. This word entered late medieval Latin as "sambacus" and "zambacca" with the same meaning as the Arabic, and in post-medieval Latin plant taxonomy the word was adopted as a label for the J. sambac species. The J. sambac species is a good source of jasmine flower-oil in terms of the quality of the fragrance, and it continues to be cultivated for this purpose for the perfume industry today. The Jasminum officinale species is also cultivated for the same purpose, and probably to a greater extent. In 1753, Carl Linnaeus first described the plant as Nyctanthes sambac in his Species Plantarum. In 1789, William Aiton reclassified the plant to the genus Jasminum. He also coined the common English name of "Arabian jasmine". Cultivation The sweet, heady fragrance of Jasminum sambac is its distinct feature. It is widely grown throughout the tropics from the Arabian Peninsula to Southeast Asia and the Pacific Islands as an ornamental plant and for its strongly scented flowers.
Numerous cultivars currently exist. Typically, the flowers are harvested as buds during the early morning. The flower buds are harvested on the basis of color, as firmness and size are variable depending on the weather. The buds have to be white, as green ones may not emit the characteristic fragrance they are known for. Open flowers are generally not harvested, as a larger quantity of them is needed to extract oils and they lose their fragrance sooner. J. sambac does not tolerate being frozen, so in temperate regions it must be grown under glass, in an unheated greenhouse or conservatory. It has an intense fragrance which some people may find overpowering. In the UK this plant has gained the Royal Horticultural Society's Award of Garden Merit. Cultivars There are numerous cultivars of Jasminum sambac which differ from each other by the shape of the leaves and the structure of the corolla. The cultivars recognized include: 'Maid of Orleans' – possesses flowers with a single layer of five or more oval-shaped petals. It is the variety most commonly referred to as sampaguita and pikake. It is also known as 'Mograw', 'Motiya', or 'Bela'. 'Belle of India' – possesses flowers with a single or double layer of elongated petals. 'Grand Duke of Tuscany' – possesses flowers with a doubled petal count. They resemble small white roses and are less fragrant than the other varieties. It is also known as 'Rose jasmine' and 'Butt Mograw'. In the Philippines, it is known as kampupot. 'Mysore Mallige' – resembles the 'Belle of India' cultivar but has slightly shorter petals with a distinct and intense fragrance. 'Arabian Nights' – possesses a double layer of petals but is smaller in size than the 'Grand Duke of Tuscany' cultivar. Chemical composition Jasminum sambac contains dotriacontanoic acid, dotriacontanol, oleanolic acid, daucosterol, hesperidin, and [+]-jasminoids A, B, C and D in its roots. The leaves contain flavonoids such as rutin, quercetin and isoquercetin, flavonoid rhamnoglycosides, as well as α-amyrin and β-sitosterol. A novel plant cysteine-rich peptide family named jasmintides has been isolated from this plant. Its aroma is caused by a variety of compounds including benzyl alcohol, tetradecamethylcycloheptasiloxane, methyl benzoate, linalool, benzyl acetate, (-)-(R)-jasmine lactone, (E,E)-α-farnesene, (Z)-3-hexenyl benzoate, N-acetylmethylanthranilate, dodecamethylcyclohexasiloxane, (E)-methyl jasmonate, benzyl benzoate and isophytol. Importance Southeast Asia Philippines Jasminum sambac (Filipino and Philippine Spanish: sampaguita) was adopted by the Philippines as its national flower on 1 February 1934 via Proclamation No. 652 issued by American Governor-General Frank Murphy. Its most widespread modern common name, "sampaguita", is derived from the Philippine Spanish sampaguita, from Tagalog sampaga ("jasmine", a direct loanword from the Sanskrit word campaka) and the Spanish diminutive suffix -ita. It is also known by native common names, including kampupot in Tagalog; kulatai, pongso, or kampupot in Kapampangan; manul in the Visayan languages; lumabi or malul in Maguindanao; and hubar or malur in Tausug.
Filipinos string the flowers into leis, corsages, and sometimes crowns. These garlands are available as loose strings of blossoms or as tight clusters of buds, and are commonly sold by vendors outside churches and near street intersections. Sampaguita garlands are used as a form of bestowing honour, veneration, or accolade. They are primarily used to adorn religious images, religious processions and photographs of the dead on altars. They are placed around the necks of living persons such as dignitaries, visitors, and occasionally graduating students. Buds strung into ropes several metres long are often used to decorate formal events such as state occasions at Malacañang Palace and weddings, and are sometimes used as the ribbon in ribbon-cutting ceremonies. Though edible, the flower is rarely used in cuisine, an unusual example being as a flavouring for ice cream. Jasminum sambac is the subject of the danza song La Flor de Manila, composed by Dolores Paterno in 1879. The song was popular during the Commonwealth era and is now regarded as a romantic classic. The flower is also the namesake of the song El Collar de Sampaguita. The design of the ceremonial torch for the 2019 Southeast Asian Games, designed by Filipino sculptor Daniel Dela Cruz, was inspired by the sampaguita. Indonesia Jasminum sambac (melati putih) is one of the three national flowers of Indonesia, the other two being the moon orchid and the giant padma. Although its official adoption was announced only as recently as 1990, during World Environment Day, and enforced by law through Presidential Decree No. 4 in 1993, the importance of Jasminum sambac in Indonesian culture long predates its official adoption. Since the formation of the Indonesian republic under Sukarno, melati putih has always been unofficially recognized as the national flower of Indonesia. The reverence for the flower and its elevated status are mostly due to its importance in Indonesian tradition since ancient times. It has long been considered a sacred flower in Indonesian tradition, as it symbolizes purity, sacredness, and sincerity. It also represents the beauty of modesty: a small and simple white flower that can produce such a sweet fragrance. It is also the most prevalent flower in wedding ceremonies for ethnic Indonesians, especially on the island of Java. Jasmine flower buds that have not fully opened are usually picked to create strings of jasmine garlands. On wedding days, a traditional Javanese or Sundanese bride's hair is adorned with strings of jasmine garlands arranged as a hairnet to cover the konde (hair bun). The intricately intertwined strings of jasmine garlands are left to hang loose from the bride's head. The groom's kris is also adorned with five jasmine garlands called roncen usus-usus (intestine garlands), referring to their intestine-like form and also linked to the legend of Arya Penangsang. For Makassar and Bugis brides, the hair is also adorned with buds of jasmine that resemble pearls. Jasmine is also used as a floral offering for hyangs, spirits and deities, especially among Balinese Hindus, and is also often present during funerals. In South Sumatran traditional costume, the bungo melati pattern in Palembang songket fabrics depicts the jasmine to represent beauty and femininity.
The jasmine symbolizes a wide variety of things in Indonesian traditions; it is the flower of life, beauty and festive weddings, yet it is also often associated with spirits and death; the sudden scent of jasmine is often an ominous sign for the superstitious, as it may herald the presence of a ghost or jinn. In Indonesian patriotic songs and poems, the fallen melati is often a representation of fallen heroes who sacrificed their lives and died for the country, a concept very similar to the fallen sakura that represents fallen heroes in Japanese tradition. Ismail Marzuki's patriotic song "Melati di Tapal Batas" (jasmine on the border) (1947) and Guruh Sukarnoputra's "Melati Suci" (sacred jasmine) (1974) clearly refer to jasmine as the representation of fallen heroes, the eternally fragrant flower that adorns Ibu Pertiwi (the Indonesian national personification). Iwan Abdurachman's "Melati Dari Jayagiri" (jasmine from Jayagiri mountain) refers to jasmine as the representation of the pure, unspoiled beauty of a girl and also of a long-lost love. In Indonesia, essential oils are extracted from jasmine flowers and buds using the steam distillation process. Jasmine essential oil is one of the most expensive commodities in the aromatherapy and perfume industry. Cambodia In Cambodia, the flower is used as an offering to the Buddha. During the flowering season, which begins in June, Cambodians thread the flower buds onto a wooden needle to be presented to the Buddha. Thailand In Thailand, this flower is often strung into a garland for offerings to the Buddha. In Thai it is called "mali la" or "mali son". These names are referenced in central Thai folk songs, through which the flower became widely known and popular, and they have also been adapted into a sports song. The flower is also used as a symbol of Mother's Day in Thailand, which falls on 12 August, the birthday of Queen Sirikit. East Asia China In China, the flower (茉莉花) is processed and used as the main flavoring ingredient in jasmine tea (茉莉花茶). It is also the subject of a popular folk song, Mo Li Hua. Hawaii In Hawaii, the flower is known as pīkake, and is used to make fragrant leis. The name 'pīkake' is derived from the Hawaiian word for "peacock", because the Hawaiian Princess Kaʻiulani was fond of both the flowers and the bird. The Middle East In Oman, Jasminum sambac features prominently on a child's first birthday. The flowers are used to make thick garlands worn as hair adornments, and are sprinkled on the child's head by other children while chanting "hol hol". The fragrant flowers are also sold packed in between large leaves of the Indian almond (Terminalia catappa) and sewn together with strips of date palm leaves. In Bahrain, the flower is made into a pin, along with the leaf of a palm tree, to commemorate the martyrs of the country, similar to the white poppy. India Jasmine is considered to be a sacred flower in Hinduism. It is one of the most commonly grown ornamentals in India, Bangladesh and Pakistan, where it is native. At Indian weddings, the bride often adorns her hair with garlands made of mogra, either around a bun or wrapped across a braid. Sri Lanka In Sri Lanka it is widely known as pichcha or gaeta pichcha. The names sithapushpa and katarolu are also used in older texts. The flowers are used in Buddhist temples and in ceremonial garlands. Toxicity The LD50 of jasmine extract is greater than 5 mg/kg by weight.
Biology and health sciences
Lamiales
Plants
4174442
https://en.wikipedia.org/wiki/Nepomorpha
Nepomorpha
Nepomorpha is an infraorder of insects in the "true bug" order (Hemiptera). They belong to the "typical" bugs of the suborder Heteroptera. Due to their aquatic habits, these animals are known as true water bugs. They occur all over the world outside the polar regions, with about 2,000 species altogether. The Nepomorpha can be distinguished from related Heteroptera by their missing or vestigial ocelli. Also, as referred to by the obsolete name Cryptocerata ("the hidden-horned ones"), their antennae are reduced, with weak muscles, and are usually carried tucked against the head. Most of the species within this infraorder live in freshwater habitats. The exceptions are members of the superfamily Ochteroidea, which are found along the water's edge. Many of these insects are predators of invertebrates and in some cases – like the large water scorpions (Nepidae) and giant water bugs (Belostomatidae) – even of small fish and amphibians. Others are omnivores or feed on plants. Their mouthparts form a rostrum, as in all Heteroptera and most Hemiptera. With this, they pierce their food source to suck out fluids; some, like the Corixidae, are also able to chew their food to some extent, sucking up the resulting pulp. The rostrum can also be used to sting in defence; some, like the common backswimmer (Notonecta glauca) of the Notonectidae, can easily pierce the skin of humans and deliver a wound often more painful than a bee's sting. Systematics The Nepomorpha probably originated around the start of the Early Triassic. As evidenced by fossils such as the rather advanced Triassocoridae or the primitive water boatman Lufengnacta, the radiation establishing today's superfamilies seems to have been largely complete by the end of the Triassic. There are a large number of fossil genera, but except for those placed in the Triassocoridae they can at least tentatively be assigned to the extant superfamilies. Though the systematics and phylogeny of the higher taxa of Nepomorpha were long controversial, cladistic analysis of mitochondrial 16S and nuclear 28S rDNA sequence data and morphology has more recently resolved them almost completely. The long-accepted superfamilies are all monophyletic, with the exception of the Naucoroidea, which is now monotypic, the Aphelocheiridae and Potamocoridae being split off into a new superfamily, Aphelocheiroidea. The Cibariopectinata, a proposed clade established on the presence of cibariopectine structures in the food-sucking pump of some of the most advanced true water bugs (Tripartita), might indeed be monophyletic. Alternatively it might be synonymous with the Tripartita, the Ochteroidea having lost the cibariopectines again due to the different requirements of their (for Nepomorpha) unusual lifestyle. Seven superfamilies, listed below in evolutionary sequence from the most ancient to the most modern lineage, have been identified in the infraorder Nepomorpha: †Morrisonnepa (incertae sedis: Morrison Formation, Tithonian, ~151 Ma)
Nepoidea Family Belostomatidae – giant water bugs Family Nepidae – water scorpions Corixoidea Family Corixidae – water boatmen Family Micronectidae – pygmy water boatmen Ochteroidea Clade Tripartita Family Gelastocoridae – toad bugs Family Ochteridae – velvety shore bugs Clade Cibariopectinata (disputed) Family Triassocoridae (fossil, tentatively placed here) Aphelocheiroidea Family Aphelocheiridae Family Potamocoridae Naucoroidea Family Naucoridae – creeping water bugs Notonectoidea Family Notonectidae – backswimmers Pleoidea Note: sometimes included in Notonectoidea Family Helotrephidae Family Pleidae – pygmy backswimmers
Biology and health sciences
Hemiptera (true bugs)
Animals
4174517
https://en.wikipedia.org/wiki/Algaculture
Algaculture
Algaculture is a form of aquaculture involving the farming of species of algae. The majority of algae that are intentionally cultivated fall into the category of microalgae (also referred to as phytoplankton, microphytes, or planktonic algae). Macroalgae, commonly known as seaweed, also have many commercial and industrial uses, but due to their size and the specific requirements of the environment in which they need to grow, they do not lend themselves as readily to cultivation (this may change, however, with the advent of newer seaweed cultivators, which are basically algae scrubbers using upflowing air bubbles in small containers, known as tumble culture). Commercial and industrial algae cultivation has numerous uses, including production of nutraceuticals such as omega-3 fatty acids (as algal oil) or natural food colorants and dyes, food, fertilizers, bioplastics, chemical feedstock (raw material), protein-rich animal/aquaculture feed, pharmaceuticals, and algal fuel, and can also be used as a means of pollution control and natural carbon sequestration. Global production of farmed aquatic plants, overwhelmingly dominated by seaweeds, grew in output volume from 13.5 million tonnes in 1995 to just over 30 million tonnes in 2016 and 37.8 million tonnes in 2022. This increase was the result of production expansions led by China, followed by Malaysia, the Philippines, the United Republic of Tanzania, and the Russian Federation. Cultured microalgae already contribute to a wide range of sectors in the emerging bioeconomy. Research suggests there is large potential and benefit in algaculture for the development of a future healthy and sustainable food system. Uses of algae Food Several species of algae are raised for food. While algae have qualities of a sustainable food source, "producing highly digestible proteins, lipids, and carbohydrates, and are rich in essential fatty acids, vitamins, and minerals", and, for example, a high protein productivity per acre, there are several challenges "between current biomass production and large-scale economic algae production for the food market". Micro-algae can be used to create microbial protein used as a powder or in a variety of products. Purple laver (Porphyra) is perhaps the most widely domesticated marine alga. In Asia it is used in nori (Japan) and gim (Korea). In Wales, it is used in laverbread, a traditional food, and in Ireland it is collected and made into a jelly by stewing or boiling. Preparation also can involve frying or heating the fronds with a little water and beating with a fork to produce a pinkish jelly. Harvesting also occurs along the west coast of North America, and in Hawaii and New Zealand. Algae oil is used as a dietary supplement, as the organisms also produce omega-3 (and omega-6) fatty acids, which are commonly also found in fish oils and which have been shown to have positive health benefits, including for cognition and against brain aging. Dulse (Palmaria palmata) is a red species sold in Ireland and Atlantic Canada. It is eaten raw, fresh, dried, or cooked like spinach. Spirulina (Arthrospira platensis) is a blue-green microalga with a long history as a food source in East Africa and pre-colonial Mexico. Spirulina is high in protein and other nutrients, finding use as a food supplement and against malnutrition. Spirulina thrives in open systems and commercial growers have found it well-suited to cultivation. One of the largest production sites is Lake Texcoco in central Mexico.
The organisms produce a variety of nutrients and high amounts of protein. Spirulina is often used commercially as a nutritional supplement. Chlorella, another popular microalga, has similar nutrition to spirulina. Chlorella is very popular in Japan. It is also used as a nutritional supplement with possible effects on metabolic rate. Irish moss (Chondrus crispus), often confused with Mastocarpus stellatus, is the source of carrageenan, which is used as a stiffening agent in instant puddings, sauces, and dairy products such as ice cream. Irish moss is also used by beer brewers as a fining agent. Sea lettuce (Ulva lactuca) is used in Scotland, where it is added to soups and salads. Dabberlocks or badderlocks (Alaria esculenta) is eaten either fresh or cooked in Greenland, Iceland, Scotland and Ireland. Aphanizomenon flos-aquae is a cyanobacterium similar to spirulina, which is used as a nutritional supplement. Extracts and oils from algae are also used as additives in various food products. Sargassum species are an important group of seaweeds. These algae contain many phlorotannins. Cochayuyo (Durvillaea antarctica) is eaten in salads and ceviche in Peru and Chile. Both microalgae and macroalgae are used to make agar (see below), which is used as a gelling agent in foods. Lab manipulation Australian scientists at Flinders University in Adelaide have been experimenting with using marine microalgae to produce proteins for human consumption, creating products like "caviar", vegan burgers, fake meat, jams and other food spreads. By manipulating microalgae in a laboratory, the protein and other nutrient contents could be increased, and flavours changed to make them more palatable. These foods leave a much lighter carbon footprint than other forms of protein, as the microalgae absorb rather than produce carbon dioxide, a greenhouse gas. Fertilizer and agar For centuries seaweed has been used as fertilizer. It is also an excellent source of potassium for the manufacture of potash and potassium nitrate. Some types of microalgae can be used this way as well. Both microalgae and macroalgae are used to make agar. Pollution control With concern over global warming, new methods for the thorough and efficient capture of CO2 are being sought out. The carbon dioxide that a carbon-fuel burning plant produces can feed into open or closed algae systems, fixing the CO2 and accelerating algae growth. Untreated sewage can supply additional nutrients, thus turning two pollutants into valuable commodities. Waste streams of high-purity CO2, as well as carbon sequestered from the atmosphere, can be used, with potentially significant benefits for climate change mitigation. Algae cultivation is under study for uranium/plutonium sequestration and for purifying fertilizer runoff. Energy production Business, academia and governments are exploring the possibility of using algae to make gasoline, biodiesel, biogas and other fuels. Algal biomass itself may be used as a biofuel, and it can additionally be used to create hydrogen. Microalgae are also researched for hydrogen production – e.g. micro-droplets for algal cells or synergistic algal-bacterial multicellular spheroid microbial reactors capable of producing oxygen as well as hydrogen via photosynthesis in daylight under air. Microgeneration Carbon sequestration Other uses Chlorella, particularly a transgenic strain which carries an extra mercury reductase gene, has been studied as an agent for environmental remediation due to its ability to reduce mercury ions to the less toxic elemental mercury.
Cultured strains of common coral microalgal endosymbionts are being researched as a potential way to increase corals' thermal tolerance for climate resilience and bleaching tolerance. Cultured microalgae are used in research and development for potential medical applications, in particular for microbots such as biohybrid microswimmers for targeted drug delivery. Cultivated algae serve many other purposes, including cosmetics, animal feed, bioplastic production, dyes and colorant production, chemical feedstock production, and pharmaceutical ingredients. Growing, harvesting, and processing algae Monoculture Most growers prefer monocultural production and go to considerable lengths to maintain the purity of their cultures. However, the microbiological contaminants of such cultures are still under investigation. With mixed cultures, one species comes to dominate over time, and if a non-dominant species is believed to have particular value, it is necessary to obtain pure cultures in order to cultivate this species. Single-species cultures are also much needed for research purposes. A common method of obtaining pure cultures is serial dilution. Cultivators dilute either a wild sample or a lab sample containing the desired algae with filtered water and introduce small aliquots (measures of this solution) into a large number of small growing containers. The dilution is based on a microscopic examination of the source culture, which predicts that a few of the growing containers will contain a single cell of the desired species. Following a suitable period on a light table, cultivators again use the microscope to identify containers to start larger cultures. Another approach is to use a special medium which excludes other organisms, including invasive algae. For example, Dunaliella is a commonly grown genus of microalgae which flourishes in extremely salty water that few other organisms can tolerate. Alternatively, mixed algae cultures can work well for larval mollusks. First, the cultivator filters the sea water to remove algae which are too large for the larvae to eat. Next, the cultivator adds nutrients and possibly aerates the result. After one or two days in a greenhouse or outdoors, the resulting thin soup of mixed algae is ready for the larvae. An advantage of this method is low maintenance. Growing algae Water, carbon dioxide, minerals and light are all important factors in cultivation, and different algae have different requirements. The basic reaction for algae growth in water is carbon dioxide + light energy + water = glucose + oxygen + water. This is called autotrophic growth. It is also possible to grow certain types of algae without light; these types of algae consume sugars (such as glucose), which is known as heterotrophic growth. Temperature The water must be in a temperature range that will support the specific algal species being grown, mostly between 15 °C and 35 °C. Light and mixing In a typical algal-cultivation system, such as an open pond, light only penetrates the top layer of the water, though this depends on the algae density. As the algae grow and multiply, the culture becomes so dense that it blocks light from reaching deeper into the water. Direct sunlight is too strong for most algae, which can use only a fraction of the light they receive from direct sunlight; however, exposing an algae culture to direct sunlight (rather than shading it) is often the best course for strong growth, as the algae underneath the surface are able to utilize the less intense light filtering through the shade of the algae above.
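This self-shading behaviour is often approximated with a Beer–Lambert-style exponential decay of light with depth. The short sketch below illustrates the idea; the surface irradiance, background attenuation and biomass-specific attenuation coefficient used here are placeholder assumptions for illustration, not values taken from this article.

```python
import math

def light_at_depth(surface_irradiance, depth_m, biomass_kg_m3,
                   k_background=0.2, k_biomass=150.0):
    """Estimate irradiance remaining at a given depth in a culture.

    Assumes Beer-Lambert-style attenuation:
        I(z) = I0 * exp(-(k_background + k_biomass * X) * z)
    where X is the biomass concentration (kg/m^3, numerically equal to g/L),
    k_background is attenuation by the water itself (1/m) and k_biomass is a
    biomass-specific attenuation coefficient (m^2/kg). All coefficients here
    are illustrative assumptions.
    """
    attenuation_per_m = k_background + k_biomass * biomass_kg_m3
    return surface_irradiance * math.exp(-attenuation_per_m * depth_m)

# As the culture thickens, little light remains 10 cm below the surface:
for biomass in (0.1, 0.3, 0.5):  # g/L (= kg/m^3)
    i = light_at_depth(surface_irradiance=2000.0, depth_m=0.10, biomass_kg_m3=biomass)
    print(f"{biomass:.1f} g/L -> {i:.0f} (arbitrary irradiance units) at 10 cm depth")
```

This rapid attenuation is why, in practice, the next step is to mix or agitate the culture so that cells are cycled between the bright surface and the darker depths.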
To use deeper ponds, growers agitate the water, circulating the algae so that it does not remain on the surface. Paddle wheels can stir the water, and compressed air supplied from the bottom lifts algae from the lower regions. Agitation also helps prevent over-exposure to the sun. Another means of supplying light is to place the light source within the system. Glow plates made from sheets of plastic or glass and placed within the tank offer precise control over light intensity, and distribute it more evenly. They are seldom used, however, due to high cost. Odor and oxygen The odor associated with bogs, swamps, and other stagnant waters can be due to oxygen depletion caused by the decay of dead algal blooms. Under anoxic conditions, the bacteria inhabiting algae cultures break down the organic material and produce hydrogen sulfide and ammonia, which cause the odor. This hypoxia often results in the death of aquatic animals. In a system where algae are intentionally cultivated, maintained, and harvested, neither eutrophication nor hypoxia is likely to occur. Some living algae and bacteria also produce odorous chemicals, particularly certain cyanobacteria (previously classed as blue-green algae) such as Anabaena. The best known of these odor-causing chemicals are MIB (2-methylisoborneol) and geosmin. They give a musty or earthy odor that can be quite strong. Eventual death of the cyanobacteria releases additional gas that is trapped in the cells. These chemicals are detectable at very low levels – in the parts per billion range – and are responsible for many "taste and odor" issues in drinking water treatment and distribution. Cyanobacteria can also produce chemical toxins that have been a problem in drinking water. Nutrients Nutrients such as nitrogen (N), phosphorus (P), and potassium (K) serve as fertilizer for algae, and are generally necessary for growth. Silica and iron, as well as several trace elements, may also be considered important marine nutrients, as the lack of one can limit the growth of, or productivity in, a given area. Carbon dioxide is also essential; usually an input of CO2 is required for fast-paced algal growth. These elements must be dissolved into the water, in bio-available forms, for algae to grow. Methods Farming of macroalgae Open system cultivation An open system of algae cultivation involves the growth of algae in shallow water bodies or streams, which may originate from a natural system or be artificially prepared. In this system, algae can be cultivated in natural water bodies like lakes, rivers and oceans, as well as in artificial ponds made of concrete, plastic, pond liners or a variety of other materials. The open system of algae cultivation is simple and cost-effective, making it an attractive option for commercial production of algae-based products. Open ponds are highly vulnerable to contamination by other microorganisms, such as other algal species or bacteria. Thus cultivators usually choose closed systems for monocultures. Open systems also do not offer control over temperature and lighting. The growing season is largely dependent on location and, aside from tropical areas, is limited to the warmer months. Open pond systems are cheaper to construct, at the minimum requiring only a trench or pond. Large ponds have the largest production capacities relative to other systems of comparable cost. Also, open pond cultivation can exploit unusual conditions that suit only specific algae.
For instance, Dunaliella salina grows in extremely salty water; these unusual media exclude other types of organisms, allowing the growth of pure cultures in open ponds. Open culture can also work if there is a system of harvesting only the desired algae, or if the ponds are frequently re-inoculated before invasive organisms can multiply significantly. The latter approach is frequently employed by Chlorella farmers, as the growth conditions for Chlorella do not exclude competing algae. The former approach can be employed in the case of some chain diatoms, since they can be filtered from a stream of water flowing through an outflow pipe. A "pillow case" of a fine mesh cloth is tied over the outflow pipe, allowing other algae to escape. The chain diatoms are held in the bag and are used to feed shrimp larvae (in Eastern hatcheries) and to inoculate new tanks or ponds. Enclosing a pond with a transparent or translucent barrier effectively turns it into a greenhouse. This solves many of the problems associated with an open system. It allows more species to be grown, it allows the species that are being grown to stay dominant, and it extends the growing season – if heated, the pond can produce year round. Open raceway ponds have also been used for the removal of lead using live Spirulina (Arthrospira) sp. Water lagoons A lagoon is a type of aquatic ecosystem characterized by a shallow body of water separated from the open ocean by natural barriers such as sandbars, barrier islands, or coral reefs. The Australian company Cognis Australia is well known for producing β-carotene from Dunaliella salina harvested from extensive hypersaline ponds located in Hutt Lagoon and Whyalla. These ponds are primarily used for wastewater treatment, and the production of D. salina is a secondary benefit. Open sea Open sea cultivation is a method of cultivating seaweed in the open ocean, as well as along the coastline in shallow water. The seaweed farming industry serves commercial needs for various products such as food, feed, pharmaceutical chemicals, cosmetics, biofuels, and bio-stimulants. Seaweed extracts act as bio-stimulants, reducing biotic stress and increasing crop production. Additionally, it presents opportunities for creating animal and human nutrition products that can improve immunity and productivity. Open ocean seaweed cultivation is an eco-friendly technology that does not require land, fresh water, or chemicals. It also helps mitigate the effects of climate change by sequestering CO2. The open sea cultivation method involves the use of rafts or ropes anchored in the ocean, where the seaweed grows attached to them. This method is widely used for commercial seaweed farming, as it allows for large-scale production and harvesting. The process of open sea cultivation of seaweed involves several steps. First, a suitable site in the ocean is identified, based on factors such as water depth, temperature, salinity, and nutrient availability. Once a site is chosen, ropes or rafts are anchored in the water, and the seed pieces of seaweed are attached to them using specialized equipment. The seaweed is then left to grow for several months, during which it absorbs nutrients from the water and sunlight through photosynthesis. Raceway ponds Raceway-type ponds and lakes are open to the elements. They are one of the most common and economical methods of large-scale algae cultivation, and offer several advantages over other cultivation methods.
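As a rough illustration of the scale such ponds can reach, annual output can be estimated as pond area × areal productivity × operating days. The area, productivity and season length below are assumed example figures for illustration only, not values taken from this article.

```python
def annual_dry_biomass_tonnes(area_m2, productivity_g_m2_day, operating_days):
    """Back-of-the-envelope estimate of annual dry biomass from an open pond.

    All inputs are illustrative assumptions; real productivities vary widely
    with species, climate and pond operation.
    """
    grams_per_year = area_m2 * productivity_g_m2_day * operating_days
    return grams_per_year / 1e6  # grams -> tonnes

# Hypothetical one-hectare raceway, assuming 15 g/m^2/day over a 300-day season:
print(annual_dry_biomass_tonnes(10_000, 15, 300))  # -> 45.0 tonnes of dry biomass
```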
An open raceway pond is a shallow, rectangular pond used for the cultivation of algae. It is designed to circulate water in a continuous loop, or raceway, allowing algae to grow in a controlled environment. The open system is a low-cost method of algae cultivation, and it is relatively easy to construct and maintain. The pond is typically lined with a synthetic material, such as high-density polyethylene (HDPE) or polyvinyl chloride, to prevent the loss of water and nutrients. The pond is also equipped with paddlewheels or other types of mechanical devices to provide mixing and aeration. HRAPs High-rate algal ponds (HRAPs) are a type of open algae cultivation system that has gained popularity in recent years due to their efficiency and low cost of operation. HRAPs are shallow ponds, typically between 0.1 and 0.4 metres deep, that are used for the cultivation of algae. The ponds are equipped with a paddlewheel or other type of mechanical agitation system that provides mixing and aeration, which promotes algae growth. The HRAP system is also recommended for wastewater treatment using algae. Photobioreactors Algae can also be grown in a photobioreactor (PBR). A PBR is a bioreactor which incorporates a light source. Virtually any translucent container could be called a PBR; however, the term is more commonly used to define a closed system, as opposed to an open tank or pond. Because PBR systems are closed, the cultivator must provide all nutrients, including CO2. A PBR can operate in "batch mode", which involves restocking the reactor after each harvest, but it is also possible to grow and harvest continuously. Continuous operation requires precise control of all elements to prevent immediate collapse. The grower provides sterilized water, nutrients, air, and carbon dioxide at the correct rates. This allows the reactor to operate for long periods. An advantage is that algae grown in the "log phase" are generally of higher nutrient content than old "senescent" algae. Algal culture is the culturing of algae in ponds or other resources. Maximum productivity occurs when the "exchange rate" (time to exchange one volume of liquid) is equal to the "doubling time" (in mass or volume) of the algae. PBRs can hold the culture in suspension, or they can provide a substrate on which the culture can form a biofilm. Biofilm-based PBRs have the advantage that they can produce far higher yields for a given water volume, but they can suffer from problems with cells separating from the substrate due to the water flow required to transport gases and nutrients to the culture. Flat panel PBRs Flat panel PBRs consist of a series of flat, transparent panels that are stacked on top of each other, creating a thin layer of liquid between them. Algae are grown in this thin layer of liquid, which is continuously circulated to promote mixing and prevent stagnation. The panels are typically made of glass or plastic and can be arranged in various configurations to optimize light exposure. Flat panel PBRs are generally used for low-to-medium density cultivation and are well suited for species that require lower light intensity and maximum surface area for optimum light exposure. Temperature control in a flat panel PBR system is carried out by cooling the culture in a reservoir chamber using a chilled water jacket, as well as by sprinkling cold water on the flat panel surface. Tubular PBRs Tubular PBRs consist of long, transparent tubes that are either vertically or horizontally oriented.
Algae are grown inside the tubes, which are typically made of glass or plastic. The tubes are arranged in a helical or serpentine pattern to increase the surface area for light exposure. The culture can be circulated through the tubing either continuously or intermittently to promote mixing and prevent stagnation. Tubular PBRs are generally used for high-density cultivation and are well suited for species that require high light intensity. Temperature control in a tubular PBR is a difficult task; it is generally achieved by external sprinkling of deionized water, which cools the tubes and subsequently reduces the temperature of the culture circulating inside them. Biofilm PBRs Biofilm PBRs include packed bed and porous substrate PBRs. Packed bed PBRs can take different shapes, including flat plate or tubular. In porous substrate bioreactors (PSBRs), the biofilm is exposed directly to the air and receives its water and nutrients by capillary action through the substrate itself. This avoids problems with cells becoming suspended, because there is no water flow across the biofilm surface. The culture could become contaminated by airborne organisms, but defending against other organisms is one of the functions of a biofilm. Plastic bag PBRs V-shaped plastic bags are commonly used in closed systems of algae cultivation for several reasons. These bags are made from high-density polyethylene (HDPE) and are designed to hold algae cultures in a closed environment, providing an ideal environment for algae growth. V-shaped plastic bags are effective for growing a variety of algae species, including Chlorella, Spirulina, and Nannochloropsis. The growth rate and biomass yield of Chlorella vulgaris in V-shaped plastic bags were found to be higher than in any other shape of plastic bag. Different designs of plastic-bag-based PBRs have been developed by sealing the bags at different places, producing flat-bottomed hanging bags, V-shaped hanging bags, horizontally lying bags that serve as a kind of flat PBR system, and so on. Many plastic-bag-based designs have been proposed, but few are utilized on a commercial scale because of their limited productivities. Operation of plastic bags is tedious, as they need to be replaced after every use to maintain sterility, which is a laborious task for a large-scale facility. Harvesting Algae can be harvested using microscreens, by centrifugation, by flocculation and by froth flotation. Interrupting the carbon dioxide supply can cause algae to flocculate on their own, which is called "autoflocculation". Chitosan, a commercial flocculant more commonly used for water purification, is far more expensive. The powdered shells of crustaceans are processed to acquire chitin, a polysaccharide found in the shells, from which chitosan is derived via deacetylation. Water that is more brackish or saline requires larger amounts of flocculant. Flocculation is often too expensive for large operations. Alum and ferric chloride are used as chemical flocculants. In froth flotation, the cultivator aerates the water into a froth and then skims the algae from the top. Ultrasound and other harvesting methods are currently under development. Oil extraction Algae oils have a variety of commercial and industrial uses, and are extracted through a variety of methods. Estimates of the cost to extract oil from microalgae vary, but are likely to be around three times higher than that of extracting palm oil.
Physical extraction In the first step of extraction, the oil must be separated from the rest of the algae. The simplest method is mechanical crushing. When algae are dried they retain their oil content, which can then be "pressed" out with an oil press. Different strains of algae warrant different methods of oil pressing, including the use of screw, expeller and piston presses. Many commercial manufacturers of vegetable oil use a combination of mechanical pressing and chemical solvents in extracting oil, and this approach is often also adopted for algal oil extraction. Osmotic shock is a sudden reduction in osmotic pressure; this can cause cells in a solution to rupture. Osmotic shock is sometimes used to release cellular components, such as oil. Ultrasonic extraction, a branch of sonochemistry, can greatly accelerate extraction processes. Using an ultrasonic reactor, ultrasonic waves are used to create cavitation bubbles in a solvent material. When these bubbles collapse near the cell walls, the resulting shock waves and liquid jets cause those cell walls to break and release their contents into the solvent. Ultrasonication can enhance basic enzymatic extraction. Chemical extraction Chemical solvents are often used in the extraction of the oils. The downside to using solvents for oil extraction is the danger involved in working with the chemicals. Care must be taken to avoid exposure to vapors and skin contact, either of which can cause serious health damage. Chemical solvents also present an explosion hazard. A common choice of chemical solvent is hexane, which is widely used in the food industry and is relatively inexpensive. Benzene and ether can also separate oil; benzene, however, is classified as a carcinogen. Another method of chemical solvent extraction is Soxhlet extraction. In this method, oils from the algae are extracted through repeated washing, or percolation, with an organic solvent such as hexane or petroleum ether, under reflux in special glassware. The value of this technique is that the solvent is reused for each cycle. Enzymatic extraction uses enzymes to degrade the cell walls, with water acting as the solvent. This makes fractionation of the oil much easier. The costs of this extraction process are estimated to be much greater than hexane extraction. Supercritical CO2 can also be used as a solvent. In this method, CO2 is liquefied under pressure and heated to the point that it becomes supercritical (having properties of both a liquid and a gas), allowing it to act as a solvent. Other methods are still being developed, including ones to extract specific types of oils, such as those with a high production of long-chain highly unsaturated fatty acids. Algal culture collections Specific algal strains can be acquired from algal culture collections, with over 500 culture collections registered with the World Federation for Culture Collections.
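As a worked illustration of the continuous photobioreactor operating rule quoted in the photobioreactor section above (maximum productivity when the volume-exchange time equals the culture's doubling time), the sketch below computes the implied feed and harvest flow. The reactor volume and doubling time are assumed example values, not figures from this article.

```python
import math

def feed_rate_l_per_day(reactor_volume_l, doubling_time_h):
    """Flow needed so that one reactor volume is exchanged per doubling time,
    i.e. the exchange rate matches the doubling time, as described above.
    """
    exchanges_per_day = 24.0 / doubling_time_h
    return reactor_volume_l * exchanges_per_day

def specific_growth_rate_per_day(doubling_time_h):
    """Specific growth rate mu (per day) implied by exponential growth."""
    return math.log(2) * 24.0 / doubling_time_h

# Hypothetical 1,000 L reactor with a 24-hour doubling time:
print(feed_rate_l_per_day(1000, 24))               # 1000.0 L/day fed and harvested
print(round(specific_growth_rate_per_day(24), 3))  # 0.693 per day
```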
Technology
Aquaculture
null
4174880
https://en.wikipedia.org/wiki/Goliathus
Goliathus
The Goliath beetles (named after the biblical giant Goliath) are any of the six species in the genus Goliathus. Goliath beetles are among the largest insects on Earth, if measured in terms of size, bulk and weight. They are members of subfamily Cetoniinae, within the family Scarabaeidae. Goliath beetles can be found in many of Africa's tropical forests, where they feed primarily on tree sap and fruit. Little appears to be known of the larval cycle in the wild, but in captivity, Goliathus beetles have been successfully reared from egg to adult using protein-rich foods, such as commercial cat and dog food. Goliath beetles measure from for males and for females, as adults, and can reach weights of up to in the larval stage, though the adults are only about half this weight. The females range from a dark chestnut brown to silky white, but the males are normally brown/white/black or black/white. Goliath beetles, while not currently evaluated on the IUCN Red List, are facing growing conservation challenges across their African range due to habitat loss, over-collection for the international pet trade, and the potential impacts of climate change. Species There are six species of Goliath beetles, with several different subspecies and forms only partially described: Goliathus albosignatus Boheman, 1857 Goliathus cacicus (Olivier, 1789) Goliathus goliatus (Linnaeus, 1771) Goliathus kolbei (Kraatz, 1895) Goliathus orientalis Moser, 1909 Goliathus regius Klug, 1835 Life cycle Goliathus larvae are somewhat unusual among cetoniine scarabs in that they have a greater need for high-protein foods than do those of most other genera. Pellets of dry or soft dog or cat food (buried in the rearing substrate on a regular schedule) provide a suitable diet for Goliathus larvae in captivity. However, a substrate of somewhat moistened, decayed leaves and wood should still be provided in order to create a suitable medium for larval growth. The young stage larvae (1st instar) will eat some of this material. Even under optimum conditions, the larvae take a number of months to mature fully because of the great size they attain. They are capable of growing up to in length and reaching weights in excess of . When maximum size is reached, the larva constructs a rather thin-walled, hardened cell of sandy soil in which it will undergo pupation and metamorphose to the adult state. Once building of this cocoon is completed, the larva transforms to the pupal stage, which is an intermediate phase between the larval and adult stages. During the pupal duration, the insect's tissues are broken down and re-organized into the form of the adult beetle. Once metamorphosis is complete, the insect sheds its pupal skin and undergoes a period of hibernation as an adult beetle until the dry season ends. When the rains begin, the beetle breaks open its cocoon, locates a mate, and the entire life cycle starts over again. The adult beetles feed on materials rich in sugar, especially tree sap and fruit. Under captive conditions, adults can sometimes live for about a year after emerging from their pupal cells. Longevity in the wild is likely to be shorter on average due to factors such as predators and weather. The adult phase concentrates solely on reproduction, and once this function is performed, the time of the adult beetle is limited, as is true for the vast majority of other insect species. Description The bulky bodies of Goliath beetles are composed of a thick and hardened exoskeleton, which protects their organs and hindwings. 
Like most beetles, they possess reinforced forewings (called elytra) that act as protective covers for their hindwings and abdomen. Only the hindwings (which are large and membranous) are actually used for flying, while the elytra are kept completely closed; flying with closed elytra is universal among cetoniine scarabs but rare in other beetles. When not in use, the wings are kept completely folded beneath the elytra. Each of the beetle's legs ends in a pair of sharp claws, which provide a strong grip used for climbing on tree trunks and branches. Males have a Y-shaped horn on the head, which is used as a pry bar in battles with other males over feeding sites or mates. Females lack horns and instead have a wedge-shaped head that assists in burrowing when they lay eggs. In addition to their massive size, Goliath beetles are strikingly patterned; prominent markings common to all of the Goliathus species are the sharply contrasting black vertical stripes on the pronotum (thoracic shield), while the various species may be most reliably distinguished based on their distinctive mix of elytral colors and patterns.
Biology and health sciences
Beetles (Coleoptera)
Animals
4174969
https://en.wikipedia.org/wiki/Castorocauda
Castorocauda
Castorocauda is an extinct, semi-aquatic, superficially otter-like genus of docodont mammaliaforms with one species, C. lutrasimilis. It is part of the Yanliao Biota, found in the Daohugou Beds of Inner Mongolia, China dating to the Middle to Late Jurassic. It was part of an explosive Middle Jurassic radiation of Mammaliaformes moving into diverse habitats and niches. Its discovery in 2006, along with the discovery of other unusual mammaliaforms, disproves the previous hypothesis of Mammaliaformes remaining evolutionarily stagnant until the extinction of the non-avian dinosaurs at the end of the Mesozoic. Weighing an estimated , Castorocauda is the largest known Jurassic mammaliaform. It is the earliest known mammaliaform with aquatic adaptations or a fur pelt. It was also adapted for digging, and its teeth are similar to those of seals and Eocene whales, collectively suggesting it behaved similarly to the modern-day platypus and river otters and ate primarily fish. It lived in a wet, seasonal, cool temperate environment – which possibly had an average temperature not exceeding – alongside salamanders, pterosaurs, birdlike dinosaurs, and other mammaliaforms. Discovery and etymology The holotype specimen, JZMP04117, was discovered in the Daohugou Beds of the Jiulongshan Formation in the Inner Mongolia region of China, which dates to about 159–164 million years ago (mya) in the Middle to Late Jurassic. It comprises a partial skeleton including an incomplete skull but well-preserved lower jaws, most of the ribs, the limbs (save for the right hind leg), the pelvis and the tail. The remains are so well preserved that there are elements of its soft anatomy and hair. The genus name Castorocauda derives from Latin Castor "beaver" and cauda "tail", in reference to its presumed beaver-like tail. The species name lutrasimilis derives from Latin lutra "otter" and similis "similar", because some aspects of its teeth and vertebrae are similar to modern otters. Description Castorocauda was the largest of known docodonts. The preserved length from head to tail is , but in life it was much larger. Based on the dimensions of the platypus, the lower weight limit was estimated to be in life, and the upper , making it the largest known Jurassic mammaliaform, surpassing the previous record of for Sinoconodon. It had specialized teeth that curve backwards to help it hold onto slippery fish, as seen in modern seals and also ancestral whales. The first two molars have cusps in a straight row, and interlocked during biting. This feature is similar to the ancestral condition in Mammaliaformes (such as in triconodonts) but is a derived character (it was specially evolved instead of inherited) in Castorocauda. The lower jaw contained 4 incisors, 1 canine, 5 premolars and 6 molars. The forelimbs of Castorocauda are very similar to those of the modern platypus: the humerus widens towards the elbow; the forearm bones have hypertrophied (large) epicondyles (where the joint attaches); the radial and ulnar joints are widely separated; the ulna has a massive olecranon (where it attaches to the elbow); the wrist bones are block-like; and the finger bones are robust. Docodontans were likely burrowing creatures and had a sprawling gait, and Castorocauda may have also used its arms for rowing, similar to the platypus. There are traces of soft tissue between the toes, suggesting webbed hind feet. It likely also had claws, and the holotype shows a spur on the hind ankle, which, in male platypuses, is venomous. 
Castorocauda likely had 14 thoracic, 7 lumbar, 3 sacral and 25 tail vertebrae. Like some mammals, it had plated ribs, and the ribs extended into the lumbar vertebrae. Plating occurred on the proximal margins (the part of the rib closest to the vertebra), and, in Castorocauda, they may have served to increase the insertion area (the part of a muscle which moves while contracting) of the iliocostalis muscle on the back, which would interlock nearby ribs and better support the torso of the animal. Plated ribs are present in arboreal (tree-dwelling) and fossorial (burrowing) xenarthrans (sloths, anteaters, armadillos and relatives). The tail vertebrae are flattened dorsoventrally (shortened vertically and widened more horizontally); and each centrum has two pairs of transverse processes (which jut out diagonally from the centrum) on the headward side and another on the tailward side, making the centrum appear somewhat like the letter H from the top-view looking down. This tail anatomy is similar to beavers and otters, which use their tails for paddling and propulsion. Fur was preserved on the holotype, and it is the earliest known pelt; this showed that fur, with its many uses including heat retention and as a tactile sense, was an ancestral trait of mammals. Mammals preserved with fur from the Chinese Yixian Formation show little hair on the tail, whereas the fur outline preserved on the Castorocauda tail was 50% wider than the pelvis. The first quarter is covered by guard hairs, the middle half by scales and little hair cover and the last quarter by scales with some guard hair. Beavers have a very similar tail. Evidence of fur and assumed heightened tactile senses indicate it had a well-developed neocortex, a portion of the brain unique to mammals which, among other things, controls sensory perception. Taxonomy Castorocauda is a member of the order Docodonta, an extinct group of mammaliaforms. Mammaliaformes includes mammal-like creatures and the crown mammals (all descendants, living or extinct, of the last common ancestor of all living mammals). Docodonts are not crown mammals. When Castorocauda was first described in 2006, it was thought to be most closely related to the European Krusatodon and Simpsonodon. In a 2010 review of docodonts, Docodonta was split into Docodontidae, Simpsonodontidae and Tegotheriidae, with Castorocauda considered incertae sedis with indeterminate affinities. Simpsonodontidae is now considered to be paraphyletic and thus invalid, and Castorocauda appears to have been most closely related to Dsungarodon, which came from the Junggar Basin of China and probably ate plants and soft invertebrates. Castorocauda is part of a Middle Jurassic mammaliaform diversification event, wherein mammaliaforms radiated into a wide array of niches and evolved several modern traits, such as more modern mammalian teeth and middle ear bones. It was previously thought that mammals were small and ground-dwelling until the Cretaceous–Paleogene boundary (K–Pg boundary) when dinosaurs went extinct. The discovery of Castorocauda, and evidence for an explosive diversification in the Middle Jurassic – such as the appearance of eutriconodontans, multituberculates, australosphenidans, metatherians and eutherians, among others – disproves this notion. 
This may have been caused by the breakup of Pangaea, which started in the Early to Middle Jurassic and diversified habitats and niches, or by modern traits that had been slowly accumulating since mammaliaforms evolved until reaching a critical point which allowed for a massive expansion into different habitats. Paleoecology Castorocauda is the earliest known aquatic mammaliaform, pushing back the first appearance of mammaliaform aquatic adaptations by over 100 million years. The teeth interlocked while biting, suggesting that they were strictly used for gripping; the recurved molars were likely used to hold slippery prey; and the teeth shapes are convergent with seals and Eocene whales, suggesting a similar ecological standing. Based on these features, its adaptations to swimming and digging, and its large size, Castorocauda was probably comparable to the modern-day platypus, river otters and similar semi-aquatic mammals in ecology and fed primarily on fish (piscivory). The Daohugou Beds also include several salamanders, numerous pterosaur species (many of which were likely piscivorous), several insects, the clam shrimp Euestheria and some birdlike dinosaurs. No fish are known specifically from the Daohugou Beds, but the related Linglongta locality contains undetermined ptycholepiformes. Other mammals include the flying-squirrel-like Volaticotherium, the burrowing Pseudotribos, the oldest known eutherian Juramaia, the rat-like Megaconus and the gliding Arboroharamiya. The plant life of the Tiaojishan Formation was dominated by cycadeoids (mainly Nilssonia and Ctenis), leptosporangiate ferns and ginkgophytes and has pollen remains predominantly from pteridophytes and gymnosperms, which indicate a cool temperate and wet climate with distinct wet and dry seasons, possibly with an annual temperature of below .
Biology and health sciences
Stem-mammals
Animals
8763148
https://en.wikipedia.org/wiki/Urban%20horticulture
Urban horticulture
Urban horticulture is the science and study of growing plants in an urban environment. It focuses on the functional use of horticulture so as to maintain and improve the surrounding urban area. Urban horticulture has seen an increase in attention with the global trend of urbanization and works to study the harvest, aesthetic, architectural, recreational and psychological purposes and effects of plants in urban environments. History Horticulture and the integration of nature into human civilization have been a major part of the establishment of cities. During the Neolithic Revolution, cities were often built with market gardens and farms as their trading centers. Studies in urban horticulture rapidly increased with the major growth of cities during the Industrial Revolution, and these insights were then dispersed to farmers in the hinterlands. For centuries, the built environment of homes, public buildings and the like was integrated with cultivation in the form of kitchen gardens, farms, common grazing land and similar spaces. Horticulture was therefore a regular part of everyday life in the city. The Industrial Revolution and its rapidly increasing populations then changed the landscape and replaced green spaces with brick and asphalt. After the nineteenth century, horticulture was selectively restored in some urban spaces as a response to the unhealthy conditions of factory neighborhoods, and cities began seeing the development of parks. Post World War II trends Early urban horticulture movements mainly served the purposes of short-term welfare during recessions, philanthropic charity to uplift "the masses", or patriotic relief. The tradition of urban horticulture mostly declined after World War II as suburbs became the focus of residential and commercial growth. Most of the economically stable population moved out of the cities into the suburbs, leaving only slums and ghettos at the city centers. There were a few exceptions, such as garden projects initiated by public housing authorities in the 1950s and 1960s for the purposes of beautification and tenant pride. For the most part, however, as businesses also left the metropolitan areas, wastelands and areas of segregated poverty were left behind. Inevitably, the disinvestment in major city centers, particularly in America, resulted in a drastic increase in vacant lots. Existing buildings became uninhabitable, houses were abandoned and even productive industrial land became vacant. Modern community gardening, urban agriculture, and food security movements were a response to these problems at a local level. Other movements of that time, such as the peace, environmental, women's, civil rights, and "back-to-the-city" movements of the 1960s and 1970s and the environmental justice movement of the 1980s and 1990s, saw opportunity in these vacant lands as a way of reviving communities through school and community gardens, farmers' markets, and urban agriculture. Modern community garden movement Things have changed in the twenty-first century as people recognize the need for local community gardens and green spaces. It is not the concept but the purposes that are new. The main goals of this movement include cleaning up neighborhoods, pushing out the drug dealing that occurs in empty lots, growing and preserving food for consumption, restoring nature to industrial areas, and bringing farming traditions to cities.
Essentially, community gardening is seen as a way of creating a relationship between people and a place through social and physical engagement. Most urban gardens are created on vacant lots that vary in size and are generally gardened as individual plots by community members. Such areas can support social, cultural, and artistic events and contribute to the rebuilding of local community spirit. The modern community garden movement is initiated by neighborhoods with the support of governments and non-profit organizations. Some gardens are linked to public housing projects, to schools through garden-based learning programs, and to churches and social agencies, and some even employ people who are incarcerated. Community gardens, now a large part of the urban horticulture movement, differ from the earlier period of grand park development in that the latter served only to offer people an escape from industrialism. In addition, a community garden is more beneficial and engaging than a mere lawn or park and serves as valuable access to nature where wilderness is unavailable. This movement helped create and sustain relationships between city dwellers and the soil and contributed to a different kind of urban environmentalism that did not have any characteristics of reform charity. Although it has been 30 years since the first community gardens in the US, there is no concrete analysis of current urban gardens and their organizations. The American Community Gardening Association (ACGA) estimates that municipal governments and non-profit organizations operate gardening programs in about 250 cities and towns, although the organization's staff admit that the real number could be twice as large. In a 1994 survey, the National Gardening Association found that 6.7 million households that were not involved in gardening would be interested in doing so if there were a plot nearby. A more recent survey showed that more gardens are being created in cities than are being lost to economic development. Today, urban horticulture includes more than just community gardens, such as market gardens, small farms and farmers' markets, and is an important aspect of community development. Another result of urban horticulture is the food security movement, in which locally grown food is given precedence through several projects and programs, thus providing low-cost and nutritious food. Urban community gardens and the food security movement were a response to the problems of industrial agriculture, such as price inflation, lack of supermarkets, and food scarcity. Benefits Horticulture by itself is a practical and applied science, which means it has significance in our everyday lives. As community gardens cannot actually compete with market-based land uses, it is essential to find other ways to understand their various benefits, such as their contribution to social, human, and financial well-being. Frederick Law Olmsted, the designer of New York City's Central Park, observed that trees, meadows, ponds and wildlife tranquilize the stresses of city life. According to various studies over the years, nature has a very positive impact on human health, even more so in an emotional and psychological sense. Trees, grass, and flower gardens, due to their presence as well as their visibility, increase people's life satisfaction by reducing fatigue and irritation and restoring a sense of calm.
In fact Honeyman tested the restorative value of nature scenes in urban settings and discovered that vegetation in an urban setting produced more mental restoration as opposed to areas without vegetation. In addition, areas with only nature did not have as much of a positive psychological impact as did the combination of urban areas and nature. One of the obvious health benefits of gardening is the increased intake of fruits and vegetables, but the act of gardening itself provides an additional major health benefit. Gardening is a low-impact exercise, which when added into daily activities, can help reduce weight, lower stress, and improve overall health. A recent study showed a reduced body mass index and lower weight in community gardeners compared with their non-gardening counterparts. The study showed men who gardened had a body mass index 2.36 lower and were 62% less likely to be overweight than their neighbors, while women were 46% less likely to be overweight with a body mass index 1.88 lower than their neighbors. Access to urban gardens can improve health through nutritious, edible plantings, as well by getting people outside and promoting more activity in their environments. Gardening programs in inner-city schools have become increasingly popular as a way to teach children not only about healthy eating habits, but also to encourage students to become active learners. Besides getting students outside and moving, and encouraging an active lifestyle, children also learn leadership, teamwork, communication and collaboration skills, in addition to critical and creative thinking skills. Gardening in schools will enable children to share with their families the health and nutrition benefits of eating fresh fruits and vegetables. Because weather and soil conditions are in a state of constant change, students learn to adapt their thinking and creatively problem solve, depending on the situations that arise. Students also learn to interact and communicate with a diverse population of people, from other students to adult volunteers. These programs benefit students' health and enable them to be active contributors in the world around them. Gardens and other green spaces also increase social activity and help in creating a sense of place, apart from their various other purposes such as enhancing the community by mediating environmental factors. There is also a huge disparity in the availability of sources that provide nutritious and affordable foods especially around urban centers which have problems of poverty, lack of public transport and abandonment by supermarkets. Therefore, inner city community gardens can be a valuable source of nutrition at an affordable cost in the most easily accessible way. In order to understand and thereby maximize the benefits of urban horticulture, it is essential to document the effects of horticulture activities and quantify the benefits so that governments and private industries can make the appropriate changes. Horticulturists have always been involved in the botanical and physical aspects of horticulture but an involvement in its social and emotional factors would be highly beneficial to communities, cities and to the field of horticulture and its profession. Based on this, in the 1970s, the International Society for Horticultural Science recognized this need for research on the functional use of plants in an urban setting along with the need of improved communication between scientists in this field of research and people who utilize plants. 
The Commission for Urban Horticulture, established in 1982, deals with plants grown in urban areas, management techniques, and the functional use of these plants, as well as the shortcomings of current knowledge in this field. The establishment of such a commission is an important indicator that this topic has reached a level of international recognition. Economic benefits There are many different economic benefits from gardening, from saving money on food purchases to savings on utility bills. In developing countries, up to 60–80 percent of income can be spent on food alone. In their Journal of Psychology article "The Relative Influence of Psycho-Social Factors on Urban Gardening", Barbara Lake, Taciano Milfont and Michael Gavin say that while people are saving money on food, rooftop gardens are also becoming popular. Green roofs can reduce the cost of heating in the winter and help keep buildings cool in the summer. Green roofs can also lower the cost of roof replacement. As an addition to urban horticulture, green roofs thus let people eat more healthily while also improving the value of their property. Other benefits include increased employment from non-commercial jobs and, for producers, reductions in the cost of food. Production practices Crops are grown in flowerpots, growbags, small gardens or larger fields, using traditional or high-tech and innovative practices. Some new techniques that have been adapted to the urban situation and tackle the main city restrictions are also documented. These include horticultural production on built-up land using various types of substrates (e.g. rooftop production, organic production and hydroponic/aeroponic production). The adaptation of vertical farming methods, such as the use of trellises or tomato cages, is a popular option for urban horticulture. Because of this, it is also known as roof-top vegetable gardening/horticulture and container vegetable gardening/horticulture. Urban horticulture around the world Urban and peri-urban horticulture in Africa A report of the United Nations Food and Agriculture Organization, Growing greener cities in Africa, states that market gardening – i.e. irrigated, commercial production of fruit and vegetables in areas designated for the purpose, or in other urban open spaces – is the single most important source of locally grown, fresh produce in 10 out of 27 African countries for which data are available. Market gardening produces most of the leafy vegetables consumed in Accra, Dakar, Bangui, Brazzaville, Ibadan, Kinshasa and Yaoundé, cities that, between them, have a total population of 22.5 million. Market gardens provide around half of the leafy vegetable supply in Addis Ababa, Bissau and Libreville. The report says that in most of urban Africa, market gardening is an informal and often illegal activity, which has grown with little official recognition, regulation or support. Most gardeners have no formal title to their land, and many could lose it overnight. Land suitable for horticulture is being taken for housing, industry and infrastructure. To maximize earnings from insecure livelihoods, many gardeners overuse pesticides and urban wastewater. Urban horticulture in Latin America Starting in the 1980s, Latin American governments came to see urban agroecology and horticulture not only as an agricultural practice but as a revolutionary tool aimed at restructuring society along more equitable and sustainable lines.
In some cases, urban horticulture fell under the larger umbrella of 'urban agriculture' and was utilized as a way for governments to push for better agricultural policies that helped citizens. In Latin America, institutional support for urban farming practices came through the social reforms of the 1960s and 1970s, where there was a significant push for sustainable and equitable agricultural practices as a response to the failures of the Green Revolution. The Green Revolution as well as globalism prompted South American governments to invest less in agriculture and incentivized higher expenditure on food imports, often due to international economic pressure. These policies hurt national food sovereignty and small-scale farmers, and further exacerbated existing socio-economic inequalities.
Technology
Horticulture
null
6781
https://en.wikipedia.org/wiki/Cytosol
Cytosol
The cytosol, also known as cytoplasmic matrix or groundplasm, is one of the liquids found inside cells (intracellular fluid (ICF)). It is separated into compartments by membranes. For example, the mitochondrial matrix separates the mitochondrion into many compartments. In the eukaryotic cell, the cytosol is surrounded by the cell membrane and is part of the cytoplasm, which also comprises the mitochondria, plastids, and other organelles (but not their internal fluids and structures); the cell nucleus is separate. The cytosol is thus a liquid matrix around the organelles. In prokaryotes, most of the chemical reactions of metabolism take place in the cytosol, while a few take place in membranes or in the periplasmic space. In eukaryotes, while many metabolic pathways still occur in the cytosol, others take place within organelles. The cytosol is a complex mixture of substances dissolved in water. Although water forms the large majority of the cytosol, its structure and properties within cells is not well understood. The concentrations of ions such as sodium and potassium in the cytosol are different to those in the extracellular fluid; these differences in ion levels are important in processes such as osmoregulation, cell signaling, and the generation of action potentials in excitable cells such as endocrine, nerve and muscle cells. The cytosol also contains large amounts of macromolecules, which can alter how molecules behave, through macromolecular crowding. Although it was once thought to be a simple solution of molecules, the cytosol has multiple levels of organization. These include concentration gradients of small molecules such as calcium, large complexes of enzymes that act together and take part in metabolic pathways, and protein complexes such as proteasomes and carboxysomes that enclose and separate parts of the cytosol. Definition The term "cytosol" was first introduced in 1965 by H. A. Lardy, and initially referred to the liquid that was produced by breaking cells apart and pelleting all the insoluble components by ultracentrifugation. Such a soluble cell extract is not identical to the soluble part of the cell cytoplasm and is usually called a cytoplasmic fraction. The term cytosol is now used to refer to the liquid phase of the cytoplasm in an intact cell. This excludes any part of the cytoplasm that is contained within organelles. Due to the possibility of confusion between the use of the word "cytosol" to refer to both extracts of cells and the soluble part of the cytoplasm in intact cells, the phrase "aqueous cytoplasm" has been used to describe the liquid contents of the cytoplasm of living cells. Prior to this, other terms, including hyaloplasm, were used for the cell fluid, not always synonymously, as its nature was not well understood (see protoplasm). Properties and composition The proportion of cell volume that is cytosol varies: for example while this compartment forms the bulk of cell structure in bacteria, in plant cells the main compartment is the large central vacuole. The cytosol consists mostly of water, dissolved ions, small molecules, and large water-soluble molecules (such as proteins). The majority of these non-protein molecules have a molecular mass of less than 300 Da. This mixture of small molecules is extraordinarily complex, as the variety of molecules that are involved in metabolism (the metabolites) is immense. 
For example, up to 200,000 different small molecules might be made in plants, although not all of these will be present in the same species, or in a single cell. Estimates of the number of metabolites in single cells such as E. coli and baker's yeast predict that under 1,000 are made. Water Most of the cytosol is water, which makes up about 70% of the total volume of a typical cell. The pH of the intracellular fluid is 7.4, while mouse cell cytosolic pH ranges between 7.0 and 7.4 and is usually higher if a cell is growing. The viscosity of cytoplasm is roughly the same as that of pure water, although diffusion of small molecules through this liquid is about fourfold slower than in pure water, due mostly to collisions with the large numbers of macromolecules in the cytosol. Studies in the brine shrimp have examined how water affects cell functions; these showed that a 20% reduction in the amount of water in a cell inhibits metabolism, with metabolism decreasing progressively as the cell dries out and all metabolic activity halting when the water level reaches 70% below normal. Although water is vital for life, the structure of this water in the cytosol is not well understood, mostly because methods such as nuclear magnetic resonance spectroscopy only give information on the average structure of water, and cannot measure local variations at the microscopic scale. Even the structure of pure water is poorly understood, due to the ability of water to form structures such as water clusters through hydrogen bonds. The classic view of water in cells is that about 5% of this water is strongly bound by solutes or macromolecules as water of solvation, while the majority has the same structure as pure water. This water of solvation is not active in osmosis and may have different solvent properties, so that some dissolved molecules are excluded, while others become concentrated. However, others argue that the effects of the high concentrations of macromolecules in cells extend throughout the cytosol and that water in cells behaves very differently from the water in dilute solutions. These ideas include the proposal that cells contain zones of low and high-density water, which could have widespread effects on the structures and functions of the other parts of the cell. However, the use of advanced nuclear magnetic resonance methods to directly measure the mobility of water in living cells contradicts this idea, as it suggests that 85% of cell water acts like pure water, while the remainder is less mobile and probably bound to macromolecules. Ions The concentrations of ions in the cytosol are quite different from those in the extracellular fluid, and the cytosol also contains much higher amounts of charged macromolecules such as proteins and nucleic acids than the outside of the cell. In contrast to extracellular fluid, cytosol has a high concentration of potassium ions and a low concentration of sodium ions. This difference in ion concentrations is critical for osmoregulation, since if the ion levels were the same inside a cell as outside, water would enter constantly by osmosis, because the levels of macromolecules inside cells are higher than their levels outside. Instead, sodium ions are expelled and potassium ions are taken up by the Na⁺/K⁺-ATPase; potassium ions then flow down their concentration gradient through potassium-selective ion channels, and this loss of positive charge creates a negative membrane potential.
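The size of the membrane potential produced by such a gradient can be illustrated with the Nernst equation. The following is a minimal sketch, assuming round-number mammalian concentrations (about 140 mM potassium inside versus 5 mM outside, and 12 mM sodium inside versus 145 mM outside); these example values are textbook figures rather than numbers taken from this article.

import math

# Nernst equilibrium potential: E = (R*T / (z*F)) * ln([ion]_outside / [ion]_inside)
R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # absolute temperature in kelvin, roughly body temperature
F = 96485.0    # Faraday constant, C/mol

def nernst_potential_mV(c_out_mM, c_in_mM, z=1):
    """Equilibrium potential in millivolts for an ion of charge z."""
    return 1000.0 * (R * T / (z * F)) * math.log(c_out_mM / c_in_mM)

E_K = nernst_potential_mV(c_out_mM=5.0, c_in_mM=140.0)      # potassium
E_Na = nernst_potential_mV(c_out_mM=145.0, c_in_mM=12.0)    # sodium

print(f"E_K  is about {E_K:.0f} mV")    # roughly -89 mV: potassium leaving makes the inside negative
print(f"E_Na is about {E_Na:+.0f} mV")  # roughly +67 mV: sodium is held far from equilibrium by the pump

Measured resting potentials lie between these two extremes because the membrane is also slightly permeable to sodium and chloride.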
To balance this potential difference, negative chloride ions also exit the cell, through selective chloride channels. The loss of sodium and chloride ions compensates for the osmotic effect of the higher concentration of organic molecules inside the cell. Cells can deal with even larger osmotic changes by accumulating osmoprotectants such as betaines or trehalose in their cytosol. Some of these molecules can allow cells to survive being completely dried out and allow an organism to enter a state of suspended animation called cryptobiosis. In this state the cytosol and osmoprotectants become a glass-like solid that helps stabilize proteins and cell membranes from the damaging effects of desiccation. The low concentration of calcium in the cytosol allows calcium ions to function as a second messenger in calcium signaling. Here, a signal such as a hormone or an action potential opens calcium channel so that calcium floods into the cytosol. This sudden increase in cytosolic calcium activates other signalling molecules, such as calmodulin and protein kinase C. Other ions such as chloride and potassium may also have signaling functions in the cytosol, but these are not well understood. Macromolecules Protein molecules that do not bind to cell membranes or the cytoskeleton are dissolved in the cytosol. The amount of protein in cells is extremely high, and approaches 200 mg/ml, occupying about 20–30% of the volume of the cytosol. However, measuring precisely how much protein is dissolved in cytosol in intact cells is difficult, since some proteins appear to be weakly associated with membranes or organelles in whole cells and are released into solution upon cell lysis. Indeed, in experiments where the plasma membrane of cells were carefully disrupted using saponin, without damaging the other cell membranes, only about one quarter of cell protein was released. These cells were also able to synthesize proteins if given ATP and amino acids, implying that many of the enzymes in cytosol are bound to the cytoskeleton. However, the idea that the majority of the proteins in cells are tightly bound in a network called the microtrabecular lattice is now seen as unlikely. In prokaryotes the cytosol contains the cell's genome, within a structure known as a nucleoid. This is an irregular mass of DNA and associated proteins that control the transcription and replication of the bacterial chromosome and plasmids. In eukaryotes the genome is held within the cell nucleus, which is separated from the cytosol by nuclear pores that block the free diffusion of any molecule larger than about 10 nanometres in diameter. This high concentration of macromolecules in cytosol causes an effect called macromolecular crowding, which is when the effective concentration of other macromolecules is increased, since they have less volume to move in. This crowding effect can produce large changes in both the rates and the position of chemical equilibrium of reactions in the cytosol. It is particularly important in its ability to alter dissociation constants by favoring the association of macromolecules, such as when multiple proteins come together to form protein complexes, or when DNA-binding proteins bind to their targets in the genome. Organization Although the components of the cytosol are not separated into regions by cell membranes, these components do not always mix randomly and several levels of organization can localize specific molecules to defined sites within the cytosol. 
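The volume fractions quoted above for dissolved protein can be checked with a back-of-the-envelope calculation from the protein concentration alone. The sketch below assumes a typical partial specific volume for folded protein and a rough allowance for its tightly bound hydration shell; both of those numbers are assumptions chosen for illustration, not values given in this article.

# Rough estimate of the cytosolic volume taken up by dissolved protein, and the
# crudest possible measure of crowding: how much the free volume shrinks.
protein_conc_g_per_mL = 0.20   # about 200 mg/mL of dissolved protein (figure from the text)
dry_specific_volume = 0.73     # mL/g, typical partial specific volume of folded protein (assumed)
hydration_allowance = 0.40     # mL/g of tightly bound water carried with the protein (assumed)

occupied_fraction = protein_conc_g_per_mL * (dry_specific_volume + hydration_allowance)
free_fraction = 1.0 - occupied_fraction

print(f"Occupied volume fraction: {occupied_fraction:.0%}")   # about 23%, in line with the 20-30% quoted
print(f"Remaining free volume:    {free_fraction:.0%}")
print(f"Naive boost in effective concentration for a dilute solute: x{1.0 / free_fraction:.2f}")

Even this crude estimate shows why the cytosol behaves nothing like a dilute solution; proper excluded-volume treatments of crowding predict considerably larger effects on association equilibria than the simple factor printed here.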
Concentration gradients Although small molecules diffuse rapidly in the cytosol, concentration gradients can still be produced within this compartment. A well-studied example of these are the "calcium sparks" that are produced for a short period in the region around an open calcium channel. These are about 2 micrometres in diameter and last for only a few milliseconds, although several sparks can merge to form larger gradients, called "calcium waves". Concentration gradients of other small molecules, such as oxygen and adenosine triphosphate may be produced in cells around clusters of mitochondria, although these are less well understood. Protein complexes Proteins can associate to form protein complexes, these often contain a set of proteins with similar functions, such as enzymes that carry out several steps in the same metabolic pathway. This organization can allow substrate channeling, which is when the product of one enzyme is passed directly to the next enzyme in a pathway without being released into solution. Channeling can make a pathway more rapid and efficient than it would be if the enzymes were randomly distributed in the cytosol, and can also prevent the release of unstable reaction intermediates. Although a wide variety of metabolic pathways involve enzymes that are tightly bound to each other, others may involve more loosely associated complexes that are very difficult to study outside the cell. Consequently, the importance of these complexes for metabolism in general remains unclear. Protein compartments Some protein complexes contain a large central cavity that is isolated from the remainder of the cytosol. One example of such an enclosed compartment is the proteasome. Here, a set of subunits form a hollow barrel containing proteases that degrade cytosolic proteins. Since these would be damaging if they mixed freely with the remainder of the cytosol, the barrel is capped by a set of regulatory proteins that recognize proteins with a signal directing them for degradation (a ubiquitin tag) and feed them into the proteolytic cavity. Another large class of protein compartments are bacterial microcompartments, which are made of a protein shell that encapsulates various enzymes. These compartments are typically about 100–200 nanometres across and made of interlocking proteins. A well-understood example is the carboxysome, which contains enzymes involved in carbon fixation such as RuBisCO. Biomolecular condensates Non-membrane bound organelles can form as biomolecular condensates, which arise by clustering, oligomerisation, or polymerisation of macromolecules to drive colloidal phase separation of the cytoplasm or nucleus. Cytoskeletal sieving Although the cytoskeleton is not part of the cytosol, the presence of this network of filaments restricts the diffusion of large particles in the cell. For example, in several studies tracer particles larger than about 25 nanometres (about the size of a ribosome) were excluded from parts of the cytosol around the edges of the cell and next to the nucleus. These "excluding compartments" may contain a much denser meshwork of actin fibres than the remainder of the cytosol. These microdomains could influence the distribution of large structures such as ribosomes and organelles within the cytosol by excluding them from some areas and concentrating them in others. Function The cytosol is the site of multiple cell processes. 
Examples of these processes include signal transduction from the cell membrane to sites within the cell, such as the cell nucleus, or organelles. This compartment is also the site of many of the processes of cytokinesis, after the breakdown of the nuclear membrane in mitosis. Another major function of cytosol is to transport metabolites from their site of production to where they are used. This is relatively simple for water-soluble molecules, such as amino acids, which can diffuse rapidly through the cytosol. However, hydrophobic molecules, such as fatty acids or sterols, can be transported through the cytosol by specific binding proteins, which shuttle these molecules between cell membranes. Molecules taken into the cell by endocytosis or on their way to be secreted can also be transported through the cytosol inside vesicles, which are small spheres of lipids that are moved along the cytoskeleton by motor proteins. The cytosol is the site of most metabolism in prokaryotes, and a large proportion of the metabolism of eukaryotes. For instance, in mammals about half of the proteins in the cell are localized to the cytosol. The most complete data are available in yeast, where metabolic reconstructions indicate that the majority of both metabolic processes and metabolites occur in the cytosol. Major metabolic pathways that occur in the cytosol in animals are protein biosynthesis, the pentose phosphate pathway, glycolysis and gluconeogenesis. The localization of pathways can be different in other organisms, for instance fatty acid synthesis occurs in chloroplasts in plants and in apicoplasts in apicomplexa.
Biology and health sciences
Cell parts
Biology
6794
https://en.wikipedia.org/wiki/Comet%20Shoemaker%E2%80%93Levy%209
Comet Shoemaker–Levy 9
Comet Shoemaker–Levy 9 (formally designated D/1993 F2) was a comet that broke apart in July 1992 and collided with Jupiter in July 1994, providing the first direct observation of an extraterrestrial collision of Solar System objects. This generated a large amount of coverage in the popular media, and the comet was closely observed by astronomers worldwide. The collision provided new information about Jupiter and highlighted its possible role in reducing space debris in the inner Solar System. The comet was discovered by astronomers Carolyn and Eugene M. Shoemaker, and David Levy in 1993. Shoemaker–Levy 9 (SL9) had been captured by Jupiter and was orbiting the planet at the time. It was located on the night of March 24 in a photograph taken with the Schmidt telescope at the Palomar Observatory in California. It was the first active comet observed to be orbiting a planet, and had probably been captured by Jupiter around 20 to 30 years earlier. Calculations showed that its unusual fragmented form was due to a previous closer approach to Jupiter in July 1992. At that time, the orbit of Shoemaker–Levy 9 passed within Jupiter's Roche limit, and Jupiter's tidal forces had acted to pull the comet apart. The comet was later observed as a series of fragments ranging up to in diameter. These fragments collided with Jupiter's southern hemisphere between July 16 and 22, 1994 at a speed of approximately (Jupiter's escape velocity) or . The prominent scars from the impacts were more visible than the Great Red Spot and persisted for many months. Discovery While conducting a program of observations designed to uncover near-Earth objects, the Shoemakers and Levy discovered Comet Shoemaker–Levy 9 on the night of March 24, 1993, in a photograph taken with the Schmidt telescope at the Palomar Observatory in California. The comet was thus a serendipitous discovery, but one that quickly overshadowed the results from their main observing program. Comet Shoemaker–Levy 9 was the ninth periodic comet (a comet whose orbital period is 200 years or less) discovered by the Shoemakers and Levy, thence its name. It was their eleventh comet discovery overall including their discovery of two non-periodic comets, which use a different nomenclature. The discovery was announced in IAU Circular 5725 on March 26, 1993. The discovery image gave the first hint that comet Shoemaker–Levy 9 was an unusual comet, as it appeared to show multiple nuclei in an elongated region about 50 arcseconds long and 10 arcseconds wide. Brian G. Marsden of the Central Bureau for Astronomical Telegrams noted that the comet lay only about 4 degrees from Jupiter as seen from Earth, and that although this could be a line-of-sight effect, its apparent motion in the sky suggested that the comet was physically close to the planet. Comet with a Jovian orbit Orbital studies of the new comet soon revealed that it was orbiting Jupiter rather than the Sun, unlike all other comets known at the time. Its orbit around Jupiter was very loosely bound, with a period of about 2 years and an apoapsis (the point in the orbit farthest from the planet) of . Its orbit around the planet was highly eccentric (e = 0.9986). Tracing back the comet's orbital motion revealed that it had been orbiting Jupiter for some time. It is likely that it was captured from a solar orbit in the early 1970s, although the capture may have occurred as early as the mid-1960s. 
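The scale of this loosely bound orbit can be reconstructed from the quoted period and eccentricity using Kepler's third law. The sketch below is a rough estimate only: Jupiter's gravitational parameter, a nominal two-year period and the quoted eccentricity are fed in as rounded inputs, so the results are order-of-magnitude figures rather than precisely fitted orbital elements.

import math

# Kepler's third law for a small body orbiting Jupiter: a^3 = mu * T^2 / (4 * pi^2)
mu_jupiter = 1.267e17                 # Jupiter's GM in m^3/s^2
period_s = 2.0 * 365.25 * 86400.0     # a nominal two-year orbital period, in seconds
ecc = 0.9986                          # eccentricity quoted for the Jovian orbit

a = (mu_jupiter * period_s**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)   # semi-major axis
apojove = a * (1.0 + ecc)             # farthest point of the orbit from Jupiter
perijove = a * (1.0 - ecc)            # closest point of the orbit to Jupiter

AU = 1.496e11
print(f"semi-major axis: about {a / 1e9:.0f} million km ({a / AU:.2f} AU)")
print(f"apojove:         about {apojove / 1e9:.0f} million km ({apojove / AU:.2f} AU)")
print(f"formal perijove: about {perijove / 1e3:,.0f} km from Jupiter's centre")

With these rounded inputs the apojove comes out at a few tenths of an astronomical unit, and the formal perijove falls below Jupiter's roughly 71,500 km radius, consistent with an orbit that eventually intersected the planet; the actual perijove of the July 1992 pass was just above the cloud tops.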
Several other observers found images of the comet in precovery images obtained before March 24, including Kin Endate from a photograph exposed on March 15, Satoru Otomo on March 17, and a team led by Eleanor Helin from images on March 19. An image of the comet on a Schmidt photographic plate taken on March 19 was identified on March 21 by M. Lindgren, in a project searching for comets near Jupiter. However, as his team were expecting comets to be inactive or at best exhibit a weak dust coma, and SL9 had a peculiar morphology, its true nature was not recognised until the official announcement 5 days later. No precovery images dating back to earlier than March 1993 have been found. Before the comet was captured by Jupiter, it was probably a short-period comet with an aphelion just inside Jupiter's orbit, and a perihelion interior to the asteroid belt. The volume of space within which an object can be said to orbit Jupiter is defined by Jupiter's Hill sphere. When the comet passed Jupiter in the late 1960s or early 1970s, it happened to be near its aphelion, and found itself slightly within Jupiter's Hill sphere. Jupiter's gravity nudged the comet towards it. Because the comet's motion with respect to Jupiter was very small, it fell almost straight toward Jupiter, which is why it ended up on a Jove-centric orbit of very high eccentricity—that is to say, the ellipse was nearly flattened out. The comet had apparently passed extremely close to Jupiter on July 7, 1992, just over above its cloud tops—a smaller distance than Jupiter's radius of , and well within the orbit of Jupiter's innermost moon Metis and the planet's Roche limit, inside which tidal forces are strong enough to disrupt a body held together only by gravity. Although the comet had approached Jupiter closely before, the July 7 encounter seemed to be by far the closest, and the fragmentation of the comet is thought to have occurred at this time. Each fragment of the comet was denoted by a letter of the alphabet, from "fragment A" through to "fragment W", a practice already established from previously observed fragmented comets. More exciting for planetary astronomers was that the best orbital calculations suggested that the comet would pass within of the center of Jupiter, a distance smaller than the planet's radius, meaning that there was an extremely high probability that SL9 would collide with Jupiter in July 1994. Studies suggested that the train of nuclei would plow into Jupiter's atmosphere over a period of about five days. Predictions for the collision The discovery that the comet was likely to collide with Jupiter caused great excitement within the astronomical community and beyond, as astronomers had never before seen two significant Solar System bodies collide. Intense studies of the comet were undertaken, and as its orbit became more accurately established, the possibility of a collision became a certainty. The collision would provide a unique opportunity for scientists to look inside Jupiter's atmosphere, as the collisions were expected to cause eruptions of material from the layers normally hidden beneath the clouds. Astronomers estimated that the visible fragments of SL9 ranged in size from a few hundred metres (around ) to across, suggesting that the original comet may have had a nucleus up to across—somewhat larger than Comet Hyakutake, which became very bright when it passed close to the Earth in 1996. 
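The two distances invoked in this passage, Jupiter's Hill sphere and its Roche limit, both follow from short formulas. The sketch below uses rounded physical constants and an assumed comet density of 0.5 g/cm3 (a commonly cited figure for a weakly bound comet, used here purely as an assumption); it is meant to show the scales involved, not to reproduce any particular published calculation.

# Rough scales for gravitational capture and tidal disruption by Jupiter.
a_jupiter = 7.78e11        # Jupiter's mean distance from the Sun, m (about 5.2 AU)
m_jupiter = 1.898e27       # kg
m_sun = 1.989e30           # kg
r_jupiter = 7.15e7         # Jupiter's radius, m (about 71,500 km)
rho_jupiter = 1330.0       # Jupiter's bulk density, kg/m^3
rho_comet = 500.0          # assumed density of a weakly bound comet, kg/m^3

# Hill sphere: the region in which Jupiter's gravity dominates over the Sun's tide.
r_hill = a_jupiter * (m_jupiter / (3.0 * m_sun)) ** (1.0 / 3.0)

# Fluid Roche limit: inside this distance, tides overwhelm the self-gravity of a
# body held together only by its own gravity.
d_roche = 2.44 * r_jupiter * (rho_jupiter / rho_comet) ** (1.0 / 3.0)

print(f"Hill-sphere radius: about {r_hill / 1e9:.0f} million km ({r_hill / 1.496e11:.2f} AU)")
print(f"Fluid Roche limit:  about {d_roche / 1e3:,.0f} km, or {d_roche / r_jupiter:.1f} Jupiter radii")

A pass that skims the cloud tops, as in July 1992, is therefore deep inside the Roche limit for any reasonable choice of comet density, which is why tidal disruption into a train of fragments is the expected outcome.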
One of the great debates in advance of the impact was whether the effects of the impact of such small bodies would be noticeable from Earth, apart from a flash as they disintegrated like giant meteors. The most optimistic prediction was that large, asymmetric ballistic fireballs would rise above the limb of Jupiter and into sunlight to be visible from Earth. Other suggested effects of the impacts were seismic waves travelling across the planet, an increase in stratospheric haze on the planet due to dust from the impacts, and an increase in the mass of the Jovian ring system. However, given that observing such a collision was completely unprecedented, astronomers were cautious with their predictions of what the event might reveal. Impacts Anticipation grew as the predicted date for the collisions approached, and astronomers trained terrestrial telescopes on Jupiter. Several space observatories did the same, including the Hubble Space Telescope, the ROSAT X-ray-observing satellite, the W. M. Keck Observatory, and the Galileo spacecraft, then on its way to a rendezvous with Jupiter scheduled for 1995. Although the impacts took place on the side of Jupiter hidden from Earth, Galileo, then at a distance of from the planet, was able to see the impacts as they occurred. Jupiter's rapid rotation brought the impact sites into view for terrestrial observers a few minutes after the collisions. Two other space probes made observations at the time of the impact: the Ulysses spacecraft, primarily designed for solar observations, was pointed toward Jupiter from its location away, and the distant Voyager 2 probe, some from Jupiter and on its way out of the Solar System following its encounter with Neptune in 1989, was programmed to look for radio emission in the 1–390 kHz range and make observations with its ultraviolet spectrometer. Astronomer Ian Morison described the impacts as following: The first impact occurred at 20:13 UTC on July 16, 1994, when fragment A of the [comet's] nucleus slammed into Jupiter's southern hemisphere at about . Instruments on Galileo detected a fireball that reached a peak temperature of about , compared to the typical Jovian cloud-top temperature of about . It then expanded and cooled rapidly to about . The plume from the fireball quickly reached a height of over and was observed by the HST. A few minutes after the impact fireball was detected, Galileo measured renewed heating, probably due to ejected material falling back onto the planet. Earth-based observers detected the fireball rising over the limb of the planet shortly after the initial impact. Despite published predictions, astronomers had not expected to see the fireballs from the impacts and did not have any idea how visible the other atmospheric effects of the impacts would be from Earth. Observers soon saw a huge dark spot after the first impact; the spot was visible from Earth. This and subsequent dark spots were thought to have been caused by debris from the impacts, and were markedly asymmetric, forming crescent shapes in front of the direction of impact. Over the next six days, 21 distinct impacts were observed, with the largest coming on July 18 at 07:33 UTC when fragment G struck Jupiter. This impact created a giant dark spot over (almost one Earth diameter) across, and was estimated to have released an energy equivalent to 6,000,000 megatons of TNT (600 times the world's nuclear arsenal). 
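The energy figure quoted for fragment G can be sanity-checked from kinetic energy alone. The sketch below assumes the roughly 60 km/s impact speed mentioned earlier, a low cometary density, and a range of fragment diameters; all of these are assumptions chosen for illustration, and the spread of results shows why published energy estimates for the impacts vary so widely.

import math

# Kinetic energy of a cometary fragment striking Jupiter, expressed in megatons of TNT.
impact_speed = 60e3     # m/s, roughly Jupiter's escape velocity (assumed)
density = 500.0         # kg/m^3, a low ice-and-void cometary density (assumed)
MEGATON = 4.184e15      # joules per megaton of TNT

for diameter_km in (0.5, 1.0, 2.0, 3.0):
    radius_m = diameter_km * 1e3 / 2.0
    mass = density * (4.0 / 3.0) * math.pi * radius_m**3     # mass of a spherical fragment
    energy_mt = 0.5 * mass * impact_speed**2 / MEGATON
    print(f"{diameter_km:4.1f} km fragment: roughly {energy_mt:,.0f} Mt of TNT")

On these assumptions a one-kilometre fragment delivers on the order of a hundred thousand megatons, and the quoted 6,000,000 megatons for fragment G corresponds to a body a few kilometres across or a somewhat denser one, so the size and energy estimates bracket each other only loosely.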
Two impacts 12 hours apart on July 19 created impact marks of similar size to that caused by fragment G, and impacts continued until July 22, when fragment W struck the planet. Observations and discoveries Chemical studies Observers hoped that the impacts would give them a first glimpse of Jupiter beneath the cloud tops, as lower material was exposed by the comet fragments punching through the upper atmosphere. Spectroscopic studies revealed absorption lines in the Jovian spectrum due to diatomic sulfur (S2) and carbon disulfide (CS2), the first detection of either in Jupiter, and only the second detection of S2 in any astronomical object. Other molecules detected included ammonia (NH3) and hydrogen sulfide (H2S). The amount of sulfur implied by the quantities of these compounds was much greater than the amount that would be expected in a small cometary nucleus, showing that material from within Jupiter was being revealed. Oxygen-bearing molecules such as sulfur dioxide were not detected, to the surprise of astronomers. As well as these molecules, emission from heavy atoms such as iron, magnesium and silicon were detected, with abundances consistent with what would be found in a cometary nucleus. Although a substantial amount of water was detected spectroscopically, it was not as much as predicted, meaning that either the water layer thought to exist below the clouds was thinner than predicted, or that the cometary fragments did not penetrate deeply enough. Waves As predicted, the collisions generated enormous waves that swept across Jupiter at speeds of and were observed for over two hours after the largest impacts. The waves were thought to be travelling within a stable layer acting as a waveguide, and some scientists thought the stable layer must lie within the hypothesised tropospheric water cloud. However, other evidence seemed to indicate that the cometary fragments had not reached the water layer, and the waves were instead propagating within the stratosphere. Other observations Radio observations revealed a sharp increase in continuum emission at a wavelength of after the largest impacts, which peaked at 120% of the normal emission from the planet. This was thought to be due to synchrotron radiation, caused by the injection of relativistic electrons—electrons with velocities near the speed of light—into the Jovian magnetosphere by the impacts. About an hour after fragment K entered Jupiter, observers recorded auroral emission near the impact region, as well as at the antipode of the impact site with respect to Jupiter's strong magnetic field. The cause of these emissions was difficult to establish due to a lack of knowledge of Jupiter's internal magnetic field and of the geometry of the impact sites. One possible explanation was that upwardly accelerating shock waves from the impact accelerated charged particles enough to cause auroral emission, a phenomenon more typically associated with fast-moving solar wind particles striking a planetary atmosphere near a magnetic pole. Some astronomers had suggested that the impacts might have a noticeable effect on the Io torus, a torus of high-energy particles connecting Jupiter with the highly volcanic moon Io. High resolution spectroscopic studies found that variations in the ion density, rotational velocity, and temperatures at the time of impact and afterwards were within the normal limits. 
Voyager 2 failed to detect anything with calculations, showing that the fireballs were just below the craft's limit of detection; no abnormal levels of UV radiation or radio signals were registered after the blast. Ulysses also failed to detect any abnormal radio frequencies. Post-impact analysis Several models were devised to compute the density and size of Shoemaker–Levy 9. Its average density was calculated to be about ; the breakup of a much less dense comet would not have resembled the observed string of objects. The size of the parent comet was calculated to be about in diameter. These predictions were among the few that were actually confirmed by subsequent observation. One of the surprises of the impacts was the small amount of water revealed compared to prior predictions. Before the impact, models of Jupiter's atmosphere had indicated that the break-up of the largest fragments would occur at atmospheric pressures of anywhere from 30 kilopascals to a few tens of megapascals (from 0.3 to a few hundred bar), with some predictions that the comet would penetrate a layer of water and create a bluish shroud over that region of Jupiter. Astronomers did not observe large amounts of water following the collisions, and later impact studies found that fragmentation and destruction of the cometary fragments in a meteor air burst probably occurred at much higher altitudes than previously expected, with even the largest fragments being destroyed when the pressure reached , well above the expected depth of the water layer. The smaller fragments were probably destroyed before they even reached the cloud layer. Longer-term effects The visible scars from the impacts could be seen on Jupiter for many months. They were extremely prominent, and observers described them as more easily visible than the Great Red Spot. A search of historical observations revealed that the spots were probably the most prominent transient features ever seen on the planet, and that although the Great Red Spot is notable for its striking color, no spots of the size and darkness of those caused by the SL9 impacts had ever been recorded before, or since. Spectroscopic observers found that ammonia and carbon disulfide persisted in the atmosphere for at least fourteen months after the collisions, with a considerable amount of ammonia being present in the stratosphere as opposed to its normal location in the troposphere. Counterintuitively, the atmospheric temperature dropped to normal levels much more quickly at the larger impact sites than at the smaller sites: at the larger impact sites, temperatures were elevated over a region wide, but dropped back to normal levels within a week of the impact. At smaller sites, temperatures 10 K (10 °C; 18 °F) higher than the surroundings persisted for almost two weeks. Global stratospheric temperatures rose immediately after the impacts, then fell to below pre-impact temperatures 2–3 weeks afterwards, before rising slowly to normal temperatures. Frequency of impacts SL9 is not unique in having orbited Jupiter for a time; five comets, including 82P/Gehrels, 147P/Kushida–Muramatsu, and 111P/Helin–Roman–Crockett, are known to have been temporarily captured by the planet. Cometary orbits around Jupiter are unstable, as they will be highly elliptical and likely to be strongly perturbed by the Sun's gravity at apojove (the farthest point on the orbit from the planet). 
By far the most massive planet in the Solar System, Jupiter can capture objects relatively frequently, but the size of SL9 makes it a rarity: one post-impact study estimated that comets in diameter impact the planet once in approximately 500 years and those in diameter do so just once in every 6,000 years. There is very strong evidence that comets have previously been fragmented and collided with Jupiter and its satellites. During the Voyager missions to the planet, planetary scientists identified 13 crater chains on Callisto and three on Ganymede, the origin of which was initially a mystery. Crater chains seen on the Moon often radiate from large craters, and are thought to be caused by secondary impacts of the original ejecta, but the chains on the Jovian moons did not lead back to a larger crater. The impact of SL9 strongly implied that the chains were due to trains of disrupted cometary fragments crashing into the satellites. Impact of July 19, 2009 On July 19, 2009, exactly 15 years after the SL9 impacts, a new black spot about the size of the Pacific Ocean appeared in Jupiter's southern hemisphere. Thermal infrared measurements showed the impact site was warm and spectroscopic analysis detected the production of excess hot ammonia and silica-rich dust in the upper regions of Jupiter's atmosphere. Scientists have concluded that another impact event had occurred, but this time a more compact and stronger object, probably a small undiscovered asteroid, was the cause. Jupiter's role in protection of the inner Solar System The events of SL9's interaction with Jupiter greatly highlighted Jupiter's role in protecting the inner planets from both interstellar and in-system debris by acting as a "cosmic vacuum cleaner" for the Solar System (Jupiter barrier). The planet's strong gravitational influence attracts many small comets and asteroids and the rate of cometary impacts on Jupiter is thought to be between 2,000 and 8,000 times higher than the rate on Earth. The extinction of the non-avian dinosaurs at the end of the Cretaceous period is generally thought to have been caused by the Cretaceous–Paleogene impact event, which created the Chicxulub crater, demonstrating that cometary impacts are indeed a serious threat to life on Earth. Astronomers have speculated that without Jupiter's immense gravity, extinction events might have been more frequent on Earth and complex life might not have been able to develop. This is part of the argument used in the Rare Earth hypothesis. In 2009, it was shown that the presence of a smaller planet at Jupiter's position in the Solar System might increase the impact rate of comets on the Earth significantly. A planet of Jupiter's mass still seems to provide increased protection against asteroids, but the total effect on all orbital bodies within the Solar System is unclear. This and other recent models call into question the nature of Jupiter's influence on Earth impacts.
Physical sciences
Notable comets
Astronomy
6799
https://en.wikipedia.org/wiki/COBOL
COBOL
COBOL (; an acronym for "common business-oriented language") is a compiled English-like computer programming language designed for business use. It is an imperative, procedural, and, since 2002, object-oriented language. COBOL is primarily used in business, finance, and administrative systems for companies and governments. COBOL is still widely used in applications deployed on mainframe computers, such as large-scale batch and transaction processing jobs. Many large financial institutions were developing new systems in the language as late as 2006, but most programming in COBOL today is purely to maintain existing applications. Programs are being moved to new platforms, rewritten in modern languages, or replaced with other software. COBOL was designed in 1959 by CODASYL and was partly based on the programming language FLOW-MATIC, designed by Grace Hopper. It was created as part of a U.S. Department of Defense effort to create a portable programming language for data processing. It was originally seen as a stopgap, but the Defense Department promptly pressured computer manufacturers to provide it, resulting in its widespread adoption. It was standardized in 1968 and has been revised five times. Expansions include support for structured and object-oriented programming. The current standard is ISO/IEC 1989:2023. COBOL statements have prose syntax such as , which was designed to be self-documenting and highly readable to non-programmers such as management. However, it is verbose and uses over 300 reserved words compared to the succinct and mathematically inspired syntax of other languages. The COBOL code is split into four divisions (identification, environment, data, and procedure), containing a rigid hierarchy of sections, paragraphs, and sentences. Lacking a large standard library, the standard specifies 43 statements, 87 functions, and just one class. Academic computer scientists were generally uninterested in business applications when COBOL was created and were not involved in its design; it was (effectively) designed from the ground up as a computer language for business, with an emphasis on inputs and outputs, whose only data types were numbers and strings of text. COBOL has been criticized for its verbosity, design process, and poor support for structured programming. These weaknesses result in monolithic programs that are hard to comprehend as a whole, despite their local readability. For years, COBOL has been assumed as a programming language for business operations in mainframes, although in recent years, many COBOL operations have been moved to cloud computing. History and specification Background In the late 1950s, computer users and manufacturers were becoming concerned about the rising cost of programming. A 1959 survey had found that in any data processing installation, the programming cost US$800,000 on average and that translating programs to run on new hardware would cost US$600,000. At a time when new programming languages were proliferating, the same survey suggested that if a common business-oriented language were used, conversion would be far cheaper and faster. On 8 April 1959, Mary K. Hawes, a computer scientist at Burroughs Corporation, called a meeting of representatives from academia, computer users, and manufacturers at the University of Pennsylvania to organize a formal meeting on common business languages. Representatives included Grace Hopper (inventor of the English-like data processing language FLOW-MATIC), Jean Sammet, and Saul Gorn. 
At the April meeting, the group asked the Department of Defense (DoD) to sponsor an effort to create a common business language. The delegation impressed Charles A. Phillips, director of the Data System Research Staff at the DoD, who thought that they "thoroughly understood" the DoD's problems. The DoD operated 225 computers, had 175 more on order, and had spent over $200 million on implementing programs to run on them. Portable programs would save time, reduce costs, and ease modernization. Charles Phillips agreed to sponsor the meeting, and tasked the delegation with drafting the agenda. COBOL 60 On 28 and 29 May 1959 (exactly one year after the Zürich ALGOL 58 meeting), a meeting was held at the Pentagon to discuss the creation of a common programming language for business. It was attended by 41 people and was chaired by Phillips. The Department of Defense was concerned about whether it could run the same data processing programs on different computers. FORTRAN, the only mainstream language at the time, lacked the features needed to write such programs. Representatives enthusiastically described a language that could work in a wide variety of environments, from banking and insurance to utilities and inventory control. They agreed unanimously that more people should be able to program and that the new language should not be restricted by the limitations of contemporary technology. A majority agreed that the language should make maximal use of English, be capable of change, be machine-independent, and be easy to use, even at the expense of power. The meeting resulted in the creation of a steering committee and short, intermediate, and long-range committees. The short-range committee was given until September (three months) to produce specifications for an interim language, which would then be improved upon by the other committees. Their official mission, however, was to identify the strengths and weaknesses of existing programming languages; it did not explicitly direct them to create a new language. The deadline was met with disbelief by the short-range committee. One member, Betty Holberton, described the three-month deadline as "gross optimism" and doubted that the language really would be a stopgap. The steering committee met on 4 June and agreed to name the entire activity the Committee on Data Systems Languages, or CODASYL, and to form an executive committee. The short-range committee members represented six computer manufacturers and three government agencies. The computer manufacturers were Burroughs Corporation, IBM, Minneapolis-Honeywell (Honeywell Labs), RCA, Sperry Rand, and Sylvania Electric Products. The government agencies were the U.S. Air Force, the Navy's David Taylor Model Basin, and the National Bureau of Standards (now the National Institute of Standards and Technology). The committee was chaired by Joseph Wegstein of the U.S. National Bureau of Standards. Work began by investigating data descriptions, statements, existing applications, and user experiences. The committee mainly examined the FLOW-MATIC, AIMACO, and COMTRAN programming languages. The FLOW-MATIC language was particularly influential because it had been implemented and because AIMACO was a derivative of it with only minor changes. FLOW-MATIC's inventor, Grace Hopper, also served as a technical adviser to the committee. FLOW-MATIC's major contributions to COBOL were long variable names, English words for commands, and the separation of data descriptions and instructions. 
Hopper is sometimes called "the mother of COBOL" or "the grandmother of COBOL", although Jean Sammet, a lead designer of COBOL, said Hopper "was not the mother, creator, or developer of Cobol." IBM's COMTRAN language, invented by Bob Bemer, was regarded as a competitor to FLOW-MATIC by a short-range committee made up of colleagues of Grace Hopper. Some of its features were not incorporated into COBOL so that it would not look like IBM had dominated the design process, and Jean Sammet said in 1981 that there had been a "strong anti-IBM bias" from some committee members (herself included). In one case, after Roy Goldfinger, author of the COMTRAN manual and intermediate-range committee member, attended a subcommittee meeting to support his language and encourage the use of algebraic expressions, Grace Hopper sent a memo to the short-range committee reiterating Sperry Rand's efforts to create a language based on English. In 1980, Grace Hopper commented that "COBOL 60 is 95% FLOW-MATIC" and that COMTRAN had had an "extremely small" influence. Furthermore, she said that she would claim that work was influenced by both FLOW-MATIC and COMTRAN only to "keep other people happy [so they] wouldn't try to knock us out". Features from COMTRAN incorporated into COBOL included formulas, the PICTURE clause, an improved IF statement, which obviated the need for GO TOs, and a more robust file management system. The usefulness of the committee's work was a subject of great debate. While some members thought the language had too many compromises and was the result of design by committee, others felt it was better than the three languages examined. Some felt the language was too complex; others, too simple. Controversial features included those some considered useless or too advanced for data processing users. Such features included Boolean expressions, formulas, and table subscripts (indices). Another point of controversy was whether to make keywords context-sensitive and the effect that would have on readability. Although context-sensitive keywords were rejected, the approach was later used in PL/I and partially in COBOL from 2002. Little consideration was given to interactivity, interaction with operating systems (few existed at that time), and functions (thought of as purely mathematical and of no use in data processing). The specifications were presented to the executive committee on 4 September. They fell short of expectations: Joseph Wegstein noted that "it contains rough spots and requires some additions," and Bob Bemer later described them as a "hodgepodge." The committee was given until December to improve it. At a mid-September meeting, the committee discussed the new language's name. Suggestions included "BUSY" (Business System), "INFOSYL" (Information System Language), and "COCOSYL" (Common Computer Systems Language). It is unclear who coined the name "COBOL", although Bob Bemer later claimed it had been his suggestion. In October, the intermediate-range committee received copies of the FACT language specification created by Roy Nutt. Its features impressed the committee so much that they passed a resolution to base COBOL on it. This was a blow to the short-range committee, who had made good progress on the specification. Despite being technically superior, FACT had not been created with portability in mind or through manufacturer and user consensus. It also lacked a demonstrable implementation, allowing supporters of a FLOW-MATIC-based COBOL to overturn the resolution.
RCA representative Howard Bromberg also blocked FACT, so that RCA's work on a COBOL implementation would not go to waste. It soon became apparent that the committee was too large to make any further progress quickly. A frustrated Howard Bromberg bought a $15 tombstone with "COBOL" engraved on it and sent it to Charles Phillips to demonstrate his displeasure. A subcommittee was formed to analyze existing languages and was made up of six individuals: William Selden and Gertrude Tierney of IBM, Howard Bromberg and Howard Discount of RCA, Vernon Reeves and Jean E. Sammet of Sylvania Electric Products. The subcommittee did most of the work creating the specification, leaving the short-range committee to review and modify their work before producing the finished specification. The specifications were approved by the executive committee on 8 January 1960, and sent to the government printing office, which printed them as COBOL 60. The language's stated objectives were to allow efficient, portable programs to be easily written, to allow users to move to new systems with minimal effort and cost, and to be suitable for inexperienced programmers. The CODASYL Executive Committee later created the COBOL Maintenance Committee to answer questions from users and vendors and to improve and expand the specifications. During 1960, the list of manufacturers planning to build COBOL compilers grew. By September, five more manufacturers had joined CODASYL (Bendix, Control Data Corporation, General Electric (GE), National Cash Register, and Philco), and all represented manufacturers had announced COBOL compilers. GE and IBM planned to integrate COBOL into their own languages, GECOM and COMTRAN, respectively. In contrast, International Computers and Tabulators planned to replace their language, CODEL, with COBOL. Meanwhile, RCA and Sperry Rand worked on creating COBOL compilers. The first COBOL program ran on 17 August on an RCA 501. On 6 and 7 December, the same COBOL program (albeit with minor changes) ran on an RCA computer and a Remington-Rand Univac computer, demonstrating that compatibility could be achieved. The relative influence of the languages that were used is still indicated in the recommended advisory printed in all COBOL reference manuals: COBOL-61 to COBOL-65 Many logical flaws were found in COBOL 60, leading General Electric's Charles Katz to warn that it could not be interpreted unambiguously. A reluctant short-term committee performed a total cleanup, and, by March 1963, it was reported that COBOL's syntax was as definable as ALGOL's, although semantic ambiguities remained. COBOL is a difficult language to write a compiler for, due to the large syntax and many optional elements within syntactic constructs, as well as the need to generate efficient code for a language with many possible data representations, implicit type conversions, and necessary set-ups for I/O operations. Early COBOL compilers were primitive and slow. A 1962 US Navy evaluation found compilation speeds of 3–11 statements per minute. By mid-1964, they had increased to 11–1000 statements per minute. It was observed that increasing memory would drastically increase speed and that compilation costs varied wildly: costs per statement were between $0.23 and $18.91. In late 1962, IBM announced that COBOL would be their primary development language and that development of COMTRAN would cease. The COBOL specification was revised three times in the five years after its publication. COBOL-60 was replaced in 1961 by COBOL-61. 
This was then replaced by the COBOL-61 Extended specifications in 1963, which introduced the sort and report writer facilities. The added facilities corrected flaws identified by Honeywell in late 1959 in a letter to the short-range committee. COBOL Edition 1965 brought further clarifications to the specifications and introduced facilities for handling mass storage files and tables.
COBOL-68
Efforts began to standardize COBOL to overcome incompatibilities between versions. In late 1962, both ISO and the United States of America Standards Institute (now ANSI) formed groups to create standards. ANSI produced USA Standard COBOL X3.23 in August 1968, which became the cornerstone for later versions. This version was known as American National Standard (ANS) COBOL and was adopted by ISO in 1972.
COBOL-74
By 1970, COBOL had become the most widely used programming language in the world. Independently of the ANSI committee, the CODASYL Programming Language Committee was working on improving the language. They described new versions in 1968, 1969, 1970, and 1973, including changes such as new inter-program communication, debugging, and file merging facilities, as well as improved string handling and library inclusion features. Although CODASYL was independent of the ANSI committee, the CODASYL Journal of Development was used by ANSI to identify features that were popular enough to warrant implementing. The Programming Language Committee also liaised with ECMA and the Japanese COBOL Standard committee. The Programming Language Committee was not well-known, however. The vice president, William Rinehuls, complained that two-thirds of the COBOL community did not know of the committee's existence. It also lacked the funds to make public documents, such as minutes of meetings and change proposals, freely available. In 1974, ANSI published a revised version of (ANS) COBOL, containing new features such as file organizations, the DELETE statement and the segmentation module. Deleted features included the NOTE statement, the EXAMINE statement (which was replaced by INSPECT), and the implementer-defined random access module (which was superseded by the new sequential and relative I/O modules). These made up 44 changes, which rendered existing statements incompatible with the new standard. The report writer was slated to be removed from COBOL but was reinstated before the standard was published. ISO later adopted the updated standard in 1978.
COBOL-85
In June 1978, work began on revising COBOL-74. The proposed standard (commonly called COBOL-80) differed significantly from the previous one, causing concerns about incompatibility and conversion costs. In January 1981, Joseph T. Brophy, Senior Vice-president of Travelers Insurance, threatened to sue the standard committee because it was not upwards compatible with COBOL-74. Mr. Brophy described previous conversions of their 40-million-line code base as "non-productive" and a "complete waste of our programmer resources". Later that year, the Data Processing Management Association (DPMA) said it was "strongly opposed" to the new standard, citing "prohibitive" conversion costs and enhancements that were "forced on the user". During the first public review period, the committee received 2,200 responses, of which 1,700 were negative form letters. Other responses were detailed analyses of the effect COBOL-80 would have on their systems; conversion costs were predicted to be at least 50 cents per line of code. Fewer than a dozen of the responses were in favor of the proposed standard.
In 1979, ISO TC97-SC5 established the international COBOL Experts Group on the initiative of Wim Ebbinkhuijsen. The group consisted of COBOL experts from many countries, including the United States. Its goal was to achieve mutual understanding and respect between ANSI and the rest of the world with regard to the need for new COBOL features. After three years, ISO changed the status of the group to a formal Working Group: WG 4 COBOL. The group took primary ownership and development of the COBOL standard, where ANSI made most of the proposals. In 1983, the DPMA withdrew its opposition to the standard, citing the responsiveness of the committee to public concerns. In the same year, a National Bureau of Standards study concluded that the proposed standard would present few problems. A year later, DEC released a VAX/VMS COBOL-80, and noted that conversion of COBOL-74 programs posed few problems. The new EVALUATE statement and inline PERFORM were particularly well received and improved productivity, thanks to simplified control flow and debugging. The second public review drew another 1,000 (mainly negative) responses, while the last drew just 25, by which time many concerns had been addressed. In 1985, the ISO Working Group 4 accepted the then-version of the ANSI proposed standard, made several changes and set it as the new ISO standard COBOL 85. It was published in late 1985. Sixty features were changed or deprecated and 115 were added, such as:
Scope terminators (END-IF, END-PERFORM, END-READ, etc.)
Nested subprograms
CONTINUE, a no-operation statement
EVALUATE, a switch statement
INITIALIZE, a statement that can set groups of data to their default values
Inline PERFORM loop bodies – previously, loop bodies had to be specified in a separate procedure
Reference modification, which allows access to substrings
I/O status codes.
The new standard was adopted by all national standard bodies, including ANSI. Two amendments followed in 1989 and 1993. The first amendment introduced intrinsic functions and the other provided corrections.
COBOL 2002 and object-oriented COBOL
In 1997, Gartner Group estimated that there were a total of 200 billion lines of COBOL in existence, which ran 80% of all business programs. In the early 1990s, work began on adding object-orientation in the next full revision of COBOL. Object-oriented features were taken from C++ and Smalltalk. The initial estimate was to have this revision completed by 1997, and an ISO Committee Draft (CD) was available by 1997. Some vendors (including Micro Focus, Fujitsu, and IBM) introduced object-oriented syntax based on drafts of the full revision. The finalized ISO standard was approved and published in late 2002. Vendors such as Fujitsu/GTSoftware and Micro Focus introduced object-oriented COBOL compilers targeting the .NET Framework. There were many other new features, many of which had been in the CODASYL COBOL Journal of Development since 1978 and had missed the opportunity to be included in COBOL-85. These other features included:
Free-form code
User-defined functions (a minimal sketch follows this list)
Recursion
Locale-based processing
Support for extended character sets such as Unicode
Floating-point and binary data types (until then, binary items were truncated based on their declaration's base-10 specification)
Portable arithmetic results
Bit and Boolean data types
Pointers and syntax for getting and freeing storage
The SCREEN SECTION for text-based user interfaces
The VALIDATE facility
Improved interoperability with other programming languages and framework environments such as .NET and Java.
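As an illustration of the user-defined functions introduced with this revision, the following is a minimal, hypothetical sketch: the function name add-vat, its data items and the flat 20% rate are invented for the example, and a caller would also need to list the function in its REPOSITORY paragraph.
IDENTIFICATION DIVISION.
FUNCTION-ID. add-vat.
DATA DIVISION.
LINKAGE SECTION.
01  net-amount    PIC 9(7)V99.
01  gross-amount  PIC 9(7)V99.
PROCEDURE DIVISION USING net-amount RETURNING gross-amount.
    *> Apply a flat 20% rate; real tax logic would be more involved.
    COMPUTE gross-amount = net-amount * 1.2
    GOBACK.
END FUNCTION add-vat.
A caller could then write, for example, COMPUTE total = FUNCTION add-vat(price).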
Three corrigenda were published for the standard: two in 2006 and one in 2009.
COBOL 2014
Between 2003 and 2009, three technical reports were produced describing object finalization, XML processing and collection classes for COBOL. COBOL 2002 suffered from poor support: no compilers completely supported the standard. Micro Focus found that it was due to a lack of user demand for the new features and due to the abolition of the NIST test suite, which had been used to test compiler conformance. The standardization process was also found to be slow and under-resourced. COBOL 2014 includes the following changes:
Portable arithmetic results have been replaced by IEEE 754 data types
Major features have been made optional, such as the VALIDATE facility, the report writer and the screen-handling facility
Method overloading
Dynamic capacity tables (a feature dropped from the draft of COBOL 2002)
COBOL 2023
The COBOL 2023 standard added a few new features:
Asynchronous messaging syntax using the SEND and RECEIVE statements
A transaction processing facility with COMMIT and ROLLBACK
XOR logical operator
The CONTINUE statement can be extended to pause the program for a specified duration
A DELETE FILE statement
LINE SEQUENTIAL file organization
Defined infinite looping with PERFORM UNTIL EXIT
SUBSTITUTE intrinsic function allowing for substring substitution of different lengths
CONVERT function for base-conversion
Boolean shifting operators
There is as yet no known complete implementation of this standard.
Legacy
COBOL programs are used globally in governments and businesses and are running on diverse operating systems such as z/OS, z/VSE, VME, Unix, NonStop OS, OpenVMS and Windows. In 1997, the Gartner Group reported that 80% of the world's business ran on COBOL with over 200 billion lines of code and 5 billion lines more being written annually. Near the end of the 20th century, the year 2000 problem (Y2K) was the focus of significant COBOL programming effort, sometimes by the same programmers who had designed the systems decades before. The particular level of effort required to correct COBOL code has been attributed to the large amount of business-oriented COBOL, as business applications use dates heavily, and to fixed-length data fields. Some studies attribute as much as "24% of Y2K software repair costs to Cobol". After the clean-up effort put into these programs for Y2K, a 2003 survey found that many remained in use. The authors said that the survey data suggest "a gradual decline in the importance of COBOL in application development over the [following] 10 years unless ... integration with other languages and technologies can be adopted". In 2006 and 2012, Computerworld surveys (of 352 readers) found that over 60% of organizations used COBOL (more than C++ and Visual Basic .NET) and that for half of those, COBOL was used for the majority of their internal software. 36% of managers said they planned to migrate from COBOL, and 25% said that they would do so if not for the expense of rewriting legacy code. Alternatively, some businesses have migrated their COBOL programs from mainframes to cheaper, faster hardware. Testimony before the House of Representatives in 2016 indicated that COBOL is still in use by many federal agencies. Reuters reported in 2017 that 43% of banking systems still used COBOL with over 220 billion lines of COBOL code in use.
By 2019, the number of COBOL programmers was shrinking fast due to retirements, leading to an impending skills gap in business and government organizations which still use mainframe systems for high-volume transaction processing. Efforts to rewrite systems in newer languages have proven expensive and problematic, as has the outsourcing of code maintenance, so proposals to train more people in COBOL have been advocated. During the COVID-19 pandemic and the ensuing surge of unemployment, several US states reported a shortage of skilled COBOL programmers to support the legacy systems used for unemployment benefit management. Many of these systems had been in the process of conversion to more modern programming languages prior to the pandemic, but the process was put on hold. Similarly, the US Internal Revenue Service rushed to patch its COBOL-based Individual Master File in order to disburse the tens of millions of payments mandated by the Coronavirus Aid, Relief, and Economic Security Act.
Features
Syntax
COBOL has an English-like syntax, which is used to describe nearly everything in a program. For example, a condition can be expressed as x IS GREATER THAN y or more concisely as x GREATER y or x > y. More complex conditions can be abbreviated by removing repeated conditions and variables. For example, a > b AND a > c OR a = d can be shortened to a > b AND c OR = d. To support this syntax, COBOL has over 300 keywords. Some of the keywords are simple alternative or pluralized spellings of the same word, which provides for more grammatically appropriate statements and clauses; e.g., the IN and OF keywords can be used interchangeably, as can IS and ARE, and VALUE and VALUES. Each COBOL program is made up of four basic lexical items: words, literals, picture character-strings (see the PICTURE clause section below) and separators. Words include reserved words and user-defined identifiers. They are up to 31 characters long and may include letters, digits, hyphens and underscores. Literals include numerals (e.g. 12) and strings (e.g. 'Hello!'). Separators include the space character and commas and semi-colons followed by a space. A COBOL program is split into four divisions: the identification division, the environment division, the data division and the procedure division. The identification division specifies the name and type of the source element and is where classes and interfaces are specified. The environment division specifies any program features that depend on the system running it, such as files and character sets. The data division is used to declare variables and parameters. The procedure division contains the program's statements. Each division is sub-divided into sections, which are made up of paragraphs.
Metalanguage
COBOL's syntax is usually described with a unique metalanguage using braces, brackets, bars and underlining. The metalanguage was developed for the original COBOL specifications. As an example, consider the following description of an ADD statement:
ADD {identifier-1 | literal-1} ... TO {identifier-2 [ROUNDED]} ...
    [[ON] SIZE ERROR imperative-statement-1]
    [NOT [ON] SIZE ERROR imperative-statement-2]
    [END-ADD]
This description permits the following variants:
ADD 1 TO x
ADD 1, a, b TO x ROUNDED, y, z ROUNDED
ADD a, b TO c ON SIZE ERROR DISPLAY "Error" END-ADD
ADD a TO b NOT SIZE ERROR DISPLAY "No error" ON SIZE ERROR DISPLAY "Error"
Code format
The height of COBOL's popularity coincided with the era of keypunch machines and punched cards. The program itself was written onto punched cards, then read in and compiled, and the data fed into the program was sometimes on cards as well. COBOL can be written in two formats: fixed (the default) or free. In fixed-format, code must be aligned to fit in certain areas (a hold-over from using punched cards).
Until COBOL 2002, these were: the sequence number area (columns 1–6), the indicator area (column 7), Area A (columns 8–11), Area B (columns 12–72) and the program name area (columns 73–80). In COBOL 2002, Areas A and B were merged to form the program-text area, which now ends at an implementor-defined column. COBOL 2002 also introduced free-format code. Free-format code can be placed in any column of the file, as in newer programming languages. Comments are specified using *>, which can be placed anywhere and can also be used in fixed-format source code. Continuation lines are not present, and the >>PAGE directive replaces the / indicator.
Identification division
The identification division identifies the following code entity and contains the definition of a class or interface.
Object-oriented programming
Classes and interfaces have been in COBOL since 2002. Classes have factory objects, containing class methods and variables, and instance objects, containing instance methods and variables. Inheritance and interfaces provide polymorphism. Support for generic programming is provided through parameterized classes, which can be instantiated to use any class or interface. Objects are stored as references which may be restricted to a certain type. There are two ways of calling a method: the INVOKE statement, which acts similarly to CALL, or through inline method invocation, which is analogous to using functions.
*> These are equivalent.
INVOKE my-class "foo" RETURNING var
MOVE my-class::"foo" TO var *> Inline method invocation
COBOL does not provide a way to hide methods. Class data can be hidden, however, by declaring it without a PROPERTY clause, which leaves external code no way to access it. Method overloading was added in COBOL 2014.
Environment division
The environment division contains the configuration section and the input-output section. The configuration section is used to specify variable features such as currency signs, locales and character sets. The input-output section contains file-related information.
Files
COBOL supports three file formats, or organizations: sequential, indexed and relative. In sequential files, records are contiguous and must be traversed sequentially, similarly to a linked list. Indexed files have one or more indexes which allow records to be randomly accessed and which can be sorted on them. Each record must have a unique key, but other, alternate, record keys need not be unique. Implementations of indexed files vary between vendors, although common implementations, such as C-ISAM and VSAM, are based on IBM's ISAM. Other implementations are Record Management Services on OpenVMS and Enscribe on HPE NonStop (Tandem). Relative files, like indexed files, have a unique record key, but they do not have alternate keys. A relative record's key is its ordinal position; for example, the 10th record has a key of 10. This means that creating a record with a key of 5 may require the creation of (empty) preceding records. Relative files also allow for both sequential and random access. A common non-standard extension is the line sequential organization, used to process text files. Records in a file are terminated by a newline and may be of varying length.
Data division
The data division is split into six sections which declare different items: the file section, for file records; the working-storage section, for static variables; the local-storage section, for automatic variables; the linkage section, for parameters and the return value; the report section and the screen section, for text-based user interfaces.
Aggregated data
Data items in COBOL are declared hierarchically through the use of level-numbers which indicate if a data item is part of another.
An item with a higher level-number is subordinate to an item with a lower one. Top-level data items, with a level-number of 1, are called records. Items that have subordinate aggregate data are called group items; those that do not are called elementary items. Level-numbers used to describe standard data items are between 1 and 49.
01  some-record.                   *> Aggregate group record item
    05  num        PIC 9(10).      *> Elementary item
    05  the-date.                  *> Aggregate (sub)group record item
        10  the-year   PIC 9(4).   *> Elementary item
        10  the-month  PIC 99.     *> Elementary item
        10  the-day    PIC 99.     *> Elementary item
In the above example, elementary item num and group item the-date are subordinate to the record some-record, while elementary items the-year, the-month, and the-day are part of the group item the-date. Subordinate items can be disambiguated with the IN (or OF) keyword. For example, consider the example code above along with the following example:
01  sale-date.
    05  the-year   PIC 9(4).
    05  the-month  PIC 99.
    05  the-day    PIC 99.
The names the-year, the-month, and the-day are ambiguous by themselves, since more than one data item is defined with those names. To specify a particular data item, for instance one of the items contained within the sale-date group, the programmer would use the-year IN sale-date (or the equivalent the-year OF sale-date). This syntax is similar to the "dot notation" supported by most contemporary languages.
Other data levels
A level-number of 66 is used to declare a re-grouping of previously defined items, irrespective of how those items are structured. This data level, also referred to by the associated RENAMES clause, is rarely used and, circa 1988, was usually found in old programs. Its ability to ignore the hierarchical and logical structure of data meant its use was not recommended and many installations forbade its use.
01  customer-record.
    05  cust-key             PIC X(10).
    05  cust-name.
        10  cust-first-name  PIC X(30).
        10  cust-last-name   PIC X(30).
    05  cust-dob             PIC 9(8).
    05  cust-balance         PIC 9(7)V99.
66  cust-personal-details    RENAMES cust-name THRU cust-dob.
66  cust-all-details         RENAMES cust-name THRU cust-balance.
A 77 level-number indicates the item is stand-alone, and in such situations is equivalent to the level-number 01. For example, the following code declares two 77-level data items, property-name and sales-region, which are non-group data items that are independent of (not subordinate to) any other data items:
77  property-name  PIC X(80).
77  sales-region   PIC 9(5).
An 88 level-number declares a condition-name (a so-called 88-level) which is true when its parent data item contains one of the values specified in its VALUE clause. For example, the following code defines two 88-level condition-name items that are true or false depending on the current character data value of the wage-type data item. When the data item contains a value of 'H', the condition-name wage-is-hourly is true, whereas when it contains a value of 'S' or 'Y', the condition-name wage-is-yearly is true. If the data item contains some other value, both of the condition-names are false.
01  wage-type  PIC X.
    88  wage-is-hourly  VALUE "H".
    88  wage-is-yearly  VALUE "S", "Y".
Data types
Standard COBOL provides the following data types: alphabetic, alphanumeric, boolean, index, national, numeric, object reference and pointer. Type safety is variable in COBOL. Numeric data is converted between different representations and sizes silently and alphanumeric data can be placed in any data item that can be stored as a string, including numeric and group data. In contrast, object references and pointers may only be assigned from items of the same type and their values may be restricted to a certain type.
PICTURE clause
A PICTURE (or PIC) clause is a string of characters, each of which represents a portion of the data item and what it may contain.
Some picture characters specify the type of the item and how many characters or digits it occupies in memory. For example, a 9 indicates a decimal digit, and an S indicates that the item is signed. Other picture characters (called insertion and editing characters) specify how an item should be formatted. For example, a series of + characters define character positions as well as how a leading sign character is to be positioned within the final character data; the rightmost non-numeric character will contain the item's sign, while other character positions corresponding to a + to the left of this position will contain a space. Repeated characters can be specified more concisely by specifying a number in parentheses after a picture character; for example, 9(7) is equivalent to 9999999. Picture specifications containing only digit (9) and sign (S) characters define purely numeric data items, while picture specifications containing alphabetic (A) or alphanumeric (X) characters define alphanumeric data items. The presence of other formatting characters defines edited numeric or edited alphanumeric data items.
USAGE clause
The USAGE clause declares the format in which data is stored. Depending on the data type, it can either complement or be used instead of a PICTURE clause. While it can be used to declare pointers and object references, it is mostly geared towards specifying numeric types. These numeric formats are:
Binary, where a minimum size is either specified by the PICTURE clause or by a USAGE clause such as BINARY-LONG
USAGE COMPUTATIONAL, where data may be stored in whatever format the implementation provides; often equivalent to USAGE BINARY
USAGE DISPLAY, the default format, where data is stored as a string
Floating-point, in either an implementation-dependent format or according to IEEE 754
USAGE NATIONAL, where data is stored as a string using an extended character set
USAGE PACKED-DECIMAL, where data is stored in the smallest possible decimal format (typically packed binary-coded decimal)
Report writer
The report writer is a declarative facility for creating reports. The programmer need only specify the report layout and the data required to produce it, freeing them from having to write code to handle things like page breaks, data formatting, and headings and footings. Reports are associated with report files, which are files which may only be written to through report writer statements.
FD  report-out REPORT sales-report.
Each report is defined in the report section of the data division. A report is split into report groups which define the report's headings, footings and details. Reports work around hierarchical control breaks. Control breaks occur when a key variable changes its value; for example, when creating a report detailing customers' orders, a control break could occur when the program reaches a different customer's orders. Here is an example report description for a report which gives a salesperson's sales and which warns of any invalid records:
RD  sales-report
    PAGE LIMITS 60 LINES
    FIRST DETAIL 3
    CONTROLS seller-name.
01  TYPE PAGE HEADING.
    03  COL 1   VALUE "Sales Report".
    03  COL 74  VALUE "Page".
    03  COL 79  PIC Z9 SOURCE PAGE-COUNTER.
01  sales-on-day TYPE DETAIL, LINE + 1.
    03  COL 3   VALUE "Sales on".
    03  COL 12  PIC 99/99/9999 SOURCE sales-date.
    03  COL 21  VALUE "were".
    03  COL 26  PIC $$$$9.99 SOURCE sales-amount.
01  invalid-sales TYPE DETAIL, LINE + 1.
    03  COL 3   VALUE "INVALID RECORD:".
    03  COL 19  PIC X(34) SOURCE sales-record.
01  TYPE CONTROL HEADING seller-name, LINE + 2.
    03  COL 1   VALUE "Seller:".
    03  COL 9   PIC X(30) SOURCE seller-name.
The above report description describes the following layout:
Sales Report                                                            Page  1
Seller: Howard Bromberg
Sales on 10/12/2008 were $1000.00
Sales on 12/12/2008 were $0.00
Sales on 13/12/2008 were $31.47
INVALID RECORD: Howard Bromberg XXXXYY
Seller: Howard Discount
...
Sales Report                                                            Page 12
Sales on 08/05/2014 were $543.98
INVALID RECORD: William Selden 12O52014FOOFOO
Sales on 30/05/2014 were $0.00
Four statements control the report writer: INITIATE, which prepares the report writer for printing; GENERATE, which prints a report group; SUPPRESS, which suppresses the printing of a report group; and TERMINATE, which terminates report processing. For the above sales report example, the procedure division might look like this:
OPEN INPUT sales, OUTPUT report-out
INITIATE sales-report
PERFORM UNTIL 1 <> 1
    READ sales
        AT END
            EXIT PERFORM
    END-READ
    VALIDATE sales-record
    IF valid-record
        GENERATE sales-on-day
    ELSE
        GENERATE invalid-sales
    END-IF
END-PERFORM
TERMINATE sales-report
CLOSE sales, report-out
.
Use of the Report Writer facility tends to vary considerably; some organizations use it extensively and some not at all. In addition, implementations of Report Writer ranged in quality, with those at the lower end sometimes using excessive amounts of memory at runtime.
Procedure division
Procedures
The sections and paragraphs in the procedure division (collectively called procedures) can be used as labels and as simple subroutines. Unlike in other divisions, paragraphs do not need to be in sections. Execution goes down through the procedures of a program until it is terminated. To use procedures as subroutines, the PERFORM verb is used. A PERFORM statement somewhat resembles a procedure call in newer languages in the sense that execution returns to the code following the PERFORM statement at the end of the called code; however, it does not provide a mechanism for parameter passing or for returning a result value. If a subroutine is invoked using a simple statement like PERFORM subroutine, then control returns at the end of the called procedure. However, PERFORM is unusual in that it may be used to call a range spanning a sequence of several adjacent procedures. This is done with the PERFORM ... THRU construct:
PROCEDURE so-and-so.
    PERFORM ALPHA
    PERFORM ALPHA THRU GAMMA
    STOP RUN.
ALPHA.
    DISPLAY 'A'.
BETA.
    DISPLAY 'B'.
GAMMA.
    DISPLAY 'C'.
The output of this program will be: "A A B C". PERFORM also differs from conventional procedure calls in that there is, at least traditionally, no notion of a call stack. As a consequence, nested invocations are possible (a sequence of code being PERFORMed may execute a PERFORM statement itself), but require extra care if parts of the same code are executed by both invocations. The problem arises when the code in the inner invocation reaches the exit point of the outer invocation. More formally, if control passes through the exit point of a PERFORM invocation that was called earlier but has not yet completed, the COBOL 2002 standard stipulates that the behavior is undefined. The reason is that COBOL, rather than a "return address", operates with what may be called a continuation address. When control flow reaches the end of any procedure, the continuation address is looked up and control is transferred to that address. Before the program runs, the continuation address for every procedure is initialized to the start address of the procedure that comes next in the program text so that, if no PERFORM statements happen, control flows from top to bottom through the program.
But when a PERFORM statement executes, it modifies the continuation address of the called procedure (or the last procedure of the called range, if PERFORM THRU was used), so that control will return to the call site at the end. The original value is saved and is restored afterwards, but there is only one storage position. If two nested invocations operate on overlapping code, they may interfere with each other's management of the continuation address in several ways. The following example (taken from ) illustrates the problem:
LABEL1.
    DISPLAY '1'
    PERFORM LABEL2 THRU LABEL3
    STOP RUN.
LABEL2.
    DISPLAY '2'
    PERFORM LABEL3 THRU LABEL4.
LABEL3.
    DISPLAY '3'.
LABEL4.
    DISPLAY '4'.
One might expect that the output of this program would be "1 2 3 4 3": After displaying "2", the second PERFORM causes "3" and "4" to be displayed, and then the first invocation continues on with "3". In traditional COBOL implementations, this is not the case. Rather, the first PERFORM statement sets the continuation address at the end of LABEL3 so that it will jump back to the call site inside LABEL1. The second PERFORM statement sets the return at the end of LABEL4 but does not modify the continuation address of LABEL3, expecting it to be the default continuation. Thus, when the inner invocation arrives at the end of LABEL3, it jumps back to the outer PERFORM statement, and the program stops having printed just "1 2 3". On the other hand, in some COBOL implementations like the open-source TinyCOBOL compiler, the two PERFORM statements do not interfere with each other and the output is indeed "1 2 3 4 3". Therefore, the behavior in such cases is not only (perhaps) surprising, it is also not portable. A special consequence of this limitation is that PERFORM cannot be used to write recursive code. Another simple example to illustrate this (slightly simplified from ):
MOVE 1 TO A
PERFORM LABEL
STOP RUN.
LABEL.
    DISPLAY A
    IF A < 3
        ADD 1 TO A
        PERFORM LABEL
    END-IF
    DISPLAY 'END'.
One might expect that the output is "1 2 3 END END END", and in fact that is what some COBOL compilers will produce. But other compilers, like IBM COBOL, will produce code that prints "1 2 3 END END END END ..." and so on, printing "END" over and over in an endless loop. Since there is limited space to store backup continuation addresses, the backups get overwritten in the course of recursive invocations, and all that can be restored is the jump back to DISPLAY 'END'.
Statements
COBOL 2014 has 47 statements (also called verbs), which can be grouped into the following broad categories: control flow, I/O, data manipulation and the report writer. The report writer statements are covered in the report writer section.
Control flow
COBOL's conditional statements are IF and EVALUATE. EVALUATE is a switch-like statement with the added capability of evaluating multiple values and conditions. This can be used to implement decision tables. For example, the following might be used to control a CNC lathe:
EVALUATE TRUE ALSO desired-speed ALSO current-speed
    WHEN lid-closed ALSO min-speed THRU max-speed ALSO LESS THAN desired-speed
        PERFORM speed-up-machine
    WHEN lid-closed ALSO min-speed THRU max-speed ALSO GREATER THAN desired-speed
        PERFORM slow-down-machine
    WHEN lid-open ALSO ANY ALSO NOT ZERO
        PERFORM emergency-stop
    WHEN OTHER
        CONTINUE
END-EVALUATE
The PERFORM statement is used to define loops which are executed until a condition is true (not while it is true, which is more common in other languages). It is also used to call procedures or ranges of procedures (see the procedures section for more details). CALL and INVOKE call subprograms and methods, respectively.
The name of the subprogram/method is contained in a string which may be a literal or a data item. Parameters can be passed by reference, by content (where a copy is passed by reference) or by value (but only if a prototype is available). CANCEL unloads subprograms from memory. GO TO causes the program to jump to a specified procedure. The GOBACK statement is a return statement and the STOP statement stops the program. The EXIT statement has six different formats: it can be used as a return statement, a break statement, a continue statement, an end marker or to leave a procedure. Exceptions are raised by a RAISE statement and caught with a handler, or declarative, defined in the DECLARATIVES portion of the procedure division. Declaratives are sections beginning with a USE statement which specify the errors to handle. Exceptions can be names or objects. RESUME is used in a declarative to jump to the statement after the one that raised the exception or to a procedure outside the DECLARATIVES. Unlike other languages, uncaught exceptions may not terminate the program and the program can proceed unaffected.
I/O
File I/O is handled by the self-describing OPEN, CLOSE, READ, and WRITE statements along with a further three: REWRITE, which updates a record; START, which selects subsequent records to access by finding a record with a certain key; and UNLOCK, which releases a lock on the last record accessed. User interaction is done using ACCEPT and DISPLAY.
Data manipulation
The following verbs manipulate data:
INITIALIZE, which sets data items to their default values.
MOVE, which assigns values to data items; MOVE CORRESPONDING assigns corresponding like-named fields.
SET, which has 15 formats: it can modify indices, assign object references and alter table capacities, among other functions.
ADD, SUBTRACT, MULTIPLY, DIVIDE, and COMPUTE, which handle arithmetic (with COMPUTE assigning the result of a formula to a variable).
ALLOCATE and FREE, which handle dynamic memory.
VALIDATE, which validates and distributes data as specified in an item's description in the data division.
STRING and UNSTRING, which concatenate and split strings, respectively.
INSPECT, which tallies or replaces instances of specified substrings within a string.
SEARCH, which searches a table for the first entry satisfying a condition.
Files and tables are sorted using SORT and the MERGE verb merges and sorts files. The RELEASE verb provides records to sort and RETURN retrieves sorted records in order.
Scope termination
Some statements, such as IF and READ, may themselves contain statements. Such statements may be terminated in two ways: by a period (implicit termination), which terminates all unterminated statements contained, or by a scope terminator, which terminates the nearest matching open statement.
*> Terminator period ("implicit termination")
IF invalid-record
    IF no-more-records
        NEXT SENTENCE
    ELSE
        READ record-file
            AT END SET no-more-records TO TRUE.
*> Scope terminators ("explicit termination")
IF invalid-record
    IF no-more-records
        CONTINUE
    ELSE
        READ record-file
            AT END SET no-more-records TO TRUE
        END-READ
    END-IF
END-IF
Nested statements terminated with a period are a common source of bugs. For example, examine the following code:
IF x
    DISPLAY y.
DISPLAY z.
Here, the intent is to display y and z if condition x is true. However, z will be displayed whatever the value of x because the IF statement is terminated by an erroneous period after DISPLAY y. Another bug is a result of the dangling else problem, when two IF statements can associate with an ELSE.
IF x
    IF y
        DISPLAY a
ELSE
    DISPLAY b.
In the above fragment, the ELSE associates with the IF y statement instead of the IF x statement, causing a bug. Prior to the introduction of explicit scope terminators, preventing it would require ELSE NEXT SENTENCE to be placed after the inner IF.
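With the explicit scope terminators introduced in COBOL-85, the intended pairing can be written directly. The following is an illustrative sketch (x, y, a and b stand for arbitrary conditions and data items): the END-IF closes the inner IF, so the ELSE unambiguously belongs to the outer one.
IF x
    IF y
        DISPLAY a
    END-IF
ELSE
    DISPLAY b
END-IF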
Self-modifying code
The original (1959) COBOL specification supported the infamous ALTER X TO PROCEED TO Y statement, for which many compilers generated self-modifying code. X and Y are procedure labels, and the single GO TO statement in procedure X executed after such an ALTER statement means GO TO Y instead. Many compilers still support it, but it was deemed obsolete in the COBOL 1985 standard and deleted in 2002. The ALTER statement was poorly regarded because it undermined "locality of context" and made a program's overall logic difficult to comprehend. As textbook author Daniel D. McCracken wrote in 1976, when "someone who has never seen the program before must become familiar with it as quickly as possible, sometimes under critical time pressure because the program has failed ... the sight of a GO TO statement in a paragraph by itself, signaling as it does the existence of an unknown number of ALTER statements at unknown locations throughout the program, strikes fear in the heart of the bravest programmer."
Hello, world
A "Hello, World!" program in COBOL:
IDENTIFICATION DIVISION.
PROGRAM-ID. hello-world.
PROCEDURE DIVISION.
    DISPLAY "Hello, world!"
    .
When the now famous "Hello, World!" program example in The C Programming Language was first published in 1978, a similar mainframe COBOL program sample would have been submitted through JCL, very likely using a punch card reader, and 80 column punch cards. The listing below, with an empty DATA DIVISION, was tested using Linux and the System/370 Hercules emulator running MVS 3.8J. The JCL, written in July 2015, is derived from the Hercules tutorials and samples hosted by Jay Moseley. In keeping with COBOL programming of that era, HELLO, WORLD is displayed in all capital letters.
//COBUCLG JOB (001),'COBOL BASE TEST',                          00010000
//   CLASS=A,MSGCLASS=A,MSGLEVEL=(1,1)                          00020000
//BASETEST EXEC COBUCLG                                         00030000
//COB.SYSIN DD *                                                00040000
00000* VALIDATION OF BASE COBOL INSTALL                         00050000
01000 IDENTIFICATION DIVISION.                                  00060000
01100 PROGRAM-ID. 'HELLO'.                                      00070000
02000 ENVIRONMENT DIVISION.                                     00080000
02100 CONFIGURATION SECTION.                                    00090000
02110 SOURCE-COMPUTER. GNULINUX.                                00100000
02120 OBJECT-COMPUTER. HERCULES.                                00110000
02200 SPECIAL-NAMES.                                            00120000
02210     CONSOLE IS CONSL.                                     00130000
03000 DATA DIVISION.                                            00140000
04000 PROCEDURE DIVISION.                                       00150000
04100 00-MAIN.                                                  00160000
04110     DISPLAY 'HELLO, WORLD' UPON CONSL.                    00170000
04900     STOP RUN.                                             00180000
//LKED.SYSLIB DD DSNAME=SYS1.COBLIB,DISP=SHR                    00190000
//            DD DSNAME=SYS1.LINKLIB,DISP=SHR                   00200000
//GO.SYSPRINT DD SYSOUT=A                                       00210000
//                                                              00220000
After submitting the JCL, the MVS console displayed:
19.52.48 JOB 3 $HASP100 COBUCLG ON READER1 COBOL BASE TEST
19.52.48 JOB 3 IEF677I WARNING MESSAGE(S) FOR JOB COBUCLG ISSUED
19.52.48 JOB 3 $HASP373 COBUCLG STARTED - INIT 1 - CLASS A - SYS BSP1
19.52.48 JOB 3 IEC130I SYSPUNCH DD STATEMENT MISSING
19.52.48 JOB 3 IEC130I SYSLIB DD STATEMENT MISSING
19.52.48 JOB 3 IEC130I SYSPUNCH DD STATEMENT MISSING
19.52.48 JOB 3 IEFACTRT - Stepname Procstep Program Retcode
19.52.48 JOB 3 COBUCLG BASETEST COB IKFCBL00 RC= 0000
19.52.48 JOB 3 COBUCLG BASETEST LKED IEWL RC= 0000
19.52.48 JOB 3 +HELLO, WORLD
19.52.48 JOB 3 COBUCLG BASETEST GO PGM=*.DD RC= 0000
19.52.48 JOB 3 $HASP395 COBUCLG ENDED
Line 10 of the console listing above is highlighted for effect; the highlighting is not part of the actual console output. The associated compiler listing generated over four pages of technical detail and job run information, for the single line of output from the 14 lines of COBOL.
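For comparison, the same program in the free-format source introduced by COBOL 2002 needs no fixed column areas or JCL wrapper. The following is a minimal sketch (the program name hello-free is arbitrary, and it assumes a compiler that accepts free-format source, such as GnuCOBOL):
IDENTIFICATION DIVISION.
PROGRAM-ID. hello-free.
PROCEDURE DIVISION.
    DISPLAY "Hello, world!"
    STOP RUN.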
Reception
Lack of structure
In the 1970s, adoption of the structured programming paradigm was becoming increasingly widespread. Edsger Dijkstra, a preeminent computer scientist, wrote a letter to the editor of Communications of the ACM, published in 1975 entitled "How do we tell truths that might hurt?", in which he was critical of COBOL and several other contemporary languages, remarking that "the use of COBOL cripples the mind". In a published dissent to Dijkstra's remarks, the computer scientist Howard E. Tompkins claimed that unstructured COBOL tended to be "written by programmers that have never had the benefit of structured COBOL taught well", arguing that the issue was primarily one of training. One cause of spaghetti code was the GO TO statement. Attempts to remove GO TOs from COBOL code, however, resulted in convoluted programs and reduced code quality. GO TOs were largely replaced by the PERFORM statement and procedures, which promoted modular programming and gave easy access to powerful looping facilities. However, PERFORM could be used only with procedures so loop bodies were not located where they were used, making programs harder to understand. COBOL programs were infamous for being monolithic and lacking modularization. COBOL code could be modularized only through procedures, which were found to be inadequate for large systems. It was impossible to restrict access to data, meaning a procedure could access and modify any data item. Furthermore, there was no way to pass parameters to a procedure, an omission Jean Sammet regarded as the committee's biggest mistake. Another complication stemmed from the ability to PERFORM THRU a specified sequence of procedures. This meant that control could jump to and return from any procedure, creating convoluted control flow and permitting a programmer to break the single-entry single-exit rule. This situation improved as COBOL adopted more features. COBOL-74 added subprograms, giving programmers the ability to control the data each part of the program could access. COBOL-85 then added nested subprograms, allowing programmers to hide subprograms. Further control over data and code came in 2002 when object-oriented programming, user-defined functions and user-defined data types were included. Nevertheless, much important legacy COBOL software uses unstructured code, which has become practically unmaintainable. It can be too risky and costly to modify even a simple section of code, since it may be used from unknown places in unknown ways.
Compatibility issues
COBOL was intended to be a highly portable, "common" language. However, by 2001, around 300 dialects had been created. One source of dialects was the standard itself: the 1974 standard was composed of one mandatory nucleus and eleven functional modules, each containing two or three levels of support. This permitted 104,976 possible variants. COBOL-85 was not fully compatible with earlier versions, and its development was controversial. Joseph T. Brophy, the CIO of Travelers Insurance, spearheaded an effort to inform COBOL users of the heavy reprogramming costs of implementing the new standard. As a result, the ANSI COBOL Committee received more than 2,200 letters from the public, mostly negative, requiring the committee to make changes. On the other hand, conversion to COBOL-85 was thought to increase productivity in future years, thus justifying the conversion costs.
Verbose syntax
COBOL syntax has often been criticized for its verbosity.
Proponents say that this was intended to make the code self-documenting, easing program maintenance. COBOL was also intended to be easy for programmers to learn and use, while still being readable to non-technical staff such as managers. The desire for readability led to the use of English-like syntax and structural elements, such as nouns, verbs, clauses, sentences, sections, and divisions. Yet by 1984, maintainers of COBOL programs were struggling to deal with "incomprehensible" code and the main changes in COBOL-85 were there to help ease maintenance. Jean Sammet, a short-range committee member, noted that "little attempt was made to cater to the professional programmer, in fact people whose main interest is programming tend to be very unhappy with COBOL", which she attributed to COBOL's verbose syntax. Later, COBOL suffered from a shortage of material covering it; it took until 1963 for introductory books to appear (with Richard D. Irwin publishing a college textbook on COBOL in 1966). Donald Nelson, chair of the CODASYL COBOL committee, said in 1984 that "academics ... hate COBOL" and that computer science graduates "had 'hate COBOL' drilled into them". By the mid-1980s, there was also significant condescension towards COBOL in the business community from users of other languages, for example FORTRAN or assembler, implying that COBOL could be used only for non-challenging problems. In 2003, COBOL featured in 80% of information systems curricula in the United States, the same proportion as C++ and Java. Ten years later, a poll by Micro Focus found that 20% of university academics thought COBOL was outdated or dead and that 55% believed their students thought COBOL was outdated or dead. The same poll also found that only 25% of academics had COBOL programming on their curriculum even though 60% thought they should teach it.
Concerns about the design process
Doubts have been raised about the competence of the standards committee. Short-range committee member Howard Bromberg said that there was "little control" over the development process and that it was "plagued by discontinuity of personnel and ... a lack of talent." Jean Sammet and Jerome Garfunkel also noted that changes introduced in one revision of the standard would be reverted in the next, due as much to changes in who was in the standard committee as to objective evidence. COBOL standards have repeatedly suffered from delays: COBOL-85 arrived five years later than hoped, COBOL 2002 was five years late, and COBOL 2014 was six years late. To combat delays, the standard committee allowed the creation of optional addenda which would add features more quickly than by waiting for the next standard revision. However, some committee members raised concerns about incompatibilities between implementations and frequent modifications of the standard.
Influences on other languages
COBOL's data structures influenced subsequent programming languages. Its record and file structure influenced PL/I and Pascal, and the REDEFINES clause was a predecessor to Pascal's variant records. Explicit file structure definitions preceded the development of database management systems and aggregated data was a significant advance over Fortran's arrays. PICTURE data declarations were incorporated into PL/I, with minor changes. COBOL's COPY facility, although considered "primitive", influenced the development of include directives.
The focus on portability and standardization meant programs written in COBOL could be portable and facilitated the spread of the language to a wide variety of hardware platforms and operating systems. Additionally, the well-defined division structure restricts the definition of external references to the Environment Division, which simplifies platform changes in particular.
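To illustrate the point about external references, consider the following hedged sketch (the program name, file name and device assignment are invented and platform-specific): the only place the program names a physical file is the ENVIRONMENT DIVISION, so retargeting it to another system means editing just that entry.
IDENTIFICATION DIVISION.
PROGRAM-ID. read-sales.
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
    *> The physical file name below is the only platform-specific reference.
    SELECT sales-file ASSIGN TO "SALES.DAT"
        ORGANIZATION IS LINE SEQUENTIAL.
DATA DIVISION.
FILE SECTION.
FD  sales-file.
01  sales-record  PIC X(80).
PROCEDURE DIVISION.
    OPEN INPUT sales-file
    READ sales-file
        AT END DISPLAY "empty file"
    END-READ
    CLOSE sales-file
    STOP RUN.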
Technology
"Historical" languages
null
6804
https://en.wikipedia.org/wiki/Charge-coupled%20device
Charge-coupled device
A charge-coupled device (CCD) is an integrated circuit containing an array of linked, or coupled, capacitors. Under the control of an external circuit, each capacitor can transfer its electric charge to a neighboring capacitor. CCD sensors are a major technology used in digital imaging. Overview In a CCD image sensor, pixels are represented by p-doped metal–oxide–semiconductor (MOS) capacitors. These MOS capacitors, the basic building blocks of a CCD, are biased above the threshold for inversion when image acquisition begins, allowing the conversion of incoming photons into electron charges at the semiconductor-oxide interface; the CCD is then used to read out these charges. Although CCDs are not the only technology to allow for light detection, CCD image sensors are widely used in professional, medical, and scientific applications where high-quality image data are required. In applications with less exacting quality demands, such as consumer and professional digital cameras, active pixel sensors, also known as CMOS sensors (complementary MOS sensors), are generally used. However, the large quality advantage CCDs enjoyed early on has narrowed over time and since the late 2010s CMOS sensors are the dominant technology, having largely if not completely replaced CCD image sensors. History The basis for the CCD is the metal–oxide–semiconductor (MOS) structure, with MOS capacitors being the basic building blocks of a CCD, and a depleted MOS structure used as the photodetector in early CCD devices. In the late 1960s, Willard Boyle and George E. Smith at Bell Labs were researching MOS technology while working on semiconductor bubble memory. They realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. This led to the invention of the charge-coupled device by Boyle and Smith in 1969. They conceived of the design of what they termed, in their notebook, "Charge 'Bubble' Devices". The initial paper describing the concept in April 1970 listed possible uses as memory, a delay line, and an imaging device. The device could also be used as a shift register. The essence of the design was the ability to transfer charge along the surface of a semiconductor from one storage capacitor to the next. The concept was similar in principle to the bucket-brigade device (BBD), which was developed at Philips Research Labs during the late 1960s. The first experimental device demonstrating the principle was a row of closely spaced metal squares on an oxidized silicon surface electrically accessed by wire bonds. It was demonstrated by Gil Amelio, Michael Francis Tompsett and George Smith in April 1970. This was the first experimental application of the CCD in image sensor technology, and used a depleted MOS structure as the photodetector. The first patent () on the application of CCDs to imaging was assigned to Tompsett, who filed the application in 1971. The first working CCD made with integrated circuit technology was a simple 8-bit shift register, reported by Tompsett, Amelio and Smith in August 1970. This device had input and output circuits and was used to demonstrate its use as a shift register and as a crude eight pixel linear imaging device. Development of the device progressed at a rapid rate. 
By 1971, Bell researchers led by Michael Tompsett were able to capture images with simple linear devices. Several companies, including Fairchild Semiconductor, RCA and Texas Instruments, picked up on the invention and began development programs. Fairchild's effort, led by ex-Bell researcher Gil Amelio, was the first with commercial devices, and by 1974 had a linear 500-element device and a 2D 100 × 100 pixel device. Peter Dillon, a scientist at Kodak Research Labs, invented the first color CCD image sensor by overlaying a color filter array on this Fairchild 100 x 100 pixel Interline CCD starting in 1974. Steven Sasson, an electrical engineer working for the Kodak Apparatus Division, invented a digital still camera using this same Fairchild CCD in 1975. The interline transfer (ILT) CCD device was proposed by L. Walsh and R. Dyck at Fairchild in 1973 to reduce smear and eliminate a mechanical shutter. To further reduce smear from bright light sources, the frame-interline-transfer (FIT) CCD architecture was developed by K. Horii, T. Kuroda and T. Kunii at Matsushita (now Panasonic) in 1981. The first KH-11 KENNEN reconnaissance satellite equipped with charge-coupled device array ( pixels) technology for imaging was launched in December 1976. Under the leadership of Kazuo Iwama, Sony started a large development effort on CCDs involving a significant investment. Eventually, Sony managed to mass-produce CCDs for their camcorders. Before this happened, Iwama died in August 1982. Subsequently, a CCD chip was placed on his tombstone to acknowledge his contribution. The first mass-produced consumer CCD video camera, the CCD-G5, was released by Sony in 1983, based on a prototype developed by Yoshiaki Hagiwara in 1981. Early CCD sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode (PPD). It was invented by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980. They recognized that lag can be eliminated if the signal carriers could be transferred from the photodiode to the CCD. This led to their invention of the pinned photodiode, a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. It was first publicly reported by Teranishi and Ishihara with A. Kohono, E. Oda and K. Arai in 1982, with the addition of an anti-blooming structure. The new photodetector structure invented at NEC was given the name "pinned photodiode" (PPD) by B.C. Burkey at Kodak in 1984. In 1987, the PPD began to be incorporated into most CCD devices, becoming a fixture in consumer electronic video cameras and then digital still cameras. Since then, the PPD has been used in nearly all CCD sensors and then CMOS sensors. In January 2006, Boyle and Smith were awarded the National Academy of Engineering Charles Stark Draper Prize, and in 2009 they were awarded the Nobel Prize for Physics for their invention of the CCD concept. Michael Tompsett was awarded the 2010 National Medal of Technology and Innovation, for pioneering work and electronic technologies including the design and development of the first CCD imagers. He was also awarded the 2012 IEEE Edison Medal for "pioneering contributions to imaging devices including CCD Imagers, cameras and thermal imagers". Basics of operation In a CCD for capturing images, there is a photoactive region (an epitaxial layer of silicon), and a transmission region made out of a shift register (the CCD, properly speaking). 
An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A one-dimensional array, used in line-scan cameras, captures a single slice of the image, whereas a two-dimensional array, used in video and still cameras, captures a two-dimensional picture corresponding to the scene projected onto the focal plane of the sensor. Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor (operating as a shift register). The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage. By repeating this process, the controlling circuit converts the entire contents of the array in the semiconductor to a sequence of voltages. In a digital device, these voltages are then sampled, digitized, and usually stored in memory; in an analog device (such as an analog video camera), they are processed into a continuous analog signal (e.g. by feeding the output of the charge amplifier into a low-pass filter), which is then processed and fed out to other circuits for transmission, recording, or other processing. Detailed physics of operation Charge generation Before the MOS capacitors are exposed to light, they are biased into the depletion region; in n-channel CCDs, the silicon under the bias gate is slightly p-doped or intrinsic. The gate is then biased at a positive potential, above the threshold for strong inversion, which will eventually result in the creation of an n channel below the gate as in a MOSFET. However, it takes time to reach this thermal equilibrium: up to hours in high-end scientific cameras cooled at low temperature. Initially after biasing, the holes are pushed far into the substrate, and no mobile electrons are at or near the surface; the CCD thus operates in a non-equilibrium state called deep depletion. Then, when electron–hole pairs are generated in the depletion region, they are separated by the electric field, the electrons move toward the surface, and the holes move toward the substrate. Four pair-generation processes can be identified: photo-generation (up to 95% of quantum efficiency), generation in the depletion region, generation at the surface, and generation in the neutral bulk. The last three processes are known as dark-current generation, and add noise to the image; they can limit the total usable integration time. The accumulation of electrons at or near the surface can proceed either until image integration is over and charge begins to be transferred, or thermal equilibrium is reached. In this case, the well is said to be full. The maximum capacity of each well is known as the well depth, typically about 10^5 electrons per pixel. CCDs are susceptible to ionizing radiation and energetic particles, which cause noise in the output of the CCD; this must be taken into consideration in satellites using CCDs. Design and manufacturing The photoactive region of a CCD is, generally, an epitaxial layer of silicon. It is lightly p-doped (usually with boron) and is grown upon a substrate material, often p++. In buried-channel devices, the type of design utilized in most modern CCDs, certain areas of the surface of the silicon are ion implanted with phosphorus, giving them an n-doped designation. This region defines the channel in which the photogenerated charge packets will travel. 
Simon Sze details the advantages of a buried-channel device: This thin layer (= 0.2–0.3 micron) is fully depleted and the accumulated photogenerated charge is kept away from the surface. This structure has the advantages of higher transfer efficiency and lower dark current, from reduced surface recombination. The penalty is smaller charge capacity, by a factor of 2–3 compared to the surface-channel CCD. The gate oxide, i.e. the capacitor dielectric, is grown on top of the epitaxial layer and substrate. Later in the process, polysilicon gates are deposited by chemical vapor deposition, patterned with photolithography, and etched in such a way that the separately phased gates lie perpendicular to the channels. The channels are further defined by utilization of the LOCOS process to produce the channel stop region. Channel stops are thermally grown oxides that serve to isolate the charge packets in one column from those in another. These channel stops are produced before the polysilicon gates are, as the LOCOS process utilizes a high-temperature step that would destroy the gate material. The channel stops are parallel to, and exclusive of, the channel, or "charge carrying", regions. Channel stops often have a p+ doped region underlying them, providing a further barrier to the electrons in the charge packets (this discussion of the physics of CCD devices assumes an electron transfer device, though hole transfer is possible). The clocking of the gates, alternately high and low, will forward and reverse bias the diode that is provided by the buried channel (n-doped) and the epitaxial layer (p-doped). This will cause the CCD to deplete, near the p–n junction and will collect and move the charge packets beneath the gates—and within the channels—of the device. CCD manufacturing and operation can be optimized for different uses. The above process describes a frame transfer CCD. While CCDs may be manufactured on a heavily doped p++ wafer it is also possible to manufacture a device inside p-wells that have been placed on an n-wafer. This second method, reportedly, reduces smear, dark current, and infrared and red response. This method of manufacture is used in the construction of interline-transfer devices. Another version of CCD is called a peristaltic CCD. In a peristaltic charge-coupled device, the charge-packet transfer operation is analogous to the peristaltic contraction and dilation of the digestive system. The peristaltic CCD has an additional implant that keeps the charge away from the silicon/silicon dioxide interface and generates a large lateral electric field from one gate to the next. This provides an additional driving force to aid in transfer of the charge packets. Architecture The CCD image sensors can be implemented in several different architectures. The most common are full-frame, frame-transfer, and interline. The distinguishing characteristic of each of these architectures is their approach to the problem of shuttering. In a full-frame device, all of the image area is active, and there is no electronic shutter. A mechanical shutter must be added to this type of sensor or the image smears as the device is clocked or read out. With a frame-transfer CCD, half of the silicon area is covered by an opaque mask (typically aluminum). The image can be quickly transferred from the image area to the opaque area or storage region with acceptable smear of a few percent. 
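To make the readout sequence described under "Basics of operation" more concrete, the following sketch simulates a serial CCD readout in Python: charge packets are clipped at the well depth, shifted one pixel at a time toward an output node, converted to a voltage by a notional charge amplifier, and digitized. All numbers and names here (the full-well capacity, conversion gain, ADC resolution, and the function read_out_ccd) are illustrative assumptions, not the parameters of any real device.

import numpy as np

def read_out_ccd(charge, full_well=100_000, gain_uV_per_e=5.0, adc_bits=12):
    """Simulate a serial CCD readout: shift each pixel's charge packet to the
    output node, convert charge to voltage, and digitize.

    charge        : 2D array of collected electrons per pixel
    full_well     : well capacity in electrons (packets simply clip at this value)
    gain_uV_per_e : charge-to-voltage conversion at the output amplifier
    adc_bits      : resolution of the analog-to-digital converter
    """
    clipped = np.minimum(charge, full_well)          # blooming ignored, just clip
    rows, cols = clipped.shape
    adc_max = 2 ** adc_bits - 1
    v_max = full_well * gain_uV_per_e                # voltage of a full well
    image = np.zeros((rows, cols), dtype=int)

    for r in range(rows):                            # one row at a time into the serial register
        serial_register = clipped[r].copy()
        for c in range(cols):                        # shift packets one by one to the output node
            electrons = serial_register[0]
            serial_register = np.roll(serial_register, -1)
            serial_register[-1] = 0
            voltage = electrons * gain_uV_per_e      # charge amplifier
            image[r, c] = round(voltage / v_max * adc_max)  # ADC sample
    return image

# Example: a tiny 4x4 "exposure" with one saturated pixel
rng = np.random.default_rng(0)
exposure = rng.poisson(2_000, size=(4, 4))
exposure[1, 2] = 150_000
print(read_out_ccd(exposure))

A real sensor performs the row and serial shifts with clocked gate voltages rather than software loops, but the bookkeeping is the same: every pixel's charge ends up as one voltage sample in a fixed order.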
That image can then be read out slowly from the storage region while a new image is integrating or exposing in the active area. Frame-transfer devices typically do not require a mechanical shutter and were a common architecture for early solid-state broadcast cameras. The downside to the frame-transfer architecture is that it requires twice the silicon real estate of an equivalent full-frame device; hence, it costs roughly twice as much. The interline architecture extends this concept one step further and masks every other column of the image sensor for storage. In this device, only one pixel shift has to occur to transfer from image area to storage area; thus, shutter times can be less than a microsecond and smear is essentially eliminated. The advantage is not free, however, as the imaging area is now covered by opaque strips dropping the fill factor to approximately 50 percent and the effective quantum efficiency by an equivalent amount. Modern designs have addressed this deleterious characteristic by adding microlenses on the surface of the device to direct light away from the opaque regions and onto the active area. Microlenses can bring the fill factor back up to 90 percent or more depending on pixel size and the overall system's optical design. The choice of architecture comes down to one of utility. If the application cannot tolerate an expensive, failure-prone, power-intensive mechanical shutter, an interline device is the right choice. Consumer snap-shot cameras have used interline devices. On the other hand, for those applications that require the best possible light collection and where issues of money, power and time are less important, the full-frame device is the right choice. Astronomers tend to prefer full-frame devices. The frame-transfer falls in between and was a common choice before the fill-factor issue of interline devices was addressed. Today, frame-transfer is usually chosen when an interline architecture is not available, such as in a back-illuminated device. CCDs containing grids of pixels are used in digital cameras, optical scanners, and video cameras as light-sensing devices. They commonly respond to 70 percent of the incident light (meaning a quantum efficiency of about 70 percent), making them far more efficient than photographic film, which captures only about 2 percent of the incident light. Most common types of CCDs are sensitive to near-infrared light, which allows infrared photography, night-vision devices, and zero lux (or near zero lux) video-recording/photography. For normal silicon-based detectors, the sensitivity is limited to wavelengths no longer than about 1.1 μm. One other consequence of their sensitivity to infrared is that infrared from remote controls often appears on CCD-based digital cameras or camcorders if they do not have infrared blockers. Cooling reduces the array's dark current, improving the sensitivity of the CCD to low light intensities, even for ultraviolet and visible wavelengths. Professional observatories often cool their detectors with liquid nitrogen to reduce the dark current, and therefore the thermal noise, to negligible levels. Frame transfer CCD The frame transfer CCD imager was the first imaging structure proposed for CCD Imaging by Michael Tompsett at Bell Laboratories. A frame transfer CCD is a specialized CCD, often used in astronomy and some professional video cameras, designed for high exposure efficiency and correctness. The normal functioning of a CCD, astronomical or otherwise, can be divided into two phases: exposure and readout. 
During the first phase, the CCD passively collects incoming photons, storing electrons in its cells. After the exposure time is passed, the cells are read out one line at a time. During the readout phase, cells are shifted down the entire area of the CCD. While they are shifted, they continue to collect light. Thus, if the shifting is not fast enough, errors can result from light that falls on a cell holding charge during the transfer. These errors are referred to as smear: bright sources leave vertical streaks, and fast-moving objects can appear distorted. In addition, the CCD cannot be used to collect light while it is being read out. A faster shifting requires a faster readout, and a faster readout can introduce errors in the cell charge measurement, leading to a higher noise level. A frame transfer CCD solves both problems: it has a shielded, non-photosensitive area containing as many cells as the area exposed to light. Typically, this area is covered by a reflective material such as aluminium. When the exposure time is up, the cells are transferred very rapidly to the hidden area. Here, safe from any incoming light, cells can be read out at any speed one deems necessary to correctly measure the cells' charge. At the same time, the exposed part of the CCD is collecting light again, so no delay occurs between successive exposures. The disadvantage of such a CCD is the higher cost: the cell area is basically doubled, and more complex control electronics are needed. Intensified charge-coupled device An intensified charge-coupled device (ICCD) is a CCD that is optically connected to an image intensifier that is mounted in front of the CCD. An image intensifier includes three functional elements: a photocathode, a micro-channel plate (MCP) and a phosphor screen. These three elements are mounted one close behind the other in the mentioned sequence. The photons which are coming from the light source fall onto the photocathode, thereby generating photoelectrons. The photoelectrons are accelerated towards the MCP by an electrical control voltage, applied between photocathode and MCP. The electrons are multiplied inside of the MCP and thereafter accelerated towards the phosphor screen. The phosphor screen finally converts the multiplied electrons back to photons which are guided to the CCD by a fiber optic or a lens. An image intensifier inherently includes a shutter functionality: If the control voltage between the photocathode and the MCP is reversed, the emitted photoelectrons are not accelerated towards the MCP but return to the photocathode. Thus, no electrons are multiplied and emitted by the MCP, no electrons are going to the phosphor screen and no light is emitted from the image intensifier. In this case no light falls onto the CCD, which means that the shutter is closed. The process of reversing the control voltage at the photocathode is called gating and therefore ICCDs are also called gateable CCD cameras. Besides the extremely high sensitivity of ICCD cameras, which enables single-photon detection, the gateability is one of the major advantages of the ICCD over the EMCCD cameras. The highest performing ICCD cameras enable shutter times as short as 200 picoseconds. ICCD cameras are in general somewhat higher in price than EMCCD cameras because they need the expensive image intensifier. On the other hand, EMCCD cameras need a cooling system to cool the EMCCD chip down to temperatures around . 
This cooling system adds additional costs to the EMCCD camera and often yields heavy condensation problems in the application. ICCDs are used in night vision devices and in various scientific applications. Electron-multiplying CCD An electron-multiplying CCD (EMCCD, also known as an L3Vision CCD, a product commercialized by e2v Ltd., GB, L3CCD or Impactron CCD, a now-discontinued product offered in the past by Texas Instruments) is a charge-coupled device in which a gain register is placed between the shift register and the output amplifier. The gain register is split up into a large number of stages. In each stage, the electrons are multiplied by impact ionization in a similar way to an avalanche diode. The gain probability at every stage of the register is small (P < 2%), but as the number of elements is large (N > 500), the overall gain can be very high (), with single input electrons giving many thousands of output electrons. Reading a signal from a CCD gives a noise background, typically a few electrons. In an EMCCD, this noise is superimposed on many thousands of electrons rather than a single electron; the devices' primary advantage is thus their negligible readout noise. The use of avalanche breakdown for amplification of photo charges had already been described by George E. Smith/Bell Telephone Laboratories in 1973. EMCCDs show a similar sensitivity to intensified CCDs (ICCDs). However, as with ICCDs, the gain that is applied in the gain register is stochastic and the exact gain that has been applied to a pixel's charge is impossible to know. At high gains (> 30), this uncertainty has the same effect on the signal-to-noise ratio (SNR) as halving the quantum efficiency (QE) with respect to operation with a gain of unity. This effect is referred to as the Excess Noise Factor (ENF). However, at very low light levels (where the quantum efficiency is most important), it can be assumed that a pixel either contains an electron—or not. This removes the noise associated with the stochastic multiplication at the risk of counting multiple electrons in the same pixel as a single electron. To avoid multiple counts in one pixel due to coincident photons in this mode of operation, high frame rates are essential. The dispersion in the gain is shown in the graph on the right. For multiplication registers with many elements and large gains it is well modelled by the equation: P(n) = (n − m + 1)^(m−1) exp(−(n − m + 1)/(g − 1 + 1/m)) / [(m − 1)! (g − 1 + 1/m)^m] for n ≥ m, where P is the probability of getting n output electrons given m input electrons and a total mean multiplication register gain of g. For very large numbers of input electrons, this complex distribution function converges towards a Gaussian. Because of the lower costs and better resolution, EMCCDs are capable of replacing ICCDs in many applications. ICCDs still have the advantage that they can be gated very fast and thus are useful in applications like range-gated imaging. EMCCD cameras require a cooling system—using either thermoelectric cooling or liquid nitrogen—to cool the chip down to temperatures in the range of . This cooling system adds additional costs to the EMCCD imaging system and may yield condensation problems in the application. However, high-end EMCCD cameras are equipped with a permanent hermetic vacuum system confining the chip to avoid condensation issues. The low-light capabilities of EMCCDs find use in astronomy and biomedical research, among other fields. 
In particular, their low noise at high readout speeds makes them very useful for a variety of astronomical applications involving low light sources and transient events such as lucky imaging of faint stars, high speed photon counting photometry, Fabry-Pérot spectroscopy and high-resolution spectroscopy. More recently, these types of CCDs have broken into the field of biomedical research in low-light applications including small animal imaging, single-molecule imaging, Raman spectroscopy, super resolution microscopy as well as a wide variety of modern fluorescence microscopy techniques thanks to greater SNR in low-light conditions in comparison with traditional CCDs and ICCDs. In terms of noise, commercial EMCCD cameras typically have clock-induced charge (CIC) and dark current (dependent on the extent of cooling) that together lead to an effective readout noise ranging from 0.01 to 1 electrons per pixel read. However, recent improvements in EMCCD technology have led to a new generation of cameras capable of producing significantly less CIC, higher charge transfer efficiency and an EM gain 5 times higher than what was previously available. These advances in low-light detection lead to an effective total background noise of 0.001 electrons per pixel read, a noise floor unmatched by any other low-light imaging device. Use in astronomy Due to the high quantum efficiencies of charge-coupled devices (the ideal quantum efficiency is 100%, one generated electron per incident photon), linearity of their outputs, ease of use compared to photographic plates, and a variety of other reasons, CCDs were very rapidly adopted by astronomers for nearly all UV-to-infrared applications. Thermal noise and cosmic rays may alter the pixels in the CCD array. To counter such effects, astronomers take several exposures with the CCD shutter closed and open. The average of images taken with the shutter closed is necessary to lower the random noise. This dark-frame average image is then subtracted from the open-shutter image to remove the dark current and other systematic defects (dead pixels, hot pixels, etc.) in the CCD. Newer Skipper CCDs counter noise by measuring the same collected charge multiple times and have applications in precision searches for light dark matter and in neutrino measurements. The Hubble Space Telescope, in particular, has a highly developed series of steps ("data reduction pipeline") to convert the raw CCD data to useful images. CCD cameras used in astrophotography often require sturdy mounts to cope with vibrations from wind and other sources, along with the tremendous weight of most imaging platforms. To take long exposures of galaxies and nebulae, many astronomers use a technique known as auto-guiding. Most autoguiders use a second CCD chip to monitor deviations during imaging. This chip can rapidly detect errors in tracking and command the mount motors to correct for them. An unusual astronomical application of CCDs, called drift-scanning, uses a CCD to make a fixed telescope behave like a tracking telescope and follow the motion of the sky. The charges in the CCD are transferred and read in a direction parallel to the motion of the sky, and at the same speed. In this way, the telescope can image a larger region of the sky than its normal field of view. The Sloan Digital Sky Survey is the most famous example of this, using the technique to produce a survey of over a quarter of the sky. 
The Gaia space telescope is another instrument operating in this mode, rotating about its axis at a constant rate of 1 revolution in 6 hours and scanning a 360° by 0.5° strip on the sky during this time; a star traverses the entire focal plane in about 40 seconds (effective exposure time). In addition to imagers, CCDs are also used in an array of analytical instrumentation including spectrometers and interferometers. Color cameras Digital color cameras, including the digital color cameras in smartphones, generally use an integral color image sensor, which has a color filter array fabricated on top of the monochrome pixels of the CCD. The most popular CFA pattern is known as the Bayer filter, which is named for its inventor, Kodak scientist Bryce Bayer. In the Bayer pattern, each square of four pixels has one filtered red, one blue, and two green pixels (the human eye has greater acuity for luminance, which is more heavily weighted in green than in either red or blue). As a result, the luminance information is collected in each row and column using a checkerboard pattern, and the color resolution is lower than the luminance resolution. Better color separation can be reached by three-CCD devices (3CCD) and a dichroic beam splitter prism that splits the image into red, green and blue components. Each of the three CCDs is arranged to respond to a particular color. Many professional video camcorders, and some semi-professional camcorders, use this technique, although developments in competing CMOS technology have made CMOS sensors, both with beam-splitters and Bayer filters, increasingly popular in high-end video and digital cinema cameras. Another advantage of 3CCD over a Bayer mask device is higher quantum efficiency (higher light sensitivity), because most of the light from the lens enters one of the silicon sensors, while a Bayer mask absorbs a high proportion (more than 2/3) of the light falling on each pixel location. For still scenes, for instance in microscopy, the resolution of a Bayer mask device can be enhanced by microscanning technology. During the process of color co-site sampling, several frames of the scene are produced. Between acquisitions, the sensor is moved in pixel dimensions, so that each point in the visual field is acquired consecutively by elements of the mask that are sensitive to the red, green, and blue components of its color. Eventually every pixel in the image has been scanned at least once in each color and the resolution of the three channels becomes equivalent (the resolutions of the red and blue channels are quadrupled while the green channel is doubled). Sensor sizes Sensors (CCD / CMOS) come in various sizes, or image sensor formats. These sizes are often referred to with an inch fraction designation such as 1/1.8″ or 2/3″, called the optical format. This measurement dates back to the 1950s and the time of Vidicon tubes. Blooming When a CCD exposure is long enough, eventually the electrons that collect in the "bins" in the brightest part of the image will overflow the bin, resulting in blooming. The structure of the CCD allows the electrons to flow more easily in one direction than another, resulting in vertical streaking. Some anti-blooming features that can be built into a CCD reduce its sensitivity to light by using some of the pixel area for a drain structure. James M. Early developed a vertical anti-blooming drain that would not detract from the light collection area, and so did not reduce light sensitivity.
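As a rough illustration of how a Bayer color filter array samples a scene and how the missing color values can be interpolated afterwards, the sketch below mosaics an RGB image with an RGGB layout and then demosaics it with naive neighbour averaging. The RGGB ordering, the function names, and the box-filter interpolation are illustrative choices only; production cameras use considerably more sophisticated demosaicing.

import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image through an RGGB Bayer pattern: each pixel keeps
    only the colour channel its filter passes (R, G, or B)."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red sites:   even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green sites: even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green sites: odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue sites:  odd rows, odd cols
    return mosaic

def demosaic_bilinear(mosaic):
    """Very naive demosaicing: for each channel, keep the sampled pixels and
    fill the gaps by averaging whichever samples fall inside a 3x3 window."""
    h, w = mosaic.shape
    out = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True               # red sites
    masks[0::2, 1::2, 1] = True               # green sites
    masks[1::2, 0::2, 1] = True
    masks[1::2, 1::2, 2] = True               # blue sites
    for ch in range(3):
        vals = np.where(masks[:, :, ch], mosaic, 0.0)
        weights = masks[:, :, ch].astype(float)
        window_sum = np.zeros_like(vals)
        window_cnt = np.zeros_like(weights)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                window_sum += np.roll(np.roll(vals, dy, axis=0), dx, axis=1)
                window_cnt += np.roll(np.roll(weights, dy, axis=0), dx, axis=1)
        out[:, :, ch] = window_sum / np.maximum(window_cnt, 1)
    return out

# Example: round-trip a random scene through the mosaic and back
scene = np.random.default_rng(1).random((8, 8, 3))
recovered = demosaic_bilinear(bayer_mosaic(scene))

Note that np.roll wraps around at the image edges, so the border pixels of this toy reconstruction are not meaningful; the point is only that green is sampled twice as densely as red or blue, exactly as the Bayer pattern described above implies.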
Technology
Optical instruments
null
6806
https://en.wikipedia.org/wiki/Computer%20memory
Computer memory
Computer memory stores information, such as data and programs, for immediate use in the computer. The term memory is often synonymous with the terms RAM, main memory, or primary storage. Archaic synonyms for main memory include core (for magnetic core memory) and store. Main memory operates at a high speed compared to mass storage which is slower but less expensive per bit and higher in capacity. Besides storing opened programs and data being actively processed, computer memory serves as a mass storage cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it is not needed by running software. If needed, contents of the computer memory can be transferred to storage; a common way of doing this is through a memory management technique called virtual memory. Modern computer memory is implemented as semiconductor memory, where data is stored within memory cells built from MOS transistors and other components on an integrated circuit. There are two main kinds of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are flash memory and ROM, PROM, EPROM, and EEPROM memory. Examples of volatile memory are dynamic random-access memory (DRAM) used for primary storage and static random-access memory (SRAM) used mainly for CPU cache. Most semiconductor memory is organized into memory cells each storing one bit (0 or 1). Flash memory organization includes both one bit per memory cell and a multi-level cell capable of storing multiple bits per cell. The memory cells are grouped into words of fixed word length, for example, 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. History In the early 1940s, memory technology often permitted a capacity of a few bytes. The first electronic programmable digital computer, the ENIAC, using thousands of vacuum tubes, could perform simple calculations involving 20 numbers of ten decimal digits stored in the vacuum tubes. The next significant advance in computer memory came with acoustic delay-line memory, developed by J. Presper Eckert in the early 1940s. Through the construction of a glass tube filled with mercury and plugged at each end with a quartz crystal, delay lines could store bits of information in the form of sound waves propagating through the mercury, with the quartz crystals acting as transducers to read and write bits. Delay-line memory was limited to a capacity of up to a few thousand bits. Two alternatives to the delay line, the Williams tube and Selectron tube, originated in 1946, both using electron beams in glass tubes as means of storage. Using cathode-ray tubes, Fred Williams invented the Williams tube, which was the first random-access computer memory. The Williams tube was able to store more information than the Selectron tube (the Selectron was limited to 256 bits, while the Williams tube could store thousands) and was less expensive. The Williams tube was nevertheless frustratingly sensitive to environmental disturbances. Efforts began in the late 1940s to find non-volatile memory. Magnetic-core memory allowed for memory recall after power loss. It was developed by Frederick W. Viehe and An Wang in the late 1940s, and improved by Jay Forrester and Jan A. Rajchman in the early 1950s, before being commercialized with the Whirlwind I computer in 1953. 
Magnetic-core memory was the dominant form of memory until the development of MOS semiconductor memory in the 1960s. The first semiconductor memory was implemented as a flip-flop circuit in the early 1960s using bipolar transistors. Semiconductor memory made from discrete devices was first shipped by Texas Instruments to the United States Air Force in 1961. In the same year, the concept of solid-state memory on an integrated circuit (IC) chip was proposed by applications engineer Bob Norman at Fairchild Semiconductor. The first bipolar semiconductor memory IC chip was the SP95 introduced by IBM in 1965. While semiconductor memory offered improved performance over magnetic-core memory, it remained larger and more expensive and did not displace magnetic-core memory until the late 1960s. MOS memory The invention of the metal–oxide–semiconductor field-effect transistor (MOSFET) enabled the practical use of metal–oxide–semiconductor (MOS) transistors as memory cell storage elements. MOS memory was developed by John Schmidt at Fairchild Semiconductor in 1964. In addition to higher performance, MOS semiconductor memory was cheaper and consumed less power than magnetic core memory. In 1965, J. Wood and R. Ball of the Royal Radar Establishment proposed digital storage systems that use CMOS (complementary MOS) memory cells, in addition to MOSFET power devices for the power supply, switched cross-coupling, switches and delay-line storage. The development of silicon-gate MOS integrated circuit (MOS IC) technology by Federico Faggin at Fairchild in 1968 enabled the production of MOS memory chips. NMOS memory was commercialized by IBM in the early 1970s. MOS memory overtook magnetic core memory as the dominant memory technology in the early 1970s. The two main types of volatile random-access memory (RAM) are static random-access memory (SRAM) and dynamic random-access memory (DRAM). Bipolar SRAM was invented by Robert Norman at Fairchild Semiconductor in 1963, followed by the development of MOS SRAM by John Schmidt at Fairchild in 1964. SRAM became an alternative to magnetic-core memory, but requires six transistors for each bit of data. Commercial use of SRAM began in 1965, when IBM introduced their SP95 SRAM chip for the System/360 Model 95. Toshiba introduced bipolar DRAM memory cells for its Toscal BC-1411 electronic calculator in 1965. While it offered improved performance, bipolar DRAM could not compete with the lower price of the then dominant magnetic-core memory. MOS technology is the basis for modern DRAM. In 1966, Robert H. Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory. While examining the characteristics of MOS technology, he found it was possible to build capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell. In 1967, Dennard filed a patent for a single-transistor DRAM memory cell based on MOS technology. This led to the first commercial DRAM IC chip, the Intel 1103 in October 1970. Synchronous dynamic random-access memory (SDRAM) later debuted with the Samsung KM48SL2000 chip in 1992. The term memory is also often used to refer to non-volatile memory including read-only memory (ROM) through modern flash memory. Programmable read-only memory (PROM) was invented by Wen Tsing Chow in 1956, while working for the Arma Division of the American Bosch Arma Corporation. 
In 1967, Dawon Kahng and Simon Sze of Bell Labs proposed that the floating gate of a MOS semiconductor device could be used for the cell of a reprogrammable ROM, which led to Dov Frohman of Intel inventing EPROM (erasable PROM) in 1971. EEPROM (electrically erasable PROM) was developed by Yasuo Tarui, Yutaka Hayashi and Kiyoko Naga at the Electrotechnical Laboratory in 1972. Flash memory was invented by Fujio Masuoka at Toshiba in the early 1980s. Masuoka and colleagues presented the invention of NOR flash in 1984, and then NAND flash in 1987. Toshiba commercialized NAND flash memory in 1987. Developments in technology and economies of scale have made possible so-called very large memory (VLM) computers. Volatility categories Volatile memory Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). DRAM dominates for desktop system memory. SRAM is used for CPU cache. SRAM is also found in small embedded systems requiring little memory. SRAM retains its contents as long as the power is connected and may use a simpler interface, but commonly uses six transistors per bit. Dynamic RAM is more complicated for interfacing and control, needing regular refresh cycles to prevent losing its contents, but uses only one transistor and one capacitor per bit, allowing it to reach much higher densities and much cheaper per-bit costs. Non-volatile memory Non-volatile memory can retain the stored information even when not powered. Examples of non-volatile memory include read-only memory, flash memory, most types of magnetic computer storage devices (e.g. hard disk drives, floppy disks and magnetic tape), optical discs, and early computer storage methods such as magnetic drum, paper tape and punched cards. Non-volatile memory technologies under development include ferroelectric RAM, programmable metallization cell, spin-transfer torque magnetic RAM, SONOS, resistive random-access memory, racetrack memory, Nano-RAM, 3D XPoint, and millipede memory. Semi-volatile memory A third category of memory is semi-volatile. The term is used to describe a memory that has some limited non-volatile duration after power is removed, but then data is ultimately lost. A typical goal when using a semi-volatile memory is to provide the high performance and durability associated with volatile memories while providing some benefits of non-volatile memory. For example, some non-volatile memory types experience wear when written. A worn cell has increased volatility but otherwise continues to work. Data locations which are written frequently can thus be directed to use worn circuits. As long as the location is updated within some known retention time, the data stays valid. After a period of time without update, the value is copied to a less-worn circuit with longer retention. Writing first to the worn area allows a high write rate while avoiding wear on the not-worn circuits. As a second example, an STT-RAM can be made non-volatile by building large cells, but doing so raises the cost per bit and power requirements and reduces the write speed. Using small cells improves cost, power, and speed, but leads to semi-volatile behavior. 
In some applications, the increased volatility can be managed to provide many benefits of a non-volatile memory, for example by removing power but forcing a wake-up before data is lost; or by caching read-only data and discarding the cached data if the power-off time exceeds the non-volatile threshold. The term semi-volatile is also used to describe semi-volatile behavior constructed from other memory types, such as nvSRAM, which combines SRAM and a non-volatile memory on the same chip, where an external signal copies data from the volatile memory to the non-volatile memory, but if power is removed before the copy occurs, the data is lost. Another example is battery-backed RAM, which uses an external battery to power the memory device in case of external power loss. If power is off for an extended period of time, the battery may run out, resulting in data loss. Management Proper management of memory is vital for a computer system to operate properly. Modern operating systems have complex systems to properly manage memory. Failure to do so can lead to bugs or slow performance. Bugs Improper management of memory is a common cause of bugs and security vulnerabilities, including the following types: A memory leak occurs when a program requests memory from the operating system and never returns the memory when it is done with it. A program with this bug will gradually require more and more memory until the program fails as the operating system runs out. A segmentation fault results when a program tries to access memory that it does not have permission to access. Generally, a program doing so will be terminated by the operating system. A buffer overflow occurs when a program writes data to the end of its allocated space and then continues to write data beyond this to memory that has been allocated for other purposes. This may result in erratic program behavior, including memory access errors, incorrect results, a crash, or a breach of system security. They are thus the basis of many software vulnerabilities and can be maliciously exploited. Virtual memory Virtual memory is a system where physical memory is managed by the operating system typically with assistance from a memory management unit, which is part of many modern CPUs. It allows multiple types of memory to be used. For example, some data can be stored in RAM while other data is stored on a hard drive (e.g. in a swapfile), functioning as an extension of the cache hierarchy. This offers several advantages. Computer programmers no longer need to worry about where their data is physically stored or whether the user's computer will have enough memory. The operating system will place actively used data in RAM, which is much faster than hard disks. When the amount of RAM is not sufficient to run all the current programs, it can result in a situation where the computer spends more time moving data from RAM to disk and back than it does accomplishing tasks; this is known as thrashing. Protected memory Protected memory is a system where each program is given an area of memory to use and is prevented from going outside that range. If the operating system detects that a program has tried to alter memory that does not belong to it, the program is terminated (or otherwise restricted or redirected). This way, only the offending program crashes, and other programs are not affected by the misbehavior (whether accidental or intentional). Use of protected memory greatly enhances both the reliability and security of a computer system. 
Without protected memory, it is possible that a bug in one program will alter the memory used by another program. This will cause that other program to run off of corrupted memory with unpredictable results. If the operating system's memory is corrupted, the entire computer system may crash and need to be rebooted. At times programs intentionally alter the memory used by other programs. This is done by viruses and malware to take over computers. It may also be used benignly by desirable programs which are intended to modify other programs, debuggers, for example, to insert breakpoints or hooks.
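As a toy illustration of the address translation that underlies virtual memory, the sketch below splits a virtual address into a page number and an offset, looks the page number up in a made-up page table, and treats a missing entry as a page fault. The page size, the table contents, and the function name are hypothetical; on real hardware this translation is performed by the memory management unit, typically with multi-level page tables and a translation lookaside buffer.

PAGE_SIZE = 4096  # bytes; a 12-bit offset is a common but purely illustrative choice

# Hypothetical per-process page table: virtual page number -> physical frame number.
# A missing entry means the page is not resident in RAM (e.g. swapped out to disk).
page_table = {0: 7, 1: 3, 4: 12}

def translate(virtual_address):
    """Translate a virtual address to a physical address, or raise on a page fault."""
    page_number, offset = divmod(virtual_address, PAGE_SIZE)
    if page_number not in page_table:
        raise LookupError(f"page fault: virtual page {page_number} not resident")
    frame = page_table[page_number]
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # virtual page 1, offset 0x234 -> frame 3 -> 0x3234
try:
    translate(3 * PAGE_SIZE)    # virtual page 3 has no mapping -> page fault
except LookupError as error:
    print(error)

In a real operating system the page-fault path would fetch the page from the swap file, update the table, and retry the access, which is what lets the hard drive act as an extension of RAM as described above.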
Technology
Data storage and memory
null
6813
https://en.wikipedia.org/wiki/Chandrasekhar%20limit
Chandrasekhar limit
The Chandrasekhar limit is the maximum mass of a stable white dwarf star. The currently accepted value of the Chandrasekhar limit is about 1.4 solar masses. The limit was named after Subrahmanyan Chandrasekhar. White dwarfs resist gravitational collapse primarily through electron degeneracy pressure, compared to main sequence stars, which resist collapse through thermal pressure. The Chandrasekhar limit is the mass above which electron degeneracy pressure in the star's core is insufficient to balance the star's own gravitational self-attraction. Physics Normal stars fuse gravitationally compressed hydrogen into helium, generating vast amounts of heat. As the hydrogen is consumed, the star's core compresses further, allowing helium and heavier nuclei to fuse, ultimately resulting in stable iron nuclei, a process called stellar evolution. The next step depends upon the mass of the star. Stars below the Chandrasekhar limit become stable white dwarf stars, remaining that way throughout the rest of the history of the universe (assuming the absence of external forces). Stars above the limit can become neutron stars or black holes. The Chandrasekhar limit is a consequence of competition between gravity and electron degeneracy pressure. Electron degeneracy pressure is a quantum-mechanical effect arising from the Pauli exclusion principle. Since electrons are fermions, no two electrons can be in the same state, so not all electrons can be in the minimum-energy level. Rather, electrons must occupy a band of energy levels. Compression of the electron gas increases the number of electrons in a given volume and raises the maximum energy level in the occupied band. Therefore, the energy of the electrons increases on compression, so pressure must be exerted on the electron gas to compress it, producing electron degeneracy pressure. With sufficient compression, electrons are forced into nuclei in the process of electron capture, relieving the pressure. In the nonrelativistic case, electron degeneracy pressure gives rise to an equation of state of the form P = K1 ρ^(5/3), where P is the pressure, ρ is the mass density, and K1 is a constant. Solving the hydrostatic equation leads to a model white dwarf that is a polytrope of index 3/2 – and therefore has radius inversely proportional to the cube root of its mass, and volume inversely proportional to its mass. As the mass of a model white dwarf increases, the typical energies to which degeneracy pressure forces the electrons are no longer negligible relative to their rest masses. The velocities of the electrons approach the speed of light, and special relativity must be taken into account. In the strongly relativistic limit, the equation of state takes the form P = K2 ρ^(4/3). This yields a polytrope of index 3, which has a total mass, Mlimit, depending only on K2. For a fully relativistic treatment, the equation of state used interpolates between the equations P = K1 ρ^(5/3) for small ρ and P = K2 ρ^(4/3) for large ρ. When this is done, the model radius still decreases with mass, but becomes zero at Mlimit. This is the Chandrasekhar limit. The curves of radius against mass for the non-relativistic and relativistic models are shown in the graph. They are colored blue and green, respectively. μe has been set equal to 2. Radius is measured in standard solar radii or kilometers, and mass in standard solar masses. Calculated values for the limit vary depending on the nuclear composition of the mass. 
Chandrasekhar gives the following expression, based on the equation of state for an ideal Fermi gas: Mlimit = (ω3^0 √(3π) / 2) (ħc/G)^(3/2) / (μe mH)^2, where: ħ is the reduced Planck constant, c is the speed of light, G is the gravitational constant, μe is the average molecular weight per electron, which depends upon the chemical composition of the star, mH is the mass of the hydrogen atom, and ω3^0 ≈ 2.018 is a constant connected with the solution to the Lane–Emden equation. As √(ħc/G) is the Planck mass, the limit is of the order of MPl^3 / (μe mH)^2. The limiting mass can be obtained formally from Chandrasekhar's white dwarf equation by taking the limit of large central density. A more accurate value of the limit than that given by this simple model requires adjusting for various factors, including electrostatic interactions between the electrons and nuclei and effects caused by nonzero temperature. Lieb and Yau have given a rigorous derivation of the limit from a relativistic many-particle Schrödinger equation. History In 1926, the British physicist Ralph H. Fowler observed that the relationship between the density, energy, and temperature of white dwarfs could be explained by viewing them as a gas of nonrelativistic, non-interacting electrons and nuclei that obey Fermi–Dirac statistics. This Fermi gas model was then used by the British physicist Edmund Clifton Stoner in 1929 to calculate the relationship among the mass, radius, and density of white dwarfs, assuming they were homogeneous spheres. Wilhelm Anderson applied a relativistic correction to this model, giving rise to a maximum possible mass of approximately . In 1930, Stoner derived the internal energy–density equation of state for a Fermi gas, and was then able to treat the mass–radius relationship in a fully relativistic manner, giving a limiting mass of approximately (for ). Stoner went on to derive the pressure–density equation of state, which he published in 1932. These equations of state were also previously published by the Soviet physicist Yakov Frenkel in 1928, together with some other remarks on the physics of degenerate matter. Frenkel's work, however, was ignored by the astronomical and astrophysical community. A series of papers published between 1931 and 1935 had its beginning on a trip from India to England in 1930, where the Indian physicist Subrahmanyan Chandrasekhar worked on the calculation of the statistics of a degenerate Fermi gas. In these papers, Chandrasekhar solved the hydrostatic equation together with the nonrelativistic Fermi gas equation of state, and also treated the case of a relativistic Fermi gas, giving rise to the value of the limit shown above. Chandrasekhar reviews this work in his Nobel Prize lecture. The existence of a related limit, based on the conceptual breakthrough of combining relativity with Fermi degeneracy, was first established in separate papers published by Wilhelm Anderson and E. C. Stoner for a uniform density star in 1929. Eric G. Blackman wrote that the roles of Stoner and Anderson in the discovery of mass limits were overlooked when Freeman Dyson wrote a biography of Chandrasekhar. Michael Nauenberg claims that Stoner established the mass limit first. The priority dispute has also been discussed at length by Virginia Trimble who writes that: "Chandrasekhar famously, perhaps even notoriously did his critical calculation on board ship in 1930, and ... was not aware of either Stoner's or Anderson's work at the time. 
His work was therefore independent, but, more to the point, he adopted Eddington's polytropes for his models which could, therefore, be in hydrostatic equilibrium, which constant density stars cannot, and real ones must be." This value was also computed in 1932 by the Soviet physicist Lev Landau, who, however, did not apply it to white dwarfs and concluded that quantum laws might be invalid for stars heavier than 1.5 solar masses. Chandrasekhar–Eddington dispute Chandrasekhar's work on the limit aroused controversy, owing to the opposition of the British astrophysicist Arthur Eddington. Eddington was aware that the existence of black holes was theoretically possible, and also realized that the existence of the limit made their formation possible. However, he was unwilling to accept that this could happen. After a talk by Chandrasekhar on the limit in 1935, he replied: Eddington's proposed solution to the perceived problem was to modify relativistic mechanics so as to make the law P = K1 ρ^(5/3) universally applicable, even for large ρ. Although Niels Bohr, Fowler, Wolfgang Pauli, and other physicists agreed with Chandrasekhar's analysis, at the time, owing to Eddington's status, they were unwilling to publicly support Chandrasekhar. Through the rest of his life, Eddington held to his position in his writings, including his work on his fundamental theory. The drama associated with this disagreement is one of the main themes of Empire of the Stars, Arthur I. Miller's biography of Chandrasekhar. In Miller's view: However, Chandrasekhar chose to move on, leaving the study of stellar structure to focus on stellar dynamics. In 1983, in recognition of his work, Chandrasekhar shared a Nobel Prize "for his theoretical studies of the physical processes of importance to the structure and evolution of the stars" with William Alfred Fowler. Applications The core of a star is kept from collapsing by the heat generated by the fusion of nuclei of lighter elements into heavier ones. At various stages of stellar evolution, the nuclei required for this process are exhausted, and the core collapses, causing it to become denser and hotter. A critical situation arises when iron accumulates in the core, since iron nuclei are incapable of generating further energy through fusion. If the core becomes sufficiently dense, electron degeneracy pressure will play a significant part in stabilizing it against gravitational collapse. If a main-sequence star is not too massive (less than approximately 8 solar masses), it eventually sheds enough mass to form a white dwarf having mass below the Chandrasekhar limit, which consists of the former core of the star. For more-massive stars, electron degeneracy pressure does not keep the iron core from collapsing to very great density, leading to formation of a neutron star, black hole, or, speculatively, a quark star. (For very massive, low-metallicity stars, it is also possible that instabilities destroy the star completely.) During the collapse, neutrons are formed by the capture of electrons by protons in the process of electron capture, leading to the emission of neutrinos. The decrease in gravitational potential energy of the collapsing core releases a large amount of energy on the order of 10^46 joules (100 foes). Most of this energy is carried away by the emitted neutrinos and the kinetic energy of the expanding shell of gas; only about 1% is emitted as optical light. This process is believed responsible for supernovae of types Ib, Ic, and II. 
Type Ia supernovae derive their energy from runaway fusion of the nuclei in the interior of a white dwarf. This fate may befall carbon–oxygen white dwarfs that accrete matter from a companion giant star, leading to a steadily increasing mass. As the white dwarf's mass approaches the Chandrasekhar limit, its central density increases, and, as a result of compressional heating, its temperature also increases. This eventually ignites nuclear fusion reactions, leading to an immediate carbon detonation, which disrupts the star and causes the supernova. A strong indication of the reliability of Chandrasekhar's formula is that the absolute magnitudes of supernovae of Type Ia are all approximately the same; at maximum luminosity, MV is approximately −19.3, with a standard deviation of no more than 0.3. A 1-sigma interval therefore represents a factor of less than 2 in luminosity. This seems to indicate that all type Ia supernovae convert approximately the same amount of mass to energy. Super-Chandrasekhar mass supernovas In April 2003, the Supernova Legacy Survey observed a type Ia supernova, designated SNLS-03D3bb, in a galaxy approximately 4 billion light years away. According to a group of astronomers at the University of Toronto and elsewhere, the observations of this supernova are best explained by assuming that it arose from a white dwarf that had grown to twice the mass of the Sun before exploding. They believe that the star, dubbed the "Champagne Supernova", may have been spinning so fast that a centrifugal tendency allowed it to exceed the limit. Alternatively, the supernova may have resulted from the merger of two white dwarfs, so that the limit was only violated momentarily. Nevertheless, they point out that this observation poses a challenge to the use of type Ia supernovae as standard candles. Since the observation of the Champagne Supernova in 2003, several more type Ia supernovae have been observed that are very bright, and thought to have originated from white dwarfs whose masses exceeded the Chandrasekhar limit. These include SN 2006gz, SN 2007if, and SN 2009dc. The super-Chandrasekhar mass white dwarfs that gave rise to these supernovae are believed to have had masses up to 2.4–2.8 solar masses. One way to potentially explain the problem of the Champagne Supernova was considering it the result of an aspherical explosion of a white dwarf. However, spectropolarimetric observations of SN 2009dc showed it had a polarization smaller than 0.3, making the large asphericity theory unlikely. Tolman–Oppenheimer–Volkoff limit Stars sufficiently massive to pass the Chandrasekhar limit provided by electron degeneracy pressure do not become white dwarf stars. Instead they explode as supernovae. If the final mass is below the Tolman–Oppenheimer–Volkoff limit, then neutron degeneracy pressure contributes to the balance against gravity and the result will be a neutron star; but if the total mass is above the Tolman–Oppenheimer–Volkoff limit, the result will be a black hole.
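As a quick numerical check of the expression quoted in the Physics section above, the following sketch evaluates Mlimit = (ω3^0 √(3π)/2)(ħc/G)^(3/2)/(μe mH)^2 for μe = 2. The constant values are typed in by hand and rounded, so the result should be read only as reproducing the commonly quoted figure of roughly 1.4 solar masses, not as a precision calculation.

import math

# Physical constants (SI units, approximate values)
hbar  = 1.054571817e-34   # reduced Planck constant, J s
c     = 2.99792458e8      # speed of light, m/s
G     = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
m_H   = 1.6735e-27        # mass of the hydrogen atom, kg
M_sun = 1.98892e30        # solar mass, kg
omega_3_0 = 2.018236      # constant from the n = 3 Lane-Emden solution
mu_e  = 2.0               # mean molecular weight per electron (e.g. a C/O white dwarf)

M_limit = (omega_3_0 * math.sqrt(3 * math.pi) / 2
           * (hbar * c / G) ** 1.5
           / (mu_e * m_H) ** 2)

print(f"Chandrasekhar limit ≈ {M_limit:.3e} kg ≈ {M_limit / M_sun:.2f} solar masses")

With these inputs the script prints a value close to 1.4 solar masses, consistent with the limit stated at the start of the article and with the scaling MPl^3/(μe mH)^2 noted above.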
Physical sciences
Stellar astronomy
Astronomy
6818
https://en.wikipedia.org/wiki/Citric%20acid%20cycle
Citric acid cycle
The citric acid cycle—also known as the Krebs cycle, Szent–Györgyi–Krebs cycle, or TCA cycle (tricarboxylic acid cycle)—is a series of biochemical reactions to release the energy stored in nutrients through the oxidation of acetyl-CoA derived from carbohydrates, fats, proteins, and alcohol. The chemical energy released is available in the form of ATP. The Krebs cycle is used by organisms that respire (as opposed to organisms that ferment) to generate energy, either by anaerobic respiration or aerobic respiration. In addition, the cycle provides precursors of certain amino acids, as well as the reducing agent NADH, that are used in numerous other reactions. Its central importance to many biochemical pathways suggests that it was one of the earliest components of metabolism. Even though it is branded as a "cycle", it is not necessary for metabolites to follow only one specific route; at least three alternative segments of the citric acid cycle have been recognized. The name of this metabolic pathway is derived from the citric acid (a tricarboxylic acid, often called citrate, as the ionized form predominates at biological pH) that is consumed and then regenerated by this sequence of reactions to complete the cycle. The cycle consumes acetate (in the form of acetyl-CoA) and water, reduces NAD+ to NADH, releasing carbon dioxide. The NADH generated by the citric acid cycle is fed into the oxidative phosphorylation (electron transport) pathway. The net result of these two closely linked pathways is the oxidation of nutrients to produce usable chemical energy in the form of ATP. In eukaryotic cells, the citric acid cycle occurs in the matrix of the mitochondrion. In prokaryotic cells, such as bacteria, which lack mitochondria, the citric acid cycle reaction sequence is performed in the cytosol with the proton gradient for ATP production being across the cell's surface (plasma membrane) rather than the inner membrane of the mitochondrion. For each pyruvate molecule (from glycolysis), the overall yield of energy-containing compounds from the citric acid cycle is three NADH, one FADH2, and one GTP. Discovery Several of the components and reactions of the citric acid cycle were established in the 1930s by the research of Albert Szent-Györgyi, who received the Nobel Prize in Physiology or Medicine in 1937 specifically for his discoveries pertaining to fumaric acid, a component of the cycle. He made this discovery by studying pigeon breast muscle. Because this tissue maintains its oxidative capacity well after breaking down in the Latapie mincer and releasing in aqueous solutions, breast muscle of the pigeon was very well qualified for the study of oxidative reactions. The citric acid cycle itself was finally identified in 1937 by Hans Adolf Krebs and William Arthur Johnson while at the University of Sheffield, for which the former received the Nobel Prize for Physiology or Medicine in 1953, and for whom the cycle is sometimes named the "Krebs cycle". Overview The citric acid cycle is a metabolic pathway that connects carbohydrate, fat, and protein metabolism. The reactions of the cycle are carried out by eight enzymes that completely oxidize acetate (a two carbon molecule), in the form of acetyl-CoA, into two molecules each of carbon dioxide and water. Through catabolism of sugars, fats, and proteins, the two-carbon organic product acetyl-CoA is produced which enters the citric acid cycle. 
The reactions of the cycle also convert three equivalents of nicotinamide adenine dinucleotide (NAD+) into three equivalents of reduced NAD (NADH), one equivalent of flavin adenine dinucleotide (FAD) into one equivalent of FADH2, and one equivalent each of guanosine diphosphate (GDP) and inorganic phosphate (Pi) into one equivalent of guanosine triphosphate (GTP). The NADH and FADH2 generated by the citric acid cycle are, in turn, used by the oxidative phosphorylation pathway to generate energy-rich ATP. One of the primary sources of acetyl-CoA is from the breakdown of sugars by glycolysis which yield pyruvate that in turn is decarboxylated by the pyruvate dehydrogenase complex generating acetyl-CoA according to the following reaction scheme: The product of this reaction, acetyl-CoA, is the starting point for the citric acid cycle. Acetyl-CoA may also be obtained from the oxidation of fatty acids. Below is a schematic outline of the cycle: The citric acid cycle begins with the transfer of a two-carbon acetyl group from acetyl-CoA to the four-carbon acceptor compound (oxaloacetate) to form a six-carbon compound (citrate). The citrate then goes through a series of chemical transformations, losing two carboxyl groups as CO2. The carbons lost as CO2 originate from what was oxaloacetate, not directly from acetyl-CoA. The carbons donated by acetyl-CoA become part of the oxaloacetate carbon backbone after the first turn of the citric acid cycle. Loss of the acetyl-CoA-donated carbons as CO2 requires several turns of the citric acid cycle. However, because of the role of the citric acid cycle in anabolism, they might not be lost, since many citric acid cycle intermediates are also used as precursors for the biosynthesis of other molecules. Most of the electrons made available by the oxidative steps of the cycle are transferred to NAD+, forming NADH. For each acetyl group that enters the citric acid cycle, three molecules of NADH are produced. The citric acid cycle includes a series of redox reactions in mitochondria. In addition, electrons from the succinate oxidation step are transferred first to the FAD cofactor of succinate dehydrogenase, reducing it to FADH2, and eventually to ubiquinone (Q) in the mitochondrial membrane, reducing it to ubiquinol (QH2) which is a substrate of the electron transfer chain at the level of Complex III. For every NADH and FADH2 that are produced in the citric acid cycle, 2.5 and 1.5 ATP molecules are generated in oxidative phosphorylation, respectively. At the end of each cycle, the four-carbon oxaloacetate has been regenerated, and the cycle continues. Steps There are ten basic steps in the citric acid cycle, as outlined below. The cycle is continuously supplied with new carbon in the form of acetyl-CoA, entering at step 0 in the table. Two carbon atoms are oxidized to CO2, the energy from these reactions is transferred to other metabolic processes through GTP (or ATP), and as electrons in NADH and QH2. The NADH generated in the citric acid cycle may later be oxidized (donate its electrons) to drive ATP synthesis in a type of process called oxidative phosphorylation. FADH2 is covalently attached to succinate dehydrogenase, an enzyme which functions both in the citric acid cycle and the mitochondrial electron transport chain in oxidative phosphorylation. 
FADH2, therefore, facilitates transfer of electrons to coenzyme Q, which is the final electron acceptor of the reaction catalyzed by the succinate:ubiquinone oxidoreductase complex, also acting as an intermediate in the electron transport chain. Mitochondria in animals, including humans, possess two succinyl-CoA synthetases: one that produces GTP from GDP, and another that produces ATP from ADP. Plants have the type that produces ATP (ADP-forming succinyl-CoA synthetase). Several of the enzymes in the cycle may be loosely associated in a multienzyme protein complex within the mitochondrial matrix. The GTP that is formed by GDP-forming succinyl-CoA synthetase may be utilized by nucleoside-diphosphate kinase to form ATP (the catalyzed reaction is GTP + ADP → GDP + ATP). Products Products of the first turn of the cycle are one GTP (or ATP), three NADH, one FADH2 and two CO2. Because two acetyl-CoA molecules are produced from each glucose molecule, two cycles are required per glucose molecule. Therefore, at the end of two cycles, the products are: two GTP, six NADH, two FADH2, and four CO2. The above reactions are balanced if Pi represents the H2PO4− ion, ADP and GDP the ADP2− and GDP2− ions, respectively, and ATP and GTP the ATP3− and GTP3− ions, respectively. The total number of ATP molecules obtained after complete oxidation of one glucose in glycolysis, citric acid cycle, and oxidative phosphorylation is estimated to be between 30 and 38. Efficiency The theoretical maximum yield of ATP through oxidation of one molecule of glucose in glycolysis, citric acid cycle, and oxidative phosphorylation is 38 (assuming 3 molar equivalents of ATP per equivalent NADH and 2 ATP per FADH2). In eukaryotes, two equivalents of NADH and two equivalents of ATP are generated in glycolysis, which takes place in the cytoplasm. If transported using the glycerol phosphate shuttle rather than the malate–aspartate shuttle, transport of two of these equivalents of NADH into the mitochondria effectively consumes two equivalents of ATP, thus reducing the net production of ATP to 36. Furthermore, inefficiencies in oxidative phosphorylation due to leakage of protons across the mitochondrial membrane and slippage of the ATP synthase/proton pump commonly reduces the ATP yield from NADH and FADH2 to less than the theoretical maximum yield. The observed yields are, therefore, closer to ~2.5 ATP per NADH and ~1.5 ATP per FADH2, further reducing the total net production of ATP to approximately 30. An assessment of the total ATP yield with newly revised proton-to-ATP ratios provides an estimate of 29.85 ATP per glucose molecule. Variation While the citric acid cycle is in general highly conserved, there is significant variability in the enzymes found in different taxa (note that the diagrams on this page are specific to the mammalian pathway variant). Some differences exist between eukaryotes and prokaryotes. The conversion of D-threo-isocitrate to 2-oxoglutarate is catalyzed in eukaryotes by the NAD+-dependent EC 1.1.1.41, while prokaryotes employ the NADP+-dependent EC 1.1.1.42. Similarly, the conversion of (S)-malate to oxaloacetate is catalyzed in eukaryotes by the NAD+-dependent EC 1.1.1.37, while most prokaryotes utilize a quinone-dependent enzyme, EC 1.1.5.4. A step with significant variability is the conversion of succinyl-CoA to succinate. Most organisms utilize EC 6.2.1.5, succinate–CoA ligase (ADP-forming) (despite its name, the enzyme operates in the pathway in the direction of ATP formation). 
In mammals a GTP-forming enzyme, succinate–CoA ligase (GDP-forming) (EC 6.2.1.4) also operates. The level of utilization of each isoform is tissue dependent. In some acetate-producing bacteria, such as Acetobacter aceti, an entirely different enzyme catalyzes this conversion – EC 2.8.3.18, succinyl-CoA:acetate CoA-transferase. This specialized enzyme links the TCA cycle with acetate metabolism in these organisms. Some bacteria, such as Helicobacter pylori, employ yet another enzyme for this conversion – succinyl-CoA:acetoacetate CoA-transferase (EC 2.8.3.5). Some variability also exists at the previous step – the conversion of 2-oxoglutarate to succinyl-CoA. While most organisms utilize the ubiquitous NAD+-dependent 2-oxoglutarate dehydrogenase, some bacteria utilize a ferredoxin-dependent 2-oxoglutarate synthase (EC 1.2.7.3). Other organisms, including obligately autotrophic and methanotrophic bacteria and archaea, bypass succinyl-CoA entirely, and convert 2-oxoglutarate to succinate via succinate semialdehyde, using EC 4.1.1.71, 2-oxoglutarate decarboxylase, and EC 1.2.1.79, succinate-semialdehyde dehydrogenase. In cancer, there are substantial metabolic derangements that occur to ensure the proliferation of tumor cells, and consequently metabolites can accumulate which serve to facilitate tumorigenesis, dubbed oncometabolites. Among the best characterized oncometabolites is 2-hydroxyglutarate which is produced through a heterozygous gain-of-function mutation (specifically a neomorphic one) in isocitrate dehydrogenase (IDH) (which under normal circumstances catalyzes the oxidation of isocitrate to oxalosuccinate, which then spontaneously decarboxylates to alpha-ketoglutarate, as discussed above; in this case an additional reduction step occurs after the formation of alpha-ketoglutarate via NADPH to yield 2-hydroxyglutarate), and hence IDH is considered an oncogene. Under physiological conditions, 2-hydroxyglutarate is a minor product of several metabolic pathways as an error but readily converted to alpha-ketoglutarate via hydroxyglutarate dehydrogenase enzymes (L2HGDH and D2HGDH) but does not have a known physiologic role in mammalian cells; of note, in cancer, 2-hydroxyglutarate is likely a terminal metabolite as isotope labelling experiments of colorectal cancer cell lines show that its conversion back to alpha-ketoglutarate is too low to measure. In cancer, 2-hydroxyglutarate serves as a competitive inhibitor for a number of enzymes that facilitate reactions via alpha-ketoglutarate in alpha-ketoglutarate-dependent dioxygenases. This mutation results in several important changes to the metabolism of the cell. For one thing, because there is an extra NADPH-catalyzed reduction, this can contribute to depletion of cellular stores of NADPH and also reduce levels of alpha-ketoglutarate available to the cell. In particular, the depletion of NADPH is problematic because NADPH is highly compartmentalized and cannot freely diffuse between the organelles in the cell. It is produced largely via the pentose phosphate pathway in the cytoplasm. The depletion of NADPH results in increased oxidative stress within the cell as it is a required cofactor in the production of GSH, and this oxidative stress can result in DNA damage. There are also changes on the genetic and epigenetic level through the function of histone lysine demethylases (KDMs) and ten-eleven translocation (TET) enzymes; ordinarily TETs hydroxylate 5-methylcytosines to prime them for demethylation. 
However, in the absence of alpha-ketoglutarate this cannot be done and there is hence hypermethylation of the cell's DNA, serving to promote epithelial-mesenchymal transition (EMT) and inhibit cellular differentiation. A similar phenomenon is observed for the Jumonji C family of KDMs which require a hydroxylation to perform demethylation at the epsilon-amino methyl group. Additionally, the inability of prolyl hydroxylases to catalyze reactions results in stabilization of hypoxia-inducible factor alpha, which is necessary to promote degradation of the latter (as under conditions of low oxygen there will not be adequate substrate for hydroxylation). This results in a pseudohypoxic phenotype in the cancer cell that promotes angiogenesis, metabolic reprogramming, cell growth, and migration. Regulation Allosteric regulation by metabolites. The regulation of the citric acid cycle is largely determined by product inhibition and substrate availability. If the cycle were permitted to run unchecked, large amounts of metabolic energy could be wasted in overproduction of reduced coenzyme such as NADH and ATP. The major eventual substrate of the cycle is ADP which gets converted to ATP. A reduced amount of ADP causes accumulation of precursor NADH which in turn can inhibit a number of enzymes. NADH, a product of all dehydrogenases in the citric acid cycle with the exception of succinate dehydrogenase, inhibits pyruvate dehydrogenase, isocitrate dehydrogenase, α-ketoglutarate dehydrogenase, and also citrate synthase. Acetyl-coA inhibits pyruvate dehydrogenase, while succinyl-CoA inhibits alpha-ketoglutarate dehydrogenase and citrate synthase. When tested in vitro with TCA enzymes, ATP inhibits citrate synthase and α-ketoglutarate dehydrogenase; however, ATP levels do not change more than 10% in vivo between rest and vigorous exercise. There is no known allosteric mechanism that can account for large changes in reaction rate from an allosteric effector whose concentration changes less than 10%. Citrate is used for feedback inhibition, as it inhibits phosphofructokinase, an enzyme involved in glycolysis that catalyses formation of fructose 1,6-bisphosphate, a precursor of pyruvate. This prevents a constant high rate of flux when there is an accumulation of citrate and a decrease in substrate for the enzyme. Regulation by calcium. Calcium is also used as a regulator in the citric acid cycle. Calcium levels in the mitochondrial matrix can reach up to the tens of micromolar levels during cellular activation. It activates pyruvate dehydrogenase phosphatase which in turn activates the pyruvate dehydrogenase complex. Calcium also activates isocitrate dehydrogenase and α-ketoglutarate dehydrogenase. This increases the reaction rate of many of the steps in the cycle, and therefore increases flux throughout the pathway. Transcriptional regulation. There is a link between intermediates of the citric acid cycle and the regulation of hypoxia-inducible factors (HIF). HIF plays a role in the regulation of oxygen homeostasis, and is a transcription factor that targets angiogenesis, vascular remodeling, glucose utilization, iron transport and apoptosis. HIF is synthesized constitutively, and hydroxylation of at least one of two critical proline residues mediates their interaction with the von Hippel Lindau E3 ubiquitin ligase complex, which targets them for rapid degradation. This reaction is catalysed by prolyl 4-hydroxylases. 
Fumarate and succinate have been identified as potent inhibitors of prolyl hydroxylases, thus leading to the stabilisation of HIF. Major metabolic pathways converging on the citric acid cycle Several catabolic pathways converge on the citric acid cycle. Most of these reactions add intermediates to the citric acid cycle, and are therefore known as anaplerotic reactions, from the Greek meaning to "fill up". These increase the amount of acetyl CoA that the cycle is able to carry, increasing the mitochondrion's capability to carry out respiration if this is otherwise a limiting factor. Processes that remove intermediates from the cycle are termed "cataplerotic" reactions. In this section and in the next, the citric acid cycle intermediates are indicated in italics to distinguish them from other substrates and end-products. Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix. Here they can be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, as in the normal cycle. However, it is also possible for pyruvate to be carboxylated by pyruvate carboxylase to form oxaloacetate. This latter reaction "fills up" the amount of oxaloacetate in the citric acid cycle, and is therefore an anaplerotic reaction, increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g. in muscle) are suddenly increased by activity. In the citric acid cycle all the intermediates (e.g. citrate, iso-citrate, alpha-ketoglutarate, succinate, fumarate, malate, and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that that additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence the addition of any one of them to the cycle has an anaplerotic effect, and its removal has a cataplerotic effect. These anaplerotic and cataplerotic reactions will, during the course of the cycle, increase or decrease the amount of oxaloacetate available to combine with acetyl-CoA to form citric acid. This in turn increases or decreases the rate of ATP production by the mitochondrion, and thus the availability of ATP to the cell. Acetyl-CoA, on the other hand, derived from pyruvate oxidation, or from the beta-oxidation of fatty acids, is the only fuel to enter the citric acid cycle. With each turn of the cycle one molecule of acetyl-CoA is consumed for every molecule of oxaloacetate present in the mitochondrial matrix, and is never regenerated. It is the oxidation of the acetate portion of acetyl-CoA that produces CO2 and water, with the energy thus released captured in the form of ATP. The three steps of beta-oxidation resemble the steps that occur in the production of oxaloacetate from succinate in the TCA cycle. Acyl-CoA is oxidized to trans-Enoyl-CoA while FAD is reduced to FADH2, which is similar to the oxidation of succinate to fumarate. Following, trans-enoyl-CoA is hydrated across the double bond to beta-hydroxyacyl-CoA, just like fumarate is hydrated to malate. Lastly, beta-hydroxyacyl-CoA is oxidized to beta-ketoacyl-CoA while NAD+ is reduced to NADH, which follows the same process as the oxidation of malate to oxaloacetate. 
In the liver, the carboxylation of cytosolic pyruvate into intra-mitochondrial oxaloacetate is an early step in the gluconeogenic pathway which converts lactate and de-aminated alanine into glucose, under the influence of high levels of glucagon and/or epinephrine in the blood. Here the addition of oxaloacetate to the mitochondrion does not have a net anaplerotic effect, as another citric acid cycle intermediate (malate) is immediately removed from the mitochondrion to be converted into cytosolic oxaloacetate, which is ultimately converted into glucose, in a process that is almost the reverse of glycolysis. In protein catabolism, proteins are broken down by proteases into their constituent amino acids. Their carbon skeletons (i.e. the de-aminated amino acids) may either enter the citric acid cycle as intermediates (e.g. alpha-ketoglutarate derived from glutamate or glutamine), having an anaplerotic effect on the cycle, or, in the case of leucine, isoleucine, lysine, phenylalanine, tryptophan, and tyrosine, they are converted into acetyl-CoA which can be burned to CO2 and water, or used to form ketone bodies, which too can only be burned in tissues other than the liver where they are formed, or excreted via the urine or breath. These latter amino acids are therefore termed "ketogenic" amino acids, whereas those that enter the citric acid cycle as intermediates can only be cataplerotically removed by entering the gluconeogenic pathway via malate which is transported out of the mitochondrion to be converted into cytosolic oxaloacetate and ultimately into glucose. These are the so-called "glucogenic" amino acids. De-aminated alanine, cysteine, glycine, serine, and threonine are converted to pyruvate and can consequently either enter the citric acid cycle as oxaloacetate (an anaplerotic reaction) or as acetyl-CoA to be disposed of as CO2 and water. In fat catabolism, triglycerides are hydrolyzed to break them into fatty acids and glycerol. In the liver the glycerol can be converted into glucose via dihydroxyacetone phosphate and glyceraldehyde-3-phosphate by way of gluconeogenesis. In skeletal muscle, glycerol is used in glycolysis by converting glycerol into glycerol-3-phosphate, then into dihydroxyacetone phosphate (DHAP), then into glyceraldehyde-3-phosphate. In many tissues, especially heart and skeletal muscle tissue, fatty acids are broken down through a process known as beta oxidation, which results in the production of mitochondrial acetyl-CoA, which can be used in the citric acid cycle. Beta oxidation of fatty acids with an odd number of methylene bridges produces propionyl-CoA, which is then converted into succinyl-CoA and fed into the citric acid cycle as an anaplerotic intermediate. The total energy gained from the complete breakdown of one (six-carbon) molecule of glucose by glycolysis, the formation of 2 acetyl-CoA molecules, their catabolism in the citric acid cycle, and oxidative phosphorylation equals about 30 ATP molecules, in eukaryotes. The number of ATP molecules derived from the beta oxidation of a 6 carbon segment of a fatty acid chain, and the subsequent oxidation of the resulting 3 molecules of acetyl-CoA is 40. Citric acid cycle intermediates serve as substrates for biosynthetic processes In this subheading, as in the previous one, the TCA intermediates are identified by italics. Several of the citric acid cycle intermediates are used for the synthesis of important compounds, which will have significant cataplerotic effects on the cycle. 
Acetyl-CoA cannot be transported out of the mitochondrion. To obtain cytosolic acetyl-CoA, citrate is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to the mitochondrion as malate (and then converted back into oxaloacetate to transfer more acetyl-CoA out of the mitochondrion). The cytosolic acetyl-CoA is used for fatty acid synthesis and the production of cholesterol. Cholesterol can, in turn, be used to synthesize the steroid hormones, bile salts, and vitamin D. The carbon skeletons of many non-essential amino acids are made from citric acid cycle intermediates. To turn them into amino acids the alpha keto-acids formed from the citric acid cycle intermediates have to acquire their amino groups from glutamate in a transamination reaction, in which pyridoxal phosphate is a cofactor. In this reaction the glutamate is converted into alpha-ketoglutarate, which is a citric acid cycle intermediate. The intermediates that can provide the carbon skeletons for amino acid synthesis are oxaloacetate which forms aspartate and asparagine; and alpha-ketoglutarate which forms glutamine, proline, and arginine. Of these amino acids, aspartate and glutamine are used, together with carbon and nitrogen atoms from other sources, to form the purines that are used as the bases in DNA and RNA, as well as in ATP, AMP, GTP, NAD, FAD and CoA. The pyrimidines are partly assembled from aspartate (derived from oxaloacetate). The pyrimidines, thymine, cytosine and uracil, form the complementary bases to the purine bases in DNA and RNA, and are also components of CTP, UMP, UDP and UTP. The majority of the carbon atoms in the porphyrins come from the citric acid cycle intermediate, succinyl-CoA. These molecules are an important component of the hemoproteins, such as hemoglobin, myoglobin and various cytochromes. During gluconeogenesis mitochondrial oxaloacetate is reduced to malate which is then transported out of the mitochondrion, to be oxidized back to oxaloacetate in the cytosol. Cytosolic oxaloacetate is then decarboxylated to phosphoenolpyruvate by phosphoenolpyruvate carboxykinase, which is the rate limiting step in the conversion of nearly all the gluconeogenic precursors (such as the glucogenic amino acids and lactate) into glucose by the liver and kidney. Because the citric acid cycle is involved in both catabolic and anabolic processes, it is known as an amphibolic pathway. Glucose feeds the TCA cycle via circulating lactate The metabolic role of lactate is well recognized as a fuel for tissues, in mitochondrial cytopathies such as DPH Cytopathy, and in the scientific field of oncology (tumors). In the classical Cori cycle, muscles produce lactate which is then taken up by the liver for gluconeogenesis. New studies suggest that lactate can be used as a source of carbon for the TCA cycle. Evolution It is believed that components of the citric acid cycle were derived from anaerobic bacteria, and that the TCA cycle itself may have evolved more than once. It may even predate biosis: the substrates appear to undergo most of the reactions spontaneously in the presence of persulfate radicals. Theoretically, several alternatives to the TCA cycle exist; however, the TCA cycle appears to be the most efficient. If several TCA alternatives had evolved independently, they all appear to have converged to the TCA cycle.
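To pull together the arithmetic of the Products and Efficiency sections above, the following sketch recomputes the per-glucose ATP figures under the assumptions quoted there; the function name and the accounting are an illustrative sketch, not a standard calculation scheme:

```python
def atp_per_glucose(atp_per_nadh, atp_per_fadh2, glycerol_phosphate_shuttle=False):
    """Rough ATP budget for complete oxidation of one glucose.

    Counts used: glycolysis gives 2 ATP and 2 cytosolic NADH, pyruvate
    dehydrogenase gives 2 NADH, and two turns of the citric acid cycle give
    6 NADH, 2 FADH2 and 2 GTP (counted here as ATP equivalents). If the
    glycerol phosphate shuttle is used, the 2 cytosolic NADH enter the chain
    at the FADH2 level, which is equivalent to the 2-ATP transport cost
    described in the Efficiency section.
    """
    substrate_level = 2 + 2                 # glycolysis ATP + cycle GTP
    nadh, fadh2 = 2 + 2 + 6, 2              # cytosolic + pyruvate dehydrogenase + cycle; cycle FADH2
    if glycerol_phosphate_shuttle:
        nadh, fadh2 = nadh - 2, fadh2 + 2
    return substrate_level + nadh * atp_per_nadh + fadh2 * atp_per_fadh2

print(atp_per_glucose(3, 2))                # 38, the textbook theoretical maximum
print(atp_per_glucose(3, 2, True))          # 36, with the glycerol phosphate shuttle
print(atp_per_glucose(2.5, 1.5, True))      # 30.0, closer to the observed yield
```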
https://en.wikipedia.org/wiki/Cache%20%28computing%29
Cache (computing)
In computing, a cache is a hardware or software component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs. To be cost-effective, caches must be relatively small. Nevertheless, caches are effective in many areas of computing because typical computer applications access data with a high degree of locality of reference. Such access patterns exhibit temporal locality, where data is requested that has been recently requested, and spatial locality, where data is requested that is stored near data that has already been requested. Motivation In memory design, there is an inherent trade-off between capacity and speed because larger capacity implies larger size and thus greater physical distances for signals to travel causing propagation delays. There is also a tradeoff between high-performance technologies such as SRAM and cheaper, easily mass-produced commodities such as DRAM, flash, or hard disks. The buffering provided by a cache benefits one or both of latency and throughput (bandwidth). A larger resource incurs a significant latency for access – e.g. it can take hundreds of clock cycles for a modern 4 GHz processor to reach DRAM. This is mitigated by reading large chunks into the cache, in the hope that subsequent reads will be from nearby locations and can be read from the cache. Prediction or explicit prefetching can be used to guess where future reads will come from and make requests ahead of time; if done optimally, the latency is bypassed altogether. The use of a cache also allows for higher throughput from the underlying resource, by assembling multiple fine-grain transfers into larger, more efficient requests. In the case of DRAM circuits, the additional throughput may be gained by using a wider data bus. Operation Hardware implements cache as a block of memory for temporary storage of data likely to be used again. Central processing units (CPUs), solid-state drives (SSDs) and hard disk drives (HDDs) frequently include hardware-based cache, while web browsers and web servers commonly rely on software caching. A cache is made up of a pool of entries. Each entry has associated data, which is a copy of the same data in some backing store. Each entry also has a tag, which specifies the identity of the data in the backing store of which the entry is a copy. When the cache client (a CPU, web browser, operating system) needs to access data presumed to exist in the backing store, it first checks the cache. If an entry can be found with a tag matching that of the desired data, the data in the entry is used instead. This situation is known as a cache hit. For example, a web browser program might check its local cache on disk to see if it has a local copy of the contents of a web page at a particular URL. In this example, the URL is the tag, and the content of the web page is the data. The percentage of accesses that result in cache hits is known as the hit rate or hit ratio of the cache. The alternative situation, when the cache is checked and found not to contain any entry with the desired tag, is known as a cache miss.
This requires a more expensive access of data from the backing store. Once the requested data is retrieved, it is typically copied into the cache, ready for the next access. During a cache miss, some other previously existing cache entry is typically removed in order to make room for the newly retrieved data. The heuristic used to select the entry to replace is known as the replacement policy. One popular replacement policy, least recently used (LRU), replaces the oldest entry, the entry that was accessed less recently than any other entry. More sophisticated caching algorithms also take into account the frequency of use of entries. Write policies Cache writes must eventually be propagated to the backing store. The timing for this is governed by the write policy. The two primary write policies are: Write-through: Writes are performed synchronously to both the cache and the backing store. Write-back: Initially, writing is done only to the cache. The write to the backing store is postponed until the modified content is about to be replaced by another cache block. A write-back cache is more complex to implement since it needs to track which of its locations have been written over and mark them as dirty for later writing to the backing store. The data in these locations are written back to the backing store only when they are evicted from the cache, a process referred to as a lazy write. For this reason, a read miss in a write-back cache may require two memory accesses to the backing store: one to write back the dirty data, and one to retrieve the requested data. Other policies may also trigger data write-back. The client may make many changes to data in the cache, and then explicitly notify the cache to write back the data. Write operations do not return data. Consequently, a decision needs to be made for write misses: whether or not to load the data into the cache. This is determined by these write-miss policies: Write allocate (also called fetch on write): Data at the missed-write location is loaded to cache, followed by a write-hit operation. In this approach, write misses are similar to read misses. No-write allocate (also called write-no-allocate or write around): Data at the missed-write location is not loaded to cache, and is written directly to the backing store. In this approach, data is loaded into the cache on read misses only. While both write policies can implement either write-miss policy, they are typically paired as follows: A write-back cache typically employs write allocate, anticipating that subsequent writes or reads to the same location will benefit from having the data already in the cache. A write-through cache uses no-write allocate. Here, subsequent writes have no advantage, since they still need to be written directly to the backing store. Entities other than the cache may change the data in the backing store, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the client updates the data in the cache, copies of that data in other caches will become stale. Communication protocols between the cache managers that keep the data consistent are associated with cache coherence. Prefetch On a cache read miss, caches with a demand paging policy read the minimum amount from the backing store. A typical demand-paging virtual memory implementation reads one page of virtual memory (often 4 KB) from disk into the disk cache in RAM.
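As a concrete illustration of the replacement and write policies described above, here is a toy sketch, in Python, of an LRU cache with a write-back, write-allocate policy in front of a generic backing store; it models no particular hardware design or library API:

```python
from collections import OrderedDict

class WriteBackLRUCache:
    """Toy cache: LRU replacement with a write-back, write-allocate policy."""

    def __init__(self, backing_store, capacity):
        self.backing = backing_store           # e.g. a dict standing in for slow storage
        self.capacity = capacity
        self.entries = OrderedDict()           # tag -> [data, dirty flag], newest last

    def _evict_if_full(self):
        if len(self.entries) >= self.capacity:
            tag, (data, dirty) = self.entries.popitem(last=False)  # least recently used
            if dirty:                          # lazy write: dirty data reaches the backing store only now
                self.backing[tag] = data

    def read(self, tag):
        if tag in self.entries:                # cache hit
            self.entries.move_to_end(tag)
            return self.entries[tag][0]
        self._evict_if_full()                  # cache miss: fetch from the backing store
        data = self.backing[tag]
        self.entries[tag] = [data, False]
        return data

    def write(self, tag, data):
        if tag not in self.entries:            # write miss: write allocate
            self._evict_if_full()
        self.entries[tag] = [data, True]       # marked dirty; not yet propagated
        self.entries.move_to_end(tag)
```

A write-through, no-write-allocate variant would instead update the backing store inside write() and skip the allocation on a write miss, matching the typical pairing described above.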
A typical CPU reads a single L2 cache line of 128 bytes from DRAM into the L2 cache, and a single L1 cache line of 64 bytes from the L2 cache into the L1 cache. Caches with a prefetch input queue or more general anticipatory paging policy go further—they not only read the data requested, but guess that the next chunk or two of data will soon be required, and so prefetch that data into the cache ahead of time. Anticipatory paging is especially helpful when the backing store has a long latency to read the first chunk and much shorter times to sequentially read the next few chunks, such as disk storage and DRAM. A few operating systems go further with a loader that always pre-loads the entire executable into RAM. A few caches go even further, not only pre-loading an entire file, but also starting to load other related files that may soon be requested, such as the page cache associated with a prefetcher or the web cache associated with link prefetching. Examples of hardware caches CPU cache Small memories on or close to the CPU can operate faster than the much larger main memory. Most CPUs since the 1980s have used one or more caches, sometimes in cascaded levels; modern high-end embedded, desktop and server microprocessors may have as many as six types of cache (between levels and functions). Some examples of caches with a specific function are the D-cache, I-cache and the translation lookaside buffer for the memory management unit (MMU). GPU cache Earlier graphics processing units (GPUs) often had limited read-only texture caches and used swizzling to improve 2D locality of reference. Cache misses would drastically affect performance, e.g. if mipmapping was not used. Caching was important to leverage 32-bit (and wider) transfers for texture data that was often as little as 4 bits per pixel. As GPUs advanced, supporting general-purpose computing on graphics processing units and compute kernels, they have developed progressively larger and increasingly general caches, including instruction caches for shaders, exhibiting functionality commonly found in CPU caches. These caches have grown to handle synchronization primitives between threads and atomic operations, and interface with a CPU-style MMU. DSPs Digital signal processors have similarly generalized over the years. Earlier designs used scratchpad memory fed by direct memory access, but modern DSPs such as Qualcomm Hexagon often include a very similar set of caches to a CPU (e.g. Modified Harvard architecture with shared L2, split L1 I-cache and D-cache). Translation lookaside buffer A memory management unit (MMU) that fetches page table entries from main memory has a specialized cache, used for recording the results of virtual address to physical address translations. This specialized cache is called a translation lookaside buffer (TLB). In-network cache Information-centric networking Information-centric networking (ICN) is an approach to evolve the Internet infrastructure away from a host-centric paradigm, based on perpetual connectivity and the end-to-end principle, to a network architecture in which the focal point is identified information. Due to the inherent caching capability of the nodes in an ICN, it can be viewed as a loosely connected network of caches, which has unique requirements for caching policies. However, ubiquitous content caching introduces the challenge to content protection against unauthorized access, which requires extra care and solutions. Unlike proxy servers, in ICN the cache is a network-level solution. 
Therefore, it has rapidly changing cache states and higher request arrival rates; moreover, smaller cache sizes impose different requirements on the content eviction policies. In particular, eviction policies for ICN should be fast and lightweight. Various cache replication and eviction schemes for different ICN architectures and applications have been proposed. Policies Time aware least recently used The time aware least recently used (TLRU) is a variant of LRU designed for the situation where the stored contents in cache have a valid lifetime. The algorithm is suitable in network cache applications, such as ICN, content delivery networks (CDNs) and distributed networks in general. TLRU introduces a new term: time to use (TTU). TTU is a time stamp on content which stipulates the usability time for the content based on the locality of the content and information from the content publisher. Owing to this locality-based time stamp, TTU provides more control to the local administrator to regulate in-network storage. In the TLRU algorithm, when a piece of content arrives, a cache node calculates the local TTU value based on the TTU value assigned by the content publisher. The local TTU value is calculated by using a locally-defined function. Once the local TTU value is calculated the replacement of content is performed on a subset of the total content stored in cache node. The TLRU ensures that less popular and short-lived content should be replaced with incoming content. Least frequent recently used The least frequent recently used (LFRU) cache replacement scheme combines the benefits of LFU and LRU schemes. LFRU is suitable for network cache applications, such as ICN, CDNs and distributed networks in general. In LFRU, the cache is divided into two partitions called privileged and unprivileged partitions. The privileged partition can be seen as a protected partition. If content is highly popular, it is pushed into the privileged partition. Replacement of the privileged partition is done by first evicting content from the unprivileged partition, then pushing content from the privileged partition to the unprivileged partition, and finally inserting new content into the privileged partition. In the above procedure, the LRU is used for the privileged partition and an approximated LFU (ALFU) scheme is used for the unprivileged partition. The basic idea is to cache the locally popular content with the ALFU scheme and push the popular content to the privileged partition. Weather forecast In 2011, the use of smartphones with weather forecasting options was overly taxing AccuWeather servers; two requests from the same area would generate separate requests. An optimization by edge-servers to truncate the GPS coordinates to fewer decimal places meant that the cached results from a nearby query would be used. The number of to-the-server lookups per day dropped by half. Software caches Disk cache While CPU caches are generally managed entirely by hardware, a variety of software manages other caches. The page cache in main memory is managed by the operating system kernel. While the disk buffer, which is an integrated part of the hard disk drive or solid state drive, is sometimes misleadingly referred to as disk cache, its main functions are write sequencing and read prefetching. High-end disk controllers often have their own on-board cache for the hard disk drive's data blocks. 
Finally, a fast local hard disk drive can also cache information held on even slower data storage devices, such as remote servers (web cache) or local tape drives or optical jukeboxes; such a scheme is the main concept of hierarchical storage management. Also, fast flash-based solid-state drives (SSDs) can be used as caches for slower rotational-media hard disk drives, working together as hybrid drives. Web cache Web browsers and web proxy servers employ web caches to store previous responses from web servers, such as web pages and images. Web caches reduce the amount of information that needs to be transmitted across the network, as information previously stored in the cache can often be re-used. This reduces bandwidth and processing requirements of the web server, and helps to improve responsiveness for users of the web. Web browsers employ a built-in web cache, but some Internet service providers (ISPs) or organizations also use a caching proxy server, which is a web cache that is shared among all users of that network. Another form of cache is P2P caching, where the files most sought for by peer-to-peer applications are stored in an ISP cache to accelerate P2P transfers. Similarly, decentralised equivalents exist, which allow communities to perform the same task for P2P traffic, for example, Corelli. Memoization A cache can store data that is computed on demand rather than retrieved from a backing store. Memoization is an optimization technique that stores the results of resource-consuming function calls within a lookup table, allowing subsequent calls to reuse the stored results and avoid repeated computation. It is related to the dynamic programming algorithm design methodology, which can also be thought of as a means of caching. Content delivery network A content delivery network (CDN) is a network of distributed servers that deliver pages and other Web content to a user, based on the geographic locations of the user, the origin of the web page and the content delivery server. CDNs began in the late 1990s as a way to speed up the delivery of static content, such as HTML pages, images and videos. By replicating content on multiple servers around the world and delivering it to users based on their location, CDNs can significantly improve the speed and availability of a website or application. When a user requests a piece of content, the CDN will check to see if it has a copy of the content in its cache. If it does, the CDN will deliver the content to the user from the cache. Cloud storage gateway A cloud storage gateway, also known as an edge filer, is a hybrid cloud storage device that connects a local network to one or more cloud storage services, typically object storage services such as Amazon S3. It provides a cache for frequently accessed data, providing high speed local access to frequently accessed data in the cloud storage service. Cloud storage gateways also provide additional benefits such as accessing cloud object storage through traditional file serving protocols as well as continued access to cached data during connectivity outages. Other caches The BIND DNS daemon caches a mapping of domain names to IP addresses, as does a resolver library. Write-through operation is common when operating over unreliable networks (like an Ethernet LAN), because of the enormous complexity of the coherency protocol required between multiple write-back caches when communication is unreliable. 
For instance, web page caches and client-side network file system caches (like those in NFS or SMB) are typically read-only or write-through specifically to keep the network protocol simple and reliable. Search engines also frequently make web pages they have indexed available from their cache. For example, Google provides a "Cached" link next to each search result. This can prove useful when web pages from a web server are temporarily or permanently inaccessible. Database caching can substantially improve the throughput of database applications, for example in the processing of indexes, data dictionaries, and frequently used subsets of data. A distributed cache uses networked hosts to provide scalability, reliability and performance to the application. The hosts can be co-located or spread over different geographical regions. Buffer vs. cache The semantics of a "buffer" and a "cache" are not totally different; even so, there are fundamental differences in intent between the process of caching and the process of buffering. Fundamentally, caching realizes a performance increase for transfers of data that is being repeatedly transferred. While a caching system may realize a performance increase upon the initial (typically write) transfer of a data item, this performance increase is due to buffering occurring within the caching system. With read caches, a data item must have been fetched from its residing location at least once in order for subsequent reads of the data item to realize a performance increase by virtue of being able to be fetched from the cache's (faster) intermediate storage rather than the data's residing location. With write caches, a performance increase of writing a data item may be realized upon the first write of the data item by virtue of the data item immediately being stored in the cache's intermediate storage, deferring the transfer of the data item to its residing storage at a later stage or else occurring as a background process. Contrary to strict buffering, a caching process must adhere to a (potentially distributed) cache coherency protocol in order to maintain consistency between the cache's intermediate storage and the location where the data resides. Buffering, on the other hand, reduces the number of transfers for otherwise novel data amongst communicating processes, which amortizes overhead involved for several small transfers over fewer, larger transfers, provides an intermediary for communicating processes which are incapable of direct transfers amongst each other, or ensures a minimum data size or representation required by at least one of the communicating processes involved in a transfer. With typical caching implementations, a data item that is read or written for the first time is effectively being buffered; and in the case of a write, mostly realizing a performance increase for the application from where the write originated. Additionally, the portion of a caching protocol where individual writes are deferred to a batch of writes is a form of buffering. The portion of a caching protocol where individual reads are deferred to a batch of reads is also a form of buffering, although this form may negatively impact the performance of at least the initial reads (even though it may positively impact the performance of the sum of the individual reads). In practice, caching almost always involves some form of buffering, while strict buffering does not involve caching. 
A buffer is a temporary memory location that is traditionally used because CPU instructions cannot directly address data stored in peripheral devices. Thus, addressable memory is used as an intermediate stage. Additionally, such a buffer may be feasible when a large block of data is assembled or disassembled (as required by a storage device), or when data may be delivered in a different order than that in which it is produced. Also, a whole buffer of data is usually transferred sequentially (for example to hard disk), so buffering itself sometimes increases transfer performance or reduces the variation or jitter of the transfer's latency as opposed to caching where the intent is to reduce the latency. These benefits are present even if the buffered data are written to the buffer once and read from the buffer once. A cache also increases transfer performance. A part of the increase similarly comes from the possibility that multiple small transfers will combine into one large block. But the main performance-gain occurs because there is a good chance that the same data will be read from cache multiple times, or that written data will soon be read. A cache's sole purpose is to reduce accesses to the underlying slower storage. Cache is also usually an abstraction layer that is designed to be invisible from the perspective of neighboring layers.
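The distinction drawn in this section can be caricatured in a few lines; the classes below are illustrative sketches only and correspond to no real API:

```python
class ReadCache:
    """Caching: pays off only when the same item is requested repeatedly."""
    def __init__(self, fetch_from_backing_store):
        self.fetch = fetch_from_backing_store
        self.store = {}
    def get(self, key):
        if key not in self.store:             # first access: fetch and keep a copy
            self.store[key] = self.fetch(key)
        return self.store[key]                # repeat accesses are served from the copy

class WriteBuffer:
    """Buffering: batches many small writes of otherwise novel data into one larger transfer."""
    def __init__(self, flush_to_device, batch_size):
        self.flush = flush_to_device
        self.batch_size = batch_size
        self.pending = []
    def put(self, item):
        self.pending.append(item)
        if len(self.pending) >= self.batch_size:
            self.flush(self.pending)          # one large transfer instead of many small ones
            self.pending = []
```

The ReadCache half is essentially the memoization pattern mentioned earlier, which Python's functools.lru_cache decorator provides ready-made, keyed on a function's arguments and with LRU eviction built in.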
https://en.wikipedia.org/wiki/Church%E2%80%93Turing%20thesis
Church–Turing thesis
In computability theory, the Church–Turing thesis (also known as computability thesis, the Turing–Church thesis, the Church–Turing conjecture, Church's thesis, Church's conjecture, and Turing's thesis) is a thesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability: In 1933, Kurt Gödel, with Jacques Herbrand, formalized the definition of the class of general recursive functions: the smallest class of functions (with arbitrarily many arguments) that is closed under composition, recursion, and minimization, and includes zero, successor, and all projections. In 1936, Alonzo Church created a method for defining functions called the λ-calculus. Within λ-calculus, he defined an encoding of the natural numbers called the Church numerals. A function on the natural numbers is called λ-computable if the corresponding function on the Church numerals can be represented by a term of the λ-calculus. Also in 1936, before learning of Church's work, Alan Turing created a theoretical model for machines, now called Turing machines, that could carry out calculations from inputs by manipulating symbols on a tape. Given a suitable encoding of the natural numbers as sequences of symbols, a function on the natural numbers is called Turing computable if some Turing machine computes the corresponding function on encoded natural numbers. Church, Kleene, and Turing proved that these three formally defined classes of computable functions coincide: a function is λ-computable if and only if it is Turing computable, and if and only if it is general recursive. This has led mathematicians and computer scientists to believe that the concept of computability is accurately characterized by these three equivalent processes. Other formal attempts to characterize computability have subsequently strengthened this belief (see below). On the other hand, the Church–Turing thesis states that the above three formally-defined classes of computable functions coincide with the informal notion of an effectively calculable function. Although the thesis has near-universal acceptance, it cannot be formally proven, as the concept of effective calculability is only informally defined. Since its inception, variations on the original thesis have arisen, including statements about what can physically be realized by a computer in our universe (physical Church-Turing thesis) and what can be efficiently computed (Church–Turing thesis (complexity theory)). These variations are not due to Church or Turing, but arise from later work in complexity theory and digital physics. The thesis also has implications for the philosophy of mind (see below). Statement in Church's and Turing's words addresses the notion of "effective computability" as follows: "Clearly the existence of CC and RC (Church's and Rosser's proofs) presupposes a precise definition of 'effective'. 
'Effective method' is here used in the rather special sense of a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps". Thus the adverb-adjective "effective" is used in a sense of "1a: producing a decided, decisive, or desired effect", and "capable of producing a result". In the following, the words "effectively calculable" will mean "produced by any intuitively 'effective' means whatsoever" and "effectively computable" will mean "produced by a Turing-machine or equivalent mechanical device". Turing's "definitions" given in a footnote in his 1938 Ph.D. thesis Systems of Logic Based on Ordinals, supervised by Church, are virtually the same: We shall use the expression "computable function" to mean a function calculable by a machine, and let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions. The thesis can be stated as: Every effectively calculable function is a computable function. Church also stated that "No computational procedure will be considered as an algorithm unless it can be represented as a Turing Machine". Turing stated it this way: It was stated ... that "a function is effectively calculable if its values can be found by some purely mechanical process". We may take this literally, understanding that by a purely mechanical process one which could be carried out by a machine. The development ... leads to ... an identification of computability with effective calculability. [ is the footnote quoted above.] History One of the important problems for logicians in the 1930s was the Entscheidungsproblem of David Hilbert and Wilhelm Ackermann, which asked whether there was a mechanical procedure for separating mathematical truths from mathematical falsehoods. This quest required that the notion of "algorithm" or "effective calculability" be pinned down, at least well enough for the quest to begin. But from the very outset Alonzo Church's attempts began with a debate that continues to this day. Was the notion of "effective calculability" to be (i) an "axiom or axioms" in an axiomatic system, (ii) merely a definition that "identified" two or more propositions, (iii) an empirical hypothesis to be verified by observation of natural events, or (iv) just a proposal for the sake of argument (i.e. a "thesis")? Circa 1930–1952 In the course of studying the problem, Church and his student Stephen Kleene introduced the notion of λ-definable functions, and they were able to prove that several large classes of functions frequently encountered in number theory were λ-definable. The debate began when Church proposed to Gödel that one should define the "effectively computable" functions as the λ-definable functions. Gödel, however, was not convinced and called the proposal "thoroughly unsatisfactory". Rather, in correspondence with Church (c. 1934–1935), Gödel proposed axiomatizing the notion of "effective calculability"; indeed, in a 1935 letter to Kleene, Church reported that: But Gödel offered no further guidance. Eventually, he would suggest his recursion, modified by Herbrand's suggestion, that Gödel had detailed in his 1934 lectures in Princeton NJ (Kleene and Rosser transcribed the notes). But he did not think that the two ideas could be satisfactorily identified "except heuristically". Next, it was necessary to identify and prove the equivalence of two notions of effective calculability.
Equipped with the λ-calculus and "general" recursion, Kleene with help of Church and J. Barkley Rosser produced proofs (1933, 1935) to show that the two calculi are equivalent. Church subsequently modified his methods to include use of Herbrand–Gödel recursion and then proved (1936) that the Entscheidungsproblem is unsolvable: there is no algorithm that can determine whether a well formed formula has a beta normal form. Many years later in a letter to Davis (c. 1965), Gödel said that "he was, at the time of these [1934] lectures, not at all convinced that his concept of recursion comprised all possible recursions". By 1963–1964 Gödel would disavow Herbrand–Gödel recursion and the λ-calculus in favor of the Turing machine as the definition of "algorithm" or "mechanical procedure" or "formal system". A hypothesis leading to a natural law?: In late 1936 Alan Turing's paper (also proving that the Entscheidungsproblem is unsolvable) was delivered orally, but had not yet appeared in print. On the other hand, Emil Post's 1936 paper had appeared and was certified independent of Turing's work. Post strongly disagreed with Church's "identification" of effective computability with the λ-calculus and recursion, stating: Rather, he regarded the notion of "effective calculability" as merely a "working hypothesis" that might lead by inductive reasoning to a "natural law" rather than by "a definition or an axiom". This idea was "sharply" criticized by Church. Thus Post in his 1936 paper was also discounting Kurt Gödel's suggestion to Church in 1934–1935 that the thesis might be expressed as an axiom or set of axioms. Turing adds another definition, Rosser equates all three: Within just a short time, Turing's 1936–1937 paper "On Computable Numbers, with an Application to the Entscheidungsproblem" appeared. In it he stated another notion of "effective computability" with the introduction of his a-machines (now known as the Turing machine abstract computational model). And in a proof-sketch added as an "Appendix" to his 1936–1937 paper, Turing showed that the classes of functions defined by λ-calculus and Turing machines coincided. Church was quick to recognise how compelling Turing's analysis was. In his review of Turing's paper he made clear that Turing's notion made "the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately". In a few years (1939) Turing would propose, like Church and Kleene before him, that his formal definition of mechanical computing agent was the correct one. Thus, by 1939, both Church (1934) and Turing (1939) had individually proposed that their "formal systems" should be definitions of "effective calculability"; neither framed their statements as theses. Rosser (1939) formally identified the three notions-as-definitions: Kleene proposes Thesis I: This left the overt expression of a "thesis" to Kleene. In 1943 Kleene proposed his "Thesis I": The Church–Turing Thesis: Stephen Kleene, in Introduction To Metamathematics, finally goes on to formally name "Church's Thesis" and "Turing's Thesis", using his theory of recursive realizability. Kleene having switched from presenting his work in the terminology of Church-Kleene lambda definability, to that of Gödel-Kleene recursiveness (partial recursive functions). In this transition, Kleene modified Gödel's general recursive functions to allow for proofs of the unsolvability of problems in the Intuitionism of E. J. Brouwer. 
In his graduate textbook on logic, "Church's thesis" is introduced and basic mathematical results are demonstrated to be unrealizable. Next, Kleene proceeds to present "Turing's thesis", where results are shown to be uncomputable, using his simplified derivation of a Turing machine based on the work of Emil Post. Both theses are proven equivalent by use of "Theorem XXX". Kleene, finally, uses for the first time the term the "Church-Turing thesis" in a section in which he helps to give clarifications to concepts in Alan Turing's paper "The Word Problem in Semi-Groups with Cancellation", as demanded in a critique from William Boone. Later developments An attempt to better understand the notion of "effective computability" led Robin Gandy (Turing's student and friend) in 1980 to analyze machine computation (as opposed to human-computation acted out by a Turing machine). Gandy's curiosity about, and analysis of, cellular automata (including Conway's game of life), parallelism, and crystalline automata, led him to propose four "principles (or constraints) ... which it is argued, any machine must satisfy". His most-important fourth, "the principle of causality" is based on the "finite velocity of propagation of effects and signals; contemporary physics rejects the possibility of instantaneous action at a distance". From these principles and some additional constraints—(1a) a lower bound on the linear dimensions of any of the parts, (1b) an upper bound on speed of propagation (the velocity of light), (2) discrete progress of the machine, and (3) deterministic behavior—he produces a theorem that "What can be calculated by a device satisfying principles I–IV is computable." In the late 1990s Wilfried Sieg analyzed Turing's and Gandy's notions of "effective calculability" with the intent of "sharpening the informal notion, formulating its general features axiomatically, and investigating the axiomatic framework". In his 1997 and 2002 work Sieg presents a series of constraints on the behavior of a computor—"a human computing agent who proceeds mechanically". These constraints reduce to: "(B.1) (Boundedness) There is a fixed bound on the number of symbolic configurations a computor can immediately recognize. "(B.2) (Boundedness) There is a fixed bound on the number of internal states a computor can be in. "(L.1) (Locality) A computor can change only elements of an observed symbolic configuration. "(L.2) (Locality) A computor can shift attention from one symbolic configuration to another one, but the new observed configurations must be within a bounded distance of the immediately previously observed configuration. "(D) (Determinacy) The immediately recognizable (sub-)configuration determines uniquely the next computation step (and id [instantaneous description])"; stated another way: "A computor's internal state together with the observed configuration fixes uniquely the next computation step and the next internal state." The matter remains in active discussion within the academic community. The thesis as a definition The thesis can be viewed as nothing but an ordinary mathematical definition. Comments by Gödel on the subject suggest this view, e.g. "the correct definition of mechanical computability was established beyond any doubt by Turing". The case for viewing the thesis as nothing more than a definition is made explicitly by Robert I. Soare, where it is also argued that Turing's definition of computability is no less likely to be correct than the epsilon-delta definition of a continuous function. 
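Before turning to the other formalisms below, the λ-definable encoding of the natural numbers mentioned earlier (the Church numerals) can be sketched concretely; here Python lambdas stand in for λ-terms, and the decoding helper to_int is purely illustrative:

```python
# Church numerals: the number n is encoded as "apply f to x, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # 5: addition expressed purely by function application
```

A function on the natural numbers counts as λ-computable when, as with addition here, the corresponding operation on Church numerals can be written as such a term; the equivalence results discussed above say that exactly the same functions are Turing computable and general recursive.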
Success of the thesis Other formalisms (besides recursion, the λ-calculus, and the Turing machine) have been proposed for describing effective calculability/computability. Kleene (1952) adds to the list the functions "reckonable in the system S1" of Kurt Gödel 1936, and Emil Post's (1943, 1946) "canonical [also called normal] systems". In the 1950s Hao Wang and Martin Davis greatly simplified the one-tape Turing-machine model (see Post–Turing machine). Marvin Minsky expanded the model to two or more tapes and greatly simplified the tapes into "up-down counters", which Melzak and Lambek further evolved into what is now known as the counter machine model. In the late 1960s and early 1970s researchers expanded the counter machine model into the register machine, a close cousin to the modern notion of the computer. Other models include combinatory logic and Markov algorithms. Gurevich adds the pointer machine model of Kolmogorov and Uspensky (1953, 1958): "... they just wanted to ... convince themselves that there is no way to extend the notion of computable function." All these contributions involve proofs that the models are computationally equivalent to the Turing machine; such models are said to be Turing complete. Because all these different attempts at formalizing the concept of "effective calculability/computability" have yielded equivalent results, it is now generally assumed that the Church–Turing thesis is correct. In fact, Gödel (1936) proposed something stronger than this; he observed that there was something "absolute" about the concept of "reckonable in S1": Informal usage in proofs Proofs in computability theory often invoke the Church–Turing thesis in an informal way to establish the computability of functions while avoiding the (often very long) details which would be involved in a rigorous, formal proof. To establish that a function is computable by Turing machine, it is usually considered sufficient to give an informal English description of how the function can be effectively computed, and then conclude "by the Church–Turing thesis" that the function is Turing computable (equivalently, partial recursive). Dirk van Dalen gives the following example for the sake of illustrating this informal use of the Church–Turing thesis: In order to make the above example completely rigorous, one would have to carefully construct a Turing machine, or λ-function, or carefully invoke recursion axioms, or at best, cleverly invoke various theorems of computability theory. But because the computability theorist believes that Turing computability correctly captures what can be computed effectively, and because an effective procedure is spelled out in English for deciding the set B, the computability theorist accepts this as proof that the set is indeed recursive. Variations The success of the Church–Turing thesis prompted variations of the thesis to be proposed. For example, the physical Church–Turing thesis states: "All physically computable functions are Turing-computable." The Church–Turing thesis says nothing about the efficiency with which one model of computation can simulate another. It has been proved for instance that a (multi-tape) universal Turing machine only suffers a logarithmic slowdown factor in simulating any Turing machine. A variation of the Church–Turing thesis addresses whether an arbitrary but "reasonable" model of computation can be efficiently simulated. 
This is called the feasibility thesis, also known as the (classical) complexity-theoretic Church–Turing thesis or the extended Church–Turing thesis, which is not due to Church or Turing, but rather was realized gradually in the development of complexity theory. It states: "A probabilistic Turing machine can efficiently simulate any realistic model of computation." The word 'efficiently' here means up to polynomial-time reductions. This thesis was originally called the computational complexity-theoretic Church–Turing thesis by Ethan Bernstein and Umesh Vazirani (1997). The complexity-theoretic Church–Turing thesis, then, posits that all 'reasonable' models of computation yield the same class of problems that can be computed in polynomial time. Assuming the conjecture that probabilistic polynomial time (BPP) equals deterministic polynomial time (P), the word 'probabilistic' is optional in the complexity-theoretic Church–Turing thesis. A similar thesis, called the invariance thesis, was introduced by Cees F. Slot and Peter van Emde Boas. It states: "'Reasonable' machines can simulate each other within a polynomially bounded overhead in time and a constant-factor overhead in space." The thesis originally appeared in a paper at STOC'84, which was the first paper to show that polynomial-time overhead and constant-space overhead could be simultaneously achieved for a simulation of a Random Access Machine on a Turing machine. If BQP is shown to be a strict superset of BPP, it would invalidate the complexity-theoretic Church–Turing thesis. In other words, there would be efficient quantum algorithms that perform tasks that do not have efficient probabilistic algorithms. This would not however invalidate the original Church–Turing thesis, since a quantum computer can always be simulated by a Turing machine, but it would invalidate the classical complexity-theoretic Church–Turing thesis for efficiency reasons. Consequently, the quantum complexity-theoretic Church–Turing thesis states: "A quantum Turing machine can efficiently simulate any realistic model of computation." Eugene Eberbach and Peter Wegner claim that the Church–Turing thesis is sometimes interpreted too broadly, stating "Though [...] Turing machines express the behavior of algorithms, the broader assertion that algorithms precisely capture what can be computed is invalid". They claim that forms of computation not captured by the thesis are relevant today, which they call super-Turing computation. Philosophical implications Philosophers have interpreted the Church–Turing thesis as having implications for the philosophy of mind. B. Jack Copeland states that it is an open empirical question whether there are actual deterministic physical processes that, in the long run, elude simulation by a Turing machine; furthermore, he states that it is an open empirical question whether any such processes are involved in the working of the human brain. There are also some important open questions which cover the relationship between the Church–Turing thesis and physics, and the possibility of hypercomputation. When applied to physics, the thesis has several possible meanings: The universe is equivalent to a Turing machine; thus, computing non-recursive functions is physically impossible. This has been termed the strong Church–Turing thesis, or Church–Turing–Deutsch principle, and is a foundation of digital physics. 
The universe is not equivalent to a Turing machine (i.e., the laws of physics are not Turing-computable), but incomputable physical events are not "harnessable" for the construction of a hypercomputer. For example, a universe in which physics involves random real numbers, as opposed to computable reals, would fall into this category. The universe is a hypercomputer, and it is possible to build physical devices to harness this property and calculate non-recursive functions. For example, it is an open question whether all quantum mechanical events are Turing-computable, although it is known that rigorous models such as quantum Turing machines are equivalent to deterministic Turing machines. (They are not necessarily efficiently equivalent; see above.) John Lucas and Roger Penrose have suggested that the human mind might be the result of some kind of quantum-mechanically enhanced, "non-algorithmic" computation. There are many other technical possibilities which fall outside or between these three categories, but these serve to illustrate the range of the concept. Philosophical aspects of the thesis, regarding both physical and biological computers, are also discussed in Odifreddi's 1989 textbook on recursion theory. Non-computable functions One can formally define functions that are not computable. A well-known example of such a function is the Busy Beaver function. This function takes an input n and returns the largest number of symbols that a Turing machine with n states can print before halting, when run with no input. Finding an upper bound on the busy beaver function is equivalent to solving the halting problem, a problem known to be unsolvable by Turing machines. Since the busy beaver function cannot be computed by Turing machines, the Church–Turing thesis states that this function cannot be effectively computed by any method. Several computational models allow for the computation of (Church-Turing) non-computable functions. These are known as hypercomputers. Mark Burgin argues that super-recursive algorithms such as inductive Turing machines disprove the Church–Turing thesis. His argument relies on a definition of algorithm broader than the ordinary one, so that non-computable functions obtained from some inductive Turing machines are called computable. This interpretation of the Church–Turing thesis differs from the interpretation commonly accepted in computability theory, discussed above. The argument that super-recursive algorithms are indeed algorithms in the sense of the Church–Turing thesis has not found broad acceptance within the computability research community.
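As an illustrative sketch (the Turing-machine encoding and the hypothetical bb_step_bound function below are assumptions, not part of any real library), a computable upper bound on the maximum number of steps any halting n-state machine can take on a blank tape (a step-counting variant of the busy beaver function) would make blank-tape halting decidable:

```python
from collections import defaultdict

def run_for_steps(delta, max_steps):
    """Simulate a Turing machine whose transition table is
    delta[(state, symbol)] = (new_symbol, move, new_state), starting in
    state 0 on a blank (all-zero) tape. Return True if it reaches the
    halting state 'H' within max_steps simulated steps."""
    tape = defaultdict(int)
    state, head = 0, 0
    for _ in range(max_steps):
        if state == 'H':
            return True
        new_symbol, move, state = delta[(state, tape[head])]
        tape[head] = new_symbol
        head += 1 if move == 'R' else -1
    return state == 'H'

def halts_on_blank_tape(delta, n_states, bb_step_bound):
    """Hypothetical halting decider: bb_step_bound(n) is assumed to be a
    computable upper bound on the busy beaver step count, i.e. the most
    steps any halting n-state machine can make on a blank tape. A machine
    still running after that many steps can therefore never halt."""
    return run_for_steps(delta, bb_step_bound(n_states))
```

Since the halting problem is unsolvable, no such computable bound can exist; this is the sense in which finding an upper bound on the busy beaver function is equivalent to solving the halting problem.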
Mathematics
Computability theory
null
6857
https://en.wikipedia.org/wiki/Computer%20multitasking
Computer multitasking
In computing, multitasking is the concurrent execution of multiple tasks (also known as processes) over a certain period of time. New tasks can interrupt already started ones before they finish, instead of waiting for them to end. As a result, a computer executes segments of multiple tasks in an interleaved manner, while the tasks share common processing resources such as central processing units (CPUs) and main memory. Multitasking automatically interrupts the running program, saving its state (partial results, memory contents and computer register contents) and loading the saved state of another program and transferring control to it. This "context switch" may be initiated at fixed time intervals (pre-emptive multitasking), or the running program may be coded to signal to the supervisory software when it can be interrupted (cooperative multitasking). Multitasking does not require parallel execution of multiple tasks at exactly the same time; instead, it allows more than one task to advance over a given period of time. Even on multiprocessor computers, multitasking allows many more tasks to be run than there are CPUs. Multitasking has been a common feature of computer operating systems since at least the 1960s. It allows more efficient use of the computer hardware; when a program is waiting for some external event such as a user input or an input/output transfer with a peripheral to complete, the central processor can still be used with another program. In a time-sharing system, multiple human operators use the same processor as if it were dedicated to their use, while behind the scenes the computer is serving many users by multitasking their individual programs. In multiprogramming systems, a task runs until it must wait for an external event or until the operating system's scheduler forcibly swaps the running task out of the CPU. Real-time systems, such as those designed to control industrial robots, require timely processing; a single processor might be shared between calculations of machine movement, communications, and user interface. Often multitasking operating systems include measures to change the priority of individual tasks, so that important jobs receive more processor time than those considered less significant. Depending on the operating system, a task might be as large as an entire application program, or might be made up of smaller threads that carry out portions of the overall program. A processor intended for use with multitasking operating systems may include special hardware to securely support multiple tasks, such as memory protection, and protection rings that ensure the supervisory software cannot be damaged or subverted by user-mode program errors. The term "multitasking" has become an international term, as the same word is used in many other languages such as German, Italian, Dutch, Romanian, Czech, Danish and Norwegian. Multiprogramming In the early days of computing, CPU time was expensive, and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the central processing unit (CPU) would have to stop executing program instructions while the peripheral processed the data. This was usually very inefficient. Multiprogramming is a computing technique that enables multiple programs to be loaded into a computer's memory and executed concurrently, allowing the CPU to switch between them swiftly. 
This optimizes CPU utilization by keeping the CPU engaged with the execution of tasks, which is particularly useful when one program is waiting for I/O operations to complete. The Bull Gamma 60, initially designed in 1957 and first released in 1960, was the first computer designed with multiprogramming in mind. Its architecture featured a central memory and a Program Distributor feeding up to twenty-five autonomous processing units with code and data, and allowing concurrent operation of multiple clusters. Another such computer was the LEO III, first released in 1961. During batch processing, several different programs were loaded in the computer memory, and the first one began to run. When the first program reached an instruction waiting for a peripheral, the context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running. The use of multiprogramming was enhanced by the arrival of virtual memory and virtual machine technology, which enabled individual programs to make use of memory and operating system resources as if other concurrently running programs were, for all practical purposes, nonexistent. Multiprogramming gives no guarantee that a program will run in a timely manner. Indeed, the first program may very well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed in a deck of punched cards to an operator, and came back a few hours later for printed results. Multiprogramming greatly reduced wait times when multiple batches were being processed. Cooperative multitasking Early multitasking systems used applications that voluntarily ceded time to one another. This approach, which was eventually supported by many computer operating systems, is known today as cooperative multitasking. Although it is now rarely used in larger systems except for specific applications such as CICS or the JES2 subsystem, cooperative multitasking was once the only scheduling scheme employed by Microsoft Windows and classic Mac OS to enable multiple applications to run simultaneously. Cooperative multitasking is still used today on RISC OS systems. As a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself, either by performing extensive calculations or by busy waiting; both would cause the whole system to hang. In a server environment, this is a hazard that makes the entire environment unacceptably fragile. Preemptive multitasking Preemptive multitasking allows the computer system to more reliably guarantee to each process a regular "slice" of operating time. It also allows the system to deal rapidly with important external events like incoming data, which might require the immediate attention of one or another process. Operating systems were developed to take advantage of hardware capabilities such as interrupt mechanisms and run multiple processes preemptively. Preemptive multitasking was implemented in the PDP-6 Monitor and Multics in 1964, in OS/360 MFT in 1967, and in Unix in 1969, and was available in some operating systems for computers as small as DEC's PDP-8; it is a core feature of all Unix-like operating systems, such as Linux, Solaris and BSD with its derivatives, as well as modern versions of Windows. 
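The cooperative model described above can be sketched with Python generators: each task runs only until it voluntarily yields control, so a task that never yields starves the others, which is precisely the weakness that preemptive scheduling removes. The task names and workloads below are invented purely for illustration.

```python
def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                        # voluntary yield point; a task that never
                                     # yields would monopolize the single "CPU"

def cooperative_scheduler(tasks):
    """Round-robin over tasks, resuming each until it yields or finishes."""
    ready = list(tasks)
    while ready:
        current = ready.pop(0)
        try:
            next(current)            # run until the task's next yield
            ready.append(current)    # re-queue it behind the others
        except StopIteration:
            pass                     # task finished; drop it

cooperative_scheduler([task("A", 3), task("B", 2)])
```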
At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In primitive systems, the software would often "poll", or "busywait" while waiting for requested input (such as disk, keyboard or network input). During this time, the system was not performing useful work. With the advent of interrupts and preemptive multitasking, I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution. Possibly the earliest preemptive multitasking OS available to home users was Microware's OS-9, available for computers based on the Motorola 6809 such as the TRS-80 Color Computer 2, with the operating system supplied by Tandy as an upgrade for disk-equipped systems. Sinclair QDOS on the Sinclair QL followed in 1984, but it was not a big success. Commodore's Amiga was released the following year, offering a combination of multitasking and multimedia capabilities. Microsoft made preemptive multitasking a core feature of their flagship operating system in the early 1990s when developing Windows NT 3.1 and then Windows 95. In 1988 Apple offered A/UX as a UNIX System V-based alternative to the Classic Mac OS. In 2001 Apple switched to the NeXTSTEP-influenced Mac OS X. A similar model is used in Windows 9x and the Windows NT family, where native 32-bit applications are multitasked preemptively. 64-bit editions of Windows, both for the x86-64 and Itanium architectures, no longer support legacy 16-bit applications, and thus provide preemptive multitasking for all supported applications. Real time Another reason for multitasking lay in the design of real-time computing systems, where a number of possibly unrelated external activities need to be controlled by a single processor system. In such systems a hierarchical interrupt system is coupled with process prioritization to ensure that key activities are given a greater share of available process time. Multithreading As multitasking greatly improved the throughput of computers, programmers started to implement applications as sets of cooperating processes (e.g., one process gathering input data, one process processing input data, one process writing out results on disk). This, however, required some tools to allow processes to efficiently exchange data. Threads were born from the idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space. Thus, threads are effectively processes that run in the same memory context and share other resources with their parent processes, such as open files. Threads are described as lightweight processes because switching between threads does not involve changing the memory context. While threads are scheduled preemptively, some operating systems provide a variant to threads, named fibers, that are scheduled cooperatively. On operating systems that do not provide fibers, an application may implement its own fibers using repeated calls to worker functions. Fibers are even more lightweight than threads, and somewhat easier to program with, although they tend to lose some or all of the benefits of threads on machines with multiple processors. Some systems directly support multithreading in hardware. 
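The shared-memory nature of threads described above can be sketched in Python: the threads below all see the same counter and lock objects, and the lock keeps their read-modify-write updates from interleaving. The workload is invented purely for illustration.

```python
import threading

counter = [0]                      # shared state, visible to every thread
lock = threading.Lock()

def worker(increments):
    for _ in range(increments):
        with lock:                 # serialize the read-modify-write sequence
            counter[0] += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter[0])                  # 400000 with the lock; without it, updates may be lost
```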
Memory protection Essential to any multitasking system is the ability to safely and effectively share access to system resources. Access to memory must be strictly managed to ensure that no process can inadvertently or deliberately read or write to memory locations outside the process's address space. This is done for the purpose of general system stability and data integrity, as well as data security. In general, memory access management is a responsibility of the operating system kernel, in combination with hardware mechanisms that provide supporting functionalities, such as a memory management unit (MMU). If a process attempts to access a memory location outside its memory space, the MMU denies the request and signals the kernel to take appropriate actions; this usually results in forcibly terminating the offending process. Depending on the software and kernel design and the specific error in question, the user may receive an access violation error message such as "segmentation fault". In a well designed and correctly implemented multitasking system, a given process can never directly access memory that belongs to another process. An exception to this rule is in the case of shared memory; for example, in the System V inter-process communication mechanism the kernel allocates memory to be mutually shared by multiple processes. Such features are often used by database management software such as PostgreSQL. Inadequate memory protection mechanisms, either due to flaws in their design or poor implementations, allow for security vulnerabilities that may be potentially exploited by malicious software. Memory swapping Use of a swap file or swap partition is a way for the operating system to provide more memory than is physically available by keeping portions of the primary memory in secondary storage. While multitasking and memory swapping are two completely unrelated techniques, they are very often used together, as swapping memory allows more tasks to be loaded at the same time. Typically, a multitasking system allows another process to run when the running process hits a point where it has to wait for some portion of memory to be reloaded from secondary storage. Programming Various concurrent computing techniques are used to avoid potential problems caused by multiple tasks attempting to access the same resource. Bigger systems were sometimes built with one or more central processors and some number of I/O processors, a kind of asymmetric multiprocessing. Over the years, multitasking systems have been refined. Modern operating systems generally include detailed mechanisms for prioritizing processes, while symmetric multiprocessing has introduced new complexities and capabilities.
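The shared-memory exception to process isolation mentioned above can be sketched with Python's multiprocessing.shared_memory module; the block size and the value written are chosen purely for illustration.

```python
from multiprocessing import Process, shared_memory

def child(block_name):
    shm = shared_memory.SharedMemory(name=block_name)  # attach to the existing block
    shm.buf[0] = 42                                     # write into the shared region
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=16)
    p = Process(target=child, args=(shm.name,))
    p.start()
    p.join()
    print(shm.buf[0])      # 42: the child's write is visible to the parent
    shm.close()
    shm.unlink()           # release the OS-level shared memory segment
```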
Technology
Operating systems
null
6868
https://en.wikipedia.org/wiki/Caffeine
Caffeine
Caffeine is a central nervous system (CNS) stimulant of the methylxanthine class and is the most commonly consumed psychoactive substance globally. It is mainly used for its eugeroic (wakefulness promoting), ergogenic (physical performance-enhancing), or nootropic (cognitive-enhancing) properties. Caffeine acts by blocking binding of adenosine at a number of adenosine receptor types, inhibiting the centrally depressant effects of adenosine and enhancing the release of acetylcholine. Caffeine has a three-dimensional structure similar to that of adenosine, which allows it to bind to and block adenosine receptors. Caffeine also increases cyclic AMP levels through nonselective inhibition of phosphodiesterase, increases calcium release from intracellular stores, and antagonizes GABA receptors, although these mechanisms typically occur at concentrations beyond usual human consumption. Caffeine is a bitter, white crystalline purine, a methylxanthine alkaloid, and is chemically related to the adenine and guanine bases of deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). It is found in the seeds, fruits, nuts, or leaves of a number of plants native to Africa, East Asia and South America and helps to protect them against herbivores and from competition by preventing the germination of nearby seeds, as well as encouraging consumption by select animals such as honey bees. The best-known source of caffeine is the coffee bean, the seed of the Coffea plant. People may drink beverages containing caffeine to relieve or prevent drowsiness and to improve cognitive performance. To make these drinks, caffeine is extracted by steeping the plant product in water, a process called infusion. Caffeine-containing drinks, such as coffee, tea, and cola, are consumed globally in high volumes. In 2020, almost 10 million tonnes of coffee beans were consumed globally. Caffeine is the world's most widely consumed psychoactive drug. Unlike most other psychoactive substances, caffeine remains largely unregulated and legal in nearly all parts of the world. Caffeine is also an outlier in that its use is seen as socially acceptable in most cultures and is even encouraged. Caffeine has both positive and negative health effects. It can treat and prevent the premature infant breathing disorders bronchopulmonary dysplasia of prematurity and apnea of prematurity. Caffeine citrate is on the WHO Model List of Essential Medicines. It may confer a modest protective effect against some diseases, including Parkinson's disease. Some people experience sleep disruption or anxiety if they consume caffeine, but others show little disturbance. Evidence of a risk during pregnancy is equivocal; some authorities recommend that pregnant women limit caffeine to the equivalent of two cups of coffee per day or less. Caffeine can produce a mild form of drug dependence – associated with withdrawal symptoms such as sleepiness, headache, and irritability – when an individual stops using caffeine after repeated daily intake. Tolerance to the autonomic effects of increased blood pressure and heart rate, and increased urine output, develops with chronic use (i.e., these symptoms become less pronounced or do not occur following consistent use). Caffeine is classified by the U.S. Food and Drug Administration (FDA) as generally recognized as safe. Toxic doses, over 10 grams per day for an adult, are much higher than the typical dose of under 500 milligrams per day. 
The European Food Safety Authority reported that up to 400 mg of caffeine per day (around 5.7 mg/kg of body mass per day) does not raise safety concerns for non-pregnant adults, while intakes up to 200 mg per day for pregnant and lactating women do not raise safety concerns for the fetus or the breast-fed infants. A cup of coffee contains 80–175 mg of caffeine, depending on what "bean" (seed) is used, how it is roasted, and how it is prepared (e.g., drip, percolation, or espresso). Thus it requires roughly 50–100 ordinary cups of coffee to reach the toxic dose. However, pure powdered caffeine, which is available as a dietary supplement, can be lethal in tablespoon-sized amounts. Uses Medical Caffeine is used for both prevention and treatment of bronchopulmonary dysplasia in premature infants. It may improve weight gain during therapy and reduce the incidence of cerebral palsy as well as reduce language and cognitive delay. On the other hand, subtle long-term side effects are possible. Caffeine is used as a primary treatment for apnea of prematurity, but not for its prevention. It is also used for orthostatic hypotension treatment. Some people use caffeine-containing beverages such as coffee or tea to try to treat their asthma. Evidence to support this practice is poor. It appears that caffeine in low doses improves airway function in people with asthma, increasing forced expiratory volume (FEV1) by 5% to 18% for up to four hours. The addition of caffeine (100–130 mg) to commonly prescribed pain relievers such as paracetamol or ibuprofen modestly improves the proportion of people who achieve pain relief. Consumption of caffeine after abdominal surgery shortens the time to recovery of normal bowel function and shortens length of hospital stay. Caffeine was formerly used as a second-line treatment for ADHD. It is considered less effective than methylphenidate or amphetamine but more so than placebo for children with ADHD. Children, adolescents, and adults with ADHD are more likely to consume caffeine, perhaps as a form of self-medication. Enhancing performance Cognitive performance Caffeine is a central nervous system stimulant that may reduce fatigue and drowsiness. At normal doses, caffeine has variable effects on learning and memory, but it generally improves reaction time, wakefulness, concentration, and motor coordination. The amount of caffeine needed to produce these effects varies from person to person, depending on body size and degree of tolerance. The desired effects arise approximately one hour after consumption, and those of a moderate dose usually subside after about three or four hours. Caffeine can delay or prevent sleep and improves task performance during sleep deprivation. Shift workers who use caffeine make fewer mistakes that could result from drowsiness. Caffeine increases alertness in a dose-dependent manner in both fatigued and normal individuals. A systematic review and meta-analysis from 2014 found that concurrent caffeine and L-theanine use has synergistic psychoactive effects that promote alertness, attention, and task switching; these effects are most pronounced during the first hour post-dose. Physical performance Caffeine is a proven ergogenic aid in humans. Caffeine improves athletic performance in aerobic (especially endurance sports) and anaerobic conditions. 
Moderate doses of caffeine (around 5 mg/kg) can improve sprint performance, cycling and running time trial performance, endurance (i.e., it delays the onset of muscle fatigue and central fatigue), and cycling power output. Caffeine increases basal metabolic rate in adults. Caffeine ingestion prior to aerobic exercise increases fat oxidation, particularly in persons with low physical fitness. Caffeine improves muscular strength and power, and may enhance muscular endurance. Caffeine also enhances performance on anaerobic tests. Caffeine consumption before constant load exercise is associated with reduced perceived exertion. While this effect is not present during exercise-to-exhaustion exercise, performance is significantly enhanced. This is congruent with caffeine reducing perceived exertion, because exercise-to-exhaustion should end at the same point of fatigue. Caffeine also improves power output and reduces time to completion in aerobic time trials, an effect positively (but not exclusively) associated with longer duration exercise. Specific populations Adults For the general population of healthy adults, Health Canada advises a daily intake of no more than 400 mg. This limit was found to be safe by a 2017 systematic review on caffeine toxicology. Children In healthy children, moderate caffeine intake under 400 mg produces effects that are "modest and typically innocuous". As early as six months old, infants can metabolize caffeine at the same rate as that of adults. Higher doses of caffeine (>400 mg) can cause physiological, psychological and behavioral harm, particularly for children with psychiatric or cardiac conditions. There is no evidence that coffee stunts a child's growth. The American Academy of Pediatrics recommends that caffeine consumption, particularly in the case of energy and sports drinks, is not appropriate for children and adolescents and should be avoided. This recommendation is based on a clinical report released by American Academy of Pediatrics in 2011 with a review of 45 publications from 1994 to 2011 and includes inputs from various stakeholders (Pediatricians, Committee on nutrition, Canadian Pediatric Society, Centers for Disease Control & Prevention, Food and Drug Administration, Sports Medicine & Fitness committee, National Federations of High School Associations). For children age 12 and under, Health Canada recommends a maximum daily caffeine intake of no more than 2.5 milligrams per kilogram of body weight. Based on average body weights of children, this translates to the following age-based intake limits: Adolescents Health Canada has not developed advice for adolescents because of insufficient data. However, they suggest that daily caffeine intake for this age group be no more than 2.5 mg/kg body weight. This is because the maximum adult caffeine dose may not be appropriate for light-weight adolescents or for younger adolescents who are still growing. The daily dose of 2.5 mg/kg body weight would not cause adverse health effects in the majority of adolescent caffeine consumers. This is a conservative suggestion since older and heavier-weight adolescents may be able to consume adult doses of caffeine without experiencing adverse effects. Pregnancy and breastfeeding The metabolism of caffeine is reduced in pregnancy, especially in the third trimester, and the half-life of caffeine during pregnancy can be increased up to 15 hours (as compared to 2.5 to 4.5 hours in non-pregnant adults). 
Evidence regarding the effects of caffeine on pregnancy and breastfeeding is inconclusive. There is limited primary and secondary advice for, or against, caffeine use during pregnancy and its effects on the fetus or newborn. The UK Food Standards Agency has recommended that pregnant women should limit their caffeine intake, out of prudence, to less than 200 mg of caffeine a day – the equivalent of two cups of instant coffee, or one and a half to two cups of fresh coffee. The American Congress of Obstetricians and Gynecologists (ACOG) concluded in 2010 that caffeine consumption is safe up to 200 mg per day in pregnant women. For women who breastfeed, are pregnant, or may become pregnant, Health Canada recommends a maximum daily caffeine intake of no more than 300 mg, or a little over two 8 oz (237 mL) cups of coffee. A 2017 systematic review on caffeine toxicology found evidence supporting that caffeine consumption up to 300 mg/day for pregnant women is generally not associated with adverse reproductive or developmental effects. There are conflicting reports in the scientific literature about caffeine use during pregnancy. A 2011 review found that caffeine during pregnancy does not appear to increase the risk of congenital malformations, miscarriage or growth retardation even when consumed in moderate to high amounts. Other reviews, however, concluded that there is some evidence that higher caffeine intake by pregnant women may be associated with a higher risk of giving birth to a low birth weight baby, and may be associated with a higher risk of pregnancy loss. A systematic review, analyzing the results of observational studies, suggests that women who consume large amounts of caffeine (greater than 300 mg/day) prior to becoming pregnant may have a higher risk of experiencing pregnancy loss. Adverse effects Physiological Caffeine in coffee and other caffeinated drinks can affect gastrointestinal motility and gastric acid secretion. In postmenopausal women, high caffeine consumption can accelerate bone loss. Caffeine, alongside other factors such as stress and fatigue, can also increase the pressure in various muscles, including the eyelids. Acute ingestion of caffeine in large doses (at least 250–300 mg, equivalent to the amount found in 2–3 cups of coffee or 5–8 cups of tea) results in a short-term stimulation of urine output in individuals who have been deprived of caffeine for a period of days or weeks. This increase is due to both a diuresis (increase in water excretion) and a natriuresis (increase in sodium excretion); it is mediated via proximal tubular adenosine receptor blockade. The acute increase in urinary output may increase the risk of dehydration. However, chronic users of caffeine develop a tolerance to this effect and experience no increase in urinary output. Psychological Minor undesired symptoms from caffeine ingestion not sufficiently severe to warrant a psychiatric diagnosis are common and include mild anxiety, jitteriness, insomnia, increased sleep latency, and reduced coordination. Caffeine can have negative effects on anxiety disorders. According to a 2011 literature review, caffeine use may induce anxiety and panic disorders in people with Parkinson's disease. At high doses, typically greater than 300 mg, caffeine can both cause and worsen anxiety. For some people, discontinuing caffeine use can significantly reduce anxiety. In moderate doses, caffeine has been associated with reduced symptoms of depression and lower suicide risk. 
Two reviews indicate that increased consumption of coffee and caffeine may reduce the risk of depression. Some textbooks state that caffeine is a mild euphoriant, while others state that it is not a euphoriant. Caffeine-induced anxiety disorder is a subclass of the DSM-5 diagnosis of substance/medication-induced anxiety disorder. Reinforcement disorders Addiction Whether caffeine can result in an addictive disorder depends on how addiction is defined. Compulsive caffeine consumption under any circumstances has not been observed, and caffeine is therefore not generally considered addictive. However, some diagnostic models, such as the ICDM-9 and ICD-10, include a classification of caffeine addiction under a broader diagnostic model. Some state that certain users can become addicted and therefore unable to decrease use even though they know there are negative health effects. Caffeine does not appear to be a reinforcing stimulus, and some degree of aversion may actually occur, with people preferring placebo over caffeine in a study on drug abuse liability published in an NIDA research monograph. Some state that research does not provide support for an underlying biochemical mechanism for caffeine addiction. Other research states it can affect the reward system. "Caffeine addiction" was added to the ICDM-9 and ICD-10. However, its addition was contested with claims that this diagnostic model of caffeine addiction is not supported by evidence. The American Psychiatric Association's DSM-5 does not include the diagnosis of a caffeine addiction but proposes criteria for the disorder for more study. Dependence and withdrawal Withdrawal can cause mild to clinically significant distress or impairment in daily functioning. The frequency at which this occurs is self-reported at 11%, but in lab tests only half of the people who report withdrawal actually experience it, casting doubt on many claims of dependence. About 13% of cases of caffeine withdrawal were reported as moderate. Moderate physical dependence and withdrawal symptoms may occur upon abstinence following daily intake of greater than 100 mg of caffeine, although these symptoms last no longer than a day. Some symptoms associated with psychological dependence may also occur during withdrawal. The diagnostic criteria for caffeine withdrawal require a previous prolonged daily use of caffeine. Following 24 hours of a marked reduction in consumption, a minimum of 3 of these signs or symptoms is required to meet withdrawal criteria: difficulty concentrating, depressed mood/irritability, flu-like symptoms, headache, and fatigue. Additionally, the signs and symptoms must disrupt important areas of functioning and are not associated with effects of another condition. The ICD-11 includes caffeine dependence as a distinct diagnostic category, which closely mirrors the DSM-5's proposed set of criteria for "caffeine-use disorder". Caffeine use disorder refers to dependence on caffeine characterized by failure to control caffeine consumption despite negative physiological consequences. The APA, which published the DSM-5, acknowledged that there was sufficient evidence to create a diagnostic model of caffeine dependence for the DSM-5, but they noted that the clinical significance of the disorder is unclear. Due to this inconclusive evidence on clinical significance, the DSM-5 classifies caffeine-use disorder as a "condition for further study". 
Tolerance to the effects of caffeine occurs for caffeine-induced elevations in blood pressure and the subjective feelings of nervousness. Sensitization, the process whereby effects become more prominent with use, may occur for positive effects such as feelings of alertness and wellbeing. Tolerance varies for daily, regular caffeine users and high caffeine users. High doses of caffeine (750 to 1200 mg/day spread throughout the day) have been shown to produce complete tolerance to some, but not all of the effects of caffeine. Doses as low as 100 mg/day, such as a cup of coffee or two to three servings of caffeinated soft drink, may continue to cause sleep disruption, among other intolerances. Non-regular caffeine users have the least caffeine tolerance for sleep disruption. Some coffee drinkers develop tolerance to its undesired sleep-disrupting effects, but others apparently do not. Risk of other diseases A neuroprotective effect of caffeine against Alzheimer's disease and dementia is possible but the evidence is inconclusive. Caffeine may lessen the severity of acute mountain sickness if taken a few hours prior to attaining a high altitude. One meta-analysis has found that caffeine consumption is associated with a reduced risk of type 2 diabetes. Regular caffeine consumption may reduce the risk of developing Parkinson's disease and may slow the progression of Parkinson's disease. Caffeine increases intraocular pressure in those with glaucoma but does not appear to affect normal individuals. The DSM-5 also includes other caffeine-induced disorders consisting of caffeine-induced anxiety disorder, caffeine-induced sleep disorder and unspecified caffeine-related disorders. The first two disorders are classified under "Anxiety Disorder" and "Sleep-Wake Disorder" because they share similar characteristics. Other disorders that present with significant distress and impairment of daily functioning that warrant clinical attention but do not meet the criteria to be diagnosed under any specific disorders are listed under "Unspecified Caffeine-Related Disorders". Energy crash Caffeine is reputed to cause a fall in energy several hours after drinking it, but this is not well researched. Overdose Consumption of large amounts of caffeine per day is associated with a condition known as caffeinism. Caffeinism usually combines caffeine dependency with a wide range of unpleasant symptoms including nervousness, irritability, restlessness, insomnia, headaches, and palpitations after caffeine use. Caffeine overdose can result in a state of central nervous system overstimulation known as caffeine intoxication, a clinically significant temporary condition that develops during, or shortly after, the consumption of caffeine. This syndrome typically occurs only after ingestion of large amounts of caffeine, well over the amounts found in typical caffeinated beverages and caffeine tablets (e.g., more than 400–500 mg at a time). According to the DSM-5, caffeine intoxication may be diagnosed if five (or more) of the following symptoms develop after recent consumption of caffeine: restlessness, nervousness, excitement, insomnia, flushed face, diuresis, gastrointestinal disturbance, muscle twitching, rambling flow of thought and speech, tachycardia or cardiac arrhythmia, periods of inexhaustibility, and psychomotor agitation. According to the International Classification of Diseases (ICD-11), cases of very high caffeine intake (e.g. 
> 5 g) may result in caffeine intoxication with symptoms including mania, depression, lapses in judgment, disorientation, disinhibition, delusions, hallucinations or psychosis, and rhabdomyolysis. Energy drinks High caffeine consumption in energy drinks (at least one liter or 320 mg of caffeine) was associated with short-term cardiovascular side effects including hypertension, prolonged QT interval, and heart palpitations. These cardiovascular side effects were not seen with smaller amounts of caffeine consumption in energy drinks (less than 200 mg). Severe intoxication There is no known antidote or reversal agent for caffeine intoxication. Treatment of mild caffeine intoxication is directed toward symptom relief; severe intoxication may require peritoneal dialysis, hemodialysis, or hemofiltration. Intralipid infusion therapy is indicated in cases of imminent risk of cardiac arrest in order to scavenge the free serum caffeine. Lethal dose Death from caffeine ingestion appears to be rare, and is most commonly caused by an intentional overdose of medications. In 2016, 3702 caffeine-related exposures were reported to Poison Control Centers in the United States, of which 846 required treatment at a medical facility, and 16 had a major outcome; several caffeine-related deaths are reported in case studies. The LD50 of caffeine in rats is 192 milligrams per kilogram of body mass. The fatal dose in humans is estimated to be 150–200 milligrams per kilogram, which is 10.5–14 grams for a typical adult, equivalent to about 75–100 cups of coffee. There are cases where doses as low as 57 milligrams per kilogram have been fatal. A number of fatalities have been caused by overdoses of readily available powdered caffeine supplements, for which the estimated lethal amount is less than a tablespoon. The lethal dose is lower in individuals whose ability to metabolize caffeine is impaired due to genetics or chronic liver disease. A death was reported in 2013 of a man with liver cirrhosis who overdosed on caffeinated mints. Interactions Caffeine is a substrate for CYP1A2, and interacts with many substances through this and other mechanisms. Alcohol According to Digit Symbol Substitution Test (DSST) results, alcohol causes a decrease in performance on the standardized tests, and caffeine causes a significant improvement. When alcohol and caffeine are consumed jointly, the effects of the caffeine are changed, but the alcohol effects remain the same. For example, consuming additional caffeine does not reduce the effect of alcohol. However, the jitteriness and alertness given by caffeine is decreased when additional alcohol is consumed. Alcohol consumption alone reduces both inhibitory and activational aspects of behavioral control. Caffeine antagonizes the effect of alcohol on the activational aspect of behavioral control, but has no effect on the inhibitory behavioral control. The Dietary Guidelines for Americans recommend avoidance of concomitant consumption of alcohol and caffeine, as taking them together may lead to increased alcohol consumption, with a higher risk of alcohol-associated injury. Smoking Smoking tobacco has been shown to increase caffeine clearance by 56% as a result of polycyclic aromatic hydrocarbons inducing the CYP1A2 enzyme. The CYP1A2 enzyme that is induced by smoking is responsible for the metabolism of caffeine; increased enzyme activity leads to increased caffeine clearance, and is associated with greater coffee consumption for regular smokers. 
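As a rough arithmetic check of the fatal-dose figures quoted above, assuming a 70 kg adult and about 140 mg of caffeine per cup (both values assumed here purely for illustration, within the per-cup range given earlier):

\[
70\,\mathrm{kg} \times 150\,\tfrac{\mathrm{mg}}{\mathrm{kg}} = 10.5\,\mathrm{g},
\qquad
70\,\mathrm{kg} \times 200\,\tfrac{\mathrm{mg}}{\mathrm{kg}} = 14\,\mathrm{g},
\]
\[
\frac{10.5\,\mathrm{g}}{140\,\mathrm{mg/cup}} = 75\ \text{cups},
\qquad
\frac{14\,\mathrm{g}}{140\,\mathrm{mg/cup}} = 100\ \text{cups},
\]

which matches the quoted estimate of roughly 75–100 cups of coffee.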
Birth control Birth control pills can extend the half-life of caffeine by as much as 40%, requiring greater attention to caffeine consumption. Medications Caffeine sometimes increases the effectiveness of some medications, such as those for headaches. Caffeine was determined to increase the potency of some over-the-counter analgesic medications by 40%. The pharmacological effects of adenosine may be blunted in individuals taking large quantities of methylxanthines like caffeine. Some other examples of methylxanthines include the medications theophylline and aminophylline, which are prescribed to relieve symptoms of asthma or COPD. Pharmacology Pharmacodynamics In the absence of caffeine and when a person is awake and alert, little adenosine is present in CNS neurons. With a continued wakeful state, over time adenosine accumulates in the neuronal synapse, in turn binding to and activating adenosine receptors found on certain CNS neurons; when activated, these receptors produce a cellular response that ultimately increases drowsiness. When caffeine is consumed, it antagonizes adenosine receptors; in other words, caffeine prevents adenosine from activating the receptor by blocking the location on the receptor where adenosine binds to it. As a result, caffeine temporarily prevents or relieves drowsiness, and thus maintains or restores alertness. Receptor and ion channel targets Caffeine is an antagonist of adenosine A2A receptors, and knockout mouse studies have specifically implicated antagonism of the A2A receptor as responsible for the wakefulness-promoting effects of caffeine. Antagonism of A2A receptors in the ventrolateral preoptic area (VLPO) reduces inhibitory GABA neurotransmission to the tuberomammillary nucleus, a histaminergic projection nucleus that activation-dependently promotes arousal. This disinhibition of the tuberomammillary nucleus is the downstream mechanism by which caffeine produces wakefulness-promoting effects. Caffeine is an antagonist of all four adenosine receptor subtypes (A1, A2A, A2B, and A3), although with varying potencies. The affinity (KD) values of caffeine for the human adenosine receptors are 12 μM at A1, 2.4 μM at A2A, 13 μM at A2B, and 80 μM at A3. Antagonism of adenosine receptors by caffeine also stimulates the medullary vagal, vasomotor, and respiratory centers, which increases respiratory rate, reduces heart rate, and constricts blood vessels. Adenosine receptor antagonism also promotes neurotransmitter release (e.g., monoamines and acetylcholine), which endows caffeine with its stimulant effects; adenosine acts as an inhibitory neurotransmitter that suppresses activity in the central nervous system. Heart palpitations are caused by blockade of the A1 receptor. Because caffeine is both water- and lipid-soluble, it readily crosses the blood–brain barrier that separates the bloodstream from the interior of the brain. Once in the brain, the principal mode of action is as a nonselective antagonist of adenosine receptors (in other words, an agent that reduces the effects of adenosine). The caffeine molecule is structurally similar to adenosine, and is capable of binding to adenosine receptors on the surface of cells without activating them, thereby acting as a competitive antagonist. In addition to its activity at adenosine receptors, caffeine is an inositol trisphosphate receptor 1 antagonist and a voltage-independent activator of the ryanodine receptors (RYR1, RYR2, and RYR3). It is also a competitive antagonist of the ionotropic glycine receptor. 
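To make the half-life interactions above concrete: caffeine elimination is approximately first-order, so the amount remaining after time \(t\) follows the expression below. The 5-hour baseline half-life and 200 mg dose are assumed purely for illustration.

\[
C(t) = C_0 \left(\tfrac{1}{2}\right)^{t/t_{1/2}}
\]

With \(t_{1/2} = 5\) h, a 200 mg dose leaves about \(200 \times (1/2)^{10/5} = 50\) mg after 10 hours; extending the half-life by 40% to 7 h leaves about \(200 \times (1/2)^{10/7} \approx 74\) mg over the same period.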
Effects on striatal dopamine While caffeine does not directly bind to any dopamine receptors, it influences the binding activity of dopamine at its receptors in the striatum by binding to adenosine receptors that have formed GPCR heteromers with dopamine receptors, specifically the A1–D1 receptor heterodimer (this is a receptor complex with one adenosine A1 receptor and one dopamine D1 receptor) and the A2A–D2 receptor heterotetramer (this is a receptor complex with two adenosine A2A receptors and two dopamine D2 receptors). The A2A–D2 receptor heterotetramer has been identified as a primary pharmacological target of caffeine, primarily because it mediates some of its psychostimulant effects and its pharmacodynamic interactions with dopaminergic psychostimulants. Caffeine also causes the release of dopamine in the dorsal striatum and nucleus accumbens core (a substructure within the ventral striatum), but not the nucleus accumbens shell, by antagonizing A1 receptors in the axon terminal of dopamine neurons and A1–A2A heterodimers (a receptor complex composed of one adenosine A1 receptor and one adenosine A2A receptor) in the axon terminal of glutamate neurons. During chronic caffeine use, caffeine-induced dopamine release within the nucleus accumbens core is markedly reduced due to drug tolerance. Enzyme targets Caffeine, like other xanthines, also acts as a phosphodiesterase inhibitor. As a competitive nonselective phosphodiesterase inhibitor, caffeine raises intracellular cyclic AMP, activates protein kinase A, inhibits TNF-alpha and leukotriene synthesis, and reduces inflammation and innate immunity. Caffeine also affects the cholinergic system where it is a moderate inhibitor of the enzyme acetylcholinesterase. Pharmacokinetics Caffeine from coffee or other beverages is absorbed by the small intestine within 45 minutes of ingestion and distributed throughout all bodily tissues. Peak blood concentration is reached within 1–2 hours. It is eliminated by first-order kinetics. Caffeine can also be absorbed rectally, evidenced by suppositories of ergotamine tartrate and caffeine (for the relief of migraine) and of chlorobutanol and caffeine (for the treatment of hyperemesis). However, rectal absorption is less efficient than oral: the maximum concentration (Cmax) and total amount absorbed (AUC) are both about 30% (i.e., 1/3.5) of the oral amounts. Caffeine's biological half-life – the time required for the body to eliminate one-half of a dose – varies widely among individuals according to factors such as pregnancy, other drugs, liver enzyme function level (needed for caffeine metabolism) and age. In healthy adults, caffeine's half-life is between 3 and 7 hours. The half-life is decreased by 30-50% in adult male smokers, approximately doubled in women taking oral contraceptives, and prolonged in the last trimester of pregnancy. In newborns the half-life can be 80 hours or more, dropping rapidly with age, possibly to less than the adult value by age 6 months. The antidepressant fluvoxamine (Luvox) reduces the clearance of caffeine by more than 90%, and increases its elimination half-life more than tenfold, from 4.9 hours to 56 hours. Caffeine is metabolized in the liver by the cytochrome P450 oxidase enzyme system (particularly by the CYP1A2 isozyme) into three dimethylxanthines, each of which has its own effects on the body: Paraxanthine (84%): Increases lipolysis, leading to elevated glycerol and free fatty acid levels in blood plasma. 
Theobromine (12%): Dilates blood vessels and increases urine volume. Theobromine is also the principal alkaloid in the cocoa bean (chocolate). Theophylline (4%): Relaxes smooth muscles of the bronchi, and is used to treat asthma. The therapeutic dose of theophylline, however, is many times greater than the levels attained from caffeine metabolism. 1,3,7-Trimethyluric acid is a minor caffeine metabolite. 7-Methylxanthine is also a metabolite of caffeine. Each of the above metabolites is further metabolized and then excreted in the urine. Caffeine can accumulate in individuals with severe liver disease, increasing its half-life. A 2011 review found that increased caffeine intake was associated with a variation in two genes that increase the rate of caffeine catabolism. Subjects who had this mutation on both chromosomes consumed 40 mg more caffeine per day than others. This is presumably due to the need for a higher intake to achieve a comparable desired effect, not that the gene led to a disposition for greater incentive of habituation. Chemistry Pure anhydrous caffeine is a bitter-tasting, white, odorless powder with a melting point of 235–238 °C. Caffeine is moderately soluble in water at room temperature (2 g/100 mL), but highly soluble in boiling water (66 g/100 mL). It is also moderately soluble in ethanol (1.5 g/100 mL). It is weakly basic (pKa of conjugate acid = ~0.6) requiring strong acid to protonate it. Caffeine does not contain any stereogenic centers and hence is classified as an achiral molecule. The xanthine core of caffeine contains two fused rings, a pyrimidinedione and imidazole. The pyrimidinedione in turn contains two amide functional groups that exist predominantly in a zwitterionic resonance form in which the nitrogen atoms are double bonded to their adjacent amide carbon atoms. Hence all six of the atoms within the pyrimidinedione ring system are sp2 hybridized and planar. The imidazole ring also exhibits resonance. Therefore, the fused 5,6 ring core of caffeine contains a total of ten pi electrons and hence according to Hückel's rule is aromatic. Synthesis The biosynthesis of caffeine is an example of convergent evolution among different species. Caffeine may be synthesized in the lab starting with 1,3-dimethylurea and malonic acid. Production of synthesized caffeine largely takes place in pharmaceutical plants in China. Synthetic and natural caffeine are chemically identical and nearly indistinguishable. The primary distinction is that synthetic caffeine is manufactured from urea and chloroacetic acid, while natural caffeine is extracted from plant sources, a process known as decaffeination. Despite the different production methods, the final product and its effects on the body are similar. Research on synthetic caffeine supports that it has the same stimulating effects on the body as natural caffeine. Although many claim that natural caffeine is absorbed more slowly and therefore leads to a gentler caffeine crash, there is little scientific evidence supporting the notion. Decaffeination Germany, the birthplace of decaffeinated coffee, is home to several decaffeination plants, including the world's largest, Coffein Compagnie. Over half of the decaf coffee sold in the U.S. first travels from the tropics to Germany for caffeine removal before making its way to American consumers. Extraction of caffeine from coffee, to produce caffeine and decaffeinated coffee, can be performed using a number of solvents. 
The main methods are as follows:
Water extraction: Coffee beans are soaked in water. The water, which contains many other compounds in addition to caffeine and contributes to the flavor of coffee, is then passed through activated charcoal, which removes the caffeine. The water can then be put back with the beans and evaporated dry, leaving decaffeinated coffee with its original flavor. Coffee manufacturers recover the caffeine and resell it for use in soft drinks and over-the-counter caffeine tablets.
Supercritical carbon dioxide extraction: Supercritical carbon dioxide is an excellent nonpolar solvent for caffeine, and is safer than the organic solvents that are otherwise used. The extraction process is simple: CO2 is forced through the green coffee beans at temperatures above 31.1 °C and pressures above 73 atm. Under these conditions, CO2 is in a "supercritical" state: it has gas-like properties that allow it to penetrate deep into the beans, but also liquid-like properties that dissolve 97–99% of the caffeine. The caffeine-laden CO2 is then sprayed with high-pressure water to remove the caffeine. The caffeine can then be isolated by charcoal adsorption (as above) or by distillation, recrystallization, or reverse osmosis.
Extraction by organic solvents: Certain organic solvents such as ethyl acetate present much less health and environmental hazard than the chlorinated and aromatic organic solvents used formerly. Another method is to use triglyceride oils obtained from spent coffee grounds.
"Decaffeinated" coffees do in fact contain caffeine in many cases – some commercially available decaffeinated coffee products contain considerable levels. One study found that decaffeinated coffee contained 10 mg of caffeine per cup, compared to approximately 85 mg of caffeine per cup for regular coffee.
Detection in body fluids
Caffeine can be quantified in blood, plasma, or serum to monitor therapy in neonates, confirm a diagnosis of poisoning, or facilitate a medicolegal death investigation. Plasma caffeine levels are usually in the range of 2–10 mg/L in coffee drinkers, 12–36 mg/L in neonates receiving treatment for apnea, and 40–400 mg/L in victims of acute overdosage. Urinary caffeine concentration is frequently measured in competitive sports programs, for which a level in excess of 15 mg/L is usually considered to represent abuse.
Analogs
Some analog substances have been created which mimic caffeine's properties in either function or structure or both. Of the latter group are the xanthines DMPX and 8-chlorotheophylline, which is an ingredient in Dramamine. Members of a class of nitrogen-substituted xanthines are often proposed as potential alternatives to caffeine. Many other xanthine analogues constituting the adenosine receptor antagonist class have also been elucidated. Some other caffeine analogs:
Dipropylcyclopentylxanthine
8-Cyclopentyl-1,3-dimethylxanthine
8-Phenyltheophylline
Precipitation of tannins
Caffeine, like other alkaloids such as cinchonine, quinine or strychnine, precipitates polyphenols and tannins. This property can be used in a quantitation method.
Natural occurrence
Around thirty plant species are known to contain caffeine. Common sources are the "beans" (seeds) of the two cultivated coffee plants, Coffea arabica and Coffea canephora (the quantity varies, but 1.3% is a typical value); and of the cocoa plant, Theobroma cacao; the leaves of the tea plant; and kola nuts.
Other sources include the leaves of yaupon holly, South American holly yerba mate, and Amazonian holly guayusa; and seeds from Amazonian maple guarana berries. Temperate climates around the world have produced unrelated caffeine-containing plants. Caffeine in plants acts as a natural pesticide: it can paralyze and kill predator insects feeding on the plant. High caffeine levels are found in coffee seedlings when they are developing foliage and lack mechanical protection. In addition, high caffeine levels are found in the surrounding soil of coffee seedlings, which inhibits seed germination of nearby coffee seedlings, thus giving seedlings with the highest caffeine levels fewer competitors for existing resources for survival. Caffeine is stored in tea leaves in two places. Firstly, in the cell vacuoles where it is complexed with polyphenols. This caffeine probably is released into the mouth parts of insects, to discourage herbivory. Secondly, around the vascular bundles, where it probably inhibits pathogenic fungi from entering and colonizing the vascular bundles. Caffeine in nectar may improve the reproductive success of the pollen producing plants by enhancing the reward memory of pollinators such as honey bees. The differing perceptions in the effects of ingesting beverages made from various plants containing caffeine could be explained by the fact that these beverages also contain varying mixtures of other methylxanthine alkaloids, including the cardiac stimulants theophylline and theobromine, and polyphenols that can form insoluble complexes with caffeine. Products Products containing caffeine include coffee, tea, soft drinks ("colas"), energy drinks, other beverages, chocolate, caffeine tablets, other oral products, and inhalation products. According to a 2020 study in the United States, coffee is the major source of caffeine intake in middle-aged adults, while soft drinks and tea are the major sources in adolescents. Energy drinks are more commonly consumed as a source of caffeine in adolescents as compared to adults. Beverages Coffee The world's primary source of caffeine is the coffee "bean" (the seed of the coffee plant), from which coffee is brewed. Caffeine content in coffee varies widely depending on the type of coffee bean and the method of preparation used; even beans within a given bush can show variations in concentration. In general, one serving of coffee ranges from 80 to 100 milligrams, for a single shot (30 milliliters) of arabica-variety espresso, to approximately 100–125 milligrams for a cup (120 milliliters) of drip coffee. Arabica coffee typically contains half the caffeine of the robusta variety. In general, dark-roast coffee has slightly less caffeine than lighter roasts because the roasting process reduces caffeine content of the bean by a small amount. Tea Tea contains more caffeine than coffee by dry weight. A typical serving, however, contains much less, since less of the product is used as compared to an equivalent serving of coffee. Also contributing to caffeine content are growing conditions, processing techniques, and other variables. Thus, teas contain varying amounts of caffeine. Tea contains small amounts of theobromine and slightly higher levels of theophylline than coffee. Preparation and many other factors have a significant impact on tea, and color is a poor indicator of caffeine content. 
Teas like the pale Japanese green tea, gyokuro, for example, contain far more caffeine than much darker teas like lapsang souchong, which has minimal caffeine content.
Soft drinks and energy drinks
Caffeine is also a common ingredient of soft drinks, such as cola, originally prepared from kola nuts. Soft drinks typically contain 0 to 55 milligrams of caffeine per 12-ounce (355 mL) serving. By contrast, energy drinks, such as Red Bull, can start at 80 milligrams of caffeine per serving. The caffeine in these drinks either originates from the ingredients used or is an additive derived from the product of decaffeination or from chemical synthesis. Guarana, a primary ingredient of energy drinks, contains large amounts of caffeine with small amounts of theobromine and theophylline in a naturally occurring slow-release excipient.
Other beverages
Maté is a drink popular in many parts of South America. Its preparation consists of filling a gourd with the leaves of the South American holly yerba mate, pouring hot but not boiling water over the leaves, and drinking with a straw, the bombilla, which acts as a filter so as to draw only the liquid and not the yerba leaves. Guaraná is a soft drink originating in Brazil made from the seeds of the guaraná fruit. The leaves of Ilex guayusa, the Ecuadorian holly tree, are placed in boiling water to make a guayusa tea. The leaves of Ilex vomitoria, the yaupon holly tree, are placed in boiling water to make a yaupon tea. Commercially prepared coffee-flavoured milk beverages are popular in Australia. Examples include Oak's Ice Coffee and Farmers Union Iced Coffee. The amount of caffeine in these beverages can vary widely, and caffeine concentrations can differ significantly from the manufacturer's claims.
Cacao solids
Cocoa solids (derived from cocoa beans) contain 230 mg caffeine per 100 g. The caffeine content varies between cocoa bean strains. Caffeine content in mg/g (sorted from lowest to highest caffeine content):
Forastero (defatted): 1.3 mg/g
Nacional (defatted): 2.4 mg/g
Trinitario (defatted): 6.3 mg/g
Criollo (defatted): 11.3 mg/g
Chocolate
Caffeine per 100 g:
Dark chocolate, 70–85% cacao solids: 80 mg
Dark chocolate, 60–69% cacao solids: 86 mg
Dark chocolate, 45–59% cacao solids: 43 mg
Milk chocolate: 20 mg
The stimulant effect of chocolate may be due to a combination of theobromine and theophylline, as well as caffeine.
Tablets
Tablets offer several advantages over coffee, tea, and other caffeinated beverages, including convenience, known dosage, and avoidance of concomitant intake of sugar, acids, and fluids. The use of caffeine in this form is said to improve mental alertness. These tablets are commonly used by students studying for their exams and by people who work or drive for long hours.
Other oral products
One U.S. company is marketing oral dissolvable caffeine strips. Another intake route is SpazzStick, a caffeinated lip balm. Alert Energy Caffeine Gum was introduced in the United States in 2013, but was voluntarily withdrawn after an announcement of an investigation by the FDA of the health effects of added caffeine in foods.
Inhalants
Similar to an e-cigarette, a caffeine inhaler may be used to deliver caffeine or a stimulant like guarana by vaping. In 2012, the FDA sent a warning letter to one of the companies marketing an inhaler, expressing concerns about the lack of safety information available on inhaled caffeine.
Combinations with other drugs
Some beverages combine alcohol with caffeine to create a caffeinated alcoholic drink.
The stimulant effects of caffeine may mask the depressant effects of alcohol, potentially reducing the user's awareness of their level of intoxication. Such beverages have been the subject of bans due to safety concerns. In particular, the United States Food and Drug Administration has classified caffeine added to malt liquor beverages as an "unsafe food additive". Ya ba contains a combination of methamphetamine and caffeine. Painkillers such as propyphenazone/paracetamol/caffeine combine caffeine with an analgesic. History Discovery and spread of use According to Chinese legend, the Chinese emperor Shennong, reputed to have reigned in about 3000 BCE, inadvertently discovered tea when he noted that when certain leaves fell into boiling water, a fragrant and restorative drink resulted. Shennong is also mentioned in Lu Yu's Cha Jing, a famous early work on the subject of tea. The earliest credible evidence of either coffee drinking or knowledge of the coffee plant appears in the middle of the fifteenth century, in the Sufi monasteries of the Yemen in southern Arabia. From Mocha, coffee spread to Egypt and North Africa, and by the 16th century, it had reached the rest of the Middle East, Persia and Turkey. From the Middle East, coffee drinking spread to Italy, then to the rest of Europe, and coffee plants were transported by the Dutch to the East Indies and to the Americas. Kola nut use appears to have ancient origins. It is chewed in many West African cultures, in both private and social settings, to restore vitality and ease hunger pangs. The earliest evidence of cocoa bean use comes from residue found in an ancient Mayan pot dated to 600 BCE. Also, chocolate was consumed in a bitter and spicy drink called xocolatl, often seasoned with vanilla, chile pepper, and achiote. Xocolatl was believed to fight fatigue, a belief probably attributable to the theobromine and caffeine content. Chocolate was an important luxury good throughout pre-Columbian Mesoamerica, and cocoa beans were often used as currency. Xocolatl was introduced to Europe by the Spaniards, and became a popular beverage by 1700. The Spaniards also introduced the cacao tree into the West Indies and the Philippines. The leaves and stems of the yaupon holly (Ilex vomitoria) were used by Native Americans to brew a tea called asi or the "black drink". Archaeologists have found evidence of this use far into antiquity, possibly dating to Late Archaic times. Chemical identification, isolation, and synthesis In 1819, the German chemist Friedlieb Ferdinand Runge isolated caffeine for the first time; he called it "Kaffebase" (i.e., a base that exists in coffee). According to Runge, he did this at the behest of Johann Wolfgang von Goethe. In 1821, caffeine was isolated both by the French chemist Pierre Jean Robiquet and by another pair of French chemists, Pierre-Joseph Pelletier and Joseph Bienaimé Caventou, according to Swedish chemist Jöns Jacob Berzelius in his yearly journal. Furthermore, Berzelius stated that the French chemists had made their discoveries independently of any knowledge of Runge's or each other's work. 
However, Berzelius later acknowledged Runge's priority in the extraction of caffeine, stating: "However, at this point, it should not remain unmentioned that Runge (in his Phytochemical Discoveries, 1820, pages 146–147) specified the same method and described caffeine under the name Caffeebase a year earlier than Robiquet, to whom the discovery of this substance is usually attributed, having made the first oral announcement about it at a meeting of the Pharmacy Society in Paris." Pelletier's article on caffeine was the first to use the term in print (in the French form caféine, from the French word for coffee, café). It corroborates Berzelius's account: Robiquet was one of the first to isolate and describe the properties of pure caffeine, whereas Pelletier was the first to perform an elemental analysis. In 1827, M. Oudry isolated "théine" from tea, but in 1838 it was proved by Mulder and by Carl Jobst that theine was actually the same as caffeine. In 1895, German chemist Hermann Emil Fischer (1852–1919) first synthesized caffeine from its chemical components (i.e. a "total synthesis"), and two years later, he also derived the structural formula of the compound. This was part of the work for which Fischer was awarded the Nobel Prize in 1902.
Historic regulations
Because it was recognized that coffee contained some compound that acted as a stimulant, first coffee and later also caffeine have sometimes been subject to regulation. For example, in the 16th century Islamists in Mecca and in the Ottoman Empire made coffee illegal for some classes. Charles II of England tried to ban it in 1676, Frederick II of Prussia banned it in 1777, and coffee was banned in Sweden at various times between 1756 and 1823. In 1911, caffeine became the focus of one of the earliest documented health scares, when the US government seized 40 barrels and 20 kegs of Coca-Cola syrup in Chattanooga, Tennessee, alleging the caffeine in its drink was "injurious to health". Although the Supreme Court later ruled in favor of Coca-Cola in United States v. Forty Barrels and Twenty Kegs of Coca-Cola, two bills were introduced to the U.S. House of Representatives in 1912 to amend the Pure Food and Drug Act, adding caffeine to the list of "habit-forming" and "deleterious" substances, which must be listed on a product's label.
Society and culture
Regulations
United States
The US Food and Drug Administration (FDA) considers beverages containing less than 0.02% caffeine to be safe, but caffeine powder, which is sold as a dietary supplement, is unregulated. It is a regulatory requirement that the label of most prepackaged foods must declare a list of ingredients, including food additives such as caffeine, in descending order of proportion. However, there is no regulatory provision for mandatory quantitative labeling of caffeine (e.g., milligrams of caffeine per stated serving size). There are a number of food ingredients that naturally contain caffeine. These ingredients must appear in food ingredient lists. However, as is the case for "food additive caffeine", there is no requirement to identify the quantitative amount of caffeine in composite foods containing ingredients that are natural sources of caffeine. While coffee or chocolate are broadly recognized as caffeine sources, some ingredients (e.g., guarana, yerba maté) are likely less recognized as caffeine sources. For these natural sources of caffeine, there is no regulatory provision requiring that a food label identify the presence of caffeine or state the amount of caffeine present in the food.
The FDA guidance was updated in 2018.
Consumption
Global consumption of caffeine has been estimated at 120,000 tonnes per year, making it the world's most popular psychoactive substance. The consumption of caffeine remained stable between 1997 and 2015. Coffee, tea and soft drinks are the most common caffeine sources, with energy drinks contributing little to the total caffeine intake across all age groups.
Religions
The Seventh-day Adventist Church asked its members to "abstain from caffeinated drinks", but has removed this from baptismal vows (while still recommending abstention as policy). Some adherents of these religions believe that one is not supposed to consume a non-medical, psychoactive substance, or believe that one is not supposed to consume a substance that is addictive. The Church of Jesus Christ of Latter-day Saints has said the following with regard to caffeinated beverages: "... the Church revelation spelling out health practices (Doctrine and Covenants 89) does not mention the use of caffeine. The Church's health guidelines prohibit alcoholic drinks, smoking or chewing of tobacco, and 'hot drinks' – taught by Church leaders to refer specifically to tea and coffee." Gaudiya Vaishnavas generally also abstain from caffeine, because they believe it clouds the mind and overstimulates the senses. To be initiated under a guru, one must have had no caffeine, alcohol, nicotine or other drugs for at least a year. Caffeinated beverages are widely consumed by Muslims. In the 16th century, some Muslim authorities made unsuccessful attempts to ban them as forbidden "intoxicating beverages" under Islamic dietary laws.
Other organisms
The bacterium Pseudomonas putida CBB5 can live on pure caffeine and can cleave caffeine into carbon dioxide and ammonia. Caffeine is toxic to birds and to dogs and cats, and has a pronounced adverse effect on mollusks, various insects, and spiders. This is at least partly due to a poor ability to metabolize the compound, causing higher levels for a given dose per unit weight. Caffeine has also been found to enhance the reward memory of honey bees.
Research
Caffeine has been used to double chromosomes in haploid wheat.
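A rough worked illustration of the first-order elimination described under Pharmacokinetics above (the 200 mg dose and 5-hour half-life below are assumed, mid-range values for a healthy adult, not figures from any cited study): with first-order kinetics, the amount of caffeine remaining after time t is

C(t) = C_0 · (1/2)^(t / t_half)

where t_half is the elimination half-life. A 200 mg dose with a 5-hour half-life therefore falls to about 100 mg after 5 hours, 50 mg after 10 hours, and 25 mg after 15 hours. The same relation shows why the fluvoxamine interaction noted earlier matters: stretching the half-life from roughly 5 hours to 56 hours leaves about three-quarters of a dose still circulating a full day after ingestion.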
Biology and health sciences
Biochemistry and molecular biology
null
6884
https://en.wikipedia.org/wiki/Clitoris
Clitoris
In amniotes, the clitoris (plural: clitorises or clitorides) is a female sex organ. In humans, it is the vulva's most erogenous area and generally the primary anatomical source of female sexual pleasure. The clitoris is a complex structure, and its size and sensitivity can vary. The visible portion, the glans, of the clitoris is typically roughly the size and shape of a pea and is estimated to have at least 8,000 nerve endings. Sexological, medical, and psychological debate has focused on the clitoris, and it has been subject to social constructionist analyses and studies. Such discussions cover anatomical accuracy, gender inequality, female genital mutilation, and orgasmic factors and their physiological explanation for the G-spot. The only known purpose of the human clitoris is to provide sexual pleasure. Knowledge of the clitoris is significantly affected by its cultural perceptions. Studies suggest that knowledge of its existence and anatomy is scant in comparison with that of other sexual organs (especially male sex organs) and that more education about it could help alleviate stigmas, such as the idea that the clitoris and vulva in general are visually unappealing or that female masturbation is taboo and disgraceful. The clitoris is homologous to the penis in males.
Etymology and terminology
The Oxford English Dictionary states that the Neo-Latin word clītoris likely has its origin in an Ancient Greek word meaning "little hill", and is perhaps derived from a verb meaning "to shut" or "to sheathe". Clitoris is also related to a Greek word for "key", "indicating that the ancient anatomists considered it the key" to female sexuality. In addition, the Online Etymology Dictionary suggests other Greek candidates for this word's etymology include a noun meaning "latch" or "hook" or a verb meaning "to touch or titillate lasciviously", "to tickle". The Oxford English Dictionary also states that the colloquially shortened form clit, the first occurrence of which was noted in the United States, has been used in print since 1958; until then, the common abbreviation was clitty. Other slang terms for clitoris are bean, nub, and love button. The term is commonly used to refer to the glans alone. In recent anatomical works, the clitoris has also been referred to as the bulbo-clitoral organ.
Structure
Most of the clitoris is composed of internal parts. Regarding humans, it consists of the glans, the body (which is composed of two erectile structures known as the corpora cavernosa), the prepuce, and the root. The frenulum is beneath the glans. Research indicates that clitoral tissue extends into the vaginal anterior wall. Şenaylı et al. said that the histological evaluation of the clitoris, "especially of the corpora cavernosa, is incomplete because for many years the clitoris was considered a rudimentary and nonfunctional organ". They added that Baskin and colleagues examined the clitoris' masculinization after dissection and, using imaging software after Masson's trichrome staining, put the serial dissected specimens together; this revealed that nerves surround the whole clitoral body. The clitoris, its bulbs, labia minora, and urethra involve two histologically distinct types of vascular tissue (tissue related to blood vessels), the first of which is trabeculated, erectile tissue innervated by the cavernous nerves. The trabeculated tissue has a spongy appearance; along with blood, it fills the large, dilated vascular spaces of the clitoris and the bulbs.
Beneath the epithelium of the vascular areas is smooth muscle. As indicated by Yang et al.'s research, it may also be that the urethral lumen (the inner open space or cavity of the urethra), which is surrounded by spongy tissue, has tissue that "is grossly distinct from the vascular tissue of the clitoris and bulbs, and on macroscopic observation, is paler than the dark tissue" of the clitoris and bulbs. The second type of vascular tissue is non-erectile, and may consist of blood vessels that are dispersed within a fibrous matrix and have only a minimal amount of smooth muscle.
Glans
Highly innervated, the clitoral glans (glans means "acorn" in Latin), also known as the "head" or "tip", exists at the top of the clitoral body as a fibro-vascular cap and is usually the size and shape of a pea, although it is sometimes much larger or smaller. The glans is separated from the clitoral body by a ridge of tissue called the corona. The clitoral glans is estimated to have 8,000, and possibly 10,000 or more, sensory nerve endings, making it the most sensitive erogenous zone. The glans also has numerous genital corpuscles. Research conflicts on whether the glans is composed of erectile or non-erectile tissue. Some sources describe the clitoral glans and labia minora as composed of non-erectile tissue; this is especially the case for the glans. They state that the clitoral glans and labia minora have blood vessels that are dispersed within a fibrous matrix and have only a minimal amount of smooth muscle, or that the clitoral glans is "a midline, densely neural, non-erectile structure". The clitoral glans is homologous to the male penile glans. Other descriptions of the glans assert that it is composed of erectile tissue and that erectile tissue is present within the labia minora. The glans may be noted as having glandular vascular spaces that are not as prominent as those in the clitoral body, with the spaces being separated more by smooth muscle than in the body and crura. Adipose tissue is absent in the labia minora, but the organ may be described as being made up of dense connective tissue, erectile tissue and elastic fibers.
Frenulum
The clitoral frenulum or frenum (frenulum clitoridis and crus glandis clitoridis in Latin; the former meaning "little bridle") is a medial band of tissue formed between the undersurface of the glans and the top ends of the labia minora. It is homologous to the penile frenulum in males. The frenulum's main function is to maintain the clitoris in its innate position.
Body
The clitoral body (also known as the shaft of the clitoris) is the portion behind the glans that contains the union of the corpora cavernosa, a pair of sponge-like regions of erectile tissue that hold most of the blood in the clitoris during erection. It is homologous to the penile shaft in the male. The two corpora forming the clitoral body are surrounded by thick fibro-elastic tunica albuginea, a sheath of connective tissue. These corpora are separated incompletely from each other in the midline by a fibrous pectiniform septum, a comblike band of connective tissue extending between the corpora cavernosa. The clitoral body is also connected to the pubic symphysis by the suspensory ligament. The body of the clitoris has a bent shape, forming the clitoral angle or elbow. The angle divides the body into the ascending part (internal) near the pubic symphysis and the descending part (external), which can be seen and felt through the clitoral hood.
Root
Lying in the perineum (the space between the vulva and anus) and within the superficial perineal pouch is the root of the clitoris, which consists of the posterior ends of the clitoris, the crura and the bulbs of the vestibule. The crura ("legs") are the parts of the corpora cavernosa extending from the clitoral body and form an upside-down "V" shape. Each crus (the singular of crura) is attached to the corresponding ischial ramus, the crura being extensions of the corpora beneath the descending pubic rami. Concealed behind the labia minora, the crura end with attachment at or just below the middle of the pubic arch. Associated are the urethral sponge, perineal sponge, a network of nerves and blood vessels, the suspensory ligament of the clitoris, muscles and the pelvic floor. The vestibular bulbs are more closely related to the clitoris than to the vestibule because of the similarity of the trabecular and erectile tissue within the clitoris and its bulbs, and the absence of trabecular tissue in other parts of the vulva, with the erectile tissue's trabecular nature allowing engorgement and expansion during sexual arousal. The vestibular bulbs are typically described as lying close to the crura on either side of the vaginal opening; internally, they are beneath the labia majora. The anterior sections of the bulbs unite to create the bulbar commissure, which forms a long strip of erectile tissue dubbed the infra-corporeal residual spongy part (RSP) that expands from the ventral shaft and terminates as the glans. The RSP is also connected to the shaft via the pars intermedia (venous plexus of Kobelt). When engorged with blood, the bulbs cuff the vaginal opening and cause the vulva to expand outward. Although several texts state that they surround the vaginal opening, Ginger et al. state that this does not appear to be the case and that the tunica albuginea does not envelop the erectile tissue of the bulbs. In Yang et al.'s assessment of the bulbs' anatomy, they conclude that the bulbs "arch over the distal urethra, outlining what might be appropriately called the 'bulbar urethra' in women".
Hood
The clitoral hood or prepuce projects at the front of the labia commissure, where the edges of the labia majora meet at the base of the pubic mound. It is partially formed by fusion of the upper labia minora. The hood's function is to cover and protect the glans and external shaft. There is considerable variation in how much of the glans protrudes from the hood and how much is covered by it, ranging from completely covered to fully exposed, and tissue of the labia minora also encircles the base of the glans.
Size and length
There is no identified correlation between the size of the glans, or of the clitoris as a whole, and a woman's age, height, weight, use of hormonal contraception, or being postmenopausal, although women who have given birth may have significantly larger clitoral measurements. Centimetre and millimetre measurements of the clitoris show variations in size. The clitoral glans has been cited as typically varying from 2 mm to 1 cm (less than an inch) and usually being estimated at 4 to 5 mm in both the transverse and longitudinal planes. A 1992 study measured the total clitoral length, including glans and body, reporting a mean and a standard deviation.
Concerning other studies, researchers from the Elizabeth Garrett Anderson and Obstetric Hospital in London measured the labia and other genital structures of 50 women aged 18 to 50 (mean age 35.6) between 2003 and 2004, and the results given for the clitoral glans were 3–10 mm for the range and 5.5 [1.7] mm for the mean. Other research has reported lengths for the clitoral body alone, and greater lengths for the clitoral body and crura together.
Development
The clitoris develops from a phallic outgrowth in the embryo called the genital tubercle. In the absence of testosterone, the genital tubercle allows for the formation of the clitoris; the initially rapid growth of the phallus gradually slows, and the body and glans of the clitoris are formed along with its other structures.
Function
Sexual stimulation and arousal
The clitoris has an abundance of nerve endings, and is the most erogenous part of the human female body. When sexually stimulated, it may incite sexual arousal, which may result from mental stimulation (sexual fantasy), activity with a sexual partner, or masturbation, and can lead to orgasm. The most effective sexual stimulation of this organ is usually manual or oral, which is often referred to as direct clitoral stimulation; in cases involving sexual penetration, these activities may also be referred to as additional or assisted clitoral stimulation. Direct stimulation involves physical stimulation of the external anatomy of the clitoris: the glans, hood, and shaft. Stimulation of the labia minora, due to its connection with the glans and hood, may have the same effect as direct clitoral stimulation. Though these areas may also receive indirect physical stimulation during sexual activity, such as when in friction with the labia majora, indirect clitoral stimulation is more commonly attributed to penile-vaginal penetration. Penile-anal penetration may also indirectly stimulate the clitoris through the shared sensory nerves (especially the pudendal nerve, which gives off the inferior anal nerves and divides into two terminal branches: the perineal nerve and the dorsal nerve of the clitoris). Due to the glans' high sensitivity, direct stimulation to it is not always pleasurable; instead, direct stimulation to the hood or near the glans is often more pleasurable, with the majority of women preferring to use the hood to stimulate the glans, or to have the glans rolled between the labia, for indirect touch. It is also common for women to enjoy the shaft being softly caressed in concert with the occasional circling of the glans. This might be with or without digital penetration of the vagina, while other women enjoy having the entire vulva caressed. As opposed to the use of dry fingers, stimulation from well-lubricated fingers, either by vaginal lubrication or a personal lubricant, is usually more pleasurable for the external clitoris. As the clitoris' external location does not allow for direct stimulation by penetration, any external clitoral stimulation while in the missionary position usually results from contact with the pubic bone area. As such, some couples may engage in the woman-on-top position or the coital alignment technique, a sex position combining the "riding high" variation of the missionary position with pressure-counterpressure movements performed by each partner in rhythm with sexual penetration, to maximize clitoral stimulation.
Same-sex female couples may engage in tribadism (vulva-to-vulva or vulva-to-body rubbing) for ample or mutual clitoral stimulation during whole-body contact. Pressing the penis in a gliding or circular motion against the clitoris, or stimulating it by movement against another body part, may also be practiced. A vibrator (such as a clitoral vibrator), dildo or other sex toy may be used. Other women stimulate the clitoris by use of a pillow or other inanimate object, by a jet of water from the faucet of a bathtub or shower, or by closing their legs and rocking. During sexual arousal, the clitoris and the rest of the vulva engorge and change color as the erectile tissues fill with blood (vasocongestion), and the individual experiences vaginal contractions. The ischiocavernosus and bulbocavernosus muscles, which insert into the corpora cavernosa, contract and compress the dorsal vein of the clitoris (the only vein that drains the blood from the spaces in the corpora cavernosa); the arterial blood continues a steady flow and, having no way to drain out, fills the venous spaces until they become turgid and engorged with blood. This is what leads to clitoral erection. The prepuce has retracted and the glans becomes more visible. The glans doubles in diameter upon arousal and, with further stimulation, becomes less visible as it is covered by the swelling of the clitoral hood. The swelling protects the glans from direct contact, as direct contact at this stage can be more irritating than pleasurable. Vasocongestion eventually triggers a muscular reflex, which expels the blood that was trapped in surrounding tissues, and leads to an orgasm. A short time after stimulation has stopped, especially if orgasm has been achieved, the glans becomes visible again and returns to its normal state, taking a few seconds (usually 5–10) to return to its normal position and 5–10 minutes to return to its original size. If orgasm is not achieved, the clitoris may remain engorged for a few hours, which women often find uncomfortable. Additionally, the clitoris is very sensitive after orgasm, making further stimulation initially painful for some women.
Clitoral and vaginal orgasmic factors
General statistics indicate that 70–80 percent of women require direct clitoral stimulation (consistent manual, oral, or other concentrated friction against the external parts of the clitoris) to reach orgasm. Indirect clitoral stimulation (for example, by means of vaginal penetration) may also be sufficient for female orgasm. The area near the entrance of the vagina (the lower third) contains nearly 90 percent of the vaginal nerve endings, and there are areas in the anterior vaginal wall and between the top junction of the labia minora and the urinary meatus that are especially sensitive, but intense sexual pleasure, including orgasm, solely from vaginal stimulation is occasional or otherwise absent because the vagina has significantly fewer nerve endings than the clitoris. The prominent debate over the quantity of vaginal nerve endings began with Alfred Kinsey. Although Sigmund Freud's theory that clitoral orgasms are a prepubertal or adolescent phenomenon and that vaginal (or G-spot) orgasms are something that only physically mature females experience had been criticized before, Kinsey was the first researcher to harshly criticize the theory.
Through his observations of female masturbation and interviews with thousands of women, Kinsey found that most of the women he observed and surveyed could not have vaginal orgasms, a finding that was also supported by his knowledge of sex organ anatomy. Scholar Janice M. Irvine stated that he "criticized Freud and other theorists for projecting male constructs of sexuality onto women" and "viewed the clitoris as the main center of sexual response". He considered the vagina to be "relatively unimportant" for sexual satisfaction, relaying that "few women inserted fingers or objects into their vaginas when they masturbated". Believing that vaginal orgasms are "a physiological impossibility" because the vagina has insufficient nerve endings for sexual pleasure or climax, he "concluded that satisfaction from penile penetration [is] mainly psychological or perhaps the result of referred sensation". Masters and Johnson's research, as well as Shere Hite's, generally supported Kinsey's findings about the female orgasm. Masters and Johnson were the first researchers to determine that the clitoral structures surround and extend along and within the labia. They observed that both clitoral and vaginal orgasms have the same stages of physical response, and found that the majority of their subjects could only achieve clitoral orgasms, while a minority achieved vaginal orgasms. On that basis, they argued that clitoral stimulation is the source of both kinds of orgasms, reasoning that the clitoris is stimulated during penetration by friction against its hood. The research came at the time of the second-wave feminist movement, which inspired feminists to reject the distinction made between clitoral and vaginal orgasms. Feminist Anne Koedt argued that because men "have orgasms essentially by friction with the vagina" and not the clitoral area, this is why women's biology had not been properly analyzed. "Today, with extensive knowledge of anatomy, with [C. Lombard Kelly], Kinsey, and Masters and Johnson, to mention just a few sources, there is no ignorance on the subject [of the female orgasm]", she stated in her 1970 article The Myth of the Vaginal Orgasm. She added, "There are, however, social reasons why this knowledge has not been popularized. We are living in a male society which has not sought change in women's role". Supporting an anatomical relationship between the clitoris and vagina is a study published in 2005, which investigated the size of the clitoris; Australian urologist Helen O'Connell, described as having initiated discourse among mainstream medical professionals to refocus on and redefine the clitoris, noted a direct relationship between the legs or roots of the clitoris and the erectile tissue of the bulbs and corpora, and the distal urethra and vagina, while using magnetic resonance imaging (MRI) technology. While some studies, using ultrasound, have found physiological evidence of the G-spot in women who report having orgasms during vaginal intercourse, O'Connell argues that this interconnected relationship is the physiological explanation for the conjectured G-spot and experience of vaginal orgasms, taking into account the stimulation of the internal parts of the clitoris during vaginal penetration. "The vaginal wall is, in fact, the clitoris", she said. "If you lift the skin off the vagina on the side walls, you get the bulbs of the clitoris: triangular, crescental masses of erectile tissue".
O'Connell et al., having performed dissections on the vulvas of cadavers and used photography to map the structure of nerves in the clitoris, made the assertion in 1998 that there is more erectile tissue associated with the clitoris than is generally described in anatomical textbooks, and were thus already aware that the clitoris is more than just its glans. They concluded that some females have more extensive clitoral tissues and nerves than others, especially having observed this in young cadavers compared to elderly ones, and therefore, whereas the majority of females can only achieve orgasm by direct stimulation of the external parts of the clitoris, the stimulation of the more generalized tissues of the clitoris via vaginal intercourse may be sufficient for others. French researchers Odile Buisson and Pierre Foldès reported findings similar to O'Connell's. In 2008, they published the first complete 3D sonography of the stimulated clitoris, and republished it in 2009 with new research demonstrating how erectile tissue of the clitoris engorges and surrounds the vagina. Based on their findings, they argued that women may be able to achieve vaginal orgasm through stimulation of the G-spot because the clitoris is pulled closely to the anterior wall of the vagina when the woman is sexually aroused and during vaginal penetration. They assert that since the front wall of the vagina is inextricably linked with the internal parts of the clitoris, stimulating the vagina without activating the clitoris may be next to impossible. Their 2009 published study states that the "coronal planes during perineal contraction and finger penetration demonstrated a close relationship between the root of the clitoris and the anterior vaginal wall". Buisson and Foldès suggested "that the special sensitivity of the lower anterior vaginal wall could be explained by pressure and movement of clitoris' root during a vaginal penetration and subsequent perineal contraction". Researcher Vincenzo Puppo, while agreeing that the clitoris is the center of female sexual pleasure and believing that there is no anatomical evidence of the vaginal orgasm, disagrees with O'Connell and other researchers' terminological and anatomical descriptions of the clitoris (such as referring to the vestibular bulbs as the "clitoral bulbs") and states that "the inner clitoris" does not exist because the penis cannot come in contact with the congregation of multiple nerves/veins situated until the angle of the clitoris, detailed by Georg Ludwig Kobelt, or with the root of the clitoris, which does not have sensory receptors or erogenous sensitivity, during vaginal intercourse. Puppo's belief contrasts with the general belief among researchers that vaginal orgasms are the result of clitoral stimulation; they reaffirm that clitoral tissue extends, or is at least stimulated by its bulbs, even in the area most commonly reported to be the G-spot. That the G-spot is analogous to the base of the penis has additionally been theorized, with the sentiment from researcher Amichai Kilchevsky that, because female fetal development is the "default" state in the absence of substantial exposure to male hormones and therefore the penis is essentially a clitoris enlarged by such hormones, there is no evolutionary reason why females would have an entity in addition to the clitoris that can produce orgasms.
The general difficulty of achieving orgasms vaginally, a predicament that is likely due to nature easing the process of childbearing by drastically reducing the number of vaginal nerve endings, challenges arguments that vaginal orgasms help encourage sexual intercourse to facilitate reproduction. Supporting a distinct G-spot, however, is a study by Rutgers University, published in 2011, which was the first to map the female genitals onto the sensory portion of the brain; the scans indicated that the brain registered distinct feelings between stimulating the clitoris, the cervix and the vaginal wall (where the G-spot is reported to be) when several women stimulated themselves in a functional magnetic resonance imaging machine. Barry Komisaruk, head of the research, stated that he feels that "the bulk of the evidence shows that the G-spot is not a particular thing" and that it is "a region, it's a convergence of many different structures".
Vestigiality, adaptionist and reproductive views
Whether the clitoris is vestigial, an adaptation, or serves a reproductive function has been debated. Geoffrey Miller stated that Helen Fisher, Meredith Small and Sarah Blaffer Hrdy "have viewed the clitoral orgasm as a legitimate adaptation in its own right, with major implications for female sexual behavior and sexual evolution". Like Lynn Margulis and Natalie Angier, Miller believes, "The human clitoris shows no apparent signs of having evolved directly through male mate choice. It is not especially large, brightly colored, specifically shaped or selectively displayed during courtship". He contrasts this with females of other species that have clitorises as long as the penises of their male counterparts. He said the human clitoris "could have evolved to be much more conspicuous if males had preferred sexual partners with larger brighter clitorises" and that "its inconspicuous design combined with its exquisite sensitivity suggests that the clitoris is important not as an object of male mate choice, but as a mechanism of female choice". While Miller stated that male scientists such as Stephen Jay Gould and Donald Symons "have viewed the female clitoral orgasm as an evolutionary side-effect of the male capacity for penile orgasm" and that they "suggested that clitoral orgasm cannot be an adaptation because it is too hard to achieve", Gould acknowledged that "most female orgasms emanate from a clitoral, rather than vaginal (or some other), site" and that his nonadaptive belief "has been widely misunderstood as a denial of either the adaptive value of female orgasm in general or even as a claim that female orgasms lack significance in some broader sense". He said that although he accepts that "clitoral orgasm plays a pleasurable and central role in female sexuality and its joys", "[a]ll these favorable attributes, however, emerge just as clearly and just as easily, whether the clitoral site of orgasm arose as a spandrel or an adaptation". He added that the "male biologists who fretted over [the adaptionist questions] simply assumed that a deeply vaginal site, nearer the region of fertilization, would offer greater selective benefit" due to their Darwinian, summum bonum beliefs about enhanced reproductive success.
Similar to Gould's beliefs about adaptionist views and that "females grow nipples as adaptations for suckling, and males grow smaller unused nipples as a spandrel based upon the value of single development channels", American philosopher Elisabeth Lloyd suggested that there is little evidence to support an adaptionist account of female orgasm. Canadian sexologist Meredith L. Chivers stated that "Lloyd views female orgasm as an ontogenetic leftover; women have orgasms because the urogenital neurophysiology for orgasm is so strongly selected for in males that this developmental blueprint gets expressed in females without affecting fitness" and this is similar to "males hav[ing] nipples that serve no fitness-related function". At the 2002 conference of the Canadian Society of Women in Philosophy, Nancy Tuana argued that the clitoris is unnecessary in reproduction; she stated that it has been ignored because of "a fear of pleasure. It is pleasure separated from reproduction. That's the fear". She reasoned that this fear causes ignorance, which veils female sexuality. O'Connell stated, "It boils down to rivalry between the sexes: the idea that one sex is sexual and the other reproductive. The truth is that both are sexual and both are reproductive". She reiterated that the vestibular bulbs appear to be part of the clitoris and that the distal urethra and vagina are intimately related structures, although they are not erectile in character, forming a tissue cluster with the clitoris that appears to be the location of female sexual function and orgasm.
Clinical significance
Modification
Genital modification may be for aesthetic, medical or cultural reasons. This includes female genital mutilation (FGM), sex reassignment surgery (for trans men as part of transitioning), intersex surgery, and genital piercings. Use of anabolic steroids by bodybuilders and other athletes can result in significant enlargement of the clitoris along with other masculinizing effects on their bodies. Abnormal enlargement of the clitoris may be referred to as clitoromegaly or macroclitoris, but clitoromegaly is more commonly seen as a congenital anomaly of the genitalia. Clitoroplasty, a sex reassignment surgery for trans women, involves the construction of a clitoris from penile tissue. People taking hormones or other medications as part of a gender transition usually experience dramatic clitoral growth; individual desires and the difficulties of phalloplasty (construction of a penis) often result in the retention of the original genitalia with the enlarged clitoris as a penis analog (metoidioplasty). However, the clitoris cannot reach the size of the penis through hormones. A surgery to add function to the clitoris, such as metoidioplasty, is an alternative to phalloplasty that permits the retention of sexual sensation in the clitoris. In clitoridectomy, the clitoris may be removed as part of a radical vulvectomy to treat cancer such as vulvar intraepithelial neoplasia; however, modern treatments favor more conservative approaches, as invasive surgery can have psychosexual consequences. Clitoridectomy more often involves parts of the clitoris being partially or completely removed during FGM, which may be additionally known as female circumcision or female genital cutting (FGC). Removing the glans does not mean that the whole structure is lost, since the clitoris reaches deep into the genitals. In reduction clitoroplasty, a common intersex surgery, the glans is preserved and parts of the erectile bodies are excised.
Problems with this technique include loss of sensation, loss of sexual function, and sloughing of the glans. One way to preserve the clitoris with its innervations and function is to imbricate and bury the glans; however, Şenaylı et al. state that "pain during stimulus because of trapped tissue under the scarring is nearly routine. In another method, 50 percent of the ventral clitoris is removed through the level base of the clitoral shaft, and it is reported that good sensation and clitoral function are observed in follow-up"; additionally, it has "been reported that the complications are from the same as those in the older procedures for this method". Concerning females who have the condition congenital adrenal hyperplasia, the largest group requiring surgical genital correction, researcher Atilla Şenaylı stated, "The main expectations for the operations are to create a normal female anatomy, with minimal complications and improvement of life quality". Şenaylı added that "[c]osmesis, structural integrity, the coital capacity of the vagina, and absence of pain during sexual activity are the parameters to be judged by the surgeon". (Cosmesis usually refers to the surgical correction of a disfiguring defect.) He stated that although "expectations can be standardized within these few parameters, operative techniques have not yet become homogeneous. Investigators have preferred different operations for different ages of patients". Gender assessment and surgical treatment are the two main steps in intersex operations. "The first treatments for clitoromegaly were simply resection of the clitoris. Later, it was understood that the clitoris glans and sensory input are important to facilitate orgasm", stated Atilla. The clitoral glans' epithelium "has high cutaneous sensitivity, which is important in sexual responses", and it is because of this that "recession clitoroplasty was later devised as an alternative, but reduction clitoroplasty is the method currently performed". What is often referred to as a "clitoris piercing" is the more common (and significantly less complicated) clitoral hood piercing. Since piercing the clitoris is difficult and very painful, piercing the clitoral hood is more common than piercing the clitoral shaft or glans, owing to the small percentage of people who are anatomically suited for it. Clitoral hood piercings are usually channeled in the form of vertical piercings, and, to a lesser extent, horizontal piercings. The triangle piercing is a very deep horizontal hood piercing and is done behind the clitoris as opposed to in front of it. For styles such as the Isabella piercing, which passes through the clitoral shaft but is placed deep at the base, they provide unique stimulation and still require the proper genital build. The Isabella starts between the clitoral glans and the urethra, exiting at the top of the clitoral hood; this piercing is highly risky concerning the damage that may occur because of intersecting nerves. (See Clitoral index.) Sexual disorders Persistent genital arousal disorder (PGAD) results in spontaneous, persistent, and uncontrollable genital arousal in women, unrelated to any feelings of sexual desire. Clitoral priapism is a rare, potentially painful medical condition and is sometimes described as an aspect of PGAD. With PGAD, arousal lasts for an unusually extended period (ranging from hours to days); it can also be associated with morphometric and vascular modifications of the clitoris. Drugs may cause or affect clitoral priapism. 
The drug trazodone is known to cause male priapism as a side effect, but there is only one documented report that it may have caused clitoral priapism, in which case discontinuing the medication may be a remedy. Additionally, nefazodone is documented to have caused clitoral engorgement, as distinct from clitoral priapism, in one case, and clitoral priapism can sometimes start as a result of, or only after, the discontinuation of antipsychotics or selective serotonin reuptake inhibitors (SSRIs). Because PGAD is relatively rare and, as a concept distinct from clitoral priapism, has only been researched since 2001, there is little research into what may cure or remedy the disorder. In some recorded cases, PGAD was caused by, or caused, a pelvic arterial-venous malformation with arterial branches to the clitoris; surgical treatment was effective in these cases. In 2022, an article in The New York Times reported several instances of women experiencing reduced clitoral sensitivity or inability to orgasm following various surgical procedures, including biopsies of the vulva, pelvic mesh surgeries (sling surgeries), and labiaplasties. The Times quoted several researchers who suggest that surgeons' lack of training in clitoral anatomy and nerve distribution may have been a factor. As it is part of the vulva, the clitoris is susceptible to pain (clitorodynia) from various conditions such as sexually transmitted infections and pudendal nerve entrapment. The clitoris may also be affected by vulvar cancer, although at a much lower rate. Clitoral phimosis (or clitoral adhesions) is a condition in which the prepuce cannot be retracted, limiting exposure of the glans.
Smegma
The secretion of smegma (smegma clitoridis) comes from the apocrine glands of the clitoris (sweat), the sebaceous glands of the clitoris (sebum) and desquamating epithelial cells.
Society and culture
Ancient Greek–16th century knowledge and vernacular
Concerning historical and modern perceptions of the clitoris, the clitoris and the penis were considered equivalent by some scholars for more than 2,500 years in all respects except their arrangement. Because it has frequently been omitted from, or misrepresented in, historical and contemporary anatomical texts, the clitoris was also subject to a continual cycle of male scholars claiming to have discovered it. The ancient Greeks, ancient Romans, and Greek and Roman generations up to and throughout the Renaissance were aware that male and female sex organs are anatomically similar, but prominent anatomists such as Galen and Vesalius regarded the vagina as the structural equivalent of the penis, except for being inverted; Vesalius argued against the existence of the clitoris in normal women, and his anatomical model described how the penis corresponds with the vagina, without a role for the clitoris. Ancient Greek and Roman sexuality additionally designated penetration as "male-defined" sexuality. The term tribas was used to refer to a woman or intersex individual who actively penetrated another person (male or female) through the use of the clitoris or a dildo. As any sexual act was believed to require that one of the partners be "phallic", and sexual activity between women was therefore thought impossible without this feature, mythology popularly associated lesbians with either having enlarged clitorises or being incapable of enjoying sexual activity without the substitution of a phallus.
In 1545, Charles Estienne was the first writer to identify the clitoris in a work based on dissection, but he concluded that it had a urinary function. Following this study, Realdo Colombo (also known as Renaldus Columbus), a lecturer in surgery at the University of Padua, Italy, published a book called De re anatomica in 1559, in which he describes the "seat of woman's delight". In his role as researcher, Colombo concluded, "Since no one has discerned these projections and their workings, if it is permissible to give names to things discovered by me, it should be called the love or sweetness of Venus", referring to the mythological Venus, goddess of erotic love. Colombo's claim was disputed by his successor at Padua, Gabriele Falloppio (discoverer of the fallopian tube), who claimed that he was the first to discover the clitoris. In 1561, Falloppio stated, "Modern anatomists have entirely neglected it ... and do not say a word about it ... and if others have spoken of it, know that they have taken it from me or my students". This caused an upset in the European medical community, and, having read Colombo's and Falloppio's detailed descriptions of the clitoris, Vesalius stated, "It is unreasonable to blame others for incompetence on the basis of some sport of nature you have observed in some women and you can hardly ascribe this new and useless part, as if it were an organ, to healthy women". He concluded, "I think that such a structure appears in hermaphrodites who otherwise have well-formed genitals, as Paul of Aegina describes, but I have never once seen in any woman a penis (which Avicenna called albaratha and the Greeks called an enlarged nympha and classed as an illness) or even the rudiments of a tiny phallus". The average anatomist had difficulty challenging Galen's or Vesalius' research; Galen was the most famous physician of the Greek era and his works were considered the standard of medical understanding up to and throughout the Renaissance (i.e. for almost two thousand years), and the various terms used to describe the clitoris seem to have further confused the issue of its structure. In addition to Avicenna's naming it the albaratha or virga ("rod") and Colombo's calling it the sweetness of Venus, Hippocrates used the term columella ("little pillar"), and Albucasis, an Arabic medical authority, named it tentigo ("tension"). The names indicated that each description of the structures referred to the body and glans of the clitoris, but usually the glans. It was additionally known to the Romans, who named it (vulgar slang) landica. However, Albertus Magnus, one of the most prolific writers of the Middle Ages, felt that it was important to highlight "homologies between male and female structures and function" by adding "a psychology of sexual arousal" that Aristotle had not used to detail the clitoris. While in Constantine's treatise Liber de Coitu, the clitoris is referred to a few times, Magnus gave an equal amount of attention to male and female organs. Like Avicenna, Magnus also used the word virga for the clitoris, but employed it for both the male and female genitals; despite his efforts to give equal ground to the clitoris, the cycle of suppression and rediscovery of the organ continued, and a 16th-century justification for clitoridectomy appears to have been confused with intersex conditions and the imprecision created by the word nymphae being substituted for the word clitoris. 
Nymphotomy was a medical operation to excise an unusually large clitoris, but what was considered "unusually large" was often a matter of perception. The procedure was routinely performed on Egyptian women, due to physicians such as Jacques Daléchamps who believed that this version of the clitoris was "an unusual feature that occurred in almost all Egyptian women [and] some of ours, so that when they find themselves in the company of other women, or their clothes rub them while they walk or their husbands wish to approach them, it erects like a male penis and indeed they use it to play with other women, as their husbands would do ... Thus the parts are cut". 17th century–present day knowledge and vernacular Caspar Bartholin (after whom Bartholin's glands are named), a 17th-century Danish anatomist, dismissed Colombo's and Falloppio's claims that they discovered the clitoris, arguing that the clitoris had been widely known to medical science since the second century. Although 17th-century midwives recommended to men and women that women should aspire to achieve orgasms to help them get pregnant for general health and well-being and to keep their relationships healthy, debate about the importance of the clitoris persisted, notably in the work of Regnier de Graaf in the 17th century and Georg Ludwig Kobelt in the 19th. Like Falloppio and Bartholin, de Graaf criticized Colombo's claim of having discovered the clitoris; his work appears to have provided the first comprehensive account of clitoral anatomy. "We are extremely surprised that some anatomists make no more mention of this part than if it did not exist at all in the universe of nature", he stated. "In every cadaver we have so far dissected, we have found it quite perceptible to sight and touch". De Graaf stressed the need to distinguish nympha from clitoris, choosing to "always give [the clitoris] the name clitoris" to avoid confusion; this resulted in the frequent use of the correct name for the organ among anatomists, but considering that nympha was also varied in its use and eventually became the term specific to the labia minora, more confusion ensued. Debate about whether orgasm was even necessary for women began in the Victorian era, and Freud's 1905 theory about the immaturity of clitoral orgasms (see above) negatively affected women's sexuality throughout most of the 20th century. Toward the end of World War I, a maverick British MP named Noel Pemberton Billing published an article entitled "The Cult of the Clitoris", furthering his conspiracy theories and attacking the actress Maud Allan and Margot Asquith, wife of the prime minister. The accusations led to a sensational libel trial, which Billing eventually won; Philip Hoare reports that Billing argued that "as a medical term, 'clitoris' would only be known to the 'initiated', and was incapable of corrupting moral minds". Jodie Medd argues regarding "The Cult of the Clitoris" that "the female non-reproductive but desiring body [...] simultaneously demands and refuses interpretative attention, inciting scandal through its very resistance to representation". From the 18th to the 20th century, especially during the 20th, details of the clitoris from various genital diagrams presented in earlier centuries were omitted from later texts. 
The full extent of the clitoris was alluded to by Masters and Johnson in 1966, but in such a muddled fashion that the significance of their description became obscured; in 1981, the Federation of Feminist Women's Health Clinics (FFWHC) continued this process with anatomically precise illustrations identifying 18 structures of the clitoris. Despite the FFWHC's illustrations, Josephine Lowndes Sevely, in 1987, described the vagina as more of the counterpart of the penis. Concerning other beliefs about the clitoris, Hite (1976 and 1981) found that, during sexual intimacy with a partner, clitoral stimulation was more often described by women as foreplay than as a primary method of sexual activity, including orgasm. Further, although the FFWHC's work significantly propelled feminist reformation of anatomical texts, it did not have a general impact. Helen O'Connell's late 1990s research motivated the medical community to start changing the way the clitoris is anatomically defined. O'Connell describes typical textbook descriptions of the clitoris as lacking detail and including inaccuracies, such as older and modern anatomical descriptions of the female human urethral and genital anatomy having been based on dissections performed on elderly cadavers whose erectile (clitoral) tissue had shrunk. She instead credits the work of Georg Ludwig Kobelt as the most comprehensive and accurate description of clitoral anatomy. MRI measurements, which provide a live and multi-planar method of examination, now complement the FFWHC's, as well as O'Connell's, research efforts concerning the clitoris, showing that the volume of clitoral erectile tissue is ten times that which is shown in doctors' offices and anatomy textbooks. In Bruce Bagemihl's survey of The Zoological Record (1978–1997), which contains over a million documents from over 6,000 scientific journals, 539 articles focusing on the penis were found, while seven were found focusing on the clitoris. In 2000, researchers Shirley Ogletree and Harvey Ginsberg concluded that there is a general neglect of the word clitoris in the common vernacular. They looked at the terms used to describe genitalia in the PsycINFO database from 1887 to 2000 and found that two other genital terms were used in 1,482 and 409 sources respectively, while clitoris was mentioned in only 83. They additionally analyzed 57 books listed in a computer database for sex instruction. In the majority of the books, one of these terms was the most commonly discussed body part, mentioned more than the others put together. They lastly investigated terminology used by college students of Euro-American (76%/76%), Hispanic (18%/14%), and African American (4%/7%) backgrounds, regarding the students' beliefs about sexuality and knowledge on the subject. The students were overwhelmingly educated to believe that the vagina is the female counterpart of the penis. The authors found that the students' belief that the inner portion of the vagina is the most sexually sensitive part of the female body correlated with negative attitudes toward masturbation and strong support for sexual myths. A study in 2005 reported that, among a sample of undergraduate students, the most frequently cited sources for knowledge about the clitoris were school and friends, and that this was associated with the least tested knowledge. Knowledge of the clitoris by self-exploration was the least cited, but "respondents correctly answered, on average, three of the five clitoral knowledge measures". 
The authors stated that "[k]nowledge correlated significantly with the frequency of women's orgasm in masturbation but not partnered sex" and that their "results are discussed in light of gender inequality and a social construction of sexuality, endorsed by both men and women, that privileges men's sexual pleasure over women's, such that orgasm for women is pleasing but ultimately incidental". They concluded that part of remedying "this problem" requires that males and females be taught more about the clitoris than is currently practiced. The humanitarian group Clitoraid launched the first annual International Clitoris Awareness Week, from 6 to 12 May in 2015. Clitoraid spokesperson Nadine Gary stated that the group's mission is to raise public awareness about the clitoris because it has "been ignored, vilified, made taboo, and considered sinful and shameful for centuries". 
Biology and health sciences
Reproductive system
null
6910
https://en.wikipedia.org/wiki/Cloning
Cloning
Cloning is the process of producing individual organisms with identical genomes, either by natural or artificial means. In nature, some organisms produce clones through asexual reproduction; this reproduction of an organism by itself without a mate is known as parthenogenesis. In the field of biotechnology, cloning is the process of creating copies of organisms, cells, or DNA fragments. The artificial cloning of organisms, sometimes known as reproductive cloning, is often accomplished via somatic-cell nuclear transfer (SCNT), a cloning method in which a viable embryo is created from a somatic cell and an egg cell. In 1996, Dolly the sheep achieved notoriety for being the first mammal cloned from a somatic cell. Another example of artificial cloning is molecular cloning, a technique in molecular biology in which a single living cell is used to clone a large population of cells that contain identical DNA molecules. In bioethics, there are a variety of ethical positions regarding the practice and possibilities of cloning. The use of embryonic stem cells, which can be produced through SCNT, in some stem cell research has attracted controversy. Cloning has been proposed as a means of reviving extinct species. In popular culture, the concept of cloning—particularly human cloning—is often depicted in science fiction; depictions commonly involve themes related to identity, the recreation of historical figures or extinct species, or cloning for exploitation (e.g. cloning soldiers for warfare). Etymology Coined by Herbert J. Webber, the term clone derives from the Ancient Greek word κλών (klōn), meaning twig, referring to the process whereby a new plant is created from a twig. In botany, the term lusus was used. In horticulture, the spelling clon was used until the early twentieth century; the final e came into use to indicate the vowel is a "long o" instead of a "short o". Since the term entered the popular lexicon in a more general context, the spelling clone has been used exclusively. Natural cloning Natural cloning is the production of clones without the involvement of genetic engineering techniques or human intervention (i.e. artificial cloning). Natural cloning occurs through a variety of natural mechanisms, from single-celled organisms to complex multicellular organisms, and has allowed life forms to spread for hundreds of millions of years. Versions of this reproduction method are used by plants, fungi, and bacteria, and it is also the way that clonal colonies reproduce themselves. Some of the mechanisms explored and used in plants and animals are binary fission, budding, fragmentation, and parthenogenesis. It can also occur during some forms of asexual reproduction, when a single parent organism produces genetically identical offspring by itself. Many plants are well known for natural cloning ability, including blueberry plants, hazel trees, the Pando trees, the Kentucky coffeetree, Myrica, and the American sweetgum. It also occurs accidentally in the case of identical twins, which are formed when a fertilized egg splits, creating two or more embryos that carry identical DNA. Molecular cloning Molecular cloning refers to the process of making multiple copies of a particular DNA molecule. Cloning is commonly used to amplify DNA fragments containing whole genes, but it can also be used to amplify any DNA sequence such as promoters, non-coding sequences and randomly fragmented DNA. It is used in a wide array of biological experiments and practical applications ranging from genetic fingerprinting to large scale protein production. 
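As a purely illustrative sketch of the insertion idea behind molecular cloning described above (the sequences, the single EcoRI site, and the simplified one-strand joining are hypothetical assumptions, not part of any real protocol), the following Python fragment shows how an insert might be spliced into a vector in silico:

```python
# Illustrative only: splice a hypothetical insert into a hypothetical vector
# at the first EcoRI recognition site. Real molecular cloning relies on
# restriction enzymes producing sticky ends, DNA ligase, and verification.

ECORI_SITE = "GAATTC"  # EcoRI recognition sequence; the enzyme cuts G^AATTC

def splice_insert(vector: str, insert: str, site: str = ECORI_SITE) -> str:
    """Return the vector with `insert` placed at the simulated cut position."""
    cut = vector.find(site)
    if cut == -1:
        raise ValueError("no restriction site found in the vector")
    cut_pos = cut + 1  # EcoRI cuts between G and AATTC on the top strand
    return vector[:cut_pos] + insert + vector[cut_pos:]

vector = "ATGCGGAATTCCGTTA"        # hypothetical plasmid fragment
gene_of_interest = "AAATTTCCCGGG"  # hypothetical insert
print(splice_insert(vector, gene_of_interest))
```

In an actual workflow the joined construct would then be introduced into cells and verified, as outlined in the fragmentation, ligation, transfection, and screening steps described below.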
Occasionally, the term cloning is misleadingly used to refer to the identification of the chromosomal location of a gene associated with a particular phenotype of interest, such as in positional cloning. In practice, localization of the gene to a chromosome or genomic region does not necessarily enable one to isolate or amplify the relevant genomic sequence. To amplify any DNA sequence in a living organism, that sequence must be linked to an origin of replication, which is a sequence of DNA capable of directing the propagation of itself and any linked sequence. However, a number of other features are needed, and a variety of specialised cloning vectors (small pieces of DNA into which a foreign DNA fragment can be inserted) exist that allow protein production, affinity tagging, single-stranded RNA or DNA production and a host of other molecular biology tools. Cloning of any DNA fragment essentially involves four steps: fragmentation (breaking apart a strand of DNA), ligation (gluing together pieces of DNA in a desired sequence), transfection (inserting the newly formed pieces of DNA into cells), and screening/selection (selecting out the cells that were successfully transfected with the new DNA). Although these steps are invariable among cloning procedures, a number of alternative routes can be selected; these are summarized as a cloning strategy. Initially, the DNA of interest needs to be isolated to provide a DNA segment of suitable size. Subsequently, a ligation procedure is used where the amplified fragment is inserted into a vector (piece of DNA). The vector (which is frequently circular) is linearised using restriction enzymes, and incubated with the fragment of interest under appropriate conditions with an enzyme called DNA ligase. Following ligation, the vector with the insert of interest is transfected into cells. A number of alternative techniques are available, such as chemical sensitisation of cells, electroporation, optical injection and biolistics. Finally, the transfected cells are cultured. As the aforementioned procedures are of particularly low efficiency, there is a need to identify the cells that have been successfully transfected with the vector construct containing the desired insertion sequence in the required orientation. Modern cloning vectors include selectable antibiotic resistance markers, which allow only cells into which the vector has been transfected to grow. Additionally, the cloning vectors may contain colour selection markers, which provide blue/white screening (alpha complementation) on X-gal medium. Nevertheless, these selection steps do not absolutely guarantee that the DNA insert is present in the cells obtained. Further investigation of the resulting colonies is required to confirm that cloning was successful. This may be accomplished by means of PCR, restriction fragment analysis and/or DNA sequencing. Cell cloning Cloning unicellular organisms Cloning a cell means to derive a population of cells from a single cell. In the case of unicellular organisms such as bacteria and yeast, this process is remarkably simple and essentially only requires the inoculation of the appropriate medium. However, in the case of cell cultures from multi-cellular organisms, cell cloning is an arduous task as these cells will not readily grow in standard media. A useful tissue culture technique used to clone distinct lineages of cell lines involves the use of cloning rings (cylinders). 
In this technique, a single-cell suspension of cells that have been exposed to a mutagenic agent or drug used to drive selection is plated at high dilution to create isolated colonies, each arising from a single, potentially clonally distinct cell. At an early growth stage when colonies consist of only a few cells, sterile polystyrene rings (cloning rings), which have been dipped in grease, are placed over an individual colony and a small amount of trypsin is added. Cloned cells are collected from inside the ring and transferred to a new vessel for further growth. Cloning stem cells Somatic-cell nuclear transfer, popularly known as SCNT, can also be used to create embryos for research or therapeutic purposes. The most likely purpose for this is to produce embryos for use in stem cell research. This process is also called "research cloning" or "therapeutic cloning". The goal is not to create cloned human beings (called "reproductive cloning"), but rather to harvest stem cells that can be used to study human development and to potentially treat disease. While a clonal human blastocyst has been created, stem cell lines are yet to be isolated from a clonal source. Therapeutic cloning is achieved by creating embryonic stem cells in the hopes of treating diseases such as diabetes and Alzheimer's. The process begins by removing the nucleus (containing the DNA) from an egg cell and inserting a nucleus from the adult cell to be cloned. In the case of someone with Alzheimer's disease, the nucleus from a skin cell of that patient is placed into an empty egg. The reprogrammed cell begins to develop into an embryo because the egg reacts with the transferred nucleus. The embryo will become genetically identical to the patient. The embryo will then form a blastocyst, which has the potential to become any cell in the body. SCNT is used for cloning because somatic cells can be easily acquired and cultured in the lab. This process can be used to either add or delete specific genes in farm animals. A key point to remember is that cloning is achieved when the oocyte maintains its normal functions and, instead of using sperm and egg genomes to replicate, the donor's somatic cell nucleus is inserted into the oocyte. The oocyte will react to the somatic cell nucleus the same way it would to a sperm cell's nucleus. The process of cloning a particular farm animal using SCNT is relatively the same for all animals. The first step is to collect the somatic cells from the animal that will be cloned. The somatic cells could be used immediately or stored in the laboratory for later use. The hardest part of SCNT is removing maternal DNA from an oocyte at metaphase II. Once this has been done, the somatic nucleus can be inserted into an egg cytoplasm. This creates a one-cell embryo. The grouped somatic cell and egg cytoplasm are then introduced to an electrical current. This energy will hopefully allow the cloned embryo to begin development. The successfully developed embryos are then placed in surrogate recipients, such as a cow or sheep in the case of farm animals. SCNT is seen as a good method for producing agricultural animals for food consumption. It has been used to successfully clone sheep, cattle, goats, and pigs. Another benefit is that SCNT is seen as a solution for cloning endangered species that are on the verge of extinction. However, stresses placed on both the egg cell and the introduced nucleus can be enormous, which led to a high loss in resulting cells in early research. 
For example, the cloned sheep Dolly was born after 277 eggs were used for SCNT, which created 29 viable embryos. Only three of these embryos survived until birth, and only one survived to adulthood. As the procedure could not be automated, and had to be performed manually under a microscope, SCNT was very resource intensive. The biochemistry involved in reprogramming the differentiated somatic cell nucleus and activating the recipient egg was also far from being well understood. However, by 2014 researchers were reporting cloning success rates of seven to eight out of ten, and in 2016 the Korean company Sooam Biotech was reported to be producing 500 cloned embryos per day. In SCNT, not all of the donor cell's genetic information is transferred, as the donor cell's mitochondria that contain their own mitochondrial DNA are left behind. The resulting hybrid cells retain those mitochondrial structures which originally belonged to the egg. As a consequence, clones such as Dolly that are born from SCNT are not perfect copies of the donor of the nucleus. Organism cloning Organism cloning (also called reproductive cloning) refers to the procedure of creating a new multicellular organism, genetically identical to another. In essence, this form of cloning is an asexual method of reproduction, where fertilization or inter-gamete contact does not take place. Asexual reproduction is a naturally occurring phenomenon in many species, including most plants and some insects. Scientists have made some major achievements with cloning, including the asexual reproduction of sheep and cows. There is a lot of ethical debate over whether or not cloning should be used. However, cloning, or asexual propagation, has been common practice in the horticultural world for hundreds of years. Horticultural The term clone is used in horticulture to refer to descendants of a single plant which were produced by vegetative reproduction or apomixis. Many horticultural plant cultivars are clones, having been derived from a single individual, multiplied by some process other than sexual reproduction. As an example, some European cultivars of grapes represent clones that have been propagated for over two millennia. Other examples are potato and banana. Grafting can be regarded as cloning, since all the shoots and branches coming from the graft are genetically a clone of a single individual, but this particular kind of cloning has not come under ethical scrutiny and is generally treated as an entirely different kind of operation. Many trees, shrubs, vines, ferns and other herbaceous perennials form clonal colonies naturally. Parts of an individual plant may become detached by fragmentation and grow on to become separate clonal individuals. A common example is in the vegetative reproduction of moss and liverwort gametophyte clones by means of gemmae. Some vascular plants, e.g. dandelion and certain viviparous grasses, also form seeds asexually, termed apomixis, resulting in clonal populations of genetically identical individuals. Parthenogenesis Clonal derivation exists in nature in some animal species and is referred to as parthenogenesis (reproduction of an organism by itself without a mate). This is an asexual form of reproduction that is only found in females of some insects, crustaceans, nematodes, fish (for example the hammerhead shark), Cape honeybees, and lizards including the Komodo dragon and several whiptails. The growth and development occur without fertilization by a male. 
In plants, parthenogenesis means the development of an embryo from an unfertilized egg cell, and is a component process of apomixis. In species that use the XY sex-determination system, the offspring will always be female. An example is the little fire ant (Wasmannia auropunctata), which is native to Central and South America but has spread throughout many tropical environments. Artificial cloning of organisms Artificial cloning of organisms may also be called reproductive cloning. First steps Hans Spemann, a German embryologist, was awarded a Nobel Prize in Physiology or Medicine in 1935 for his discovery of the effect now known as embryonic induction, exercised by various parts of the embryo, that directs the development of groups of cells into particular tissues and organs. In 1924 he and his student, Hilde Mangold, were the first to perform somatic-cell nuclear transfer using amphibian embryos – one of the first steps towards cloning. Methods Reproductive cloning generally uses "somatic cell nuclear transfer" (SCNT) to create animals that are genetically identical. This process entails the transfer of a nucleus from a donor adult cell (somatic cell) to an egg from which the nucleus has been removed, or to a cell from a blastocyst from which the nucleus has been removed. If the egg begins to divide normally, it is transferred into the uterus of the surrogate mother. Such clones are not strictly identical since the somatic cells may contain mutations in their nuclear DNA. Additionally, the mitochondria in the cytoplasm also contain DNA, and during SCNT this mitochondrial DNA is wholly from the cytoplasmic donor's egg, thus the mitochondrial genome is not the same as that of the nucleus donor cell from which it was produced. This may have important implications for cross-species nuclear transfer in which nuclear-mitochondrial incompatibilities may lead to death. Artificial embryo splitting or embryo twinning, a technique that creates monozygotic twins from a single embryo, is not considered in the same fashion as other methods of cloning. During that procedure, a donor embryo is split into two distinct embryos that can then be transferred via embryo transfer. It is optimally performed at the 6- to 8-cell stage, where it can be used as an expansion of IVF to increase the number of available embryos. If both embryos are successful, it gives rise to monozygotic (identical) twins. Dolly the sheep Dolly, a Finn-Dorset ewe, was the first mammal to have been successfully cloned from an adult somatic cell. Dolly was formed by taking a cell from the udder of her 6-year-old biological mother. Dolly's embryo was created by taking the cell and inserting it into a sheep ovum. It took 435 attempts before an embryo was successful. The embryo was then placed inside a female sheep that went through a normal pregnancy. She was cloned at the Roslin Institute in Scotland by British scientists Sir Ian Wilmut and Keith Campbell and lived there from her birth in 1996 until her death in 2003, when she was six. She was born on 5 July 1996 but not announced to the world until 22 February 1997. Her stuffed remains were placed at Edinburgh's Royal Museum, part of the National Museums of Scotland. Dolly was publicly significant because the effort showed that genetic material from a specific adult cell, designed to express only a distinct subset of its genes, can be redesigned to grow an entirely new organism. 
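As a small worked illustration of the efficiencies quoted in this article (the 277-egg figure and the 435-attempt figure come from different accounts; the sketch below only restates numbers already given, expressed as percentages):

```python
# Arithmetic on the SCNT efficiency figures quoted in the surrounding text.
eggs_used = 277            # eggs subjected to SCNT in the Dolly experiment
viable_embryos = 29        # embryos judged viable
live_births = 3            # lambs carried to birth
survived_to_adulthood = 1  # Dolly herself

print(f"embryo yield:    {viable_embryos / eggs_used:.1%}")         # ~10.5%
print(f"live-birth rate: {live_births / eggs_used:.1%}")            # ~1.1%
print(f"overall success: {survived_to_adulthood / eggs_used:.2%}")  # ~0.36%
```

By comparison, the seven-to-eight-in-ten success rates reported by 2014 correspond to roughly 70 to 80 percent.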
Before this demonstration, it had been shown by John Gurdon that nuclei from differentiated cells could give rise to an entire organism after transplantation into an enucleated egg. However, this concept was not yet demonstrated in a mammalian system. The first mammalian cloning (resulting in Dolly) had a success rate of 29 embryos per 277 fertilized eggs, which produced three lambs at birth, one of which lived. In a bovine experiment involving 70 cloned calves, one-third of the calves died quite young. The first successfully cloned horse, Prometea, took 814 attempts. Notably, although the first clones were frogs, no adult cloned frog has yet been produced from a somatic adult nucleus donor cell. There were early claims that Dolly had pathologies resembling accelerated aging. Scientists speculated that Dolly's death in 2003 was related to the shortening of telomeres, DNA-protein complexes that protect the end of linear chromosomes. However, other researchers, including Ian Wilmut who led the team that successfully cloned Dolly, argue that Dolly's early death due to respiratory infection was unrelated to problems with the cloning process. This idea that the nuclei have not irreversibly aged was shown in 2013 to be true for mice. Dolly was named after performer Dolly Parton because the cells cloned to make her were from a mammary gland cell, and Parton is known for her ample cleavage. Species cloned and applications The modern cloning techniques involving nuclear transfer have been successfully performed on several species. Notable experiments include: Tadpole: (1952) Robert Briggs and Thomas J. King successfully cloned northern leopard frogs: thirty-five complete embryos and twenty-seven tadpoles from one-hundred and four successful nuclear transfers. Carp: (1963) In China, embryologist Tong Dizhou produced the world's first cloned fish by inserting the DNA from a cell of a male carp into an egg from a female carp. He published the findings in a Chinese science journal. Zebrafish: (1981) George Streisinger produced the first cloned vertebrate. Sheep: (1984) Steen Willadsen produced the first cloned mammal from early embryonic cells. In June 1995, the Roslin Institute cloned Megan and Morag from differentiated embryonic cells. In July 1996, PPL Therapeutics and the Roslin Institute cloned Dolly the sheep from a somatic cell. Mouse: (1986) A mouse was successfully cloned from an early embryonic cell. In 1987, Soviet scientists Levon Chaylakhyan, Veprencev, Sviridova, and Nikitin cloned Masha, a mouse. Rhesus monkey: (October 1999) The Oregon National Primate Research Center cloned Tetra from embryo splitting and not nuclear transfer: a process more akin to artificial formation of twins. Pig: (March 2000) PPL Therapeutics cloned five piglets. By 2014, BGI in China was producing 500 cloned pigs a year to test new medicines. Gaur: (2001) was the first endangered species cloned. Cattle: Alpha and Beta (males, 2001) and (2005), Brazil In 2023, Chinese scientists reported the cloning of three supercows with a milk productivity "nearly 1.7 times the amount of milk an average cow in the United States produced in 2021" and a plan for 1,000 of such super cows in the near-term. According to a news report "[i]n many countries, including the United States, farmers breed clones with conventional animals to add desirable traits, such as high milk production or disease resistance, into the gene pool". 
Cat: CopyCat "CC" (female, late 2001), Little Nicky, 2004, was the first cat cloned for commercial reasons Rat: Ralph, the first cloned rat (2003) Mule: Idaho Gem, a john mule born 4 May 2003, was the first horse-family clone. Horse: Prometea, a Haflinger female born 28 May 2003, was the first horse clone. Przewalksi's Horse: An ongoing cloning program by the San Diego Zoo Wildlife Alliance and Revive & Restore attempts to reintroduce genetic diversity to this endangered species. Kurt, the first cloned Przewalski's horse, was born in 2020. He was cloned from the skin tissue of a stallion which was preserved in 1980. "Trey" was born in 2023. He was cloned from the same stallion's tissue as Kurt. Dog: Snuppy, a male Afghan hound was the first cloned dog (2005). In 2017, the world's first gene-editing clone dog, Apple, was created by Sinogene Biotechnology. Sooam Biotech, South Korea, was reported in 2015 to have cloned 700 dogs to date for their owners, including two Yakutian Laika hunting dogs, which are seriously endangered due to crossbreeding. Cloning of super sniffer dogs was reported in 2011, four years afterwards when the dogs started working. Cloning of a successful rescue dog was also reported in 2009 and of a similar police dog in 2019. Cancer-sniffing dogs have also been cloned. A review concluded that "qualified elite working dogs can be produced by cloning a working dog that exhibits both an appropriate temperament and good health." Wolf: Snuwolf and Snuwolffy, the first two cloned female wolves (2005). Water buffalo: Samrupa was the first cloned water buffalo. It was born on 6 February 2009, at India's Karnal National Dairy Research Institute but died five days later due to lung infection. Pyrenean ibex: (2009) was the first extinct animal to be cloned back to life; the clone lived for seven minutes before dying of lung defects. The extinct Pyrenean ibex is a sub-species of the still-thriving Spanish ibex. Camel: (2009) Injaz, was the first cloned camel. Pashmina goat: (2012) Noori, is the first cloned pashmina goat. Scientists at the faculty of veterinary sciences and animal husbandry of Sher-e-Kashmir University of Agricultural Sciences and Technology of Kashmir successfully cloned the first Pashmina goat (Noori) using the advanced reproductive techniques under the leadership of Riaz Ahmad Shah. Goat: (2001) Scientists of Northwest A&F University successfully cloned the first goat which use the adult female cell. Gastric brooding frog: (2013) The gastric brooding frog, Rheobatrachus silus, thought to have been extinct since 1983 was cloned in Australia, although the embryos died after a few days. Macaque monkey: (2017) First successful cloning of a primate species using nuclear transfer, with the birth of two live clones named Zhong Zhong and Hua Hua. Conducted in China in 2017, and reported in January 2018. In January 2019, scientists in China reported the creation of five identical cloned gene-edited monkeys, using the same cloning technique that was used with Zhong Zhong and Hua Hua and Dolly the sheep, and the gene-editing Crispr-Cas9 technique allegedly used by He Jiankui in creating the first ever gene-modified human babies Lulu and Nana. The monkey clones were made to study several medical diseases. Black-footed ferret: (2020) A team of scientists cloned a female named Willa, who died in the mid-1980s and left no living descendants. Her clone, a female named Elizabeth Ann, was born on 10 December. 
Scientists hope that the contribution of this individual will alleviate the effects of inbreeding and help black-footed ferrets better cope with plague. Experts estimate that this female's genome contains three times as much genetic diversity as any of the modern black-footed ferrets. First artificial parthenogenesis in mammals: (2022) Viable mouse offspring were born from unfertilized eggs via targeted DNA methylation editing of seven imprinting control regions. Human cloning Human cloning is the creation of a genetically identical copy of a human. The term is generally used to refer to artificial human cloning, which is the reproduction of human cells and tissues. It does not refer to the natural conception and delivery of identical twins. The possibility of human cloning has raised controversies. These ethical concerns have prompted several nations to pass legislation regarding human cloning and its legality. As of right now, scientists have no intention of trying to clone people, and they believe their results should spark a wider discussion about the laws and regulations the world needs in order to regulate cloning. Two commonly discussed types of theoretical human cloning are therapeutic cloning and reproductive cloning. Therapeutic cloning would involve cloning cells from a human for use in medicine and transplants, and is an active area of research, but is not in medical practice anywhere in the world. Two common methods of therapeutic cloning that are being researched are somatic-cell nuclear transfer and, more recently, pluripotent stem cell induction. Reproductive cloning would involve making an entire cloned human, instead of just specific cells or tissues. Ethical issues of cloning There are a variety of ethical positions regarding the possibilities of cloning, especially human cloning. While many of these views are religious in origin, the questions raised by cloning are faced by secular perspectives as well. Perspectives on human cloning are theoretical, as human therapeutic and reproductive cloning are not commercially used; animals are currently cloned in laboratories and in livestock production. Advocates support development of therapeutic cloning to generate tissues and whole organs to treat patients who otherwise cannot obtain transplants, to avoid the need for immunosuppressive drugs, and to stave off the effects of aging. Advocates for reproductive cloning believe that parents who cannot otherwise procreate should have access to the technology. Opponents of cloning have concerns that the technology is not yet developed enough to be safe and that it could be prone to abuse (leading to the generation of humans from whom organs and tissues would be harvested), as well as concerns about how cloned individuals could integrate with families and with society at large. Cloning humans could lead to serious violations of human rights. Religious groups are divided, with some opposing the technology as usurping "God's place" and, to the extent embryos are used, destroying a human life; others support therapeutic cloning's potential life-saving benefits. There is at least one religion, Raëlism, in which cloning plays a major role. Contemporary work on this topic is concerned with the ethics, adequate regulation and issues of any cloning carried out by humans, not potentially by extraterrestrials (including in the future), and largely also not replication – also described as mind cloning – of potential whole brain emulations. 
Cloning of animals is opposed by animal-groups due to the number of cloned animals that suffer from malformations before they die, and while food from cloned animals has been approved as safe by the US FDA, its use is opposed by groups concerned about food safety. In practical terms, the inclusion of "licensing requirements for embryo research projects and fertility clinics, restrictions on the commodification of eggs and sperm, and measures to prevent proprietary interests from monopolizing access to stem cell lines" in international cloning regulations has been proposed, albeit e.g. effective oversight mechanisms or cloning requirements have not been described. Cloning extinct and endangered species Cloning, or more precisely, the reconstruction of functional DNA from extinct species has, for decades, been a dream. Possible implications of this were dramatized in the 1984 novel Carnosaur and the 1990 novel Jurassic Park. The best current cloning techniques have an average success rate of 9.4 percent (and as high as 25 percent) when working with familiar species such as mice, while cloning wild animals is usually less than 1 percent successful. Conservation cloning Several tissue banks have come into existence, including the "Frozen zoo" at the San Diego Zoo, to store frozen tissue from the world's rarest and most endangered species. This is also referred to as "Conservation cloning". Engineers have proposed a "lunar ark" in 2021 – storing millions of seed, spore, sperm and egg samples from Earth's contemporary species in a network of lava tubes on the Moon as a genetic backup. Similar proposals have been made since at least 2008. These also include sending human customer DNA, and a proposal for "a lunar backup record of humanity" that includes genetic information by Avi Loeb et al. Scientists at the University of Newcastle and University of New South Wales announced in March 2013 that the very recently extinct gastric-brooding frog would be the subject of a cloning attempt to resurrect the species. Many such "De-extinction" projects are being championed by the non-profit Revive & Restore. De-extinction One of the most anticipated targets for cloning was once the woolly mammoth, but attempts to extract DNA from frozen mammoths have been unsuccessful, though a joint Russo-Japanese team is currently working toward this goal. In January 2011, it was reported by Yomiuri Shimbun that a team of scientists headed by Akira Iritani of Kyoto University had built upon research by Dr. Wakayama, saying that they will extract DNA from a mammoth carcass that had been preserved in a Russian laboratory and insert it into the egg cells of an Asian elephant in hopes of producing a mammoth embryo. The researchers said they hoped to produce a baby mammoth within six years. The challenges are formidable. Extensively degraded DNA that may be suitable for sequencing may not be suitable for cloning; it would have to be synthetically reconstituted. In any case, with currently available technology, DNA alone is not suitable for mammalian cloning; intact viable cell nuclei are required. Patching pieces of reconstituted mammoth DNA into an Asian elephant cell nucleus would result in an elephant-mammoth hybrid rather than a true mammoth. Moreover, true de-extinction of the wooly mammoth species would require a breeding population, which would require cloning of multiple genetically distinct but reproductively compatible individuals, multiplying both the amount of work and the uncertainties involved in the project. 
There are potentially other post-cloning problems associated with the survival of a reconstructed mammoth, such as the requirement of ruminants for specific symbiotic microbiota in their stomachs for digestion. In 2022, scientists showed major limitations and the scale of challenge of genetic-editing-based de-extinction, suggesting resources spent on more comprehensive de-extinction projects such as of the woolly mammoth may currently not be well allocated and substantially limited. Their analyses "show that even when the extremely high-quality Norway brown rat (R. norvegicus) is used as a reference, nearly 5% of the genome sequence is unrecoverable, with 1,661 genes recovered at lower than 90% completeness, and 26 completely absent", complicated further by that "distribution of regions affected is not random, but for example, if 90% completeness is used as the cutoff, genes related to immune response and olfaction are excessively affected" due to which "a reconstructed Christmas Island rat would lack attributes likely critical to surviving in its natural or natural-like environment". In a 2021 online session of the Russian Geographical Society, Russia's defense minister Sergei Shoigu mentioned using the DNA of 3,000-year-old Scythian warriors to potentially bring them back to life. The idea was described as absurd at least at this point in news reports and it was noted that Scythians likely weren't skilled warriors by default. The idea of cloning Neanderthals or bringing them back to life in general is controversial but some scientists have stated that it may be possible in the future and have outlined several issues or problems with such as well as broad rationales for doing so. Unsuccessful attempts In 2001, a cow named Bessie gave birth to a cloned Asian gaur, an endangered species, but the calf died after two days. In 2003, a banteng was successfully cloned, followed by three African wildcats from a thawed frozen embryo. These successes provided hope that similar techniques (using surrogate mothers of another species) might be used to clone extinct species. Anticipating this possibility, tissue samples from the last bucardo (Pyrenean ibex) were frozen in liquid nitrogen immediately after it died in 2000. Researchers are also considering cloning endangered species such as the Giant panda and Cheetah. In 2002, geneticists at the Australian Museum announced that they had replicated DNA of the thylacine (Tasmanian tiger), at the time extinct for about 65 years, using polymerase chain reaction. However, on 15 February 2005 the museum announced that it was stopping the project after tests showed the specimens' DNA had been too badly degraded by the (ethanol) preservative. On 15 May 2005 it was announced that the thylacine project would be revived, with new participation from researchers in New South Wales and Victoria. In 2003, for the first time, an extinct animal, the Pyrenean ibex mentioned above was cloned, at the Centre of Food Technology and Research of Aragon, using the preserved frozen cell nucleus of the skin samples from 2001 and domestic goat egg-cells. The ibex died shortly after birth due to physical defects in its lungs. Lifespan After an eight-year project involving the use of a pioneering cloning technique, Japanese researchers created 25 generations of healthy cloned mice with normal lifespans, demonstrating that clones are not intrinsically shorter-lived than naturally born animals. 
Other sources have noted that the offspring of clones tend to be healthier than the original clones and indistinguishable from animals produced naturally. Some posited that Dolly the sheep may have aged more quickly than naturally born animals, as she died relatively early for a sheep at the age of six. Ultimately, her death was attributed to a respiratory illness, and the "advanced aging" theory is disputed. A 2016 study indicated that once cloned animals survive the first month or two of life they are generally healthy. However, early pregnancy loss and neonatal losses are still greater with cloning than with natural conception or assisted reproduction (IVF). Current research is attempting to overcome these problems. In popular culture Discussion of cloning in the popular media often presents the subject negatively. In the 8 November 1993 issue of Time, cloning was portrayed in a negative way, with a modified version of Michelangelo's Creation of Adam depicting Adam with five identical hands. Newsweek's 10 March 1997 issue also critiqued the ethics of human cloning, and included a graphic depicting identical babies in beakers. The concept of cloning, particularly human cloning, has featured in a wide variety of science fiction works. An early fictional depiction of cloning is Bokanovsky's Process, which features in Aldous Huxley's 1931 dystopian novel Brave New World. The process is applied to fertilized human eggs in vitro, causing them to split into identical genetic copies of the original. Following renewed interest in cloning in the 1950s, the subject was explored further in works such as Poul Anderson's 1953 story UN-Man, which describes a technology called "exogenesis", and Gordon Rattray Taylor's book The Biological Time Bomb, which popularised the term "cloning" in 1963. Cloning is a recurring theme in a number of contemporary science fiction films, ranging from action films such as Anna to the Infinite Power, The Boys from Brazil, Jurassic Park (1993), Alien Resurrection (1997), The 6th Day (2000), Resident Evil (2002), Star Wars: Episode II – Attack of the Clones (2002), The Island (2005), Tales of the Abyss (2006), and Moon (2009) to comedies such as Woody Allen's 1973 film Sleeper. The process of cloning is represented variously in fiction. Many works depict the artificial creation of humans by a method of growing cells from a tissue or DNA sample; the replication may be instantaneous, or take place through slow growth of human embryos in artificial wombs. In the long-running British television series Doctor Who, the Fourth Doctor and his companion Leela were cloned in a matter of seconds from DNA samples ("The Invisible Enemy", 1977) and then – in an apparent homage to the 1966 film Fantastic Voyage – shrunk to microscopic size to enter the Doctor's body to combat an alien virus. The clones in this story are short-lived, and can only survive a matter of minutes before they expire. Science fiction films such as The Matrix and Star Wars: Episode II – Attack of the Clones have featured scenes of human foetuses being cultured on an industrial scale in mechanical tanks. Cloning humans from body parts is also a common theme in science fiction. Cloning features strongly among the science fiction conventions parodied in Woody Allen's Sleeper, the plot of which centres around an attempt to clone an assassinated dictator from his disembodied nose. 
In the 2008 Doctor Who story "Journey's End", a duplicate version of the Tenth Doctor spontaneously grows from his severed hand, which had been cut off in a sword fight during an earlier episode. After the death of her beloved 14-year-old Coton de Tulear named Samantha in late 2017, Barbra Streisand announced that she had cloned the dog, and was now "waiting for [the two cloned pups] to get older so [she] can see if they have [Samantha's] brown eyes and her seriousness". The operation cost $50,000 through the pet cloning company ViaGen. In films such as Roger Spottiswoode's 2000 The 6th Day, which makes use of the trope of a "vast clandestine laboratory ... filled with row upon row of 'blank' human bodies kept floating in tanks of nutrient liquid or in suspended animation", clearly fear is to be incited. In Clark's view, the biotechnology is typically "given fantastic but visually arresting forms" while the science is either relegated to the background or fictionalised to suit a young audience. Genetic engineering methods are weakly represented in film; Michael Clark, writing for The Wellcome Trust, calls the portrayal of genetic engineering and biotechnology "seriously distorted" Cloning and identity Science fiction has used cloning, most commonly and specifically human cloning, to raise questions of identity. A Number is a 2002 play by English playwright Caryl Churchill which addresses the subject of human cloning and identity, especially nature and nurture. The story, set in the near future, is structured around the conflict between a father (Salter) and his sons (Bernard 1, Bernard 2, and Michael Black) – two of whom are clones of the first one. A Number was adapted by Caryl Churchill for television, in a co-production between the BBC and HBO Films. In 2012, a Japanese television series named "Bunshin" was created. The story's main character, Mariko, is a woman studying child welfare in Hokkaido. She grew up always doubtful about the love from her mother, who looked nothing like her and who died nine years before. One day, she finds some of her mother's belongings at a relative's house, and heads to Tokyo to seek out the truth behind her birth. She later discovered that she was a clone. In the 2013 television series Orphan Black, cloning is used as a scientific study on the behavioral adaptation of the clones. In a similar vein, the book The Double by Nobel Prize winner José Saramago explores the emotional experience of a man who discovers that he is a clone. Cloning as resurrection Cloning has been used in fiction as a way of recreating historical figures. In the 1976 Ira Levin novel The Boys from Brazil and its 1978 film adaptation, Josef Mengele uses cloning to create copies of Adolf Hitler. In Michael Crichton's 1990 novel Jurassic Park, which spawned a series of Jurassic Park feature films, the bioengineering company InGen develops a technique to resurrect extinct species of dinosaurs by creating cloned creatures using DNA extracted from fossils. The cloned dinosaurs are used to populate the Jurassic Park wildlife park for the entertainment of visitors. The scheme goes disastrously wrong when the dinosaurs escape their enclosures. Despite being selectively cloned as females to prevent them from breeding, the dinosaurs develop the ability to reproduce through parthenogenesis. Cloning for warfare The use of cloning for military purposes has also been explored in several fictional works. 
In Doctor Who, an alien race of armour-clad, warlike beings called Sontarans was introduced in the 1973 serial "The Time Warrior". Sontarans are depicted as squat, bald creatures who have been genetically engineered for combat. Their weak spot is a "probic vent", a small socket at the back of their neck which is associated with the cloning process. The concept of cloned soldiers being bred for combat was revisited in "The Doctor's Daughter" (2008), when the Doctor's DNA is used to create a female warrior called Jenny. The 1977 film Star Wars was set against the backdrop of a historical conflict called the Clone Wars. The events of this war were not fully explored until the prequel films Attack of the Clones (2002) and Revenge of the Sith (2005), which depict a space war waged by a massive army of heavily armoured clone troopers that leads to the foundation of the Galactic Empire. Cloned soldiers are "manufactured" on an industrial scale, genetically conditioned for obedience and combat effectiveness. It is also revealed that the popular character Boba Fett originated as a clone of Jango Fett, a mercenary who served as the genetic template for the clone troopers. Cloning for exploitation A recurring sub-theme of cloning fiction is the use of clones as a supply of organs for transplantation. The 2005 Kazuo Ishiguro novel Never Let Me Go and the 2010 film adaption are set in an alternate history in which cloned humans are created for the sole purpose of providing organ donations to naturally born humans, despite the fact that they are fully sentient and self-aware. The 2005 film The Island revolves around a similar plot, with the exception that the clones are unaware of the reason for their existence. The exploitation of human clones for dangerous and undesirable work was examined in the 2009 British science fiction film Moon. In the futuristic novel Cloud Atlas and subsequent film, one of the story lines focuses on a genetically engineered fabricant clone named Sonmi~451, one of millions raised in an artificial "wombtank", destined to serve from birth. She is one of thousands created for manual and emotional labor; Sonmi herself works as a server in a restaurant. She later discovers that the sole source of food for clones, called 'Soap', is manufactured from the clones themselves. In the film Us, at some point prior to the 1980s, the US Government creates clones of every citizen of the United States with the intention of using them to control their original counterparts, akin to voodoo dolls. This fails, as they were able to copy bodies, but unable to copy the souls of those they cloned. The project is abandoned and the clones are trapped exactly mirroring their above-ground counterparts' actions for generations. In the present day, the clones launch a surprise attack and manage to complete a mass-genocide of their unaware counterparts.
Technology
Biotechnology
null
6911
https://en.wikipedia.org/wiki/Cellulose
Cellulose
Cellulose is an organic compound with the formula (C6H10O5)n, a polysaccharide consisting of a linear chain of several hundred to many thousands of β(1→4) linked D-glucose units. Cellulose is an important structural component of the primary cell wall of green plants, many forms of algae and the oomycetes. Some species of bacteria secrete it to form biofilms. Cellulose is the most abundant organic polymer on Earth. The cellulose content of cotton fibre is 90%, that of wood is 40–50%, and that of dried hemp is approximately 57%. Cellulose is mainly used to produce paperboard and paper. Smaller quantities are converted into a wide variety of derivative products such as cellophane and rayon. Conversion of cellulose from energy crops into biofuels such as cellulosic ethanol is under development as a renewable fuel source. Cellulose for industrial use is mainly obtained from wood pulp and cotton. Cellulose is also greatly affected by direct interaction with several organic liquids. Some animals, particularly ruminants and termites, can digest cellulose with the help of symbiotic micro-organisms that live in their guts, such as Trichonympha. In human nutrition, cellulose is a non-digestible constituent of insoluble dietary fiber, acting as a hydrophilic bulking agent for feces and potentially aiding in defecation. History Cellulose was discovered in 1838 by the French chemist Anselme Payen, who isolated it from plant matter and determined its chemical formula. (Payen, A. (1838) "Mémoire sur la composition du tissu propre des plantes et du ligneux" (Memoir on the composition of the tissue of plants and of woody [material]), Comptes rendus, vol. 7, pp. 1052–1056. Payen added appendices to this paper on December 24, 1838 (see: Comptes rendus, vol. 8, p. 169 (1839)) and on February 4, 1839 (see: Comptes rendus, vol. 9, p. 149 (1839)). A committee of the French Academy of Sciences reviewed Payen's findings in: Jean-Baptiste Dumas (1839) "Rapport sur un mémoire de M. Payen", Comptes rendus, vol. 8, pp. 51–53. In this report, the word "cellulose" is coined and the author points out the similarity between the empirical formula of cellulose and that of "dextrine" (starch). The above articles are reprinted in: Brongniart and Guillemin, eds., Annales des sciences naturelles ..., 2nd series, vol. 11 (Paris, France: Crochard et Cie., 1839), pp. 21–31.) Cellulose was used to produce the first successful thermoplastic polymer, celluloid, by Hyatt Manufacturing Company in 1870. Production of rayon ("artificial silk") from cellulose began in the 1890s, and cellophane was invented in 1912. Hermann Staudinger determined the polymer structure of cellulose in 1920. The compound was first chemically synthesized (without the use of any biologically derived enzymes) in 1992, by Kobayashi and Shoda. Structure and properties Cellulose has no taste, is odorless, is hydrophilic with a contact angle of 20–30 degrees, is insoluble in water and most organic solvents, is chiral and is biodegradable. It was shown to melt at 467 °C in pulse tests made by Dauenhauer et al. (2016). It can be broken down chemically into its glucose units by treating it with concentrated mineral acids at high temperature. Cellulose is derived from D-glucose units, which condense through β(1→4)-glycosidic bonds. This linkage motif contrasts with that for α(1→4)-glycosidic bonds present in starch and glycogen. Cellulose is a straight chain polymer. 
Unlike starch, no coiling or branching occurs and the molecule adopts an extended and rather stiff rod-like conformation, aided by the equatorial conformation of the glucose residues. The multiple hydroxyl groups on the glucose from one chain form hydrogen bonds with oxygen atoms on the same or on a neighbour chain, holding the chains firmly together side-by-side and forming microfibrils with high tensile strength. This confers tensile strength in cell walls where cellulose microfibrils are meshed into a polysaccharide matrix. The high tensile strength of plant stems and of the tree wood also arises from the arrangement of cellulose fibers intimately distributed into the lignin matrix. The mechanical role of cellulose fibers in the wood matrix responsible for its strong structural resistance, can somewhat be compared to that of the reinforcement bars in concrete, lignin playing here the role of the hardened cement paste acting as the "glue" in between the cellulose fibres. Mechanical properties of cellulose in primary plant cell wall are correlated with growth and expansion of plant cells. Live fluorescence microscopy techniques are promising in investigation of the role of cellulose in growing plant cells. Compared to starch, cellulose is also much more crystalline. Whereas starch undergoes a crystalline to amorphous transition when heated beyond 60–70 °C in water (as in cooking), cellulose requires a temperature of 320 °C and pressure of 25 MPa to become amorphous in water. Several types of cellulose are known. These forms are distinguished according to the location of hydrogen bonds between and within strands. Natural cellulose is cellulose I, with structures Iα and Iβ. Cellulose produced by bacteria and algae is enriched in Iα while cellulose of higher plants consists mainly of Iβ. Cellulose in regenerated cellulose fibers is cellulose II. The conversion of cellulose I to cellulose II is irreversible, suggesting that cellulose I is metastable and cellulose II is stable. With various chemical treatments it is possible to produce the structures cellulose III and cellulose IV. Many properties of cellulose depend on its chain length or degree of polymerization, the number of glucose units that make up one polymer molecule. Cellulose from wood pulp has typical chain lengths between 300 and 1700 units; cotton and other plant fibers as well as bacterial cellulose have chain lengths ranging from 800 to 10,000 units. Molecules with very small chain length resulting from the breakdown of cellulose are known as cellodextrins; in contrast to long-chain cellulose, cellodextrins are typically soluble in water and organic solvents. The chemical formula of cellulose is (C6H10O5)n where n is the degree of polymerization and represents the number of glucose groups. Plant-derived cellulose is usually found in a mixture with hemicellulose, lignin, pectin and other substances, while bacterial cellulose is quite pure, has a much higher water content and higher tensile strength due to higher chain lengths. Cellulose consists of fibrils with crystalline and amorphous regions. These cellulose fibrils may be individualized by mechanical treatment of cellulose pulp, often assisted by chemical oxidation or enzymatic treatment, yielding semi-flexible cellulose nanofibrils generally 200 nm to 1 μm in length depending on the treatment intensity. Cellulose pulp may also be treated with strong acid to hydrolyze the amorphous fibril regions, thereby producing short rigid cellulose nanocrystals a few 100 nm in length. 
These nanocelluloses are of high technological interest due to their self-assembly into cholesteric liquid crystals, production of hydrogels or aerogels, use in nanocomposites with superior thermal and mechanical properties, and use as Pickering stabilizers for emulsions. Processing Biosynthesis In plants cellulose is synthesized at the plasma membrane by rosette terminal complexes (RTCs). The RTCs are hexameric protein structures, approximately 25 nm in diameter, that contain the cellulose synthase enzymes that synthesise the individual cellulose chains. Each RTC floats in the cell's plasma membrane and "spins" a microfibril into the cell wall. RTCs contain at least three different cellulose synthases, encoded by CesA (Ces is short for "cellulose synthase") genes, in an unknown stoichiometry. Separate sets of CesA genes are involved in primary and secondary cell wall biosynthesis. There are known to be about seven subfamilies in the plant CesA superfamily, some of which include the more cryptic, tentatively-named Csl (cellulose synthase-like) enzymes. These cellulose synthases use UDP-glucose to form the β(1→4)-linked cellulose. Bacterial cellulose is produced using the same family of proteins, although the gene is called BcsA for "bacterial cellulose synthase" or CelA for "cellulose" in many instances. In fact, plants acquired CesA from the endosymbiosis event that produced the chloroplast. All known cellulose synthases belong to glucosyltransferase family 2 (GT2). Cellulose synthesis requires chain initiation and elongation, and the two processes are separate. Cellulose synthase (CesA) initiates cellulose polymerization using a steroid primer, sitosterol-beta-glucoside, and UDP-glucose. It then utilises UDP-D-glucose precursors to elongate the growing cellulose chain. A cellulase may function to cleave the primer from the mature chain. Cellulose is also synthesised by tunicate animals, particularly in the tests of ascidians (where the cellulose was historically termed "tunicine" (tunicin)). Breakdown (cellulolysis) Cellulolysis is the process of breaking down cellulose into smaller polysaccharides called cellodextrins or completely into glucose units; this is a hydrolysis reaction (a short worked example of the hydrolysis stoichiometry follows this section). Because cellulose molecules bind strongly to each other, cellulolysis is relatively difficult compared to the breakdown of other polysaccharides. However, this process can be significantly intensified in a suitable solvent, e.g. in an ionic liquid. Most mammals have limited ability to digest dietary fibre such as cellulose. Some ruminants like cows and sheep contain certain symbiotic anaerobic bacteria (such as Cellulomonas and Ruminococcus spp.) in the flora of the rumen, and these bacteria produce enzymes called cellulases that hydrolyze cellulose. The breakdown products are then used by the bacteria for proliferation. The bacterial mass is later digested by the ruminant in its digestive system (stomach and small intestine). Horses use cellulose in their diet by fermentation in their hindgut. Some termites contain in their hindguts certain flagellate protozoa producing such enzymes, whereas others contain bacteria or may produce cellulase. The enzymes used to cleave the glycosidic linkage in cellulose are glycoside hydrolases including endo-acting cellulases and exo-acting glucosidases. Such enzymes are usually secreted as part of multienzyme complexes that may include dockerins and carbohydrate-binding modules. 
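As a small worked example of the stoichiometry behind cellulolysis: complete hydrolysis adds one water molecule across each glycosidic bond, converting every anhydroglucose repeat unit (C6H10O5, about 162 g/mol) into a free glucose molecule (C6H12O6, about 180 g/mol). The Python sketch below runs through that arithmetic; the 10 kg input is an arbitrary illustrative figure, not a value from this article.

```python
# Theoretical glucose yield from complete hydrolysis of cellulose:
#   (C6H10O5)n + n H2O -> n C6H12O6
M_ANHYDROGLUCOSE = 6 * 12.011 + 10 * 1.008 + 5 * 15.999   # ~162.1 g/mol repeat unit
M_GLUCOSE        = 6 * 12.011 + 12 * 1.008 + 6 * 15.999   # ~180.2 g/mol

def glucose_yield(cellulose_mass_kg):
    """Maximum glucose mass obtainable from a given mass of pure cellulose."""
    return cellulose_mass_kg * M_GLUCOSE / M_ANHYDROGLUCOSE

cellulose_kg = 10.0  # illustrative input, not a figure from the article
print(f"{cellulose_kg} kg cellulose -> at most {glucose_yield(cellulose_kg):.2f} kg glucose")
# ~11.1 kg: the extra mass comes from the water added across each glycosidic bond.
```

The theoretical yield is therefore about 1.11 g of glucose per gram of pure cellulose; real enzymatic or acid hydrolysis recovers less.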
Breakdown (thermolysis) At temperatures above 350 °C, cellulose undergoes thermolysis (also called 'pyrolysis'), decomposing into solid char, vapors, aerosols, and gases such as carbon dioxide. Maximum yield of vapors which condense to a liquid called bio-oil is obtained at 500 °C. Semi-crystalline cellulose polymers react at pyrolysis temperatures (350–600 °C) in a few seconds; this transformation has been shown to occur via a solid-to-liquid-to-vapor transition, with the liquid (called intermediate liquid cellulose or molten cellulose) existing for only a fraction of a second. Glycosidic bond cleavage produces short cellulose chains of two-to-seven monomers comprising the melt. Vapor bubbling of intermediate liquid cellulose produces aerosols, which consist of short chain anhydro-oligomers derived from the melt. Continuing decomposition of molten cellulose produces volatile compounds including levoglucosan, furans, pyrans, light oxygenates, and gases via primary reactions. Within thick cellulose samples, volatile compounds such as levoglucosan undergo 'secondary reactions' to volatile products including pyrans and light oxygenates such as glycolaldehyde. Hemicellulose Hemicelluloses are polysaccharides related to cellulose that comprises about 20% of the biomass of land plants. In contrast to cellulose, hemicelluloses are derived from several sugars in addition to glucose, especially xylose but also including mannose, galactose, rhamnose, and arabinose. Hemicelluloses consist of shorter chains – between 500 and 3000 sugar units. Furthermore, hemicelluloses are branched, whereas cellulose is unbranched. Regenerated cellulose Cellulose is soluble in several kinds of media, several of which are the basis of commercial technologies. These dissolution processes are reversible and are used in the production of regenerated celluloses (such as viscose and cellophane) from dissolving pulp. The most important solubilizing agent is carbon disulfide in the presence of alkali. Other agents include Schweizer's reagent, N-methylmorpholine N-oxide, and lithium chloride in dimethylacetamide. In general, these agents modify the cellulose, rendering it soluble. The agents are then removed concomitant with the formation of fibers. Cellulose is also soluble in many kinds of ionic liquids. The history of regenerated cellulose is often cited as beginning with George Audemars, who first manufactured regenerated nitrocellulose fibers in 1855. Although these fibers were soft and strong -resembling silk- they had the drawback of being highly flammable. Hilaire de Chardonnet perfected production of nitrocellulose fibers, but manufacturing of these fibers by his process was relatively uneconomical. In 1890, L.H. Despeissis invented the cuprammonium process – which uses a cuprammonium solution to solubilize cellulose – a method still used today for production of artificial silk. In 1891, it was discovered that treatment of cellulose with alkali and carbon disulfide generated a soluble cellulose derivative known as viscose. This process, patented by the founders of the Viscose Development Company, is the most widely used method for manufacturing regenerated cellulose products. Courtaulds purchased the patents for this process in 1904, leading to significant growth of viscose fiber production. By 1931, expiration of patents for the viscose process led to its adoption worldwide. Global production of regenerated cellulose fiber peaked in 1973 at 3,856,000 tons. 
Regenerated cellulose can be used to manufacture a wide variety of products. While the first application of regenerated cellulose was as a clothing textile, this class of materials is also used in the production of disposable medical devices as well as fabrication of artificial membranes. Cellulose esters and ethers The hydroxyl groups (−OH) of cellulose can be partially or fully reacted with various reagents to afford derivatives with useful properties like mainly cellulose esters and cellulose ethers (−OR). In principle, although not always in current industrial practice, cellulosic polymers are renewable resources. Ester derivatives include: Cellulose acetate and cellulose triacetate are film- and fiber-forming materials that find a variety of uses. Nitrocellulose was initially used as an explosive and was an early film forming material. When plasticized with camphor, nitrocellulose gives celluloid. Cellulose Ether derivatives include: The sodium carboxymethyl cellulose can be cross-linked to give the croscarmellose sodium (E468) for use as a disintegrant in pharmaceutical formulations. Furthermore, by the covalent attachment of thiol groups to cellulose ethers such as sodium carboxymethyl cellulose, ethyl cellulose or hydroxyethyl cellulose mucoadhesive and permeation enhancing properties can be introduced. Thiolated cellulose derivatives (see thiomers) exhibit also high binding properties for metal ions. Commercial applications Cellulose for industrial use is mainly obtained from wood pulp and from cotton. Paper products: Cellulose is the major constituent of paper, paperboard, and card stock. Electrical insulation paper: Cellulose is used in diverse forms as insulation in transformers, cables, and other electrical equipment. Fibres: Cellulose is the main ingredient of textiles. Cotton and synthetics (nylons) each have about 40% market by volume. Other plant fibres (jute, sisal, hemp) represent about 20% of the market. Rayon, cellophane and other "regenerated cellulose fibres" are a small portion (5%). Consumables: Microcrystalline cellulose (E460i) and powdered cellulose (E460ii) are used as inactive fillers in drug tablets and a wide range of soluble cellulose derivatives, E numbers E461 to E469, are used as emulsifiers, thickeners and stabilizers in processed foods. Cellulose powder is, for example, used in processed cheese to prevent caking inside the package. Cellulose occurs naturally in some foods and is an additive in manufactured foods, contributing an indigestible component used for texture and bulk, potentially aiding in defecation. Building material: Hydroxyl bonding of cellulose in water produces a sprayable, moldable material as an alternative to the use of plastics and resins. The recyclable material can be made water- and fire-resistant. It provides sufficient strength for use as a building material. Cellulose insulation made from recycled paper is becoming popular as an environmentally preferable material for building insulation. It can be treated with boric acid as a fire retardant. Miscellaneous: Cellulose can be converted into cellophane, a thin transparent film. It is the base material for the celluloid that was used for photographic and movie films until the mid-1930s. Cellulose is used to make water-soluble adhesives and binders such as methyl cellulose and carboxymethyl cellulose which are used in wallpaper paste. Cellulose is further used to make hydrophilic and highly absorbent sponges. 
Cellulose is the raw material in the manufacture of nitrocellulose (cellulose nitrate) which is used in smokeless gunpowder. Pharmaceuticals: Cellulose derivatives, such as microcrystalline cellulose (MCC), have the advantages of retaining water, acting as a stabilizer and thickening agent, and reinforcing drug tablets. Aspirational Energy crops: The major combustible component of non-food energy crops is cellulose, with lignin second. Non-food energy crops produce more usable energy than edible energy crops (which have a large starch component), but still compete with food crops for agricultural land and water resources. Typical non-food energy crops include industrial hemp, switchgrass, Miscanthus, Salix (willow), and Populus (poplar) species. A strain of Clostridium bacteria found in zebra dung can convert nearly any form of cellulose into butanol fuel. Another possible application is as an insect repellent.
Biology and health sciences
Biochemistry and molecular biology
null
6920
https://en.wikipedia.org/wiki/Column
Column
A column or pillar in architecture and structural engineering is a structural element that transmits, through compression, the weight of the structure above to other structural elements below. In other words, a column is a compression member. The term column applies especially to a large round support (the shaft of the column) with a capital and a base or pedestal, which is made of stone, or appears to be so. A small wooden or metal support is typically called a post. Supports with a rectangular or other non-round section are usually called piers. For the purpose of wind or earthquake engineering, columns may be designed to resist lateral forces. Other compression members are often termed "columns" because of the similar stress conditions. Columns are frequently used to support beams or arches on which the upper parts of walls or ceilings rest. In architecture, "column" refers to such a structural element that also has certain proportional and decorative features. A column might also be a decorative element not needed for structural purposes; many columns are engaged, that is to say form part of a wall. A long sequence of columns joined by an entablature is known as a colonnade. History Antiquity All significant Iron Age civilizations of the Near East and Mediterranean made some use of columns. Egyptian In ancient Egyptian architecture as early as 2600 BC, the architect Imhotep made use of stone columns whose surface was carved to reflect the organic form of bundled reeds, like papyrus, lotus and palm. In later Egyptian architecture faceted cylinders were also common. Their form is thought to derive from archaic reed-built shrines. Carved from stone, the columns were highly decorated with carved and painted hieroglyphs, texts, ritual imagery and natural motifs. Egyptian columns are famously present in the Great Hypostyle Hall of Karnak, where 134 columns are lined up in sixteen rows, with some columns reaching heights of 24 metres. One of the most important types is the papyriform column. The origin of these columns goes back to the 5th Dynasty. They are composed of lotus (papyrus) stems which are drawn together into a bundle decorated with bands: the capital, instead of opening out into the shape of a bellflower, swells out and then narrows again like a flower in bud. The base, which tapers to take the shape of a half-sphere like the stem of the lotus, has a continuously recurring decoration of stipules. Greek and Roman The Minoans used whole tree-trunks, usually turned upside down in order to prevent re-growth, which stood on a base set in the stylobate (floor base) and were topped by a simple round capital. These were then painted as in the most famous Minoan palace of Knossos. The Minoans employed columns to create large open-plan spaces, light-wells and as a focal point for religious rituals. These traditions were continued by the later Mycenaean civilization, particularly in the megaron or hall at the heart of their palaces. The importance of columns and their reference to palaces and therefore authority is evidenced in their use in heraldic motifs such as the famous lion-gate of Mycenae where two lions stand on each side of a column. Being made of wood these early columns have not survived, but their stone bases have and through these we may see their use and arrangement in these palace buildings. 
The Egyptians, Persians and other civilizations mostly used columns for the practical purpose of holding up the roof inside a building, preferring outside walls to be decorated with reliefs or painting, but the Ancient Greeks, followed by the Romans, loved to use them on the outside as well, and the extensive use of columns on the interior and exterior of buildings is one of the most characteristic features of classical architecture, in buildings like the Parthenon. The Greeks developed the classical orders of architecture, which are most easily distinguished by the form of the column and its various elements. Their Doric, Ionic, and Corinthian orders were expanded by the Romans to include the Tuscan and Composite orders. Persian Some of the most elaborate columns in the ancient world were those of the Persians, especially the massive stone columns erected in Persepolis. They included double-bull structures in their capitals. The Hall of Hundred Columns at Persepolis, measuring 70 × 70 metres, was built by the Achaemenid king Darius I (524–486 BC). Many of the ancient Persian columns are standing, some being more than 30 metres tall. Tall columns with bull's head capitals were used for porticoes and to support the roofs of the hypostylehall, partly inspired by the ancient Egyptian precedent. Since the columns carried timber beams rather than stone, they could be taller, slimmer and more widely spaced than Egyptian ones. Middle Ages Columns, or at least large structural exterior ones, became much less significant in the architecture of the Middle Ages. The classical forms were abandoned in both Byzantine and Romanesque architecture in favour of more flexible forms, with capitals often using various types of foliage decoration, and in the West scenes with figures carved in relief. During the Romanesque period, builders continued to reuse and imitate ancient Roman columns wherever possible; where new, the emphasis was on elegance and beauty, as illustrated by twisted columns. Often they were decorated with mosaics. Renaissance and later styles Renaissance architecture was keen to revive the classical vocabulary and styles, and the informed use and variation of the classical orders remained fundamental to the training of architects throughout Baroque, Rococo and Neo-classical architecture. Structure Early columns were constructed of stone, some out of a single piece of stone. Monolithic columns are among the heaviest stones used in architecture. Other stone columns are created out of multiple sections of stone, mortared or dry-fit together. In many classical sites, sectioned columns were carved with a centre hole or depression so that they could be pegged together, using stone or metal pins. The design of most classical columns incorporates entasis (the inclusion of a slight outward curve in the sides) plus a reduction in diameter along the height of the column, so that the top is as little as 83% of the bottom diameter. This reduction mimics the parallax effects which the eye expects to see, and tends to make columns look taller and straighter than they are while entasis adds to that effect. There are flutes and fillets that run up the shaft of columns. The flute is the part of the column that is indented in with a semi circular shape. The fillet of the column is the part between each of the flutes on the Ionic order columns. The flute width changes on all tapered columns as it goes up the shaft and stays the same on all non tapered columns. 
This was done to the columns to add visual interest to them. The Ionic and the Corinthian are the only orders that have fillets and flutes. The Doric style has flutes but not fillets. Doric flutes are connected at a sharp point where the fillets are located on Ionic and Corinthian order columns. Nomenclature Most classical columns arise from a basis, or base, that rests on the stylobate, or foundation, except for those of the Doric order, which usually rest directly on the stylobate. The basis may consist of several elements, beginning with a wide, square slab known as a plinth. The simplest bases consist of the plinth alone, sometimes separated from the column by a convex circular cushion known as a torus. More elaborate bases include two toruses, separated by a concave section or channel known as a scotia or trochilus. Scotiae could also occur in pairs, separated by a convex section called an astragal, or bead, narrower than a torus. Sometimes these sections were accompanied by still narrower convex sections, known as annulets or fillets. At the top of the shaft is a capital, upon which the roof or other architectural elements rest. In the case of Doric columns, the capital usually consists of a round, tapering cushion, or echinus, supporting a square slab, known as an abax or abacus. Ionic capitals feature a pair of volutes, or scrolls, while Corinthian capitals are decorated with reliefs in the form of acanthus leaves. Either type of capital could be accompanied by the same moldings as the base. In the case of free-standing columns, the decorative elements atop the shaft are known as a finial. Modern columns may be constructed out of steel, poured or precast concrete, or brick, left bare or clad in an architectural covering, or veneer. Used to support an arch, an impost, or pier, is the topmost member of a column. The bottom-most part of the arch, called the springing, rests on the impost. Equilibrium, instability, and loads As the axial load on a perfectly straight slender column with elastic material properties is increased in magnitude, this ideal column passes through three states: stable equilibrium, neutral equilibrium, and instability. The straight column under load is in stable equilibrium if a lateral force, applied between the two ends of the column, produces a small lateral deflection which disappears and the column returns to its straight form when the lateral force is removed. If the column load is gradually increased, a condition is reached in which the straight form of equilibrium becomes so-called neutral equilibrium, and a small lateral force will produce a deflection that does not disappear and the column remains in this slightly bent form when the lateral force is removed. The load at which neutral equilibrium of a column is reached is called the critical or buckling load. The state of instability is reached when a slight increase of the column load causes uncontrollably growing lateral deflections leading to complete collapse. For an axially loaded straight column with any end support conditions, the equation of static equilibrium, in the form of a differential equation, can be solved for the deflected shape and critical load of the column. 
With hinged, fixed or free end support conditions the deflected shape in neutral equilibrium of an initially straight column with uniform cross section throughout its length always follows a partial or composite sinusoidal curve shape, and the critical load is given by Pcr = π² E Imin / L² (1), where E = elastic modulus of the material, Imin = the minimal moment of inertia of the cross section, and L = actual length of the column between its two end supports. A variant of (1) is given by Fcr = π² Et / (KL/r)² (2), where r = radius of gyration of the column cross-section, which is equal to the square root of (I/A), K = ratio of the longest half sine wave to the actual column length, Et = tangent modulus at the stress Fcr, and KL = effective length (length of an equivalent hinged-hinged column). From Equation (2) it can be noted that the buckling strength of a column is inversely proportional to the square of its length. (A short numerical illustration of Equation (1) is given at the end of this article.) When the critical stress, Fcr (Fcr = Pcr/A, where A = cross-sectional area of the column), is greater than the proportional limit of the material, the column is experiencing inelastic buckling. Since at this stress the slope of the material's stress-strain curve, Et (called the tangent modulus), is smaller than that below the proportional limit, the critical load at inelastic buckling is reduced. More complex formulas and procedures apply for such cases, but in its simplest form the critical buckling load formula is given as Equation (3), Fcr = π² Et / (KL/r)², with the tangent modulus Et evaluated at the critical stress. A column with a cross section that lacks symmetry may suffer torsional buckling (sudden twisting) before, or in combination with, lateral buckling. The presence of the twisting deformations renders both theoretical analyses and practical designs rather complex. Eccentricity of the load, or imperfections such as initial crookedness, decreases column strength. If the axial load on the column is not concentric, that is, its line of action is not precisely coincident with the centroidal axis of the column, the column is characterized as eccentrically loaded. The eccentricity of the load, or an initial curvature, subjects the column to immediate bending. The increased stresses due to the combined axial-plus-flexural stresses result in a reduced load-carrying ability. Column elements are considered to be massive if their smallest side dimension is equal to or more than 400 mm. Massive columns have the ability to increase in carrying strength over long time periods (even during periods of heavy load). Taking into account the fact that possible structural loads may increase over time as well (and also the threat of progressive failure), massive columns have an advantage compared to non-massive ones. Extensions When a column is too long to be built or transported in one piece, it has to be extended or spliced at the construction site. A reinforced concrete column is extended by having the steel reinforcing bars protrude a few inches or feet above the top of the concrete, then placing the next level of reinforcing bars to overlap, and pouring the concrete of the next level. A steel column is extended by welding or bolting splice plates on the flanges and webs or walls of the columns to provide a few inches or feet of load transfer from the upper to the lower column section. A timber column is usually extended by the use of a steel tube or wrapped-around sheet-metal plate bolted onto the two connecting timber sections. Foundations A column that carries the load down to a foundation must have means to transfer the load without overstressing the foundation material. 
Reinforced concrete and masonry columns are generally built directly on top of concrete foundations. When seated on a concrete foundation, a steel column must have a base plate to spread the load over a larger area, and thereby reduce the bearing pressure. The base plate is a thick, rectangular steel plate usually welded to the bottom end of the column. Orders The Roman author Vitruvius, relying on the writings (now lost) of Greek authors, tells us that the ancient Greeks believed that their Doric order developed from techniques for building in wood. The earlier smoothed tree-trunk was replaced by a stone cylinder. Doric order The Doric order is the oldest and simplest of the classical orders. It is composed of a vertical cylinder that is wider at the bottom. It generally has neither a base nor a detailed capital. It is instead often topped with an inverted frustum of a shallow cone or a cylindrical band of carvings. It is often referred to as the masculine order because it is represented in the bottom level of the Colosseum and the Parthenon, and was therefore considered to be able to hold more weight. The height-to-thickness ratio is about 8:1. The shaft of a Doric Column is almost always fluted. The Greek Doric, developed in the western Dorian region of Greece, is the heaviest and most massive of the orders. It rises from the stylobate without any base; it is from four to six times as tall as its diameter; it has twenty broad flutes; the capital consists simply of a banded necking swelling out into a smooth echinus, which carries a flat square abacus; the Doric entablature is also the heaviest, being about one-fourth the height column. The Greek Doric order was not used after c. 100 B.C. until its “rediscovery” in the mid-eighteenth century. Tuscan order The Tuscan order, also known as Roman Doric, is also a simple design, the base and capital both being series of cylindrical disks of alternating diameter. The shaft is almost never fluted. The proportions vary, but are generally similar to Doric columns. Height to width ratio is about 7:1. Ionic order The Ionic column is considerably more complex than the Doric or Tuscan. It usually has a base and the shaft is often fluted (it has grooves carved up its length). The capital features a volute, an ornament shaped like a scroll, at the four corners. The height-to-thickness ratio is around 9:1. Due to the more refined proportions and scroll capitals, the Ionic column is sometimes associated with academic buildings. Ionic style columns were used on the second level of the Colosseum. Corinthian order The Corinthian order is named for the Greek city-state of Corinth, to which it was connected in the period. However, according to the architectural historian Vitruvius, the column was created by the sculptor Callimachus, probably an Athenian, who drew acanthus leaves growing around a votive basket. In fact, the oldest known Corinthian capital was found in Bassae, dated at 427 BC. It is sometimes called the feminine order because it is on the top level of the Colosseum and holding up the least weight, and also has the slenderest ratio of thickness to height. Height to width ratio is about 10:1. Composite order The Composite order draws its name from the capital being a composite of the Ionic and Corinthian capitals. The acanthus of the Corinthian column already has a scroll-like element, so the distinction is sometimes subtle. Generally the Composite is similar to the Corinthian in proportion and employment, often in the upper tiers of colonnades. 
Height to width ratio is about 11:1 or 12:1. Solomonic A Solomonic column, sometimes called "barley sugar", begins on a base and ends in a capital, which may be of any order, but the shaft twists in a tight spiral, producing a dramatic, serpentine effect of movement. Solomonic columns were developed in the ancient world, but remained rare there. A famous marble set, probably 2nd century, was brought to Old St. Peter's Basilica by Constantine I, and placed round the saint's shrine, and was thus familiar throughout the Middle Ages, by which time they were thought to have been removed from the Temple of Jerusalem. The style was used in bronze by Bernini for his spectacular St. Peter's baldachin, actually a ciborium (which displaced Constantine's columns), and thereafter became very popular with Baroque and Rococo church architects, above all in Latin America, where they were very often used, especially on a small scale, as they are easy to produce in wood by turning on a lathe (hence also the style's popularity for spindles on furniture and stairs). Caryatid A Caryatid is a sculpted female figure serving as an architectural support taking the place of a column or a pillar supporting an entablature on her head. The Greek term literally means "maidens of Karyai", an ancient town of Peloponnese. Engaged columns In architecture, an engaged column is a column embedded in a wall and partly projecting from the surface of the wall, sometimes defined as semi or three-quarter detached. Engaged columns are rarely found in classical Greek architecture, and then only in exceptional cases, but in Roman architecture they exist in abundance, most commonly embedded in the cella walls of pseudoperipteral buildings. Pillar tombs Pillar tombs are monumental graves, which typically feature a single, prominent pillar or column, often made of stone. A number of world cultures incorporated pillars into tomb structures. In the ancient Greek colony of Lycia in Anatolia, one of these edifices is located at the tomb of Xanthos. In the town of Hannassa in southern Somalia, ruins of houses with archways and courtyards have also been found along with other pillar tombs, including a rare octagonal tomb.
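To make Equations (1) and (2) from the buckling discussion above concrete, here is a minimal numerical sketch in Python. The material constants and dimensions are illustrative assumptions (a solid circular steel column), not values taken from this article.

```python
import math

def euler_critical_load(E, I_min, L, K=1.0):
    """Euler critical (buckling) load, Pcr = pi^2 * E * Imin / (K*L)^2.

    E     : elastic modulus (Pa)
    I_min : minimal second moment of area of the cross section (m^4)
    L     : column length between supports (m)
    K     : effective-length factor (1.0 for a hinged-hinged column)
    """
    return math.pi ** 2 * E * I_min / (K * L) ** 2

# Illustrative example: a solid circular steel column (assumed values).
E = 200e9                      # Pa, typical elastic modulus of steel
d = 0.10                       # m, diameter
I_min = math.pi * d ** 4 / 64  # m^4, second moment of area of a solid circle
for L in (2.0, 4.0):           # doubling the length quarters the critical load
    print(f"L = {L} m  ->  Pcr = {euler_critical_load(E, I_min, L) / 1e3:.0f} kN")
```

Doubling the unsupported length from 2 m to 4 m cuts the elastic critical load by a factor of four, which is the inverse-square length dependence noted after Equation (2); dividing Pcr by the cross-sectional area gives the critical stress Fcr used in the inelastic-buckling discussion.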
Technology
Architectural elements
null
6933
https://en.wikipedia.org/wiki/Chromatin
Chromatin
Chromatin is a complex of DNA and protein found in eukaryotic cells. Its primary function is to package long DNA molecules into more compact, denser structures. This prevents the strands from becoming tangled and also plays important roles in reinforcing the DNA during cell division, preventing DNA damage, and regulating gene expression and DNA replication. During mitosis and meiosis, chromatin facilitates proper segregation of the chromosomes in anaphase; the characteristic shapes of chromosomes visible during this stage are the result of DNA being coiled into highly condensed chromatin. The primary protein components of chromatin are histones. An octamer of two sets of four histone cores (Histone H2A, Histone H2B, Histone H3, and Histone H4) binds to DNA and functions as an "anchor" around which the strands are wound. In general, there are three levels of chromatin organization: DNA wraps around histone proteins, forming nucleosomes and the so-called beads on a string structure (euchromatin). Multiple histones wrap into a 30-nanometer fiber consisting of nucleosome arrays in their most compact form (heterochromatin). Higher-level DNA supercoiling of the 30 nm fiber produces the metaphase chromosome (during mitosis and meiosis). Many organisms, however, do not follow this organization scheme. For example, spermatozoa and avian red blood cells have more tightly packed chromatin than most eukaryotic cells, and trypanosomatid protozoa do not condense their chromatin into visible chromosomes at all. Prokaryotic cells have entirely different structures for organizing their DNA (the prokaryotic chromosome equivalent is called a genophore and is localized within the nucleoid region). The overall structure of the chromatin network further depends on the stage of the cell cycle. During interphase, the chromatin is structurally loose to allow access to RNA and DNA polymerases that transcribe and replicate the DNA. The local structure of chromatin during interphase depends on the specific genes present in the DNA. Regions of DNA containing genes which are actively transcribed ("turned on") are less tightly compacted and closely associated with RNA polymerases in a structure known as euchromatin, while regions containing inactive genes ("turned off") are generally more condensed and associated with structural proteins in heterochromatin. Epigenetic modification of the structural proteins in chromatin via methylation and acetylation also alters local chromatin structure and therefore gene expression. Chromatin structure is still incompletely understood and remains an active area of research in molecular biology. Dynamic chromatin structure and hierarchy Chromatin undergoes various structural changes during a cell cycle. Histone proteins are the basic packers and arrangers of chromatin and can be modified by various post-translational modifications to alter chromatin packing (histone modification). Most modifications occur on histone tails. The positively charged histone cores only partially counteract the negative charge of the DNA phosphate backbone, resulting in a negative net charge of the overall structure. This imbalance of charge within the polymer causes electrostatic repulsion between neighboring chromatin regions and promotes interactions with positively charged proteins, molecules, and cations. As these modifications occur, the electrostatic environment surrounding the chromatin will fluctuate and the level of chromatin compaction will alter. 
The consequences in terms of chromatin accessibility and compaction depend both on the modified amino acid and the type of modification. For example, histone acetylation results in loosening and increased accessibility of chromatin for replication and transcription. Lysine trimethylation can either lead to increased transcriptional activity (trimethylation of histone H3 lysine 4) or transcriptional repression and chromatin compaction (trimethylation of histone H3, lysine 9 or lysine 27). Several studies suggested that different modifications could occur simultaneously. For example, it was proposed that a bivalent structure (with trimethylation of both lysine 4 and 27 on histone H3) is involved in early mammalian development. Another study tested the role of acetylation of histone H4 on lysine 16 in chromatin structure and found that homogeneous acetylation inhibited 30 nm chromatin formation and blocked adenosine triphosphate remodeling. This single modification changed the dynamics of the chromatin, which shows that acetylation of H4 at K16 is vital for proper intra- and inter-functionality of chromatin structure. Polycomb-group proteins play a role in regulating genes through modulation of chromatin structure. For additional information, see Chromatin variant, Histone modifications in chromatin regulation and RNA polymerase control by chromatin structure. Structure of DNA In nature, DNA can form three structures, A-, B-, and Z-DNA. A- and B-DNA are very similar, forming right-handed helices, whereas Z-DNA is a left-handed helix with a zig-zag phosphate backbone. Z-DNA is thought to play a specific role in chromatin structure and transcription because of the properties of the junction between B- and Z-DNA. At the junction of B- and Z-DNA, one pair of bases is flipped out from normal bonding. These flipped-out bases play a dual role, serving as a site of recognition by many proteins and as a sink for torsional stress from RNA polymerase or nucleosome binding. The information in DNA is stored as a code made up of four chemical bases: adenine (A), guanine (G), cytosine (C) and thymine (T). The order of these bases along the molecule constitutes the information used to build and regulate an organism. A pairs with T and C pairs with G to form base pairs, and each base is joined to a sugar and a phosphate group to form a nucleotide; the two resulting strands wind around each other in the familiar double helix, held together by hydrogen bonds between the paired bases. In eukaryotes this DNA resides in the cell nucleus, where it serves as the physical carrier of heredity. Nucleosomes and beads-on-a-string The basic repeat element of chromatin is the nucleosome, interconnected by sections of linker DNA, a far shorter arrangement than pure DNA in solution. In addition to core histones, a linker histone H1 exists that contacts the exit/entry of the DNA strand on the nucleosome. The nucleosome core particle, together with histone H1, is known as a chromatosome. Nucleosomes, with about 20 to 60 base pairs of linker DNA, can form, under non-physiological conditions, an approximately 11 nm beads on a string fibre. The nucleosomes bind DNA non-specifically, as required by their function in general DNA packaging. There are, however, large DNA sequence preferences that govern nucleosome positioning. 
This is due primarily to the varying physical properties of different DNA sequences: For instance, adenine (A), and thymine (T) is more favorably compressed into the inner minor grooves. This means nucleosomes can bind preferentially at one position approximately every 10 base pairs (the helical repeat of DNA)- where the DNA is rotated to maximise the number of A and T bases that will lie in the inner minor groove. (See nucleic acid structure.) 30-nm chromatin fiber in mitosis With addition of H1, during mitosis the beads-on-a-string structure can coil into a 30 nm-diameter helical structure known as the 30 nm fibre or filament. The precise structure of the chromatin fiber in the cell is not known in detail. This level of chromatin structure is thought to be the form of heterochromatin, which contains mostly transcriptionally silent genes. Electron microscopy studies have demonstrated that the 30 nm fiber is highly dynamic such that it unfolds into a 10 nm fiber beads-on-a-string structure when transversed by an RNA polymerase engaged in transcription. The existing models commonly accept that the nucleosomes lie perpendicular to the axis of the fibre, with linker histones arranged internally. A stable 30 nm fibre relies on the regular positioning of nucleosomes along DNA. Linker DNA is relatively resistant to bending and rotation. This makes the length of linker DNA critical to the stability of the fibre, requiring nucleosomes to be separated by lengths that permit rotation and folding into the required orientation without excessive stress to the DNA. In this view, different lengths of the linker DNA should produce different folding topologies of the chromatin fiber. Recent theoretical work, based on electron-microscopy images of reconstituted fibers supports this view. DNA loops The beads-on-a-string chromatin structure has a tendency to form loops. These loops allow interactions between different regions of DNA by bringing them closer to each other, which increases the efficiency of gene interactions. This process is dynamic, with loops forming and disappearing. The loops are regulated by two main elements: Cohesins, protein complexes that generate loops by extrusion of the DNA fiber through the ring-like structure of the complex itself. CTCF, a transcription factor that limits the frontier of the DNA loop. To stop the growth of a loop, two CTCF molecules must be positioned in opposite directions to block the movement of the cohesin ring (see video). There are many other elements involved. For example, Jpx regulates the binding sites of CTCF molecules along the DNA fiber. Spatial organization of chromatin in the cell nucleus The spatial arrangement of the chromatin within the nucleus is not random - specific regions of the chromatin can be found in certain territories. Territories are, for example, the lamina-associated domains (LADs), and the topologically associating domains (TADs), which are bound together by protein complexes. Currently, polymer models such as the Strings & Binders Switch (SBS) model and the Dynamic Loop (DL) model are used to describe the folding of chromatin within the nucleus. The arrangement of chromatin within the nucleus may also play a role in nuclear stress and restoring nuclear membrane deformation by mechanical stress. When chromatin is condensed, the nucleus becomes more rigid. When chromatin is decondensed, the nucleus becomes more elastic with less force exerted on the inner nuclear membrane. 
This observation sheds light on other possible cellular functions of chromatin organization outside of genomic regulation. Cell-cycle dependent structural organization Interphase: The structure of chromatin during interphase is optimized to allow simple access of transcription and DNA repair factors to the DNA while compacting the DNA into the nucleus. The structure varies depending on the access required to the DNA. Genes that require regular access by RNA polymerase require the looser structure provided by euchromatin. Metaphase: The metaphase structure of chromatin differs vastly from that of interphase. It is optimised for physical strength and manageability, forming the classic chromosome structure seen in karyotypes. The structure of the condensed chromatin is thought to be loops of 30 nm fibre attached to a central scaffold of proteins. It is, however, not well-characterised. Chromosome scaffolds play an important role in holding the chromatin in compact chromosomes. Loops of the 30 nm structure condense further, with the scaffold, into higher-order structures. Chromosome scaffolds are made of proteins including condensin, type IIA topoisomerase and kinesin family member 4 (KIF4). The physical strength of chromatin is vital for this stage of division to prevent shear damage to the DNA as the daughter chromosomes are separated. To maximise strength the composition of the chromatin changes as it approaches the centromere, primarily through alternative histone H1 analogues. During mitosis, although most of the chromatin is tightly compacted, there are small regions that are not as tightly compacted. These regions often correspond to promoter regions of genes that were active in that cell type prior to chromatin formation. The lack of compaction of these regions is called bookmarking, which is an epigenetic mechanism believed to be important for transmitting to daughter cells the "memory" of which genes were active prior to entry into mitosis. This bookmarking mechanism is needed to help transmit this memory because transcription ceases during mitosis. Chromatin and bursts of transcription Chromatin and its interaction with enzymes have been researched, and the conclusion is that chromatin is an important factor in gene expression. Vincent G. Allfrey, a professor at Rockefeller University, stated that RNA synthesis is related to histone acetylation. The lysine residues at the ends of the histones are positively charged. The acetylation of these tails makes the chromatin ends neutral, allowing for DNA access. When the chromatin decondenses, the DNA is open to entry of molecular machinery. Fluctuations between open and closed chromatin may contribute to the discontinuity of transcription, or transcriptional bursting. Other factors are probably involved, such as the association and dissociation of transcription factor complexes with chromatin. Specifically, RNA polymerase and transcriptional proteins have been shown to congregate into droplets via phase separation, and recent studies have suggested that 10 nm chromatin demonstrates liquid-like behavior, increasing the targetability of genomic DNA. The interactions between linker histones and disordered tail regions act as an electrostatic glue organizing large-scale chromatin into a dynamic, liquid-like domain. Decreased chromatin compaction comes with increased chromatin mobility and easier transcriptional access to DNA. 
The phenomenon, as opposed to simple probabilistic models of transcription, can account for the high variability in gene expression occurring between cells in isogenic populations. Alternative chromatin organizations During metazoan spermiogenesis, the spermatid's chromatin is remodeled into a more spaced-packaged, widened, almost crystal-like structure. This process is associated with the cessation of transcription and involves nuclear protein exchange. The histones are mostly displaced, and replaced by protamines (small, arginine-rich proteins). It is proposed that in yeast, regions devoid of histones become very fragile after transcription; HMO1, an HMG-box protein, helps in stabilizing nucleosome-free chromatin. Chromatin and DNA repair A variety of internal and external agents can cause DNA damage in cells. Many factors influence how the repair route is selected, including the cell cycle phase and chromatin segment where the break occurred. In terms of initiating 5' end DNA repair, the p53 binding protein 1 (53BP1) and BRCA1 are important protein components that influence double-strand break repair pathway selection. The 53BP1 complex attaches to chromatin near DNA breaks and activates downstream factors such as Rap1-Interacting Factor 1 (RIF1) and shieldin, which protects DNA ends against nucleolytic destruction. DNA damage occurs in the context of chromatin, and the constantly changing chromatin environment has a large effect on how it is detected and repaired. Access to damaged DNA is regulated by modification of histone residues: the addition of chemical groups such as phosphate, acetyl and one or more methyl groups to histones alters chromatin structure and controls the recruitment of repair proteins to the DNA. Once the lesion is accessible, the damaged bases are processed and the affected stretch of DNA is resynthesized. To maintain genomic integrity, double-strand breaks are repaired mainly by homologous recombination or by classical non-homologous end joining. The packaging of eukaryotic DNA into chromatin presents a barrier to all DNA-based processes that require recruitment of enzymes to their sites of action. To allow the critical cellular process of DNA repair, the chromatin must be remodeled. In eukaryotes, ATP-dependent chromatin remodeling complexes and histone-modifying enzymes are two predominant factors employed to accomplish this remodeling process. Chromatin relaxation occurs rapidly at the site of DNA damage. This process is initiated by the PARP1 protein, which starts to appear at the site of DNA damage in less than a second, with half maximum accumulation within 1.6 seconds after the damage occurs. Next the chromatin remodeler Alc1 quickly attaches to the product of PARP1, and completes arrival at the DNA damage within 10 seconds of the damage. About half of the maximum chromatin relaxation, presumably due to action of Alc1, occurs by 10 seconds. This then allows recruitment of the DNA repair enzyme MRE11, to initiate DNA repair, within 13 seconds. γH2AX, the phosphorylated form of H2AX, is also involved in the early steps leading to chromatin decondensation after DNA damage occurrence. The histone variant H2AX constitutes about 10% of the H2A histones in human chromatin. 
γH2AX (H2AX phosphorylated on serine 139) can be detected as soon as 20 seconds after irradiation of cells (with DNA double-strand break formation), and half maximum accumulation of γH2AX occurs in one minute. The extent of chromatin with phosphorylated γH2AX is about two million base pairs at the site of a DNA double-strand break. γH2AX does not, itself, cause chromatin decondensation, but within 30 seconds of irradiation, RNF8 protein can be detected in association with γH2AX. RNF8 mediates extensive chromatin decondensation, through its subsequent interaction with CHD4, a component of the nucleosome remodeling and deacetylase complex NuRD. After undergoing relaxation subsequent to DNA damage, followed by DNA repair, chromatin recovers to a compaction state close to its pre-damage level after about 20 min. Methods to investigate chromatin ChIP-seq (chromatin immunoprecipitation sequencing) is the most widely used method for identifying which proteins are associated with chromatin. It uses antibodies that selectively recognize and bind proteins of interest, including histones, histone modifications, transcription factors and cofactors, and provides data about the state of chromatin and the transcription of genes; DNA fragments not bound by the targeted protein are washed away before sequencing. Chromatin immunoprecipitation sequencing aimed against different histone modifications can be used to identify chromatin states throughout the genome. Different modifications have been linked to various states of chromatin. DNase-seq (DNase I hypersensitive sites sequencing) uses the sensitivity of accessible regions in the genome to the DNase I enzyme to map open or accessible regions in the genome. FAIRE-seq (Formaldehyde-Assisted Isolation of Regulatory Elements sequencing) uses the chemical properties of protein-bound DNA in a two-phase separation method to extract nucleosome depleted regions from the genome. ATAC-seq (Assay for Transposase-Accessible Chromatin sequencing) uses the Tn5 transposase to integrate (synthetic) transposons into accessible regions of the genome, thereby highlighting the localisation of nucleosomes and transcription factors across the genome. DNA footprinting is a method aimed at identifying protein-bound DNA. It uses labeling and fragmentation coupled to gel electrophoresis to identify areas of the genome that have been bound by proteins. MNase-seq (Micrococcal Nuclease sequencing) uses the micrococcal nuclease enzyme to identify nucleosome positioning throughout the genome. Chromosome conformation capture determines the spatial organization of chromatin in the nucleus, by inferring genomic locations that physically interact. MACC profiling (Micrococcal nuclease ACCessibility profiling) uses a titration series of chromatin digests with micrococcal nuclease to identify chromatin accessibility as well as to map nucleosomes and non-histone DNA-binding proteins in both open and closed regions of the genome. Chromatin and knots It has been a puzzle how decondensed interphase chromosomes remain essentially unknotted. The natural expectation is that in the presence of type II DNA topoisomerases that permit passages of double-stranded DNA regions through each other, all chromosomes should reach the state of topological equilibrium. The topological equilibrium in highly crowded interphase chromosomes forming chromosome territories would result in formation of highly knotted chromatin fibres. 
However, Chromosome Conformation Capture (3C) methods revealed that the decay of contacts with the genomic distance in interphase chromosomes is practically the same as in the crumpled globule state that is formed when long polymers condense without formation of any knots. To remove knots from highly crowded chromatin, one would need an active process that should not only provide the energy to move the system from the state of topological equilibrium but also guide topoisomerase-mediated passages in such a way that knots would be efficiently unknotted instead of making the knots even more complex. It has been shown that the process of chromatin-loop extrusion is ideally suited to actively unknot chromatin fibres in interphase chromosomes. Chromatin: alternative definitions The term, introduced by Walther Flemming, has multiple meanings: Simple and concise definition: Chromatin is a macromolecular complex of a DNA macromolecule and protein macromolecules (and RNA). The proteins package and arrange the DNA and control its functions within the cell nucleus. A biochemists' operational definition: Chromatin is the DNA/protein/RNA complex extracted from eukaryotic lysed interphase nuclei. Just which of the multitudinous substances present in a nucleus will constitute a part of the extracted material partly depends on the technique each researcher uses. Furthermore, the composition and properties of chromatin vary from one cell type to another, during the development of a specific cell type, and at different stages in the cell cycle. The DNA + histone = chromatin definition: The DNA double helix in the cell nucleus is packaged by special proteins termed histones. The formed protein/DNA complex is called chromatin. The basic structural unit of chromatin is the nucleosome. The first definition allows for "chromatins" to be defined in other domains of life like bacteria and archaea, using any DNA-binding proteins that condense the molecule. These proteins are usually referred to as nucleoid-associated proteins (NAPs); examples include AsnC/LrpC with HU. In addition, some archaea do produce nucleosomes from proteins homologous to eukaryotic histones. Chromatin Remodeling: Chromatin remodeling can result from covalent modification of histones and from remodeling complexes that physically reposition, move or remove nucleosomes. A study by Sanosaka et al. (2022) reports that the chromatin remodeler CHD7 regulates cell type-specific gene expression in human neural crest cells.
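To give a rough sense of the scale of the packaging problem chromatin solves, the short Python sketch below combines the nucleosome repeat described earlier in this article with commonly quoted textbook values (a 147 bp core particle, a 0.34 nm rise per base pair, and a haploid human genome of roughly 3.2 billion base pairs); these constants are assumptions for illustration, not figures stated in the article itself.

```python
# Back-of-envelope view of how much packaging the nucleosome level provides.
# All constants are commonly quoted textbook values, not figures from this article.
GENOME_BP      = 3.2e9   # haploid human genome, base pairs (approximate)
RISE_PER_BP_NM = 0.34    # helical rise of B-DNA per base pair, nanometres
CORE_BP        = 147     # DNA wrapped around one histone octamer (textbook value)
LINKER_BP      = 50      # linker DNA, within the 20-60 bp range given above

contour_length_m = GENOME_BP * RISE_PER_BP_NM * 1e-9
nucleosomes      = GENOME_BP / (CORE_BP + LINKER_BP)

print(f"Naked DNA contour length : {contour_length_m:.2f} m")
print(f"Approximate nucleosomes  : {nucleosomes / 1e6:.0f} million")
# Roughly a metre of DNA per haploid genome, organised by ~16 million nucleosomes,
# has to fit into a nucleus on the order of 10 micrometres across.
```

The estimate is only meant to convey orders of magnitude: about a metre of DNA and on the order of ten million nucleosomes per haploid genome must fit into a nucleus a few micrometres across.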
Biology and health sciences
Organelles
Biology
6944
https://en.wikipedia.org/wiki/Cathode
Cathode
A cathode is the electrode from which a conventional current leaves a polarized electrical device such as a lead-acid battery. This definition can be recalled by using the mnemonic CCD for Cathode Current Departs. A conventional current describes the direction in which positive charges move. Electrons have a negative electrical charge, so the movement of electrons is opposite to that of the conventional current flow. Consequently, the mnemonic cathode current departs also means that electrons flow into the device's cathode from the external circuit. For example, the end of a household battery marked with a + (plus) is the cathode. The electrode through which conventional current flows the other way, into the device, is termed an anode. Charge flow Conventional current flows from cathode to anode outside the cell or device (with electrons moving in the opposite direction), regardless of the cell or device type and operating mode. Cathode polarity with respect to the anode can be positive or negative depending on how the device is being operated. Inside a device or a cell, positively charged cations always move towards the cathode and negatively charged anions move towards the anode, although cathode polarity depends on the device type, and can even vary according to the operating mode. Whether the cathode is negatively polarized (such as recharging a battery) or positively polarized (such as a battery in use), the cathode will draw electrons into it from outside, as well as attract positively charged cations from inside. A battery or galvanic cell in use has a cathode that is the positive terminal since that is where conventional current flows out of the device. This outward current is carried internally by positive ions moving from the electrolyte to the positive cathode (chemical energy is responsible for this "uphill" motion). It is continued externally by electrons moving into the battery which constitutes positive current flowing outwards. For example, the Daniell galvanic cell's copper electrode is the positive terminal and the cathode. A battery that is recharging or an electrolytic cell performing electrolysis has its cathode as the negative terminal, from which current exits the device and returns to the external generator as charge enters the battery/ cell. For example, reversing the current direction in a Daniell galvanic cell converts it into an electrolytic cell where the copper electrode is the positive terminal and also the anode. In a diode, the cathode is the negative terminal at the pointed end of the arrow symbol, where current flows out of the device. Note: electrode naming for diodes is always based on the direction of the forward current (that of the arrow, in which the current flows "most easily"), even for types such as Zener diodes or solar cells where the current of interest is the reverse current. In vacuum tubes (including cathode-ray tubes) it is the negative terminal where electrons enter the device from the external circuit and proceed into the tube's near-vacuum, constituting a positive current flowing out of the device. Etymology The word was coined in 1834 from the Greek κάθοδος (kathodos), 'descent' or 'way down', by William Whewell, who had been consulted by Michael Faraday over some new names needed to complete a paper on the recently discovered process of electrolysis. 
In that paper Faraday explained that when an electrolytic cell is oriented so that electric current traverses the "decomposing body" (electrolyte) in a direction "from East to West, or, which will strengthen this help to the memory, that in which the sun appears to move", the cathode is where the current leaves the electrolyte, on the West side: "kata downwards, 'odos a way; the way which the sun sets". The use of 'West' to mean the 'out' direction (actually 'out' → 'West' → 'sunset' → 'down', i.e. 'out of view') may appear unnecessarily contrived. Previously, as related in the first reference cited above, Faraday had used the more straightforward term "exode" (the doorway where the current exits). His motivation for changing it to something meaning 'the West electrode' (other candidates had been "westode", "occiode" and "dysiode") was to make it immune to a possible later change in the direction convention for current, whose exact nature was not known at the time. The reference he used to this effect was the Earth's magnetic field direction, which at that time was believed to be invariant. He fundamentally defined his arbitrary orientation for the cell as being that in which the internal current would run parallel to and in the same direction as a hypothetical magnetizing current loop around the local line of latitude which would induce a magnetic dipole field oriented like the Earth's. This made the internal current East to West as previously mentioned, but in the event of a later convention change it would have become West to East, so that the West electrode would not have been the 'way out' any more. Therefore, "exode" would have become inappropriate, whereas "cathode" meaning 'West electrode' would have remained correct with respect to the unchanged direction of the actual phenomenon underlying the current, then unknown but, he thought, unambiguously defined by the magnetic reference. In retrospect the name change was unfortunate, not only because the Greek roots alone do not reveal the cathode's function any more, but more importantly because, as we now know, the Earth's magnetic field direction on which the "cathode" term is based is subject to reversals whereas the current direction convention on which the "exode" term was based has no reason to change in the future. Since the later discovery of the electron, an easier to remember, and more durably technically correct (although historically false), etymology has been suggested: cathode, from the Greek kathodos, 'way down', 'the way (down) into the cell (or other device) for electrons'. In chemistry In chemistry, a cathode is the electrode of an electrochemical cell at which reduction occurs. The cathode can be negative like when the cell is electrolytic (where electrical energy provided to the cell is being used for decomposing chemical compounds); or positive as when the cell is galvanic (where chemical reactions are used for generating electrical energy). The cathode supplies electrons to the positively charged cations which flow to it from the electrolyte (even if the cell is galvanic, i.e., when the cathode is positive and therefore would be expected to repel the positively charged cations; this is due to electrode potential relative to the electrolyte solution being different for the anode and cathode metal/electrolyte systems in a galvanic cell). The cathodic current, in electrochemistry, is the flow of electrons from the cathode interface to a species in solution. 
The anodic current is the flow of electrons into the anode from a species in solution. Electrolytic cell In an electrolytic cell, the cathode is where the negative polarity is applied to drive the cell. Common results of reduction at the cathode are hydrogen gas or pure metal from metal ions. When discussing the relative reducing power of two redox agents, the couple for generating the more reducing species is said to be more "cathodic" with respect to the more easily reduced reagent. Galvanic cell In a galvanic cell, the cathode is where the positive pole is connected to allow the circuit to be completed: as the anode of the galvanic cell gives off electrons, they return from the circuit into the cell through the cathode. Electroplating metal cathode (electrolysis) When metal ions are reduced from ionic solution, they form a pure metal surface on the cathode. Items to be plated with pure metal are attached to and become part of the cathode in the electrolytic solution. In electronics Vacuum tubes In a vacuum tube or electronic vacuum system, the cathode is a metal surface which emits free electrons into the evacuated space. Since the electrons are attracted to the positive nuclei of the metal atoms, they normally stay inside the metal and require energy to leave it; this is called the work function of the metal. Cathodes are induced to emit electrons by several mechanisms: Thermionic emission: The cathode can be heated. The increased thermal motion of the metal atoms "knocks" electrons out of the surface, an effect called thermionic emission. This technique is used in most vacuum tubes. Field electron emission: A strong electric field can be applied to the surface by placing an electrode with a high positive voltage near the cathode. The positively charged electrode attracts the electrons, causing some electrons to leave the cathode's surface. This process is used in cold cathodes in some electron microscopes, and in microelectronics fabrication, Secondary emission: An electron, atom or molecule colliding with the surface of the cathode with enough energy can knock electrons out of the surface. These electrons are called secondary electrons. This mechanism is used in gas-discharge lamps such as neon lamps. Photoelectric emission: Electrons can also be emitted from the electrodes of certain metals when light of frequency greater than the threshold frequency falls on it. This effect is called photoelectric emission, and the electrons produced are called photoelectrons. This effect is used in phototubes and image intensifier tubes. Cathodes can be divided into two types: Hot cathode A hot cathode is a cathode that is heated by a filament to produce electrons by thermionic emission. The filament is a thin wire of a refractory metal like tungsten heated red-hot by an electric current passing through it. Before the advent of transistors in the 1960s, virtually all electronic equipment used hot-cathode vacuum tubes. Today hot cathodes are used in vacuum tubes in radio transmitters and microwave ovens, to produce the electron beams in older cathode-ray tube (CRT) type televisions and computer monitors, in x-ray generators, electron microscopes, and fluorescent tubes. There are two types of hot cathodes: Directly heated cathode: In this type, the filament itself is the cathode and emits the electrons directly. Directly heated cathodes were used in the first vacuum tubes, but today they are only used in fluorescent tubes, some large transmitting vacuum tubes, and all X-ray tubes. 
Indirectly heated cathode: In this type, the filament is not the cathode but rather heats the cathode which then emits electrons. Indirectly heated cathodes are used in most devices today. For example, in most vacuum tubes the cathode is a nickel tube with the filament inside it, and the heat from the filament causes the outside surface of the tube to emit electrons. The filament of an indirectly heated cathode is usually called the heater. The main reason for using an indirectly heated cathode is to isolate the rest of the vacuum tube from the electric potential across the filament. Many vacuum tubes use alternating current to heat the filament. In a tube in which the filament itself was the cathode, the alternating electric field from the filament surface would affect the movement of the electrons and introduce hum into the tube output. It also allows the filaments in all the tubes in an electronic device to be tied together and supplied from the same current source, even though the cathodes they heat may be at different potentials. In order to improve electron emission, cathodes are treated with chemicals, usually compounds of metals with a low work function. Treated cathodes require less surface area, lower temperatures and less power to supply the same cathode current. The untreated tungsten filaments used in early tubes (called "bright emitters") had to be heated to , white-hot, to produce sufficient thermionic emission for use, while modern coated cathodes produce far more electrons at a given temperature so they only have to be heated to There are two main types of treated cathodes: Coated cathode – In these the cathode is covered with a coating of alkali metal oxides, often barium and strontium oxide. These are used in low-power tubes. Thoriated tungsten – In high-power tubes, ion bombardment can destroy the coating on a coated cathode. In these tubes a directly heated cathode consisting of a filament made of tungsten incorporating a small amount of thorium is used. The layer of thorium on the surface which reduces the work function of the cathode is continually replenished as it is lost by diffusion of thorium from the interior of the metal. Cold cathode This is a cathode that is not heated by a filament. They may emit electrons by field electron emission, and in gas-filled tubes by secondary emission. Some examples are electrodes in neon lights, cold-cathode fluorescent lamps (CCFLs) used as backlights in laptops, thyratron tubes, and Crookes tubes. They do not necessarily operate at room temperature; in some devices the cathode is heated by the electron current flowing through it to a temperature at which thermionic emission occurs. For example, in some fluorescent tubes a momentary high voltage is applied to the electrodes to start the current through the tube; after starting the electrodes are heated enough by the current to keep emitting electrons to sustain the discharge. Cold cathodes may also emit electrons by photoelectric emission. These are often called photocathodes and are used in phototubes used in scientific instruments and image intensifier tubes used in night vision goggles. Diodes In a semiconductor diode, the cathode is the N–doped layer of the p–n junction with a high density of free electrons due to doping, and an equal density of fixed positive charges, which are the dopants that have been thermally ionized. 
In the anode, the converse applies: It features a high density of free "holes" and consequently fixed negative dopants which have captured an electron (hence the origin of the holes). When P and N-doped layers are created adjacent to each other, diffusion ensures that electrons flow from high to low density areas: That is, from the N to the P side. They leave behind the fixed positively charged dopants near the junction. Similarly, holes diffuse from P to N leaving behind fixed negative ionised dopants near the junction. These layers of fixed positive and negative charges are collectively known as the depletion layer because they are depleted of free electrons and holes. The depletion layer at the junction is at the origin of the diode's rectifying properties. This is due to the resulting internal field and corresponding potential barrier which inhibit current flow in reverse applied bias which increases the internal depletion layer field. Conversely, they allow it in forwards applied bias where the applied bias reduces the built in potential barrier. Electrons which diffuse from the cathode into the P-doped layer, or anode, become what are termed "minority carriers" and tend to recombine there with the majority carriers, which are holes, on a timescale characteristic of the material which is the p-type minority carrier lifetime. Similarly, holes diffusing into the N-doped layer become minority carriers and tend to recombine with electrons. In equilibrium, with no applied bias, thermally assisted diffusion of electrons and holes in opposite directions across the depletion layer ensure a zero net current with electrons flowing from cathode to anode and recombining, and holes flowing from anode to cathode across the junction or depletion layer and recombining. Like a typical diode, there is a fixed anode and cathode in a Zener diode, but it will conduct current in the reverse direction (electrons flow from anode to cathode) if its breakdown voltage or "Zener voltage" is exceeded.
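The rectifying behaviour of the p–n junction described above is commonly summarized by the Shockley ideal-diode equation. The sketch below is a standard textbook model rather than something given in this article; the saturation current, ideality factor and temperature are illustrative assumptions, and the model deliberately ignores the reverse-breakdown region that defines Zener operation.

```python
import math

def diode_current(v_volts, i_s=1e-12, n=1.0, t_kelvin=300.0):
    """Shockley ideal-diode equation: I = I_s * (exp(V / (n * V_T)) - 1)."""
    k = 1.380649e-23       # Boltzmann constant, J/K
    q = 1.602176634e-19    # elementary charge, C
    v_t = k * t_kelvin / q # thermal voltage, about 25.9 mV at 300 K
    return i_s * math.expm1(v_volts / (n * v_t))

# Forward bias (anode positive) passes large currents; reverse bias leaks only about I_s.
for v in (-5.0, -0.5, 0.3, 0.6, 0.7):
    print(f"V = {v:+.1f} V  ->  I = {diode_current(v):.3e} A")
```

The strong asymmetry between the forward and reverse branches is the numerical counterpart of the potential-barrier argument given in the text.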
Physical sciences
Electrochemistry
Chemistry
6948
https://en.wikipedia.org/wiki/Crossbow
Crossbow
A crossbow is a ranged weapon using an elastic launching device consisting of a bow-like assembly called a prod, mounted horizontally on a main frame called a tiller, which is hand-held in a similar fashion to the stock of a long gun. Crossbows shoot arrow-like projectiles called bolts or quarrels. A person who shoots a crossbow is called a crossbowman, an arbalister or an arbalist (after the arbalest, a European crossbow variant used during the 12th century). Crossbows and bows use the same elastic launch principles, but differ in that an archer using a bow must draw-and-shoot in a quick and smooth motion with limited or no time for aiming, while a crossbow's design allows it to be spanned and cocked, ready for use at a later time, thus affording the user essentially unlimited time to aim. When shooting bows, the archer must fully perform the draw, holding the string and arrow using various techniques while pulling it back with arm and back muscles, and then either immediately shooting instinctively without a period of aiming, or holding that form while aiming. Either approach demands considerable physical strength with bows suitable for warfare, though this is easier with lighter draw-weight hunting bows. As such, their accurate and sustained use in warfare takes much practice. Crossbows avoid these potential problems by having trigger-released cocking mechanisms to maintain the tension on the string once it has been spanned – drawn – into its ready-to-shoot position, allowing these weapons to be carried cocked and ready and affording their users time to aim them. This also allows them to be readied by someone assisting their users, so multiple crossbows can be used one after the other while others reload and ready them. Crossbows are spanned into their cocked positions using a number of techniques and devices, some of which are mechanical and employ gear and pulley arrangements – levers, belt hooks, pulleys, windlasses and cranequins – to overcome very high draw weight. These potentially achieve better precision and enable their effective use by less familiarised and trained personnel, whereas the simple and composite warbows of, for example, the English and the steppe nomads require years of training, practice and familiarisation. These advantages are somewhat offset by the longer time needed to reload a crossbow for further shots: crossbows with high draw weights require sophisticated systems of gears and pulleys to span, and these are slow and rather awkward to employ on the battlefield. Medieval crossbows were also very inefficient: despite their massive draw weights compared to bows, the short shot stroke from the string lock to the release point of the bolt, together with the slower speeds of their steel prods and their heavy strings, limited the energy imparted to the projectile, though modern materials and crossbow designs overcome these shortcomings. The earliest known crossbows were made in the first millennium BC, as early as the 7th century BC in ancient China and as early as the 1st century AD in Greece (known as gastraphetes). Crossbows brought about a major shift in the role of projectile weaponry in wars, such as during Qin's unification wars and later Han campaigns against northern nomads and western states. The medieval European crossbow was called by many names, including "crossbow" itself; most of these names derived from the word ballista, an ancient Greek torsion siege engine similar in appearance but different in design principle.
In modern times, firearms have largely supplanted bows and crossbows as weapons of war, but crossbows remain widely used for competitive shooting sports and hunting, and for relatively silent shooting. Terminology A crossbowman is sometimes called an arbalist, or historically an arbalister.Arrow, bolt and quarrel are all suitable terms for crossbow projectiles, as was vire historically. The lath, also called the prod, is the bow of the crossbow. According to W. F. Peterson, prod came into usage in the 19th century as a result of mistranslating rodd in a 16th-century list of crossbow effects. The stock (a modern term derived from the equivalent concept in firearms) is the wooden body on which the bow is mounted, although the medieval tiller is also used. The lock refers to the release mechanism, including the string, sears, trigger lever, and housing. Construction A crossbow is essentially a bow mounted on an elongated frame (called a tiller or stock) with a built-in mechanism that holds the drawn bow string, as well as a trigger mechanism, which is used to release the string. Chinese vertical trigger lock The Chinese trigger was a mechanism typically composed of three cast bronze pieces housed inside a hollow bronze enclosure. The entire mechanism is then dropped into a carved slot within the tiller and secured together by two bronze rods. The string catch (nut) is shaped like a "J" because it usually has a tall erect rear spine that protrudes above the housing, which serves the function of both a cocking lever (by pushing the drawn string onto it) and a primitive rear sight. It is held stationary against tension by the second piece, which is shaped like a flattened "C" and acts as the sear. The sear cannot move as it is trapped by the third piece, i.e. the actual trigger blade, which hangs vertically below the enclosure and catches the sear via a notch. The two bearing surfaces between the three trigger pieces each offers a mechanical advantage, which allow for handling significant draw weights with a much smaller pull weight. During shooting, the user will hold the crossbow at eye level by a vertical handle and aim along the arrow using the sighting spine for elevation, similar to how a modern rifleman shoots with iron sights. When the trigger blade is pulled, its notch disengages from the sear and allows the latter to drop downwards, which in turn frees up the nuts to pivot forward and release the bowstring. European rolling nut lock The earliest European designs featured a transverse slot in the top surface of the frame, down into which the string was placed. To shoot this design, a vertical rod is thrust up through a hole in the bottom of the notch, forcing the string out. This rod is usually attached perpendicular to a rear-facing lever called a tickler. A later design implemented a rolling cylindrical pawl called a nut to retain the string. This nut has a perpendicular centre slot for the bolt, and an intersecting axial slot for the string, along with a lower face or slot against which the internal trigger sits. They often also have some form of strengthening internal sear or trigger face, usually of metal. These roller nuts were either free-floating in their close-fitting hole across the stock, tied in with a binding of sinew or other strong cording; or mounted on a metal axle or pins. Removable or integral plates of wood, ivory, or metal on the sides of the stock kept the nut in place laterally. Nuts were made of antler, bone, or metal. 
Bows could be kept taut and ready to shoot for some time with little physical straining, allowing crossbowmen to aim better without fatiguing. Bow Chinese crossbow bows were made of composite material from the start. European crossbows from the 10th to 12th centuries used wood for the bow, also called the prod or lath, which tended to be ash or yew. Composite bows started appearing in Europe during the 13th century and could be made from layers of different material, often wood, horn, and sinew glued together and bound with animal tendon. These composite bows made of several layers are much stronger and more efficient in releasing energy than simple wooden bows. As steel became more widely available in Europe around the 14th century, steel prods came into use. Traditionally, the prod was often lashed to the stock with rope, whipcord, or other strong cording. This is called the bridle. Spanning mechanism The Chinese used winches for large crossbows mounted on fortifications or wagons, known as "bedded crossbows" (床弩). Winches may have been used for handheld crossbows during the Han dynasty (202 BC – 9 AD, 25–220 AD), but there is only one known depiction of it. The 11th century Chinese military text Wujing Zongyao mentions types of crossbows using winch mechanisms, but it is not known if these were actually handheld crossbows or mounted crossbows. Another drawing method involved the shooters sitting on the ground, and using the combined strength of leg, waist, back and arm muscles to help span much heavier crossbows, which were aptly called "waist-spun crossbows" (腰張弩). During the medieval era, both Chinese and European crossbows used stirrups as well as belt hooks. In the 13th century, European crossbows started using winches, and from the 14th century an assortment of spanning mechanisms such as winch pulleys, cord pulleys, gaffles (such as gaffe levers, goat's foot levers, and rarer internal lever-action mechanisms), cranequins, and even screws. Variants The smallest crossbows are pistol crossbows. Others are simple long stocks with the crossbow mounted on them. These could be shot from under the arm. The next step in development was stocks of the shape that would later be used for firearms, which allowed better aiming. The arbalest was a heavy crossbow that required special systems for pulling the sinew via windlasses. For siege warfare, the size of crossbows was further increased to hurl large projectiles, such as rocks, at fortifications. The required crossbows needed a massive base frame and powerful windlass devices. Projectiles The arrow-like projectiles of a crossbow are called bolts or quarrels. These are usually much shorter than arrows but can be several times heavier. There is an optimum weight for bolts to achieve maximum kinetic energy, which varies depending on the strength and characteristics of the crossbow, but most could pass through common mail. Crossbow bolts can be fitted with a variety of heads, some with sickle-shaped heads to cut rope or rigging; but the most common today is a four-sided point called a quarrel. A highly specialized type of bolt is employed to collect blubber biopsy samples used in biology research. Even relatively small differences in arrow weight can have a considerable impact on its flight trajectory and drop. Bullet-shooting crossbows are modified crossbows that use bullets or stones as projectiles. Accessories The ancient Chinese crossbow often included a metal (i.e. bronze or steel) grid serving as iron sights. 
Modern crossbow sights often use similar technology to modern firearm sights, such as red dot sights and telescopic sights. Many crossbow scopes feature multiple crosshairs to compensate for the significant effects of gravity over different ranges. In most cases, a newly bought crossbow will need to be sighted for accurate shooting. A major cause of the sound of shooting a crossbow is vibration of various components. Crossbow silencers are multiple components placed on high vibration parts, such as the string and limbs, to dampen vibration and suppress the sound of loosing the bolt. History China In terms of archaeological evidence, crossbow locks dated made of cast bronze have been found in China . They have also been found in Tombs 3 and 12 at Qufu, Shandong, previously the capital of Lu, and date to the 6th century BC. Bronze crossbow bolts dating from the mid-5th century BC have been found at a Chu burial site in Yutaishan, Jiangling County, Hubei Province. Other early finds of crossbows were discovered in Tomb 138 at Saobatang, Hunan Province, and date to the mid-4th century BC. It is possible that these early crossbows used spherical pellets for ammunition. A Western Han mathematician and music theorist, Jing Fang (78–37 BC), compared the moon to the shape of a round crossbow bullet. The Zhuangzi also mentions crossbow bullets. The earliest Chinese documents mentioning a crossbow were texts from the 4th to 3rd centuries BC attributed to the followers of Mozi. This source refers to the use of a giant crossbow between the 6th and 5th centuries BC, corresponding to the late Spring and Autumn period. Sun Tzu's The Art of War (first appearance dated between 500 BC to 300 BC) refers to the characteristics and use of crossbows in chapters 5 and 12 respectively, and compares a drawn crossbow to "might". The Huainanzi advises its readers not to use crossbows in marshland where the surface is soft and it is hard to arm the crossbow with the foot. The Records of the Grand Historian, completed in 94 BC, mentions that Sun Bin defeated Pang Juan by ambushing him with a battalion of crossbowmen at the Battle of Maling in 342 BC. The Book of Han, finished 111 AD, lists two military treatises on crossbows. Handheld crossbows with complex bronze trigger mechanisms have also been found with the Terracotta Army in the tomb of Qin Shi Huang (r. 221–210 BC) that are similar to specimens from the subsequent Han dynasty (202 BC–220 AD), while crossbowmen described in the Qin and Han dynasty learned drill formations, some were even mounted as charioteers and cavalry units, and Han dynasty writers attributed the success of numerous battles against the Xiongnu and Western Regions city-states to massed crossbow volleys. The bronze triggers were designed in such a way that they were able to store a large amount of energy within the bow when drawn but was easily shot with little resistance and recoil when the trigger was pulled. The trigger nut also had a long vertical spine that could be used like a primitive rear sight for elevation adjustment, which allowed precision shooting over longer distances. The Qin and Han dynasty-era crossbow was also an early example of a modular design, as the bronze trigger components were also mass-produced with relative precise tolerances so that the parts were interchangeable between different crossbows. The trigger mechanism from one crossbow can be installed into another simply by dropping into a tiller slot of the same specifications and secured with dowel pins. 
Some crossbow designs were also found to be fitted with bronze buttplates and trigger guard. It is clear from surviving inventory lists in Gansu and Xinjiang that the crossbow was greatly favored by the Han dynasty. For example, in one batch of slips there are only two mentions of bows, but thirty mentions of crossbows. Crossbows were mass-produced in state armories with designs improving as time went on, such as the use of a mulberry wood stock and brass. Such crossbows during the Song Dynasty in 1068 AD could pierce a tree at 140 paces. Crossbows were used in numbers as large as 50,000 starting from the Qin dynasty and upwards of several hundred thousand during the Han. According to one authority, the crossbow had become "nothing less than the standard weapon of the Han armies", by the second century BC. Han soldiers were required to arm a crossbow with a draw weight equivalent of to qualify as an entry-level crossbowman, while it was claimed that a few elite troops were capable of arming crossbows with a draw-weight in excess of by the hands-and-feet method. After the Han dynasty, the crossbow lost favor during the Six Dynasties, until it experienced a mild resurgence during the Tang dynasty, under which the ideal expeditionary army of 20,000 included 2,200 archers and 2,000 crossbowmen. Li Jing and Li Quan prescribed 20 percent of the infantry to be armed with crossbows. During the Song dynasty, the crossbow received a huge upsurge in military usage, and often overshadowed the bow 2 to 1 in numbers. During this time period, a stirrup was added for ease of loading. The Song government attempted to restrict the public use of crossbows and sought ways to keep both body armor and crossbows out of civilian ownership. Despite the ban on certain types of crossbows, the weapon experienced an upsurge in civilian usage as both a hunting weapon and pastime. The "romantic young people from rich families, and others who had nothing particular to do" formed crossbow-shooting clubs as a way to pass time. Military crossbows were armed by treading, or basically placing the feet on the bow stave and drawing it using one's arms and back muscles. During the Song dynasty, stirrups were added for ease of drawing and to mitigate damage to the bow. Alternatively, the bow could also be drawn by a belt claw attached to the waist, but this was done lying down, as was the case for all large crossbows. Winch-drawing was used for the large mounted crossbows as seen below, but evidence for its use in Chinese hand-crossbows is scant. Southeast Asia Around the third century BC, King An Dương of Âu Lạc (modern-day northern Vietnam) and (modern-day southern China) commissioned a man named Cao Lỗ (or Cao Thông) to construct a crossbow and christened it "Saintly Crossbow of the Supernaturally Luminous Golden Claw" (nỏ thần), which could kill 300 men in one shot. According to historian Keith Taylor, the crossbow, along with the word for it, seems to have been introduced into China from Austroasiatic peoples in the south around the fourth century BC. However, this is contradicted by crossbow locks found in ancient Chinese Zhou dynasty tombs dating to the 600s BC. In 315 AD, Nu Wen taught the Chams how to build fortifications and use crossbows. The Chams would later give the Chinese crossbows as presents on at least one occasion. Crossbow technology for crossbows with more than one prod was transferred from the Chinese to Champa, which Champa used in its invasion of the Khmer Empire's Angkor in 1177. 
When the Chams sacked Angkor they used the Chinese siege crossbow. The Chinese taught the Chams how to use crossbows and mounted archery in 1171. The Khmer also had double-bow crossbows mounted on elephants, which Michel Jacq-Hergoualc'h suggests were elements of Cham mercenaries in Jayavarman VII's army. The native Montagnards of Vietnam's Central Highlands were also known to have used crossbows, as both a tool for hunting, and later an effective weapon against the Viet Cong during the Vietnam War. Montagnard fighters armed with crossbows proved a highly valuable asset to the US Special Forces operating in Vietnam, and it was not uncommon for the Green Berets to integrate Montagnard crossbowmen into their strike teams. Ancient Greece The earliest crossbow-like weapons in Europe probably emerged around the late 5th century BC when the gastraphetes, an ancient Greek crossbow, appeared. The name means "belly-bow"; the concave withdrawal rest at one end of the stock was placed against the belly of the operator, and he could press it to withdraw the slider before attaching a string to the trigger and loading the bolt; this could store more energy than Greek bows. The device was described by the Greek author Heron of Alexandria in his Belopoeica ("On Catapult-making"), which draws on an earlier account of his compatriot engineer Ctesibius (fl. 285–222 BC). According to Heron, the gastraphetes was the forerunner of the later catapult, which places its invention some unknown time prior to 399 BC. The gastraphetes was a crossbow mounted on a stock divided into a lower and upper section. The lower was a case fixed to the bow, and the upper was a slider which had the same dimensions as the case. It was used in the Siege of Motya in 397 BC. This was a key Carthaginian stronghold in Sicily, as described in the 1st century AD by Heron of Alexandria in his book Belopoeica. A crossbow machine, the oxybeles, was in use from 375 BC to around 340 BC, when the torsion principle replaced the tension crossbow mechanism. Other arrow-shooting machines such as the larger ballista and smaller scorpio from around 338 BC are torsion catapults and are not considered crossbows. Arrow-shooting machines (katapeltai) are briefly mentioned by Aeneas Tacticus in his treatise on siegecraft written around 350 BC. An Athenian inventory from 330 to 329 BC includes catapult bolts with heads and flights. Arrow-shooting machines in action are reported from Philip II's siege of Perinthos in Thrace in 340 BC. At the same time, Greek fortifications began to feature high towers with shuttered windows in the top, presumably to house anti-personnel arrow shooters, as in Aigosthena. Ancient Rome The late 4th century author Vegetius, in his De Re Militari, describes arcubalistarii (crossbowmen) working together with archers and artillerymen. However it is disputed whether arcuballistas were crossbows or torsion-powered weapons. The idea that the arcuballista was a crossbow is due to Vegetius referring separately to it and the manuballista, which was torsion powered. Therefore, if the arcuballista was not like the manuballista, it may have been a crossbow. According to Vegetius these were well-known devices and hence he did not describe them in depth. Joseph Needham argues against the existence of Roman crossbowmen. On the other hand, Arrian's earlier Ars Tactica, from about 136 AD, also mentions 'missiles shot not from a bow but from a machine' and that this machine was used on horseback while in full gallop.
It is presumed that this was a crossbow. The only pictorial evidence of Roman arcuballistas comes from sculptural reliefs in Roman Gaul depicting them in hunting scenes. These are aesthetically similar to both the Greek and Chinese crossbow but it is not clear what kind of release mechanism they used. Archaeological evidence suggests they were similar to the rolling nut mechanism of medieval Europe. Medieval Europe There are essentially no references to the crossbow in Europe from the 5th until the 10th century. There is however a depiction of a crossbow as a hunting weapon on four Pictish stones from early medieval Scotland (6th to 9th centuries): St. Vigeans no. 1, Glenferness, Shandwick, and Meigle. The crossbow reappeared again in 947 as a French weapon during the siege of Senlis and again in 984 at the siege of Verdun. Crossbows were used at the battle of Hastings in 1066, and by the 12th century they had become common battlefield weapons. The earliest extant European crossbow remains were found at Lake Paladru, dated to the 11th century. The crossbow superseded hand bows in many European armies during the 12th century, except in England, where the longbow was more popular. Later crossbows (sometimes referred to as arbalests), utilizing all-steel prods, were able to achieve power close (and sometime superior) to longbows but were more expensive to produce and slower to reload because they required the aid of mechanical devices such as the cranequin or windlass to draw back their extremely heavy bows. Usually these could shoot only two bolts per minute versus twelve or more with a skilled archer, often necessitating the use of a pavise (shield) to protect the operator from enemy fire. Along with polearm weapons made from farming equipment, the crossbow was also a weapon of choice for insurgent peasants such as the Taborites. Genoese crossbowmen were famous mercenaries hired throughout medieval Europe, whilst the crossbow also played an important role in anti-personnel defense of ships. Crossbows were eventually replaced in warfare by gunpowder weapons. Early hand cannons had slower rates of fire and much worse accuracy than contemporary crossbows, but the arquebus (which proliferated in the mid to late 15th century) matched crossbows' rate of fire while being far more powerful. The Battle of Cerignola in 1503 was won by Spain largely through the use of matchlock arquebuses, marking the first time a major battle had been won through the use of hand-held firearms. Later, similar competing tactics would feature harquebusiers or musketeers in formation with pikemen, pitted against cavalry firing pistols or carbines. While the military crossbow had largely been supplanted by firearms on the battlefield by 1525, the sporting crossbow in various forms remained a popular hunting weapon in Europe until the eighteenth century. The accuracy of late 15th century crossbows compares well with modern handguns, based on records of shooting competitions in German cities. Crossbows saw irregular use throughout the rest of the 16th century; for example, Maria Pita's husband was killed by a crossbowman of the English Armada in 1589. Islamic world There are no references to crossbows in Islamic texts earlier than the 14th century. Arabs in general were averse to the crossbow and considered it a foreign weapon. They called it qaus al-rijl (foot-drawn bow), qaus al-zanbūrak (bolt bow) and qaus al-faranjīyah (Frankish bow). 
Although Muslims did have crossbows, there seems to be a split between eastern and western types. Muslims in Spain used the typical European trigger, while eastern Muslim crossbows had a more complex trigger mechanism. Mamluk cavalry used crossbows. Elsewhere and later Oyumi were ancient Japanese artillery pieces that first appeared in the seventh century (during the Asuka period). According to Japanese records, the Oyumi was different from the handheld crossbow also in use during the same time period. A quote from a seventh-century source seems to suggest that the Oyumi may have been able to fire multiple arrows at once: "the Oyumi were lined up and fired at random, the arrows fell like rain". A ninth-century Japanese artisan named Shimaki no Fubito claimed to have improved on a version of the weapon used by the Chinese; his version could rotate and fire projectiles in multiple directions (Karl Friday, Hired Swords: The Rise of Private Warrior Power in Early Japan, Stanford: Stanford University Press, 1992, p. 42). The last recorded use of the Oyumi was in 1189. In West and Central Africa, crossbows served as a scouting weapon and for hunting, with African slaves bringing this technology to natives in America. In the Southern United States, the crossbow was used for hunting and warfare when firearms or gunpowder were unavailable because of economic hardships or isolation. In the north of Northern America, light hunting crossbows were traditionally used by the Inuit. These are technologically similar to the African-derived crossbows, but have a different route of influence. Spanish conquistadors continued to use crossbows in the Americas long after they were replaced in European battlefields by firearms. Only in the 1570s did firearms become completely dominant among the Spanish in the Americas. The French and the British used a crossbow-like Sauterelle (French for grasshopper) in World War I. It was lighter and more portable than the Leach Trench Catapult, but less powerful. It weighed and could throw an F1 grenade or Mills bomb . The Sauterelle replaced the Leach Catapult in British service and was in turn replaced in 1916 by the 2-inch Medium Trench Mortar and Stokes mortar. Early in the war, actual crossbows were pressed into service in small numbers by both French and German troops to launch grenades. A range of crossbows were developed by the Allied powers during the Second World War for assassinations and covert operations, but none appear to have ever been used in the field. A small number of crossbows were built and used by Australian forces in the New Guinea campaign. Modern use Hunting, leisure, and science Crossbows are used for shooting sports and bowhunting in modern archery and for blubber biopsy samples in scientific research. In some countries such as Canada, they may be less heavily regulated than firearms, and thus more popular for hunting; some jurisdictions have bow and/or crossbow only seasons. Military and paramilitary Crossbows are no longer used in battles, but they are still used in some military applications. For example, there is an undated photograph of Peruvian soldiers equipped with crossbows and rope to establish a zip-line in difficult terrain. In Brazil, the CIGS (Jungle Warfare Training Center) also trains soldiers in the use of crossbows. In the United States, SAA International Ltd manufacture a crossbow-launched version of the U.S.
Army type classified Launched Grapnel Hook (LGH), among other mine countermeasure solutions designed for the Middle Eastern theatre. It was evaluated as successful in Cambodia and Bosnia. It is used to probe for and detonate tripwire-initiated mines and booby traps at up to . The concept is similar to the LGH device originally fired from a rifle, as a plastic retrieval line is attached. Reusable up to 20 times, the line can be reeled back in without exposing the user. The device is of particular use in tactical situations where noise discipline is important. In Europe, Barnett International sold crossbows to Serbian forces which, according to The Guardian, were later used "in ambushes and as a counter-sniper weapon" against the Kosovo Liberation Army during the Kosovo War in the areas of Pec and Djakovica, south west of Kosovo. Whitehall launched an investigation, though the Department of Trade and Industry established that not being "on the military list", crossbows were not covered by export restrictions. Paul Beaver of Jane's Defence Publications commented that, "They are not only a silent killer, they also have a psychological effect". On 15 February 2008, Serbian Minister of Defence Dragan Sutanovac was pictured testing a Barnett crossbow during a public exercise of the Serbian Army's Special Forces in Nis, south of Belgrade. Special forces in both Greece and Turkey also continue to employ the crossbow. Spain's Green Berets still use the crossbow as well. In Asia, some Chinese armed forces use crossbows, including the special force Snow Leopard Commando Unit of the People's Armed Police and the People's Liberation Army. One reason for this is the crossbow's ability to stop persons carrying explosives without risk of causing detonation. During the Xinjiang riots of July 2009, crossbows were used by security forces to suppress rioters. The Indian Navy's Marine Commando Force were equipped until the late 1980s with crossbows with cyanide-tipped bolts, as an alternative to suppressed handguns. Comparison to conventional bows With a crossbow, archers could release a draw force far in excess of what they could have handled with a bow. Furthermore, the crossbow could hold the tension indefinitely, whereas even the strongest longbowman could only hold a drawn bow for a short time. The ease of use of a crossbow allows it to be used effectively with little training, while other types of bows take far more skill to shoot accurately. The disadvantage is the greater weight and clumsiness to reload compared to a bow, as well as the slower rate of shooting and the lower efficiency of the acceleration system, but there would be reduced elastic hysteresis, making the crossbow a more accurate weapon. Medieval European crossbows had a much smaller draw length than bows, so that for the same energy to be imparted to the projectile the crossbow had to have a much higher draw weight (a rough numerical illustration is sketched below). A direct comparison between a fast hand-drawn replica crossbow and a longbow, using comparable weapons, shows a 6:10 rate of shooting, or a 4:9 rate within 30 seconds. Legislation Today, the crossbow often has a complicated legal status due to the possibility of lethal use and its similarities to both firearms and bows. While some jurisdictions treat crossbows in the same way as firearms, many others do not require any sort of license to own a crossbow. The legality of using a crossbow for hunting varies widely in different jurisdictions.
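The draw-length versus draw-weight trade-off mentioned in the comparison above can be illustrated with a rough energy estimate. The sketch below assumes an idealized prod whose draw force rises linearly with draw distance, so the stored energy is about half the peak force times the powerstroke; the specific force and powerstroke figures are illustrative assumptions, not values from the article, and the crossbow's lower mechanical efficiency is not modelled.

```python
def stored_energy_joules(draw_force_newtons, powerstroke_metres):
    """Rough stored energy for an idealized linear-draw bow: E = 1/2 * F_max * powerstroke."""
    return 0.5 * draw_force_newtons * powerstroke_metres

# Illustrative figures only: a heavy longbow versus a steel-prodded medieval crossbow.
longbow  = stored_energy_joules(draw_force_newtons=670.0,  powerstroke_metres=0.55)
crossbow = stored_energy_joules(draw_force_newtons=4000.0, powerstroke_metres=0.15)

print(f"Longbow  (~670 N over 0.55 m): ~{longbow:.0f} J stored")
print(f"Crossbow (~4000 N over 0.15 m): ~{crossbow:.0f} J stored")
# Despite a draw weight roughly six times higher, the crossbow's much shorter
# powerstroke keeps its stored energy in the same general range as the longbow's.
```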
Technology
Projectile weapons
null
6949
https://en.wikipedia.org/wiki/Carbamazepine
Carbamazepine
Carbamazepine, sold under the brand name Tegretol among others, is an anticonvulsant medication used in the treatment of epilepsy and neuropathic pain. It is used as an adjunctive treatment in schizophrenia along with other medications and as a second-line agent in bipolar disorder. Carbamazepine appears to work as well as phenytoin and valproate for focal and generalized seizures. It is not effective for absence or myoclonic seizures. Carbamazepine was discovered in 1953 by Swiss chemist Walter Schindler. It was first marketed in 1962. It is available as a generic medication. It is on the World Health Organization's List of Essential Medicines. In 2020, it was the 185th most commonly prescribed medication in the United States, with more than 2million prescriptions. Photoswitchable analogues of carbamazepine have been developed to control its pharmacological activity locally and on demand using light (photopharmacology), with the purpose of reducing the adverse systemic effects of the drug. One of these light-regulated compounds (carbadiazocine, based on a bridged azobenzene or diazocine) has been shown to produce analgesia with noninvasive illumination in vivo in a rat model of neuropathic pain. Medical uses Carbamazepine is typically used for the treatment of seizure disorders and neuropathic pain. It is used off-label as a second-line treatment for bipolar disorder and in combination with an antipsychotic in some cases of schizophrenia when treatment with a conventional antipsychotic alone has failed. However, evidence does not support its usage for schizophrenia. It is not effective for absence seizures or myoclonic seizures. Although carbamazepine may have a similar effectiveness (as measured by people continuing to use a medication) and efficacy (as measured by the medicine reducing seizure recurrence and improving remission) when compared to phenytoin and valproate, choice of medication should be evaluated on an individual basis as further research is needed to determine which medication is most helpful for people with newly-onset seizures. In the United States, carbamazepine is indicated for the treatment of epilepsy (including partial seizures, generalized tonic-clonic seizures and mixed seizures), and trigeminal neuralgia. Carbamazepine is the only medication that is approved by the Food and Drug Administration for the treatment of trigeminal neuralgia. As of 2014, a controlled release formulation was available for which there is tentative evidence showing fewer side effects and unclear evidence with regard to whether there is a difference in efficacy. It has also been shown to improve symptoms of "typewriter tinnitus", a type of tinnitus caused by the neurovascular compression of the cochleovestibular nerve. Adverse effects In the US, the label for carbamazepine contains warnings concerning: effects on the body's production of red blood cells, white blood cells, and platelets: rarely, there are major effects of aplastic anemia and agranulocytosis reported and more commonly, there are minor changes such as decreased white blood cell or platelet counts, but these do not progress to more serious problems. increased risks of suicide increased risks of hyponatremia and SIADH risk of seizures, if the person stops taking the drug abruptly risks to the fetus in women who are pregnant, specifically congenital malformations like spina bifida, and developmental disorders. 
Pancreatitis Hepatitis Dizziness Bone marrow suppression Stevens–Johnson syndrome Common adverse effects may include drowsiness, dizziness, headaches and migraines, ataxia, nausea, vomiting, and/or constipation. Alcohol use while taking carbamazepine may lead to enhanced depression of the central nervous system. Less common side effects may include increased risk of seizures in people with mixed seizure disorders, abnormal heart rhythms, and blurry or double vision. Also, rare case reports of an auditory side effect have been made, whereby patients perceive sounds about a semitone lower than previously; this unusual side effect is not noticed by most people, and disappears after the person stops taking carbamazepine. Pharmacogenetics Serious skin reactions such as Stevens–Johnson syndrome (SJS) or toxic epidermal necrolysis (TEN) due to carbamazepine therapy are more common in people with a particular human leukocyte antigen gene-variant (allele), HLA-B*1502. Odds ratios for the development of SJS or TEN in people who carry the allele can be in the double, triple or even quadruple digits, depending on the population studied. HLA-B*1502 occurs almost exclusively in people with ancestry across broad areas of Asia, but has a very low or absent frequency in European, Japanese, Korean and African populations. However, the HLA-A*31:01 allele has been shown to be a strong predictor of both mild and severe adverse reactions, such as the DRESS form of severe cutaneous reactions, to carbamazepine among Japanese, Chinese, Korean, and European populations. It has been suggested that carbamazepine acts as a potent antigen that binds to the antigen-presenting region of HLA-B*1502, triggering a sustained activation signal in immature CD8 T cells and thus resulting in widespread cytotoxic reactions such as SJS/TEN. Interactions Carbamazepine has a potential for drug interactions. Drugs that decrease the breakdown of carbamazepine or otherwise increase its levels include erythromycin, cimetidine, propoxyphene, and calcium channel blockers. Grapefruit juice raises the bioavailability of carbamazepine by inhibiting the enzyme CYP3A4 in the gut wall and in the liver. Lower levels of carbamazepine are seen when administered with phenobarbital, phenytoin, or primidone, which can result in breakthrough seizure activity. Valproic acid and valnoctamide both inhibit microsomal epoxide hydrolase (mEH), the enzyme responsible for the breakdown of the active metabolite carbamazepine-10,11-epoxide into inactive metabolites. By inhibiting mEH, valproic acid and valnoctamide cause a build-up of the active metabolite, prolonging the effects of carbamazepine and delaying its excretion. Carbamazepine, as an inducer of cytochrome P450 enzymes, may increase clearance of many drugs, decreasing their concentration in the blood to subtherapeutic levels and reducing their desired effects. Drugs that are more rapidly metabolized with carbamazepine include warfarin, lamotrigine, phenytoin, theophylline, valproic acid, many benzodiazepines, and methadone. Carbamazepine also increases the metabolism of the hormones in birth control pills and can reduce their effectiveness, potentially leading to unexpected pregnancies. Pharmacology Mechanism of action Carbamazepine is a sodium channel blocker. It binds preferentially to voltage-gated sodium channels in their inactive conformation, which prevents repetitive and sustained firing of an action potential.
Carbamazepine has effects on serotonin systems but the relevance to its antiseizure effects is uncertain. There is evidence that it is a serotonin releasing agent and possibly even a serotonin reuptake inhibitor. It has been suggested that carbamazepine can also block voltage-gated calcium channels, which will reduce neurotransmitter release. Pharmacokinetics Carbamazepine is relatively slowly but practically completely absorbed after administration by mouth. Highest concentrations in the blood plasma are reached after 4 to 24 hours depending on the dosage form. Slow release tablets result in about 15% lower absorption and 25% lower peak plasma concentrations than ordinary tablets, as well as in less fluctuation of the concentration, but not in significantly lower minimum concentrations. In the circulation, carbamazepine itself comprises 20 to 30% of total residues. The remainder is in the form of metabolites; 70 to 80% of residues is bound to plasma proteins. Concentrations in breast milk are 25 to 60% of those in the blood plasma. Carbamazepine itself is not pharmacologically active. It is activated, mainly by CYP3A4, to carbamazepine-10,11-epoxide, which is solely responsible for the drug's anticonvulsant effects. The epoxide is then inactivated by microsomal epoxide hydrolase (mEH) to carbamazepine-trans-10,11-diol and further to its glucuronides. Other metabolites include various hydroxyl derivatives and carbamazepine-N-glucuronide. The plasma half-life is about 35 to 40 hours when carbamazepine is given as single dose, but it is a strong inducer of liver enzymes, and the plasma half-life shortens to about 12 to 17 hours when it is given repeatedly. The half-life can be further shortened to 9–10 hours by other enzyme inducers such as phenytoin or phenobarbital. About 70% are excreted via the urine, almost exclusively in form of its metabolites, and 30% via the faeces. History Carbamazepine was discovered by chemist Walter Schindler at J.R. Geigy AG (now part of Novartis) in Basel, Switzerland, in 1953. It was first marketed as a drug to treat epilepsy in Switzerland in 1963 under the brand name Tegretol; its use for trigeminal neuralgia (formerly known as tic douloureux) was introduced at the same time. It has been used as an anticonvulsant and antiepileptic in the United Kingdom since 1965, and has been approved in the United States since 1968. Carbamazepine was studied for bipolar disorder throughout the 1970s. Society and culture Environmental impact Carbamazepine and its bio-transformation products have been detected in wastewater treatment plant effluent and in streams receiving treated wastewater. Field and laboratory studies have been conducted to understand the accumulation of carbamazepine in food plants grown in soil treated with sludge, which vary with respect to the concentrations of carbamazepine present in sludge and in the concentrations of sludge in the soil. Taking into account only studies that used concentrations commonly found in the environment, a 2014 review concluded that "the accumulation of carbamazepine into plants grown in soil amended with biosolids poses a de minimis risk to human health according to the approach." Brand names Carbamazepine is available worldwide under many brand names including Tegretol. Research
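The pharmacokinetic half-life figures quoted above can be turned into a simple numerical illustration of autoinduction. The sketch below assumes straightforward first-order (exponential) elimination and uses a 36-hour and a 15-hour half-life purely as representative points within the ranges given in the text; it is not a dosing model.

```python
def remaining_fraction(hours_elapsed, half_life_hours):
    """Fraction of a dose remaining after first-order (exponential) elimination."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# Roughly 35-40 h after a single dose versus roughly 12-17 h once carbamazepine
# has induced its own metabolism; 36 h and 15 h are illustrative choices.
for label, t_half in (("single dose", 36.0), ("chronic dosing", 15.0)):
    frac = remaining_fraction(24.0, t_half)
    print(f"{label:>14}: t1/2 = {t_half:.0f} h, about {frac * 100:.0f}% of a dose left after 24 h")
```

The shortened half-life on repeated dosing is why, other things being equal, chronic therapy clears each dose considerably faster than a first dose.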
Biology and health sciences
Specific drugs
Health
6956
https://en.wikipedia.org/wiki/Conservation%20law
Conservation law
In physics, a conservation law states that a particular measurable property of an isolated physical system does not change as the system evolves over time. Exact conservation laws include conservation of mass-energy, conservation of linear momentum, conservation of angular momentum, and conservation of electric charge. There are also many approximate conservation laws, which apply to such quantities as mass, parity, lepton number, baryon number, strangeness, hypercharge, etc. These quantities are conserved in certain classes of physics processes, but not in all. A local conservation law is usually expressed mathematically as a continuity equation, a partial differential equation which gives a relation between the amount of the quantity and the "transport" of that quantity. It states that the amount of the conserved quantity at a point or within a volume can only change by the amount of the quantity which flows in or out of the volume. From Noether's theorem, every differentiable symmetry leads to a conservation law. Other conserved quantities can exist as well. Conservation laws as fundamental laws of nature Conservation laws are fundamental to our understanding of the physical world, in that they describe which processes can or cannot occur in nature. For example, the conservation law of energy states that the total quantity of energy in an isolated system does not change, though it may change form. In general, the total quantity of the property governed by that law remains unchanged during physical processes. With respect to classical physics, conservation laws include conservation of energy, mass (or matter), linear momentum, angular momentum, and electric charge. With respect to particle physics, particles cannot be created or destroyed except in pairs, where one is ordinary and the other is an antiparticle. With respect to symmetries and invariance principles, three special conservation laws have been described, associated with inversion or reversal of space, time, and charge. Conservation laws are considered to be fundamental laws of nature, with broad application in physics, as well as in other fields such as chemistry, biology, geology, and engineering. Most conservation laws are exact, or absolute, in the sense that they apply to all possible processes. Some conservation laws are partial, in that they hold for some processes but not for others. One particularly important result concerning conservation laws is Noether's theorem, which states that there is a one-to-one correspondence between each one of them and a differentiable symmetry of the Universe. For example, the conservation of energy follows from the uniformity of time and the conservation of angular momentum arises from the isotropy of space, i.e. because there is no preferred direction of space. Notably, there is no conservation law associated with time-reversal, although more complex conservation laws combining time-reversal with other symmetries are known. Exact laws A partial listing of physical conservation equations due to symmetry that are said to be exact laws, or more precisely have never been proven to be violated: Another exact symmetry is CPT symmetry, the simultaneous inversion of space and time coordinates, together with swapping all particles with their antiparticles; however being a discrete symmetry Noether's theorem does not apply to it. Accordingly, the conserved quantity, CPT parity, can usually not be meaningfully calculated or determined. Approximate laws There are also approximate conservation laws. 
These are approximately true in particular situations, such as low speeds, short time scales, or certain interactions.
Conservation of mechanical energy
Conservation of mass (approximately true for nonrelativistic speeds)
Conservation of baryon number (See chiral anomaly and sphaleron)
Conservation of lepton number (In the Standard Model)
Conservation of flavor (violated by the weak interaction)
Conservation of strangeness (violated by the weak interaction)
Conservation of space-parity (violated by the weak interaction)
Conservation of charge-parity (violated by the weak interaction)
Conservation of time-parity (violated by the weak interaction)
Conservation of CP parity (violated by the weak interaction); in the Standard Model, this is equivalent to conservation of time-parity.

Global and local conservation laws
The total amount of some conserved quantity in the universe could remain unchanged if an equal amount were to appear at one point A and simultaneously disappear from another separate point B. For example, an amount of energy could appear on Earth without changing the total amount in the Universe if the same amount of energy were to disappear from some other region of the Universe. This weak form of "global" conservation is really not a conservation law because it is not Lorentz invariant, so phenomena like the above do not occur in nature. Due to special relativity, if the appearance of the energy at A and disappearance of the energy at B are simultaneous in one inertial reference frame, they will not be simultaneous in other inertial reference frames moving with respect to the first. In a moving frame one will occur before the other; the energy at A will appear either before or after the energy at B disappears. In both cases, during the interval energy will not be conserved. A stronger form of conservation law requires that, for the amount of a conserved quantity at a point to change, there must be a flow, or flux, of the quantity into or out of the point. For example, the amount of electric charge at a point is never found to change without an electric current into or out of the point that carries the difference in charge. Since it only involves continuous local changes, this stronger type of conservation law is Lorentz invariant; a quantity conserved in one reference frame is conserved in all moving reference frames. This is called a local conservation law. Local conservation also implies global conservation: the total amount of the conserved quantity in the Universe remains constant. All of the conservation laws listed above are local conservation laws. A local conservation law is expressed mathematically by a continuity equation, which states that the change in the quantity in a volume is equal to the total net "flux" of the quantity through the surface of the volume. The following sections discuss continuity equations in general.

Differential forms
In continuum mechanics, the most general form of an exact conservation law is given by a continuity equation. For example, conservation of electric charge \(q\) is
\[ \frac{\partial \rho}{\partial t} = -\nabla \cdot \mathbf{j} \]
where \(\nabla\cdot\) is the divergence operator, \(\rho\) is the density of \(q\) (amount per unit volume), \(\mathbf{j}\) is the flux of \(q\) (amount crossing a unit area in unit time), and \(t\) is time.
If we assume that the motion \(\mathbf{u}\) of the charge is a continuous function of position and time, then
\[ \mathbf{j} = \rho \mathbf{u}, \qquad \frac{\partial \rho}{\partial t} = -\nabla \cdot (\rho \mathbf{u}). \]
In one space dimension this can be put into the form of a homogeneous first-order quasilinear hyperbolic equation:
\[ y_t + A(y)\, y_x = 0 \]
where the dependent variable \(y\) is called the density of a conserved quantity, \(A(y)\) is called the current Jacobian, and the subscript notation for partial derivatives has been employed. The more general inhomogeneous case:
\[ y_t + A(y)\, y_x = s \]
is not a conservation equation but the general kind of balance equation describing a dissipative system. The dependent variable \(y\) is called a nonconserved quantity, and the inhomogeneous term \(s(y,x,t)\) is the source, or dissipation. For example, balance equations of this kind are the momentum and energy Navier-Stokes equations, or the entropy balance for a general isolated system.

In the one-dimensional space a conservation equation is a first-order quasilinear hyperbolic equation that can be put into the advection form:
\[ y_t + a(y)\, y_x = 0 \]
where the dependent variable \(y(x,t)\) is called the density of the conserved (scalar) quantity, and \(a(y)\) is called the current coefficient, usually corresponding to the partial derivative in the conserved quantity of a current density \(j(y)\) of the conserved quantity:
\[ a(y) = j'(y). \]
In this case, since the chain rule applies:
\[ j(y)_x = j'(y)\, y_x = a(y)\, y_x, \]
the conservation equation can be put into the current density form:
\[ y_t + j(y)_x = 0. \]
In a space with more than one dimension the former definition can be extended to an equation that can be put into the form:
\[ y_t + \mathbf{a}(y) \cdot \nabla y = 0 \]
where the conserved quantity is \(y(\mathbf{r},t)\), \(\cdot\) denotes the scalar product, \(\nabla\) is the nabla operator, here indicating a gradient, and \(\mathbf{a}(y)\) is a vector of current coefficients, analogously corresponding to the divergence of a vector current density \(\mathbf{j}(y)\) associated to the conserved quantity:
\[ y_t + \nabla \cdot \mathbf{j}(y) = 0. \]
This is the case for the continuity equation:
\[ \rho_t + \nabla \cdot (\rho \mathbf{u}) = 0. \]
Here the conserved quantity is the mass, with density \(\rho(\mathbf{r},t)\) and current density \(\rho\mathbf{u}\), identical to the momentum density, while \(\mathbf{u}(\mathbf{r},t)\) is the flow velocity. In the general case a conservation equation can be also a system of this kind of equations (a vector equation) in the form:
\[ \mathbf{y}_t + \mathbf{A}(\mathbf{y}) \cdot \nabla \mathbf{y} = \mathbf{0} \]
where \(\mathbf{y}\) is called the conserved (vector) quantity, \(\nabla\mathbf{y}\) is its gradient, \(\mathbf{0}\) is the zero vector, and \(\mathbf{A}(\mathbf{y})\) is called the Jacobian of the current density. In fact, as in the former scalar case, also in the vector case \(\mathbf{A}(\mathbf{y})\) usually corresponds to the Jacobian of a current density matrix \(\mathbf{J}(\mathbf{y})\):
\[ \mathbf{A}(\mathbf{y}) = \frac{\partial \mathbf{J}(\mathbf{y})}{\partial \mathbf{y}}, \]
and the conservation equation can be put into the form:
\[ \mathbf{y}_t + \nabla \cdot \mathbf{J}(\mathbf{y}) = \mathbf{0}. \]
For example, this is the case for Euler equations (fluid dynamics). In the simple incompressible case they are:
\[ \nabla \cdot \mathbf{u} = 0, \qquad \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} + \nabla s = \mathbf{0}, \]
where \(\mathbf{u}\) is the flow velocity vector, with components \(u_1, \ldots, u_N\) in an N-dimensional space, and \(s\) is the specific pressure (pressure per unit density) giving the source term. It can be shown that the conserved (vector) quantity and the current density matrix for these equations are respectively:
\[ \mathbf{y} = \begin{pmatrix} 1 \\ \mathbf{u} \end{pmatrix}, \qquad \mathbf{J} = \begin{pmatrix} \mathbf{u} \\ \mathbf{u} \otimes \mathbf{u} + s\,\mathbf{I} \end{pmatrix}, \]
where \(\otimes\) denotes the outer product.

Integral and weak forms
Conservation equations can usually also be expressed in integral form: the advantage of the latter is substantially that it requires less smoothness of the solution, which paves the way to the weak form, extending the class of admissible solutions to include discontinuous solutions. By integrating in any space-time domain \(\Omega\) the current density form in 1-D space:
\[ y_t + j(y)_x = 0 \]
and by using Green's theorem, the integral form is:
\[ \oint_{\partial \Omega} \left[ y\, dx - j(y)\, dt \right] = 0, \]
where the line integration is performed along the boundary of the domain, in an anticlockwise manner. In a similar fashion, for the scalar multidimensional space, the integral form is:
\[ \oint_{\partial \Omega} \big( y,\ \mathbf{j}(y) \big) \cdot \mathbf{n}\, d\sigma = 0, \]
where the integration is performed over the boundary of the space-time domain and \(\mathbf{n}\) is its outward normal. Moreover, by defining a test function φ(r,t) continuously differentiable both in time and space with compact support, the weak form can be obtained by pivoting on the initial condition.
In 1-D space it is:
\[ \int_0^{\infty} \int_{-\infty}^{\infty} \left[ y\, \varphi_t + j(y)\, \varphi_x \right] dx\, dt + \int_{-\infty}^{\infty} y_0(x)\, \varphi(x,0)\, dx = 0. \]
In the weak form all the partial derivatives of the density and current density have been passed on to the test function, which with the former hypothesis is sufficiently smooth to admit these derivatives.
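A discrete analogue of the current density form above is what makes finite-volume methods conservative: each cell is updated by the difference of fluxes through its faces, so the total of the conserved quantity over a periodic domain cannot change. The following Python sketch, a minimal first-order upwind discretisation of linear advection with an arbitrary grid size and speed, illustrates that principle rather than any particular method from the text:

import numpy as np

nx = 200                                # number of cells (arbitrary)
c = 1.0                                 # constant advection speed (arbitrary)
dx = 1.0 / nx
dt = 0.4 * dx / c                       # CFL-limited time step
x = (np.arange(nx) + 0.5) * dx
y = np.exp(-200.0 * (x - 0.3) ** 2)     # initial density profile

total_before = y.sum() * dx
for _ in range(500):
    flux = c * y                        # upwind flux for c > 0
    y = y - dt / dx * (flux - np.roll(flux, 1))   # flux-difference update, periodic boundary
total_after = y.sum() * dx

print(total_before, total_after)        # equal up to floating-point round-off

Because every face flux is added to one cell and subtracted from its neighbour, the discrete total can only change through the boundaries, which is exactly the content of the local conservation law.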
Physical sciences
Physics basics: General
Physics
6966
https://en.wikipedia.org/wiki/Chinese%20calendar
Chinese calendar
The traditional Chinese calendar, dating back to the Han dynasty, is a lunisolar calendar that blends solar, lunar, and other cycles for social and agricultural purposes. While modern China primarily uses the Gregorian calendar for official purposes, the traditional calendar remains culturally significant. It determines the timing of Chinese New Year with traditions like the twelve animals of the Chinese Zodiac still widely observed. The traditional Chinese calendar uses the sexagenary cycle, a repeating system of Heavenly Stems and Earthly Branches, to mark years, months, and days. This system, along with astronomical observations and mathematical calculations, was developed to align solar and lunar cycles, though some approximations are necessary due to the natural differences between these cycles. Over centuries, the calendar was refined through advancements in astronomy and horology, with dynasties introducing variations to improve accuracy and meet cultural or political needs. While the Gregorian calendar has become now standard for civic daily use in China, the traditional lunisolar calendar continues to influence festivals, cultural practices, and zodiac-based customs. Beyond China, it has shaped other East Asian calendars, including the Korean, Vietnamese, and Japanese lunar systems, each adapting the same lunisolar principles while integrating local customs and terminology. Epochs, or fixed starting points for year counting, have played an essential role in the Chinese calendar's structure. Some epochs are based on historical figures, such as the inauguration of the Yellow Emperor (Huangdi), while others marked the rise of dynasties or significant political shifts. This system allowed for the numbering of years based on regnal eras, with the start of a ruler's reign often resetting the count. The Chinese calendar also tracks time in smaller units, including months, days, and double-hour periods called shichen. These timekeeping methods have influenced broader fields of horology, with some principles, such as precise time subdivisions, still evident in modern scientific timekeeping. The continued use of the calendar today highlights its enduring cultural, historical, and scientific significance. Etymology The name of calendar is in , and was represented in earlier character forms variants (), and ultimately derived from an ancient form (秝). The ancient form of the character consists of two stalks of rice plant (), arranged in parallel. This character represents the order in space and also the order in time. As its meaning became complex, the modern dedicated character () was created to represent the meaning of calendar. Maintaining the correctness of calendars was an important task to maintain the authority of rulers, being perceived as a way to measure the ability of a ruler. For example, someone seen as a competent ruler would foresee the coming of seasons and prepare accordingly. This understanding was also relevant in predicting abnormalities of the Earth and celestial bodies, such as lunar and solar eclipses. The significant relationship between authority and timekeeping helps to explain why there are 102 calendars in Chinese history, trying to predict the correct courses of sun, moon and stars, and marking good time and bad time. Each calendar is named as and recorded in a dedicated calendar section in history books of different eras. The last one in imperial era was . A ruler would issue an almanac before the commencement of each year. 
There were private almanac issuers, usually illegal, when a ruler lost control of some territories. Various modern Chinese calendar names resulted from the struggle between the government's introduction of the Gregorian calendar and the public's preservation of customs during the era of the Republic of China. The government wanted to abolish the Chinese calendar to force everyone to use the Gregorian calendar, and even abolished the Lunar New Year, but faced great opposition. The public needed the astronomical Chinese calendar to do things at a proper time, for example farming and fishing; also, a wide spectrum of festivals and customary observances have been based on the calendar. The government finally compromised and rebranded it as the agricultural calendar in 1947, relegating the calendar to merely agricultural use.

Epochs
An epoch is a point in time chosen as the origin of a particular calendar era, thus serving as a reference point from which subsequent time or dates are measured. The use of epochs in the Chinese calendar system allows for a chronological starting point from which subsequent dates can be continuously numbered. Various epochs have been used. Nomenclature similar to that of the Christian era has also occasionally been used. No reference date is universally accepted. The most popular is the Gregorian calendar (). During the 17th century, the Jesuit missionaries tried to determine the epochal year of the Chinese calendar. In his Sinicae historiae decas prima (published in Munich in 1658), Martino Martini (1614–1661) dated the Yellow Emperor's ascension at 2697 BCE and began the Chinese calendar with the reign of Fuxi (which, according to Martini, began in 2952 BCE). Philippe Couplet's 1686 Chronological table of Chinese monarchs (Tabula chronologica monarchiae sinicae) gave the same date for the Yellow Emperor. The Jesuits' dates provoked interest in Europe, where they were used for comparison with Biblical chronology. Modern Chinese chronology has generally accepted Martini's dates, except that it usually places the reign of the Yellow Emperor at 2698 BCE and omits his predecessors Fuxi and Shennong as "too legendary to include". Publications began using the estimated birth date of the Yellow Emperor as the first year of the Han calendar in 1903, with newspapers and magazines proposing different dates. Jiangsu province counted 1905 as the year 4396 (using a year 1 of 2491 BCE, and implying that CE is ), and the newspaper Ming Pao () reckoned 1905 as 4603 (using a year 1 of 2698 BCE, and implying that CE is ). Liu Shipei (, 1884–1919) created the Yellow Emperor Calendar (), with year 1 as the birth of the emperor (which he determined as 2711 BCE, implying that CE is ). There is no evidence that this calendar was used before the 20th century. Liu calculated that the 1900 international expedition sent by the Eight-Nation Alliance to suppress the Boxer Rebellion entered Beijing in the 4611th year of the Yellow Emperor. Taoists later adopted the Yellow Emperor Calendar and named it the Tao Calendar (). On 2 January 1912, Sun Yat-sen announced changes to the official calendar and era. 1 January was 14 Shíyīyuè of Huángdì year 4609, assuming a year 1 of 2698 BCE, making CE year . Many overseas Chinese communities like San Francisco's Chinatown adopted the change. The modern Chinese standard calendar uses the epoch of the Gregorian calendar, which is on 1 January of the year 1 CE.
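The arithmetic behind these competing epochs is simple enough to sketch. The following Python fragment assumes the usual convention of counting with no year zero and handles CE years only; the specific epochs are those mentioned above:

def epoch_year(gregorian_ce_year, epoch_bce):
    # Year number for a CE year, counting year 1 from `epoch_bce` BCE
    # with no year zero, so the offset is simply epoch_bce + CE year.
    if gregorian_ce_year < 1:
        raise ValueError("this sketch handles CE years only")
    return epoch_bce + gregorian_ce_year

print(epoch_year(1905, 2698))   # 4603, the Ming Pao reckoning
print(epoch_year(1905, 2491))   # 4396, the Jiangsu reckoning
print(epoch_year(1911, 2698))   # 4609, the lunisolar year in which 1 January 1912 fell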
Calendar types Lunisolar Lunisolar calendars involve correlations of the cycles of the sun (solar) and the moon (lunar). Solar and agricultural A solar calendar (also called the Tung Shing, the Yellow Calendar or Imperial Calendar, both alluding to Yellow Emperor) keeps track of the seasons as the earth and the sun move in the solar system relatively to each other. A purely solar calendar may be useful in planning times for agricultural activities such as planting and harvesting. Solar calendars tend to use astronomically observable points of reference such as equinoxes and solstices, events which may be approximately predicted using fundamental methods of observation and basic mathematical analysis. Modern Chinese calendar and horology The topic of the Chinese calendar also includes variations of the modern Chinese calendar, influenced by the Gregorian calendar. Variations include methodologies of the People's Republic of China and Taiwan. Modern calendars In China, the modern calendar is defined by the Chinese national standard GB/T 33661–2017, "Calculation and Promulgation of the Chinese Calendar", issued by the Standardization Administration of China on 12 May 2017. Influence of Gregorian calendar Although modern-day China uses the Gregorian calendar, the traditional Chinese calendar governs holidays, such as the Chinese New Year and Lantern Festival, in both China and overseas Chinese communities. It also provides the traditional Chinese nomenclature of dates within a year which people use to select auspicious days for weddings, funerals, moving or starting a business. The evening state-run news program Xinwen Lianbo in the People's Republic of China continues to announce the months and dates in both the Gregorian and the traditional lunisolar calendar. History The Chinese calendar system has a long history, which has traditionally been associated with specific dynastic periods. Various individual calendar types have been developed with different names. In terms of historical development, some of the calendar variations are associated with dynastic changes along a spectrum beginning with a prehistorical/mythological time to and through well attested historical dynastic periods. Many individuals have been associated with the development of the Chinese calendar, including researchers into underlying astronomy; and, furthermore, the development of instruments of observation are historically important. Influences from India, Islam, and Jesuits also became significant. Phenology Early calendar systems often were closely tied to natural phenomena. Phenology is the study of periodic events in biological life cycles and how these are influenced by seasonal and interannual variations in climate, as well as habitat factors (such as elevation). The plum-rains season (), the rainy season in late spring and early summer, begins on the first bǐng day after Mangzhong () and ends on the first wèi day after Xiaoshu (). The Three Fu () are three periods of hot weather, counted from the first gēng day after the summer solstice. The first fu () is 10 days long. The mid-fu () is 10 or 20 days long. The last fu () is 10 days from the first gēng day after the beginning of autumn. The Shujiu cold days () are the 81 days after the winter solstice (divided into nine sets of nine days), and are considered the coldest days of the year. Each nine-day unit is known by its order in the set, followed by "nine" (). In traditional Chinese culture, "nine" represents the infinity, which is also the number of "Yang". 
According to one belief nine times accumulation of "Yang" gradually reduces the "Yin", and finally the weather becomes warm. Names of months Lunar months were originally named according to natural phenomena. Current naming conventions use numbers as the month names. Every month is also associated with one of the twelve Earthly Branches. Gregorian dates are approximate and should be used with caution. Many years have intercalary months. Chinese astronomy The Chinese calendar has been a development involving much observation and calculation of the apparent movements of the Sun, Moon, planets, and stars, as observed from Earth. Chinese astronomers Many Chinese astronomers have contributed to the development of the Chinese calendar. Many were of the scholarly or shi class (), including writers of history, such as Sima Qian. Notable Chinese astronomers who have contributed to the development of the calendar include Gan De, Shi Shen, and Zu Chongzhi Technology Early technological developments aiding in calendar development include the development of the gnomon. Later technological developments useful to the calendar system include naming, numbering and mapping of the sky, the development of analog computational devices such as the armillary sphere and the water clock, and the establishment of observatories. Chinese calendar names Ancient six calendars From the Warring States period (ending in 221 BCE), six especially significant calendar systems are known to have begun to be developed. Later on, during their future course in history, the modern names for the ancient six calendars were also developed, and can be translated into English as Huangdi, Yin, Zhou, Xia, Zhuanxu, and Lu. Calendar variations There are various Chinese terms for calendar variations including: Nongli Calendar (traditional Chinese: 農曆; simplified Chinese: 农历; pinyin: nónglì; lit. 'agricultural calendar') Jiuli Calendar (traditional Chinese: 舊曆; simplified Chinese: 旧历; pinyin: jiùlì; Jyutping: Gau6 Lik6; lit.'former calendar') Laoli Calendar (traditional Chinese: 老曆; simplified Chinese: 老历; pinyin: lǎolì; lit. 'old calendar') Zhongli Calendar (traditional Chinese: 中曆; simplified Chinese: 中历; pinyin: zhōnglì; Jyutping: zung1 lik6; lit. 'Chinese calendar') Huali Calendar (traditional Chinese: 華曆; simplified Chinese: 华历; pinyin: huálì; Jyutping: waa4 lik6; lit. 'Chinese calendar') Solar calendars The traditional Chinese calendar was developed between 771 BCE and 476 BCE, during the Spring and Autumn period of the Eastern Zhou dynasty. Solar calendars were used before the Zhou dynasty period, along with the basic sexagenary system. Five-elements calendar One version of the solar calendar is the five-elements calendar (), which derives from the Wu Xing. A 365-day year was divided into five phases of 73 days, with each phase corresponding to a Day 1 Wu Xing element. A phase began with a governing-element day (), followed by six 12-day weeks. Each phase consisted of two three-week months, making each year ten months long. Years began on a jiǎzǐ () day (and a 72-day wood phase), followed by a bǐngzǐ day () and a 72-day fire phase; a wùzǐ () day and a 72-day earth phase; a gēngzǐ () day and a 72-day metal phase, and a rénzǐ day () followed by a water phase. Other days were tracked using the Yellow River Map (He Tu). Four-quarters calendar Another version is a four-quarters calendar (, or ). The weeks were ten days long, with one month consisting of three weeks. 
A year had 12 months, with a ten-day week intercalated in summer as needed to keep up with the tropical year. The 10 Heavenly Stems and 12 Earthly Branches were used to mark days. Balanced calendar A third version is the balanced calendar (). A year was 365.25 days, and a month was 29.5 days. After every 16th month, a half-month was intercalated. According to oracle bone records, the Shang dynasty calendar ( BCE) was a balanced calendar with 12 to 14 months in a year; the month after the winter solstice was Zhēngyuè. Lunisolar calendars by dynasty Six ancient calendars Modern historical knowledge and records are limited for the earlier calendars. These calendars are known as the six ancient calendars (), or quarter-remainder calendars, (), since all calculate a year as days long. Months begin on the day of the new moon, and a year has 12 or 13 months. Intercalary months (a 13th month) are added to the end of the year. The Qiang and Dai calendars are modern versions of the Zhuanxu calendar, used by mountain peoples. Zhou dynasty The first lunisolar calendar was the Zhou calendar (), introduced under the Zhou dynasty (1046 BCE – 256 BCE). This calendar sets the beginning of the year at the day of the new moon before the winter solstice. Competing Warring states calendars Several competing lunisolar calendars were also introduced as Zhou devolved into the Warring States, especially by states fighting Zhou control during the Warring States period (perhaps 475 BCE - 221 BCE). The state of Lu issued its own Lu calendar(). Jin issued the Xia calendar () with a year beginning on the day of the new moon nearest the March equinox. Qin issued the Zhuanxu calendar (), with a year beginning on the day of the new moon nearest the winter solstice. Song's Yin calendar () began its year on the day of the new moon after the winter solstice. Qin and early Han dynasties After Qin Shi Huang unified China under the Qin dynasty in 221 BCE, the Qin calendar () was introduced. It followed most of the rules governing the Zhuanxu calendar, but the month order was that of the Xia calendar; the year began with month 10 and ended with month 9, analogous to a Gregorian calendar beginning in October and ending in September. The intercalary month, known as the second Jiǔyuè (), was placed at the end of the year. The Qin calendar was used going into the Han dynasty. Han dynasty Tàichū calendar Emperor Wu of Han introduced reforms in the seventh of the eleven named eras of his reign, Tàichū (), 104 BCE – 101 BCE. His Tàichū Calendar () defined a solar year as days (365;06:00:14.035), and the lunar month had days (29;12:44:44.444). Since the 19 years cycle used for the 7 additional months was taken as an exact one, and not as an approximation. This calendar introduced the 24 solar terms, dividing the year into 24 equal parts of 15° each. Solar terms were paired, with the 12 combined periods known as climate terms. The first solar term of the period was known as a pre-climate (节气), and the second was a mid-climate (中气). Months were named for the mid-climate to which they were closest, and a month without a mid-climate was an intercalary month. The Taichu calendar established a framework for traditional calendars, with later calendars adding to the basic formula. Northern and Southern Dynasties Dàmíng calendar The Dàmíng Calendar (), created in the Northern and Southern Dynasties by Zu Chongzhi (429 CE – 500 CE), introduced the equinoxes. 
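A quick check of the balanced-calendar rule described earlier (a 365.25-day year, 29.5-day months, and half a month intercalated after every 16th month): the short Python sketch below simply asks whether that intercalation rate keeps the average month slot close to one-twelfth of the year, which is one plausible reading of why the rule works:

months_per_cycle = 16
intercalation = 0.5                 # half a month added after every 16th month
month_length = 29.5                 # days
year_length = 365.25                # days

avg_days_per_month_slot = (months_per_cycle + intercalation) * month_length / months_per_cycle
target = year_length / 12           # days per twelfth of a year

print(round(avg_days_per_month_slot, 2), round(target, 2))   # ~30.42 vs ~30.44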
Tang dynasty Wùyín Yuán calendar The use of syzygy to determine the lunar month was first described in the Tang dynasty Wùyín Yuán Calendar (). Yuan dynasty Shòushí calendar The Yuan dynasty Shòushí calendar () used spherical trigonometry to find the length of the tropical year. The calendar had a 365.2425-day year, identical to the Gregorian calendar. Shíxiàn calendar From 1645 to 1913 the Shíxiàn or Chongzhen was developed. During the late Ming dynasty, the Chinese Emperor appointed Xu Guangqi in 1629 to be the leader of the ShiXian calendar reform. Assisted by Jesuits, he translated Western astronomical works and introduced new concepts, such as those of Nicolaus Copernicus, Johannes Kepler, Galileo Galilei, and Tycho Brahe; however, the new calendar was not released before the end of the dynasty. In the early Qing dynasty, Johann Adam Schall von Bell submitted the calendar which was edited by the lead of Xu Guangqi to the Shunzhi Emperor. The Qing government issued it as the Shíxiàn (seasonal) calendar. In this calendar, the solar terms are 15° each along the ecliptic and it can be used as a solar calendar. However, the length of the climate term near the perihelion is less than 30 days and there may be two mid-climate terms. The Shíxiàn calendar changed the mid-climate-term rule to "decide the month in sequence, except the intercalary month." The present traditional calendar follows the Shíxiàn calendar, except: The baseline is Chinese Standard Time, rather than Beijing local time. (Modern) astronomical data, rather than mathematical calculations, is used. Republic of China The Chinese calendar lost its place as the country's official calendar at the beginning of the 20th century, its use has continued. The Republic of China Calendar published by the Beiyang government of the Republic of China still listed the dates of the Chinese calendar in addition to the Gregorian calendar. In 1929, the Nationalist government tried to ban the traditional Chinese calendar. The Kuómín Calendar published by the government no longer listed the dates of the Chinese calendar. However, Chinese people were used to the traditional calendar and many traditional customs were based on the Chinese calendar. The ban failed and was lifted in 1934. The latest Chinese calendar was "New Edition of Wànniánlì, revised edition", edited by Beijing Purple Mountain Observatory, People's Republic of China. To optimize the Chinese calendar, astronomers have proposed a number of changes. Kao Ping-tse (; 1888–1970), a Chinese astronomer who co-founded the Purple Mountain Observatory, proposed that month numbers be calculated before the new moon and solar terms to be rounded to the day. Since the intercalary month is determined by the first month without a mid-climate and the mid-climate time varies by time zone, countries that adopted the calendar but calculate with their own time could vary from the time in China. Horology Horology, or chronometry, refers to the measurement of time. In the context of the Chinese calendar, horology involves the definition and mathematical measurement of terms or elements such observable astronomic movements or events such as are associated with days, months, years, hours, and so on. These measurements are based upon objective, observable phenomena. Calendar accuracy is based upon accuracy and precision of measurements. The Chinese calendar is lunisolar, similar to the Hindu, Hebrew and ancient Babylonian calendars. 
In this case the calendar is in part based in objective, observable phenomena and in part by mathematical analysis to correlate the observed phenomena. Lunisolar calendars especially attempt to correlate the solar and lunar cycles, but other considerations can be agricultural and seasonal or phenological, or religious, or even political. Basic horologic definitions include that days begin and end at midnight, and months begin on the day of the new moon. Years start on the second (or third) new moon after the winter solstice. Solar terms govern the beginning, middle, and end of each month. A sexagenary cycle, comprising the heavenly stems () and the earthly branches (), is used as identification alongside each year and month, including intercalary months or leap months. Months are also annotated as either long ( for months with 30 days) or short ( for months with 29 days). There are also other elements of the traditional Chinese calendar. Day Days are Sun oriented, based upon divisions of the solar year. A day () is considered both traditionally and currently to be the time from one midnight to the next. Traditionally days (including the night-time portion) were divided into 12 double-hours, and in modern times the 24 hour system has become more standard. Month Months are Moon oriented. Month (), the time from one new moon to the next. These synodic months are about days long. This includes the Date (), when a day occurs in the month. Days are numbered in sequence from 1 to 29 (or 30). And, a Calendar month (), is when a month occurs within a year. Some months may be repeated. Year A year () is based upon the time of one revolution of Earth around the Sun, rounded to whole days. Traditionally, the year is measured from the first day of spring (lunisolar year) or the winter solstice (solar year). A year is astronomically about days. This includes the calendar () year, when it is authoritatively determined on which day one year ends and another begins. The year usually begins on the new moon closest to Lichun, the first day of spring. This is typically the second and sometimes third new moon after the winter solstice. A calendar year is 353–355 or 383–385 days long. Also includes Zodiac, year, or 30° on the ecliptic. A zodiacal year is about days. Solar terms Solar term (), year, or 15° on the ecliptic. A solar term is about days. Planets The movements of the Sun, Moon, Mercury, Venus, Mars, Jupiter and Saturn (sometimes known as the seven luminaries) are the references for calendar calculations. The distance between Mercury and the sun is less than 30° (the sun's height at chénshí:, 8:00 to 10:00 am), so Mercury was sometimes called the "chen star" (); it is more commonly known as the "water star" (). Venus appears at dawn and dusk and is known as the "bright star" () or "long star" (). Mars looks like fire and occurs irregularly, and is known as the "fire star" ( or ). Mars is the punisher in Chinese mythology. When Mars is near Antares (), it is a bad omen and can forecast an emperor's death or a chancellor's removal (). Jupiter's revolution period is 11.86 years, so Jupiter is called the "age star" (); 30° of Jupiter's revolution is about a year on earth. Saturn's revolution period is about 28 years. Known as the "guard star" (), Saturn guards one of the 28 Mansions every year. Stars Big Dipper The Big Dipper is the celestial compass, and its handle's direction indicates or some said determines the season and month. 
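A minimal sketch of the cyclical labelling described above, assuming the conventional correspondence in which 1984 CE begins a jiǎzǐ cycle and using pinyin spellings for the stems and branches; note that the traditional stem-branch year changes at the lunisolar new year rather than on 1 January, so the year function below is only approximate for dates early in the Gregorian year:

STEMS = ["jiǎ", "yǐ", "bǐng", "dīng", "wù", "jǐ", "gēng", "xīn", "rén", "guǐ"]
BRANCHES = ["zǐ", "chǒu", "yín", "mǎo", "chén", "sì", "wǔ", "wèi", "shēn", "yǒu", "xū", "hài"]

def stem_branch_year(ce_year):
    # Conventional formula: 1984 CE is a jiǎzǐ year, hence the offset of 4.
    return STEMS[(ce_year - 4) % 10] + BRANCHES[(ce_year - 4) % 12]

def double_hour(hour_24):
    # The zǐ double-hour runs from 23:00 to 01:00, chǒu from 01:00 to 03:00, and so on.
    return BRANCHES[((hour_24 + 1) // 2) % 12]

print(stem_branch_year(1984))             # jiǎzǐ
print(stem_branch_year(2021))             # xīnchǒu
print(double_hour(23), double_hour(8))    # zǐ, chén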
3 Enclosures and 28 Mansions The stars are divided into Three Enclosures and 28 Mansions according to their location in the sky relative to Ursa Minor, at the center. Each mansion is named with a character describing the shape of its principal asterism. The Three Enclosures are Purple Forbidden, (), Supreme Palace (), and Heavenly Market. () The eastern mansions are , , , , , , . Southern mansions are , , , , , , . Western mansions are , , , , , , . Northern mansions are , , , , , , . The moon moves through about one lunar mansion per day, so the 28 mansions were also used to count days. In the Tang dynasty, Yuan Tiangang () matched the 28 mansions, seven luminaries and yearly animal signs to yield combinations such as "horn-wood-flood dragon" (). List of lunar mansions The names and determinative stars of the mansions are: Descriptive mathematics Several coding systems are used to avoid ambiguity. The Heavenly Stems is a decimal system. The Earthly Branches, a duodecimal system, mark dual hours ( or ) and climatic terms. The 12 characters progress from the first day with the same branch as the month (first Yín day () of Zhēngyuè; first Mǎo day () of Èryuè), and count the days of the month. The stem-branches is a sexagesimal system. The Heavenly Stems and Earthly Branches make up 60 stem-branches. The stem branches mark days and years. The five Wu Xing elements are assigned to each stem, branch, or stem branch. Sexagenary system Twelve branches Day China has used the Western hour-minute-second system to divide the day since the Qing dynasty. Several era-dependent systems had been in use; systems using multiples of twelve and ten were popular, since they could be easily counted and aligned with the Heavenly Stems and Earthly Branches. Week As early as the Bronze Age Xia dynasty, days were grouped into nine- or ten-day weeks known as xún (). Months consisted of three xún. The first 10 days were the early xún (), the middle 10 the mid xún (), and the last nine (or 10) days were the late xún (). Japan adopted this pattern, with 10-day-weeks known as . In Korea, they were known as sun (,). The structure of xún led to public holidays every five or ten days. Officials of the Han dynasty were legally required to rest every five days (twice a xún, or 5–6 times a month). The name of these breaks became huan (, "wash"). Grouping days into sets of ten is still used today in referring to specific natural events. "Three Fu" (), a 29–30-day period which is the hottest of the year, reflects its three-xún length. After the winter solstice, nine sets of nine days were counted to calculate the end of winter. The seven-day week was adopted from the Hellenistic system by the 4th century CE, although its method of transmission into China is unclear. It was again transmitted to China in the 8th century by Manichaeans via Kangju (a Central Asian kingdom near Samarkand), and is the most-used system in modern China. Month Months are defined by the time between new moons, which averages approximately days. There is no specified length of any particular Chinese month, so the first month could have 29 days (short month, ) in some years and 30 days (long month, ) in other years. A 12-month-year using this system has 354 days, which would drift significantly from the tropical year. To fix this, traditional Chinese years have a 13-month year approximately once every three years. The 13-month version has the same long and short months alternating, but adds a 30-day leap month (). 
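The need for roughly seven leap months every nineteen years can be checked with two standard mean values, a synodic month of about 29.5306 days and a tropical year of about 365.2422 days; this is a back-of-the-envelope sketch, not how any historical calendar actually fixed its intercalations:

SYNODIC_MONTH = 29.5306      # mean days from new moon to new moon
TROPICAL_YEAR = 365.2422     # mean days in a solar year

common_year = 12 * SYNODIC_MONTH                    # ~354.4 days
shortfall_per_year = TROPICAL_YEAR - common_year    # ~10.9 days lost each common year

months_in_19_years = 19 * TROPICAL_YEAR / SYNODIC_MONTH   # ~235.0
leap_months_in_19_years = months_in_19_years - 19 * 12    # ~7

print(round(common_year, 1), round(shortfall_per_year, 1))
print(round(months_in_19_years, 2), round(leap_months_in_19_years, 2))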
Years with 12 months are called common years, and 13-month years are known as long years. Although most of the above rules were used until the Tang dynasty, different eras used different systems to keep lunar and solar years aligned. The synodic month of the Taichu calendar was days long. The 7th-century, Tang-dynasty Wùyín Yuán Calendar was the first to determine month length by synodic month instead of the cycling method. Since then, month lengths have primarily been determined by observation and prediction. The days of the month are always written with two characters and numbered beginning with 1. Days one to 10 are written with the day's numeral, preceded by the character Chū (); Chūyī () is the first day of the month, and Chūshí () the 10th. Days 11 to 20 are written as regular Chinese numerals; Shíwǔ () is the 15th day of the month, and Èrshí () the 20th. Days 21 to 29 are written with the character Niàn () before the characters one through nine; Niànsān (), for example, is the 23rd day of the month. Day 30 (when applicable) is written as the numeral Sānshí (). History books use days of the month numbered with the 60 stem-branches: Because astronomical observation determines month length, dates on the calendar correspond to moon phases. The first day of each month is the new moon. On the seventh or eighth day of each month, the first-quarter moon is visible in the afternoon and early evening. On the 15th or 16th day of each month, the full moon is visible all night. On the 22nd or 23rd day of each month, the last-quarter moon is visible late at night and in the morning. Since the beginning of the month is determined by when the new moon occurs, other countries using this calendar use their own time standards to calculate it; this results in deviations. The first new moon in 1968 was at 16:29 UTC on 29 January. Since North Vietnam used UTC+07:00 to calculate their Vietnamese calendar and South Vietnam used UTC+08:00 (Beijing time) to calculate theirs, North Vietnam began the Tết holiday at 29 January at 23:29 while South Vietnam began it on 30 January at 00:15. The time difference allowed asynchronous attacks in the Tet Offensive. Names of months and lunar date conventions Current naming conventions use numbers as the month names, although Lunar months were originally named according to natural phenomena phenology. Each month is also associated with one of the twelve Earthly Branches. Correspondences with Gregorian dates are approximate and should be used with caution. Many years have intercalary months. Though the numbered month names are often used for the corresponding month number in the Gregorian calendar, it is important to realize that the numbered month names are not interchangeable with the Gregorian months when talking about lunar dates. Incorrect: The Dragon Boat Festival falls on 5 May in the Lunar Calendar, whereas the Double Ninth Festival, Lantern Festival, and Qixi Festival fall on 9 September, 15 January, and 7 July in the Lunar Calendar, respectively. Correct: The Dragon Boat Festival falls on Wǔyuè 5th (or, 5th day of the fifth month) in the Lunar Calendar, whereas the Double Ninth Festival, Lantern Festival and Qixi Festival fall on Jiǔyuè 9th (or, 9th day of the ninth month), Zhēngyuè 15th (or, 15th day of the first month) and Qīyuè 7th (or, 7th day of the seventh month) in the Lunar Calendar, respectively. 
Alternate Chinese Zodiac correction: The Dragon Boat Festival falls on Horse Month 5th in the Lunar Calendar, whereas the Double Ninth Festival, Lantern Festival and Qixi Festival fall on Dog Month 9th, Tiger Month 15th and Monkey Month 7th in the Lunar Calendar, respectively. One may identify the heavenly stem and earthly branch corresponding to a particular day in the month, and those corresponding to its month, and those to its year, to determine the Four Pillars of Destiny associated with it, for which the Tung Shing, also referred to as the Chinese Almanac of the year, or the Huangli, and containing the essential information concerning Chinese astrology, is the most convenient publication to consult. Days rotate through a sexagenary cycle marked by coordination between heavenly stems and earthly branches, hence the referral to the Four Pillars of Destiny as, "Bazi", or "Birth Time Eight Characters", with each pillar consisting of a character for its corresponding heavenly stem, and another for its earthly branch. Since Huangli days are sexagenaric, their order is quite independent of their numeric order in each month, and of their numeric order within a week (referred to as True Animals in relation to the Chinese zodiac). Therefore, it does require painstaking calculation for one to arrive at the Four Pillars of Destiny of a particular given date, which rarely outpaces the convenience of simply consulting the Huangli by looking up its Gregorian date. Solar term The solar year (), the time between winter solstices, is divided into 24 solar terms known as jié qì (節氣). Each term is a 15° portion of the ecliptic. These solar terms mark both Western and Chinese seasons, as well as equinoxes, solstices, and other Chinese events. The even solar terms (marked with "Z", for , Zhongqi) are considered the major terms, while the odd solar terms (marked with "J", for , Jieqi) are deemed minor. The solar terms qīng míng (清明) on 5 April and dōng zhì (冬至) on 22 December are both celebrated events in China. Solar year The calendar solar year, known as the suì, () begins on the December solstice and proceeds through the 24 solar terms. Since the speed of the Sun's apparent motion in the elliptical is variable, the time between major solar terms is not fixed. This variation in time between major solar terms results in different solar year lengths. There are generally 11 or 12 complete months, plus two incomplete months around the winter solstice, in a solar year. The complete months are numbered from 0 to 10, and the incomplete months are considered the 11th month. If there are 12 complete months in the solar year, it is known as a leap solar year, or leap suì. Due to the inconsistencies in the length of the solar year, different versions of the traditional calendar might have different average solar year lengths. For example, one solar year of the 1st century BCE Tàichū calendar is (365.25016) days. A solar year of the 13th-century Shòushí calendar is (365.2425) days, identical to the Gregorian calendar. The additional .00766 day from the Tàichū calendar leads to a one-day shift every 130.5 years. Pairs of solar terms are climate terms, or solar months. The first solar term is "pre-climate" (), and the second is "mid-climate" (). If there are 12 complete months within a solar year, the first month without a mid-climate is the leap, or intercalary, month. In other words, the first month that does not include a major solar term is the leap month. 
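Before turning to how leap months are named, two of the figures quoted above can be checked directly with a few lines of Python: the 0.00766-day difference between the Tàichū and Shòushí year lengths and the resulting drift of about one day every 130.5 years, together with the mean length of one solar term:

taichu_year = 365.25016     # days, Tàichū calendar
shoushi_year = 365.2425     # days, Shòushí (and Gregorian) value

difference = taichu_year - shoushi_year        # 0.00766 days per year
years_per_day_of_drift = 1 / difference        # ~130.5 years

mean_solar_term = shoushi_year / 24            # ~15.22 days per 15 degrees of the ecliptic

print(round(difference, 5), round(years_per_day_of_drift, 1), round(mean_solar_term, 2))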
Leap months are numbered with rùn (), the character for "intercalary", plus the name of the month they follow. In 2017, the intercalary month after month six was called Rùn Liùyuè, or "intercalary sixth month" () and written as 6i or 6+. The next intercalary month (in 2020, after month four) will be called Rùn Sìyuè () and written 4i or 4+.

Lunisolar year
The lunisolar year begins with the first spring month, Zhēngyuè (), and ends with the last winter month, Làyuè (). All other months are named for their number in the month order. See below on the timing of the Chinese New Year. Years were traditionally numbered by the reign in ancient China, but this was abolished after the founding of the People's Republic of China in 1949. For example, the year from 12 February 2021 to 31 January 2022 was a Xīnchǒu year () of 12 months or 354 days. The Tang dynasty used the Earthly Branches to mark the months from December 761 to May 762. Over this period, the year began with the winter solstice.

Age reckoning
In modern China, a person's official age is based on the Gregorian calendar. For traditional use, age is based on the Chinese Sui calendar. A child is considered one year old at birth. After each Chinese New Year, one year is added to their traditional age. Their age therefore is the number of Chinese calendar years in which they have lived. Due to the potential for confusion, the age of infants is often given in months instead of years. After the Gregorian calendar was introduced in China, the Chinese traditional age was referred to as the "nominal age" () and the Gregorian age was known as the "real age" ().

Year-numbering systems
Eras
Ancient China numbered years from an emperor's ascension to the throne or his declaration of a new era name. The first recorded reign title was Jiànyuán (), from 140 BCE; the last reign title was Xuāntǒng (), from 1908 CE. The era system was abolished in 1912, after which the current or Republican era was used.

Stem-branches
The 60 stem-branches have been used to mark the date since the Shang dynasty (1600 BCE – 1046 BCE). Astrologers knew that the orbital period of Jupiter is about 12×361 = 4332 days, which they divided into 12 years () of 361 days each. The stem-branches system solved the era system's problem of unequal reign lengths.

Chinese New Year
The date of the Chinese New Year accords with the patterns of the lunisolar calendar and hence is variable from year to year. The invariant between years is that the winter solstice, Dongzhi, is required to fall in the eleventh month of the year. This means that Chinese New Year will be on the second new moon after the previous winter solstice, unless there is a leap month 11 or 12 in the previous year. This rule is accurate; however, there are two other mostly (but not completely) accurate rules that are commonly stated:
The new year is on the new moon closest to Lichun (typically 4 February).
The new year is on the first new moon after Dahan (typically 20 January).
It has been found that Chinese New Year moves back by either 10, 11, or 12 days in most years. If it falls on or before 31 January, then it moves forward in the next year by either 18, 19, or 20 days.

Chinese lunar date conventions
Though the numbered month names are often used for the corresponding month number in the Gregorian calendar, it is important to realize that the numbered month names are not interchangeable with the Gregorian months when talking about lunar dates.
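The typical year-to-year jumps in the date of Chinese New Year noted above follow from the same month arithmetic. A rough sketch, using a mean synodic month of about 29.5306 days and a 365.25-day year; real shifts differ by a day or so because actual calendar months contain a whole number of days:

SYNODIC_MONTH = 29.5306
SOLAR_YEAR = 365.25

common_lunar_year = 12 * SYNODIC_MONTH     # ~354.4 days
leap_lunar_year = 13 * SYNODIC_MONTH       # ~383.9 days

earlier_after_common_year = SOLAR_YEAR - common_lunar_year   # ~10.9 days earlier
later_after_leap_year = leap_lunar_year - SOLAR_YEAR         # ~18.6 days later

print(round(earlier_after_common_year, 1), round(later_after_leap_year, 1))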
Holidays Various traditional and religious holidays shared by communities throughout the world use the Chinese (Lunisolar) calendar: Holidays with the same day and same month The Chinese New Year (known as the Spring Festival/春節 in China) is on the first day of the first month and was traditionally called the Yuan Dan (元旦) or Zheng Ri (正日). In Vietnam it is known as Tết Nguyên Đán (). Traditionally it was the most important holiday of the year. It is an official holiday in China, Hong Kong, Macau, Taiwan, Vietnam, Korea, the Philippines, Malaysia, Singapore, Indonesia, and Mauritius. It is also a public holiday in Thailand's Narathiwat, Pattani, Yala and Satun provinces, and is an official public school holiday in New York City. The Double Third Festival is on the third day of the third month. The Dragon Boat Festival, or the Duanwu Festival (端午節), is on the fifth day of the fifth month and is an official holiday in China, Hong Kong, Macau, and Taiwan. It is also celebrated in Vietnam where it is known as Tết Đoan Dương (節端陽) The Qixi Festival (七夕節) is celebrated in the evening of the seventh day of the seventh month. It is also celebrated in Vietnam where it is known as Tết Ngâu. The Double Ninth Festival (重陽節) is celebrated on the ninth day of the ninth month. It is also celebrated in Vietnam where it is known as Tết Trùng Cửu (節重九). Full moon holidays (holidays on the fifteenth day) The Lantern Festival is celebrated on the fifteenth day of the first month and was traditionally called the Yuan Xiao (元宵) or Shang Yuan Festival (上元節). In Vietnam, it is known as Rằm tháng giêng. The Zhong Yuan Festival is celebrated on the fifteenth day of the seventh month. In Vietnam, it is celebrated as Lễ Vu Lan (禮盂蘭). The Mid-Autumn Festival is celebrated on the fifteenth day of the eighth month. In Vietnam, it is celebrated as Tết Trung Thu (節中秋). The Xia Yuan Festival is celebrated on the fifteenth day of the tenth month. In Vietnam, it is celebrated as Lễ mừng lúa mới. Celebrations of the twelfth month The Laba Festival is on the eighth day of the twelfth month. It is the enlightenment day of Sakyamuni Buddha and in Vietnam is known as Lễ Vía Phật Thích Ca thành đạo. The Kitchen God Festival is celebrated on the twenty-third day of the twelfth month in northern regions of China and on the twenty-fourth day of the twelfth month in southern regions of China. Chinese New Year's Eve is also known as the Chuxi Festival and is celebrated on the evening of the last day of the lunar calendar. It is celebrated wherever the lunar calendar is observed. Celebrations of solar-term holidays The Qingming Festival (清明节) is celebrated on the fifteenth day after the Spring Equinox. The Dongzhi Festival (冬至) or the Winter Solstice is celebrated. Religious holidays based on the lunar calendar East Asian Mahayana, Daoist, and some Cao Dai holidays and/or vegetarian observances are based on the Lunar Calendar. Celebrations in Japan Many of the above holidays of the lunar calendar are also celebrated in Japan, but since the Meiji era on the similarly numbered dates of the Gregorian calendar. Double celebrations due to intercalary months In the case when there is a corresponding intercalary month, the holidays may be celebrated twice. For example, in the hypothetical situation in which there is an additional intercalary seventh month, the Zhong Yuan Festival will be celebrated in the seventh month followed by another celebration in the intercalary seventh month. 
(The next such occasion will be 2033, the first such occasion since the calendar reform of 1645.)

Similar calendars
Like Chinese characters, variants of the Chinese calendar have been used in different parts of the Sinosphere throughout history: this includes Vietnam, Korea, Singapore, Japan and Ryukyu, Mongolia, and elsewhere.

Outlying areas of China
Calendars of ethnic groups in the mountains and plateaus of southwestern China and the grasslands of northern China are based on their phenology and on algorithms of traditional calendars of different periods, particularly the Tang and pre-Qin dynasties.

Non-Chinese areas
Korea, Vietnam, and the Ryukyu Islands adopted the Chinese calendar. In the respective regions, the Chinese calendar has been adapted into the Korean, Vietnamese, and Ryukyuan calendars, with the main difference from the Chinese calendar being the use of different meridians due to geography, leading to some astronomical events — and calendar events based on them — falling on different dates. The traditional Japanese calendar was also derived from the Chinese calendar (based on a Japanese meridian), but Japan abolished its official use in 1873 after the Meiji Restoration reforms. Calendars in Mongolia and Tibet have absorbed elements of the traditional Chinese calendar but are not direct descendants of it.
Technology
Timekeeping
null
6972
https://en.wikipedia.org/wiki/Chipmunk
Chipmunk
Chipmunks are small, striped rodents of subtribe Tamiina. Chipmunks are found in North America, with the exception of the Siberian chipmunk which is found primarily in Asia. Taxonomy and systematics Chipmunks are classified as four genera: Tamias, of which the eastern chipmunk (T. striatus) is the only living member; Eutamias, of which the Siberian chipmunk (E. sibiricus) is the only living member; Nototamias, which consists of three extinct species, and Neotamias, which includes the 23 remaining, mostly western North American, species. These classifications were treated as subgenera due to the chipmunks' morphological similarities. As a result, most taxonomies over the twentieth century have placed the chipmunks into a single genus. Joseph C. Moore reclassified chipmunks to form a subtribe Tamiina in a 1959 study, and this classification has been supported by studies of mitochondrial DNA. The common name originally may have been spelled "chitmunk", from the native Odawa (Ottawa) word jidmoonh, meaning "red squirrel" (cf. Ojibwe ajidamoo). The earliest form cited in the Oxford English Dictionary is "chipmonk", from 1842. Other early forms include "chipmuck" and "chipminck", and in the 1830s they were also referred to as "chip squirrels", probably in reference to the sound they make. In the mid-19th century, John James Audubon and his sons included a lithograph of the chipmunk in their Viviparous Quadrupeds of North America, calling it the "chipping squirrel [or] hackee". Chipmunks have also been referred to as "ground squirrels" (although the name "ground squirrel" may refer to other squirrels, such as those of the genus Spermophilus). Diet Chipmunks have an omnivorous diet primarily consisting of seeds, nuts and other fruits, and buds. They also commonly eat grass, shoots, and many other forms of plant matter, as well as fungi, insects and other arthropods, small frogs, worms, and bird eggs. They will also occasionally eat newly hatched baby birds. Around humans, chipmunks can eat cultivated grains and vegetables, and other plants from farms and gardens, so they are sometimes considered pests. Chipmunks mostly forage on the ground, but they climb trees to obtain nuts such as hazelnuts and acorns. At the beginning of autumn, many species of chipmunk begin to stockpile nonperishable foods for winter. They mostly cache their foods in a larder in their burrows and remain in their nests until spring, unlike some other species which make multiple small caches of food. Cheek pouches allow chipmunks to carry food items to their burrows for either storage or consumption. Ecology and life history Eastern chipmunks, the largest of the chipmunks, mate in early spring and again in early summer, producing litters of four or five young twice each year. Western chipmunks breed only once a year. The young emerge from the burrow after about six weeks and strike out on their own within the next two weeks. These small mammals fulfill several important functions in forest ecosystems. Their activities harvesting and hoarding tree seeds play a crucial role in seedling establishment. They consume many different kinds of fungi, including those involved in symbiotic mycorrhizal associations with trees, and are a vector for dispersal of the spores of subterranean sporocarps (truffles) in some regions. Chipmunks construct extensive burrows which can be more than in length with several well-concealed entrances. The sleeping quarters are kept clear of shells, and feces are stored in refuse tunnels. 
The eastern chipmunk hibernates in the winter, while western chipmunks do not, relying on the stores in their burrows. Chipmunks play an important role as prey for various predatory mammals and birds but are also opportunistic predators themselves, particularly with regard to bird eggs and nestlings, as in the case of eastern chipmunks and mountain bluebirds (Sialia currucoides). Chipmunks typically live about three years, although some have been observed living to nine years in captivity. Chipmunks are diurnal. In captivity, they are said to sleep for an average of about 15 hours a day. It is thought that mammals which can sleep in hiding, such as rodents and bats, tend to sleep longer than those that must remain on alert.
Genera
Genus Eutamias
Siberian chipmunk, Eutamias sibiricus
Genus Tamias
Eastern chipmunk, Tamias striatus
Tamias aristus †
Genus Neotamias
Allen's chipmunk, Neotamias senex
Alpine chipmunk, Neotamias alpinus
Buller's chipmunk, Neotamias bulleri
California chipmunk, Neotamias obscurus
Cliff chipmunk, Neotamias dorsalis
Colorado chipmunk, Neotamias quadrivittatus
Durango chipmunk, Neotamias durangae
Gray-collared chipmunk, Neotamias cinereicollis
Gray-footed chipmunk, Neotamias canipes
Hopi chipmunk, Neotamias rufus
Least chipmunk, Neotamias minimus
Lodgepole chipmunk, Neotamias speciosus
Long-eared chipmunk, Neotamias quadrimaculatus
Merriam's chipmunk, Neotamias merriami
Palmer's chipmunk, Neotamias palmeri
Panamint chipmunk, Neotamias panamintinus
Red-tailed chipmunk, Neotamias ruficaudus
Siskiyou chipmunk, Neotamias siskiyou
Sonoma chipmunk, Neotamias sonomae
Townsend's chipmunk, Neotamias townsendii
Uinta chipmunk, Neotamias umbrinus
Yellow-cheeked chipmunk, Neotamias ochrogenys
Yellow-pine chipmunk, Neotamias amoenus
Genus Nototamias †
Nototamias ateles †
Nototamias hulberti †
Nototamias quadratus †
In popular culture
Alvin and the Chipmunks, an animated virtual band
Chip 'n' Dale, Disney cartoon chipmunks
Biology and health sciences
Rodents
Animals
6985
https://en.wikipedia.org/wiki/Chlorophyll
Chlorophyll
Chlorophyll is any of several related green pigments found in cyanobacteria and in the chloroplasts of algae and plants. Its name is derived from the Greek words khloros ("pale green") and phyllon ("leaf"). Chlorophyll allows plants to absorb energy from light. Those pigments are involved in oxygenic photosynthesis, as opposed to bacteriochlorophylls, related molecules found only in bacteria and involved in anoxygenic photosynthesis. Chlorophylls absorb light most strongly in the blue portion of the electromagnetic spectrum as well as the red portion. Conversely, they are poor absorbers of green and near-green portions of the spectrum. Hence chlorophyll-containing tissues appear green because green light, diffusively reflected by structures like cell walls, is less absorbed. Two types of chlorophyll exist in the photosystems of green plants: chlorophyll a and b. History Chlorophyll was first isolated and named by Joseph Bienaimé Caventou and Pierre Joseph Pelletier in 1817. The presence of magnesium in chlorophyll was discovered in 1906, and was the first detection of that element in living tissue. After initial work done by German chemist Richard Willstätter spanning from 1905 to 1915, the general structure of chlorophyll a was elucidated by Hans Fischer in 1940. By 1960, when most of the stereochemistry of chlorophyll a was known, Robert Burns Woodward published a total synthesis of the molecule. In 1967, the last remaining stereochemical elucidation was completed by Ian Fleming, and in 1990 Woodward and co-authors published an updated synthesis. Chlorophyll f was announced to be present in cyanobacteria and other oxygenic microorganisms that form stromatolites in 2010; a molecular formula of C55H70O6N4Mg and a structure of (2-formyl)-chlorophyll a were deduced based on NMR, optical and mass spectra. Photosynthesis Chlorophyll is vital for photosynthesis, which allows plants to absorb energy from light. Chlorophyll molecules are arranged in and around photosystems that are embedded in the thylakoid membranes of chloroplasts. In these complexes, chlorophyll serves three functions: The function of the vast majority of chlorophyll (up to several hundred molecules per photosystem) is to absorb light. Having done so, these same centers execute their second function: The transfer of that energy by resonance energy transfer to a specific chlorophyll pair in the reaction center of the photosystems. This specific pair performs the final function of chlorophylls: Charge separation, which produces the unbound protons (H+) and electrons (e−) that separately propel biosynthesis. The two currently accepted photosystem units are photosystem I and photosystem II, which have their own distinct reaction centres, named P700 and P680, respectively. These centres are named after the wavelength (in nanometers) of their red-peak absorption maximum. The identity, function and spectral properties of the types of chlorophyll in each photosystem are distinct and determined by each other and the protein structure surrounding them. The function of the reaction center of chlorophyll is to absorb light energy and transfer it to other parts of the photosystem. The absorbed energy of the photon is transferred to an electron in a process called charge separation. The removal of the electron from the chlorophyll is an oxidation reaction. The chlorophyll donates the high energy electron to a series of molecular intermediates called an electron transport chain.
The charged reaction center of chlorophyll (P680+) is then reduced back to its ground state by accepting an electron stripped from water. The electron that reduces P680+ ultimately comes from the oxidation of water into O2 and H+ through several intermediates. This reaction is how photosynthetic organisms such as plants produce O2 gas, and is the source for practically all the O2 in Earth's atmosphere. Photosystem I typically works in series with Photosystem II; thus the P700+ of Photosystem I is usually reduced as it accepts the electron, via many intermediates in the thylakoid membrane, by electrons coming, ultimately, from Photosystem II. Electron transfer reactions in the thylakoid membranes are complex, however, and the source of electrons used to reduce P700+ can vary. The electron flow produced by the reaction center chlorophyll pigments is used to pump H+ ions across the thylakoid membrane, setting up a proton-motive force, a chemiosmotic potential used mainly in the production of ATP (stored chemical energy) or to reduce NADP+ to NADPH. NADPH is a universal agent used to reduce CO2 into sugars as well as other biosynthetic reactions. Reaction center chlorophyll–protein complexes are capable of directly absorbing light and performing charge separation events without the assistance of other chlorophyll pigments, but the probability of that happening under a given light intensity is small. Thus, the other chlorophylls in the photosystem and antenna pigment proteins all cooperatively absorb and funnel light energy to the reaction center. Besides chlorophyll a, there are other pigments, called accessory pigments, which occur in these pigment–protein antenna complexes. Chemical structure Several chlorophylls are known. All are defined as derivatives of the parent chlorin by the presence of a fifth, ketone-containing ring beyond the four pyrrole-like rings. Most chlorophylls are classified as chlorins, which are reduced relatives of porphyrins (found in hemoglobin). They share a common biosynthetic pathway with porphyrins, including the precursor uroporphyrinogen III. Unlike hemes, which contain iron bound to the N4 center, most chlorophylls bind magnesium. The axial ligands attached to the Mg2+ center are often omitted for clarity. Appended to the chlorin ring are various side chains, usually including a long phytyl chain. The most widely distributed form in terrestrial plants is chlorophyll a. Chlorophyll a has a methyl group in place of the formyl group found in chlorophyll b. This difference affects the absorption spectrum, allowing plants to absorb a greater portion of visible light. The name chlorophyll e is reserved for a pigment that was extracted from algae in 1966 but has not been chemically described. Besides the lettered chlorophylls, a wide variety of sidechain modifications to the chlorophyll structures are known in the wild. For example, Prochlorococcus, a cyanobacterium, uses 8-vinyl Chl a and b. Measurement of chlorophyll content Chlorophylls can be extracted from the protein into organic solvents. In this way, the concentration of chlorophyll within a leaf can be estimated. Methods also exist to separate chlorophyll a and chlorophyll b. In diethyl ether, chlorophyll a has approximate absorbance maxima of 430 nm and 662 nm, while chlorophyll b has approximate maxima of 453 nm and 642 nm. The absorption peaks of chlorophyll a are at 465 nm and 665 nm. Chlorophyll a fluoresces at 673 nm (maximum) and 726 nm.
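The estimate of extract concentration described above typically rests on the Beer–Lambert law, A = ε·c·l, which relates absorbance to concentration and optical path length. The short sketch below illustrates the arithmetic only; the molar absorption coefficient, path length, absorbance reading and dilution are assumed round example values, not a published assay protocol.

# Illustrative estimate of chlorophyll a concentration in a solvent extract
# from a single absorbance reading near the red peak, via Beer-Lambert: A = epsilon * c * l.
# The numbers below are assumed example values chosen for the illustration.

EPSILON_RED_PEAK = 1.0e5   # molar absorption coefficient, M^-1 cm^-1 (assumed round value)
PATH_LENGTH_CM = 1.0       # cuvette path length in cm (typical cuvette, assumed)

def chlorophyll_molarity(absorbance: float) -> float:
    """Return chlorophyll concentration in mol/L from one absorbance reading."""
    return absorbance / (EPSILON_RED_PEAK * PATH_LENGTH_CM)

if __name__ == "__main__":
    a_662 = 0.45                                # assumed absorbance of the extract at ~662 nm
    c_molar = chlorophyll_molarity(a_662)
    # Chlorophyll a has a molar mass of roughly 893 g/mol, so convert to mass concentration.
    c_mg_per_l = c_molar * 893.5 * 1000
    print(f"~{c_molar:.2e} mol/L (~{c_mg_per_l:.2f} mg/L) chlorophyll a in the extract")

With these assumed numbers the reading of 0.45 corresponds to roughly 4.5 µmol/L, or about 4 mg of chlorophyll a per litre of extract; converting back to a per-leaf or per-area figure would additionally require the extraction volume and leaf area sampled.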
The peak molar absorption coefficient of chlorophyll a exceeds 10^5 M−1 cm−1, which is among the highest for small-molecule organic compounds. In 90% acetone-water, the peak absorption wavelengths of chlorophyll a are 430 nm and 664 nm; peaks for chlorophyll b are 460 nm and 647 nm; peaks for chlorophyll c1 are 442 nm and 630 nm; peaks for chlorophyll c2 are 444 nm and 630 nm; peaks for chlorophyll d are 401 nm, 455 nm and 696 nm. Ratio fluorescence emission can be used to measure chlorophyll content. By exciting chlorophyll a fluorescence at a lower wavelength, the ratio of chlorophyll fluorescence emission at 735 nm and 700 nm can provide a linear relationship of chlorophyll content when compared with chemical testing. The ratio F735/F700 provided a correlation value of r² = 0.96 compared with chemical testing in the range from 41 mg m−2 up to 675 mg m−2. Gitelson also developed a formula for direct readout of chlorophyll content in mg m−2. The formula provided a reliable method of measuring chlorophyll content from 41 mg m−2 up to 675 mg m−2 with a correlation r² value of 0.95. Also, the chlorophyll concentration can be estimated by measuring the light transmittance through the plant leaves. The assessment of leaf chlorophyll content using optical sensors such as Dualex and SPAD allows researchers to perform real-time and non-destructive measurements. Research shows that these methods have a positive correlation with laboratory measurements of chlorophyll. Biosynthesis In some plants, chlorophyll is derived from glutamate and is synthesised along a branched biosynthetic pathway that is shared with heme and siroheme. Chlorophyll synthase is the enzyme that completes the biosynthesis of chlorophyll a:
chlorophyllide a + phytyl diphosphate → chlorophyll a + diphosphate
This conversion forms an ester of the carboxylic acid group in chlorophyllide a with the 20-carbon diterpene alcohol phytol. Chlorophyll b is made by the same enzyme acting on chlorophyllide b. The same is known for chlorophyll d and f, both made from corresponding chlorophyllides ultimately made from chlorophyllide a. In angiosperm plants, the later steps in the biosynthetic pathway are light-dependent. Such plants are pale (etiolated) if grown in darkness. Non-vascular plants and green algae have an additional light-independent enzyme and grow green even in darkness. Chlorophyll is bound to proteins. Protochlorophyllide, one of the biosynthetic intermediates, occurs mostly in the free form and, under light conditions, acts as a photosensitizer, forming free radicals, which can be toxic to the plant. Hence, plants regulate the amount of this chlorophyll precursor. In angiosperms, this regulation is achieved at the step of aminolevulinic acid (ALA), one of the intermediate compounds in the biosynthesis pathway. Plants that are fed ALA accumulate high and toxic levels of protochlorophyllide; so do mutants with a damaged regulatory system. Senescence and the chlorophyll cycle The process of plant senescence involves the degradation of chlorophyll: for example, the enzyme chlorophyllase hydrolyses the phytyl sidechain to reverse the reaction in which chlorophylls are biosynthesised from chlorophyllide a or b. Since chlorophyllide a can be converted to chlorophyllide b and the latter can be re-esterified to chlorophyll b, these processes allow cycling between chlorophylls a and b. Moreover, chlorophyll b can be directly reduced back to chlorophyll a, completing the cycle.
In later stages of senescence, chlorophyllides are converted to a group of colourless tetrapyrroles known as nonfluorescent chlorophyll catabolites (NCCs). These compounds have also been identified in ripening fruits and they give characteristic autumn colours to deciduous plants. Distribution Chlorophyll maps from 2002 to 2024, provided by NASA, show milligrams of chlorophyll per cubic meter of seawater each month. Places where chlorophyll amounts are very low, indicating very low numbers of phytoplankton, are blue. Places where chlorophyll concentrations are high, meaning many phytoplankton were growing, are yellow. The observations come from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Aqua satellite. Land is dark gray, and places where MODIS could not collect data because of sea ice, polar darkness, or clouds are light gray. The highest chlorophyll concentrations, where tiny surface-dwelling ocean plants thrive, are in cold polar waters or in places where ocean currents bring cold water to the surface, such as around the equator and along the shores of continents. It is not the cold water itself that stimulates the phytoplankton. Instead, the cool temperatures are often a sign that the water has welled up to the surface from deeper in the ocean, carrying nutrients that have built up over time. In polar waters, nutrients accumulate in surface waters during the dark winter months when plants cannot grow. When sunlight returns in the spring and summer, the plants flourish in high concentrations. Uses Culinary Synthetic chlorophyll is registered as a food additive colorant, and its E number is E140. Chefs use chlorophyll to color a variety of foods and beverages green, such as pasta and spirits. Absinthe gains its green color naturally from the chlorophyll introduced through the large variety of herbs used in its production. Chlorophyll is not soluble in water, and it is first mixed with a small quantity of vegetable oil to obtain the desired solution. In marketing In the years 1950–1953 in particular, chlorophyll was used as a marketing tool to promote toothpaste, sanitary towels, soap and other products. This was based on claims that it was an odor blocker, a finding from research by F. Howard Westcott in the 1940s, and the commercial value of this attribute in advertising led to many companies creating brands containing the compound. However, it was soon determined that the hype surrounding chlorophyll was not warranted and the underlying research may even have been a hoax. As a result, brands rapidly discontinued its use. In the 2020s, chlorophyll again became the subject of unsubstantiated medical claims, as social media influencers promoted its use in the form of "chlorophyll water", for example.
Biology and health sciences
Biochemistry and molecular biology
null
7011
https://en.wikipedia.org/wiki/Control%20engineering
Control engineering
Control engineering, also known as control systems engineering and, in some European countries, automation engineering, is an engineering discipline that deals with control systems, applying control theory to design equipment and systems with desired behaviors in control environments. The discipline of controls overlaps and is usually taught along with electrical engineering, chemical engineering and mechanical engineering at many institutions around the world. The practice uses sensors and detectors to measure the output performance of the process being controlled; these measurements are used to provide corrective feedback helping to achieve the desired performance. Systems designed to perform without requiring human input are called automatic control systems (such as cruise control for regulating the speed of a car). Multi-disciplinary in nature, control systems engineering activities focus on implementation of control systems mainly derived by mathematical modeling of a diverse range of systems. Overview Modern day control engineering is a relatively new field of study that gained significant attention during the 20th century with the advancement of technology. It can be broadly defined or classified as practical application of control theory. Control engineering plays an essential role in a wide range of control systems, from simple household washing machines to high-performance fighter aircraft. It seeks to understand physical systems, using mathematical modelling, in terms of inputs, outputs and various components with different behaviors; to use control system design tools to develop controllers for those systems; and to implement controllers in physical systems employing available technology. A system can be mechanical, electrical, fluid, chemical, financial or biological, and its mathematical modelling, analysis and controller design uses control theory in one or many of the time, frequency and complex-s domains, depending on the nature of the design problem. Control engineering is the engineering discipline that focuses on the modeling of a diverse range of dynamic systems (e.g. mechanical systems) and the design of controllers that will cause these systems to behave in the desired manner. Although such controllers need not be electrical, many are and hence control engineering is often viewed as a subfield of electrical engineering. Electrical circuits, digital signal processors and microcontrollers can all be used to implement control systems. Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles. In most cases, control engineers utilize feedback when designing control systems. This is often accomplished using a PID controller system. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's torque accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. In practically all such systems stability is important and control theory can help ensure stability is achieved. Although feedback is an important aspect of control engineering, control engineers may also work on the control of systems without feedback. This is known as open loop control. A classic example of open loop control is a washing machine that runs through a pre-determined cycle without the use of sensors. 
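As a concrete illustration of the cruise-control feedback loop described above, the sketch below runs a discrete PID controller against a toy first-order vehicle model. The vehicle parameters, actuator limit and controller gains are assumed illustrative values, not a tuned or vendor-specific design.

# Minimal discrete PID speed controller acting on a toy first-order vehicle model.
# All numbers (mass, drag, gains, limits, time step) are assumed illustrative values.

DT = 0.1            # controller sample time, seconds
MASS = 1200.0       # vehicle mass, kg (assumed)
DRAG = 60.0         # lumped linear drag coefficient, N per (m/s) (assumed)

KP, KI, KD = 800.0, 120.0, 40.0   # PID gains (assumed, untuned)

def simulate(setpoint=25.0, steps=600):
    """Drive the vehicle model toward `setpoint` m/s and return the speed history."""
    speed, integral, prev_error = 0.0, 0.0, setpoint
    history = []
    for _ in range(steps):
        error = setpoint - speed                      # feedback: target minus measured speed
        derivative = (error - prev_error) / DT
        raw_force = KP * error + KI * integral + KD * derivative
        force = max(0.0, min(raw_force, 4000.0))      # actuator limits (assumed)
        if force == raw_force:                        # naive anti-windup: hold the integral while saturated
            integral += error * DT
        # First-order vehicle dynamics: m * dv/dt = drive force - drag * speed
        speed += DT * (force - DRAG * speed) / MASS
        prev_error = error
        history.append(speed)
    return history

if __name__ == "__main__":
    print(f"speed after 60 s: {simulate()[-1]:.2f} m/s")

The loop follows the pattern the paragraph describes: the measured speed is compared with the target, and the error drives proportional, integral and derivative terms that set the commanded drive force; with the assumed numbers the model settles close to 25 m/s within the simulated minute.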
History Automatic control systems were first developed over two thousand years ago. The first feedback control device on record is thought to be the ancient Ktesibios's water clock in Alexandria, Egypt, around the third century BCE. It kept time by regulating the water level in a vessel and, therefore, the water flow from that vessel. This certainly was a successful device as water clocks of similar design were still being made in Baghdad when the Mongols captured the city in 1258 CE. A variety of automatic devices have been used over the centuries to accomplish useful tasks or simply just to entertain. The latter includes the automata, popular in Europe in the 17th and 18th centuries, featuring dancing figures that would repeat the same task over and over again; these automata are examples of open-loop control. Milestones among feedback, or "closed-loop" automatic control devices, include the temperature regulator of a furnace attributed to Drebbel, circa 1620, and the centrifugal flyball governor used for regulating the speed of steam engines by James Watt in 1788. In his 1868 paper "On Governors", James Clerk Maxwell was able to explain instabilities exhibited by the flyball governor using differential equations to describe the control system. This demonstrated the importance and usefulness of mathematical models and methods in understanding complex phenomena, and it signaled the beginning of mathematical control and systems theory. Elements of control theory had appeared earlier but not as dramatically and convincingly as in Maxwell's analysis. Control theory made significant strides over the next century. New mathematical techniques, as well as advances in electronic and computer technologies, made it possible to control significantly more complex dynamical systems than the original flyball governor could stabilize. New mathematical techniques included developments in optimal control in the 1950s and 1960s followed by progress in stochastic, robust, adaptive, nonlinear control methods in the 1970s and 1980s. Applications of control methodology have helped to make possible space travel and communication satellites, safer and more efficient aircraft, cleaner automobile engines, and cleaner and more efficient chemical processes. Before it emerged as a unique discipline, control engineering was practiced as a part of mechanical engineering and control theory was studied as a part of electrical engineering since electrical circuits can often be easily described using control theory techniques. In the first control relationships, a current output was represented by a voltage control input. However, not having adequate technology to implement electrical control systems, designers were left with the option of less efficient and slow responding mechanical systems. A very effective mechanical controller that is still widely used in some hydro plants is the governor. Later on, previous to modern power electronics, process control systems for industrial applications were devised by mechanical engineers using pneumatic and hydraulic control devices, many of which are still in use today. Mathematical modelling David Quinn Mayne, (1930–2024) was among the early developers of a rigorous mathematical method for analysing Model predictive control algorithms (MPC). It is currently used in tens of thousands of applications and is a core part of the advanced control technology by hundreds of process control producers. 
MPC's major strength is its capacity to deal with nonlinearities and hard constraints in a simple and intuitive fashion. His work underpins a class of algorithms that are provably correct, heuristically explainable, and yield control system designs which meet practically important objectives. Control systems Control theory Education At many universities around the world, control engineering courses are taught primarily in electrical engineering and mechanical engineering, but some courses can be instructed in mechatronics engineering, and aerospace engineering. In others, control engineering is connected to computer science, as most control techniques today are implemented through computers, often as embedded systems (as in the automotive field). The field of control within chemical engineering is often known as process control. It deals primarily with the control of variables in a chemical process in a plant. It is taught as part of the undergraduate curriculum of any chemical engineering program and employs many of the same principles in control engineering. Other engineering disciplines also overlap with control engineering as it can be applied to any system for which a suitable model can be derived. However, specialised control engineering departments do exist, for example, in Italy there are several master in Automation & Robotics that are fully specialised in Control engineering or the Department of Automatic Control and Systems Engineering at the University of Sheffield or the Department of Robotics and Control Engineering at the United States Naval Academy and the Department of Control and Automation Engineering at the Istanbul Technical University. Control engineering has diversified applications that include science, finance management, and even human behavior. Students of control engineering may start with a linear control system course dealing with the time and complex-s domain, which requires a thorough background in elementary mathematics and Laplace transform, called classical control theory. In linear control, the student does frequency and time domain analysis. Digital control and nonlinear control courses require Z transformation and algebra respectively, and could be said to complete a basic control education. Careers A control engineer's career starts with a bachelor's degree and can continue through the college process. Control engineer degrees are typically paired with an electrical or mechanical engineering degree, but can also be paired with a degree in chemical engineering. According to a Control Engineering survey, most of the people who answered were control engineers in various forms of their own career. There are not very many careers that are classified as "control engineer", most of them are specific careers that have a small semblance to the overarching career of control engineering. A majority of the control engineers that took the survey in 2019 are system or product designers, or even control or instrument engineers. Most of the jobs involve process engineering or production or even maintenance, they are some variation of control engineering. Because of this, there are many job opportunities in aerospace companies, manufacturing companies, automobile companies, power companies, chemical companies, petroleum companies, and government agencies. Some places that hire Control Engineers include companies such as Rockwell Automation, NASA, Ford, Phillips 66, Eastman, and Goodrich. Control Engineers can possibly earn $66k annually from Lockheed Martin Corp. 
They can also earn up to $96k annually from General Motors Corporation. Process control engineers, typically found in refineries and specialty chemical plants, can earn upwards of $90k annually. Recent advancement Originally, control engineering was concerned only with continuous systems. The development of computer control tools created a need for discrete control system engineering, because the communications between the computer-based digital controller and the physical system are governed by a computer clock. The discrete-domain equivalent of the Laplace transform is the Z-transform. Today, many control systems are computer controlled, consisting of both digital and analog components. Therefore, at the design stage either the digital components are mapped into the continuous domain and the design is carried out in the continuous domain, or the analog components are mapped into the discrete domain and the design is carried out there. The first of these two methods is more commonly encountered in practice because many industrial systems have numerous continuous components, including mechanical, fluid, biological and analog electrical components, with only a few digital controllers. Similarly, the design technique has progressed from paper-and-ruler based manual design to computer-aided design and now to computer-automated design, which has been made possible by evolutionary computation. Computer-automated design can be applied not just to tuning a predefined control scheme, but also to controller structure optimisation, system identification and invention of novel control systems, based purely upon a performance requirement, independent of any specific control scheme. Resilient control systems extend the traditional focus on addressing only planned disturbances to frameworks that attempt to address multiple types of unexpected disturbance; in particular, adapting and transforming the behaviors of the control system in response to malicious actors, abnormal failure modes, undesirable human action, etc.
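As one concrete example of mapping an analog design into the discrete domain, the sketch below discretises a continuous PI controller C(s) = Kp + Ki/s using the bilinear (Tustin) transform, which corresponds to trapezoidal integration of the integral term; the gains and sample period are assumed example values, and Tustin is only one of several common discretisation methods.

# Discretising a continuous PI controller C(s) = Kp + Ki/s with the bilinear (Tustin)
# transform s -> (2/T)*(z-1)/(z+1), which yields the difference equation
#   u[k] = u[k-1] + Kp*(e[k] - e[k-1]) + Ki*(T/2)*(e[k] + e[k-1]).
# Gains and sample time are assumed example values.

KP, KI = 2.0, 0.5    # continuous-time PI gains (assumed)
T = 0.05             # sample period of the digital controller, seconds (assumed)

class DiscretePI:
    def __init__(self):
        self.prev_error = 0.0
        self.prev_output = 0.0

    def update(self, error: float) -> float:
        """One controller step: map the latest error sample to a control output."""
        output = (self.prev_output
                  + KP * (error - self.prev_error)
                  + KI * (T / 2.0) * (error + self.prev_error))
        self.prev_error, self.prev_output = error, output
        return output

if __name__ == "__main__":
    controller = DiscretePI()
    for error in [1.0, 0.8, 0.5, 0.2, 0.0]:       # an assumed error sequence
        print(f"u = {controller.update(error):.3f}")

The resulting difference equation runs on the controller's sample clock, which is exactly the situation the paragraph describes: the continuous design is carried over to a recurrence the digital controller can evaluate once per sample.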
Technology
Disciplines
null
7012
https://en.wikipedia.org/wiki/Chagas%20disease
Chagas disease
Chagas disease, also known as American trypanosomiasis, is a tropical parasitic disease caused by Trypanosoma cruzi. It is spread mostly by insects in the subfamily Triatominae, known as "kissing bugs". The symptoms change throughout the infection. In the early stage, symptoms are typically either not present or mild and may include fever, swollen lymph nodes, headaches, or swelling at the site of the bite. After four to eight weeks, untreated individuals enter the chronic phase of disease, which in most cases does not result in further symptoms. Up to 45% of people with chronic infections develop heart disease 10–30 years after the initial illness, which can lead to heart failure. Digestive complications, including an enlarged esophagus or an enlarged colon, may also occur in up to 21% of people, and up to 10% of people may experience nerve damage. T. cruzi is commonly spread to humans and other mammals by the kissing bug's bite wound and the bug's infected feces. The disease may also be spread through blood transfusion, organ transplantation, consuming food or drink contaminated with the parasites, and vertical transmission (from a mother to her baby). Diagnosis of early disease is by finding the parasite in the blood using a microscope or detecting its DNA by polymerase chain reaction. Chronic disease is diagnosed by finding antibodies for T. cruzi in the blood. Prevention focuses on eliminating kissing bugs and avoiding their bites. This may involve the use of insecticides or bed-nets. Other preventive efforts include screening blood used for transfusions. Early infections are treatable with the medications benznidazole or nifurtimox, which usually cure the disease if given shortly after the person is infected, but become less effective the longer a person has had Chagas disease. When used in chronic disease, medication may delay or prevent the development of end-stage symptoms. Benznidazole and nifurtimox often cause side effects, including skin disorders, digestive system irritation, and neurological symptoms, which can result in treatment being discontinued. New drugs for Chagas disease are under development, and while experimental vaccines have been studied in animal models, a human vaccine has not been developed. It is estimated that 6.5 million people, mostly in Mexico, Central America and South America, have Chagas disease as of 2019, resulting in approximately 9,490 annual deaths. Most people with the disease are poor, and most do not realize they are infected. Large-scale population migrations have carried Chagas disease to new regions, which include the United States and many European countries. The disease affects more than 150 types of animals. The disease was first described in 1909 by Brazilian physician Carlos Chagas, after whom it is named. Chagas disease is classified as a neglected tropical disease. Signs and symptoms Chagas disease occurs in two stages: an acute stage, which develops one to two weeks after the insect bite, and a chronic stage, which develops over many years. The acute stage is often symptom-free. When present, the symptoms are typically minor and not specific to any particular disease. Signs and symptoms include fever, malaise, headache, and enlargement of the liver, spleen, and lymph nodes. Sometimes, people develop a swollen nodule at the site of infection, which is called "Romaña's sign" if it is on the eyelid, or a "chagoma" if it is elsewhere on the skin.
In rare cases (less than 1–5%), infected individuals develop severe acute disease, which can involve inflammation of the heart muscle, fluid accumulation around the heart, and inflammation of the brain and surrounding tissues, and may be life-threatening. The acute phase typically lasts four to eight weeks and resolves without treatment. Unless treated with antiparasitic drugs, individuals remain infected with after recovering from the acute phase. Most chronic infections are asymptomatic, which is referred to as indeterminate chronic Chagas disease. However, over decades with the disease, approximately 30–40% of people develop organ dysfunction (determinate chronic Chagas disease), which most often affects the heart or digestive system. The most common long-term manifestation is heart disease, which occurs in 14–45% of people with chronic Chagas disease. People with Chagas heart disease often experience heart palpitations, and sometimes fainting, due to irregular heart function. By electrocardiogram, people with Chagas heart disease most frequently have arrhythmias. As the disease progresses, the heart's ventricles become enlarged (dilated cardiomyopathy), which reduces its ability to pump blood. In many cases the first sign of Chagas heart disease is heart failure, thromboembolism, or chest pain associated with abnormalities in the microvasculature. Also common in chronic Chagas disease is damage to the digestive system, which affects 10–21% of people. Enlargement of the esophagus or colon are the most common digestive issues. Those with enlarged esophagus often experience pain (odynophagia) or trouble swallowing (dysphagia), acid reflux, cough, and weight loss. Individuals with enlarged colon often experience constipation, and may develop severe blockage of the intestine or its blood supply. Up to 10% of chronically infected individuals develop nerve damage that can result in numbness and altered reflexes or movement. While chronic disease typically develops over decades, some individuals with Chagas disease (less than 10%) progress to heart damage directly after acute disease. Signs and symptoms differ for people infected with through less common routes. People infected through ingestion of parasites tend to develop severe disease within three weeks of consumption, with symptoms including fever, vomiting, shortness of breath, cough, and pain in the chest, abdomen, and muscles. Those infected congenitally typically have few to no symptoms, but can have mild non-specific symptoms, or severe symptoms such as jaundice, respiratory distress, and heart problems. People infected through organ transplant or blood transfusion tend to have symptoms similar to those of vector-borne disease, but the symptoms may not manifest for anywhere from a week to five months. Chronically infected individuals who become immunosuppressed due to HIV infection can have particularly severe and distinct disease, most commonly characterized by inflammation in the brain and surrounding tissue or brain abscesses. Symptoms vary widely based on the size and location of brain abscesses, but typically include fever, headaches, seizures, loss of sensation, or other neurological issues that indicate particular sites of nervous system damage. Occasionally, these individuals also experience acute heart inflammation, skin lesions, and disease of the stomach, intestine, or peritoneum. 
Cause Chagas disease is caused by infection with the protozoan parasite , which is typically introduced into humans through the bite of triatomine bugs, also called "kissing bugs". When the insect defecates at the bite site, motile forms called trypomastigotes enter the bloodstream and invade various host cells. Inside a host cell, the parasite transforms into a replicative form called an amastigote, which undergoes several rounds of replication. The replicated amastigotes transform back into trypomastigotes, which burst the host cell and are released into the bloodstream. Trypomastigotes then disseminate throughout the body to various tissues, where they invade cells and replicate. Over many years, cycles of parasite replication and immune response can severely damage these tissues, particularly the heart and digestive tract. Transmission T. cruzi can be transmitted by various triatomine bugs in the genera Triatoma, Panstrongylus, and Rhodnius. The primary vectors for human infection are the species of triatomine bugs that inhabit human dwellings, namely Triatoma infestans, Rhodnius prolixus, Triatoma dimidiata and Panstrongylus megistus. These insects are known by a number of local names, including vinchuca in Argentina, Bolivia, Chile and Paraguay, barbeiro (the barber) in Brazil, pito in Colombia, chinche in Central America, and chipo in Venezuela. The bugs tend to feed at night, preferring moist surfaces near the eyes or mouth. A triatomine bug can become infected with when it feeds on an infected host. replicates in the insect's intestinal tract and is shed in the bug's feces. When an infected triatomine feeds, it pierces the skin and takes in a blood meal, defecating at the same time to make room for the new meal. The bite is typically painless, but causes itching. Scratching at the bite introduces the -laden feces into the bite wound, initiating infection. In addition to classical vector spread, Chagas disease can be transmitted through the consumption of food or drink contaminated with triatomine insects or their feces. Since heating or drying kills the parasites, drinks and especially fruit juices are the most frequent source of infection. This oral route of transmission has been implicated in several outbreaks, where it led to unusually severe symptoms, likely due to infection with a higher parasite load than from the bite of a triatomine bug—a single crushed triatomine in a food or beverage harboring T cruzi can contain about 600,000 metacyclic trypomastigotes, while triatomine fecal matter contains 3,000-4,000 per μL. T. cruzi can be transmitted independent of the triatomine bug during blood transfusion, following organ transplantation, or across the placenta during pregnancy. Transfusion with the blood of an infected donor infects the recipient 10–25% of the time. To prevent this, blood donations are screened for in many countries with endemic Chagas disease, as well as the United States. Similarly, transplantation of solid organs from an infected donor can transmit to the recipient. This is especially true for heart transplant, which transmits T. cruzi 75–100% of the time, and less so for transplantation of the liver (0–29%) or a kidney (0–19%). An infected mother can pass to her child through the placenta; this occurs in up to 15% of births by infected mothers. As of 2019, 22.5% of new infections occurred through congenital transmission. 
Pathophysiology In the acute phase of the disease, signs and symptoms are caused directly by the replication of and the immune system's response to it. During this phase, can be found in various tissues throughout the body and circulating in the blood. During the initial weeks of infection, parasite replication is brought under control by the production of antibodies and activation of the host's inflammatory response, particularly cells that target intracellular pathogens such as NK cells and macrophages, driven by inflammation-signaling molecules like TNF-α and IFN-γ. During chronic Chagas disease, long-term organ damage develops over years due to continued replication of the parasite and damage from the immune system. Early in the course of the disease, is found frequently in the striated muscle fibers of the heart. As disease progresses, the heart becomes generally enlarged, with substantial regions of cardiac muscle fiber replaced by scar tissue and fat. Areas of active inflammation are scattered throughout the heart, with each housing inflammatory immune cells, typically macrophages and T cells. Late in the disease, parasites are rarely detected in the heart, and may be present at only very low levels. In the heart, colon, and esophagus, chronic disease leads to a massive loss of nerve endings. In the heart, this may contribute to arrhythmias and other cardiac dysfunction. In the colon and esophagus, loss of nervous system control is the major driver of organ dysfunction. Loss of nerves impairs the movement of food through the digestive tract, which can lead to blockage of the esophagus or colon and restriction of their blood supply. The parasite can insert kinetoplast DNA into host cells, an example of horizontal gene transfer. Vertical inheritance of the inserted kDNA has been demonstrated in rabbits and birds. In chickens, offspring carrying inserted kDNA show symptoms of disease despite carrying no live trypanosomes. In 2010, integrated kDNA was found to be vertically transmitted in five human families. Diagnosis The presence of T. cruzi in the blood is diagnostic of Chagas disease. During the acute phase of infection, it can be detected by microscopic examination of fresh anticoagulated blood, or its buffy coat, for motile parasites; or by preparation of thin and thick blood smears stained with Giemsa, for direct visualization of parasites. Blood smear examination detects parasites in 34–85% of cases. The sensitivity increases if techniques such as microhematocrit centrifugation are used to concentrate the blood. On microscopic examination of stained blood smears, trypomastigotes appear as S or U-shaped organisms with a flagellum connected to the body by an undulating membrane. A nucleus and a smaller structure called a kinetoplast are visible inside the parasite's body; the kinetoplast of is relatively large, which helps to distinguish it from other species of trypanosomes that infect humans. Alternatively, T. cruzi DNA can be detected by polymerase chain reaction (PCR). In acute and congenital Chagas disease, PCR is more sensitive than microscopy, and it is more reliable than antibody-based tests for the diagnosis of congenital disease because it is not affected by the transfer of antibodies against from a mother to her baby (passive immunity). PCR is also used to monitor levels in organ transplant recipients and immunosuppressed people, which allows infection or reactivation to be detected at an early stage. 
In chronic Chagas disease, the concentration of parasites in the blood is too low to be reliably detected by microscopy or PCR, so the diagnosis is usually made using serological tests, which detect immunoglobulin G antibodies against in the blood. Two positive serology results, using different test methods, are required to confirm the diagnosis. If the test results are inconclusive, additional testing methods such as Western blot can be used. Various rapid diagnostic tests for Chagas disease are available. These tests are easily transported and can be performed by people without special training. They are useful for screening large numbers of people and testing people who cannot access healthcare facilities, but their sensitivity is relatively low, and it is recommended that a second method is used to confirm a positive result. T. cruzi parasites can be grown from blood samples by blood culture, xenodiagnosis, or by inoculating animals with the person's blood. In the blood culture method, the person's red blood cells are separated from the plasma and added to a specialized growth medium to encourage multiplication of the parasite. It can take up to six months to obtain the result. Xenodiagnosis involves feeding the blood to triatomine insects, and then examining their feces for the parasite 30 to 60 days later. These methods are not routinely used, as they are slow and have low sensitivity. Prevention Efforts to prevent Chagas disease have largely focused on vector control to limit exposure to triatomine bugs. Insecticide-spraying programs have been the mainstay of vector control, consisting of spraying homes and the surrounding areas with residual insecticides. This was originally done with organochlorine, organophosphate, and carbamate insecticides, which were supplanted in the 1980s with pyrethroids. These programs have drastically reduced transmission in Brazil and Chile, and eliminated major vectors from certain regions: Triatoma infestans from Brazil, Chile, Uruguay, and parts of Peru and Paraguay, as well as Rhodnius prolixus from Central America. Vector control in some regions has been hindered by the development of insecticide resistance among triatomine bugs. In response, vector control programs have implemented alternative insecticides (e.g. fenitrothion and bendiocarb in Argentina and Bolivia), treatment of domesticated animals (which are also fed on by triatomine bugs) with pesticides, pesticide-impregnated paints, and other experimental approaches. In areas with triatomine bugs, transmission of can be prevented by sleeping under bed nets and by housing improvements that prevent triatomine bugs from colonizing houses. Blood transfusion was formerly the second-most common mode of transmission for Chagas disease. can survive in refrigerated stored blood, and can survive freezing and thawing, allowing it to persist in whole blood, packed red blood cells, granulocytes, cryoprecipitate, and platelets. The development and implementation of blood bank screening tests have dramatically reduced the risk of infection during a blood transfusion. Nearly all blood donations in Latin American countries undergo Chagas screening. Widespread screening is also common in non-endemic nations with significant populations of immigrants from endemic areas, including the United Kingdom (implemented in 1999), Spain (2005), the United States (2007), France and Sweden (2009), Switzerland (2012), and Belgium (2013). 
Serological tests, typically ELISAs, are used to detect antibodies against proteins in donor blood. Other modes of transmission have been targeted by Chagas disease prevention programs. Treating -infected mothers during pregnancy reduces the risk of congenital transmission of the infection. To this end, many countries in Latin America have implemented routine screening of pregnant women and infants for infection, and the World Health Organization recommends screening all children born to infected mothers to prevent congenital infection from developing into chronic disease. Similarly to blood transfusions, many countries with endemic Chagas disease screen organs for transplantation with serological tests. There is no vaccine against Chagas disease. Several experimental vaccines have been tested in animals infected with and were able to reduce parasite numbers in the blood and heart, but no vaccine candidates had undergone clinical trials in humans as of 2016. Management Chagas disease is managed using antiparasitic drugs to eliminate T. cruzi from the body, and symptomatic treatment to address the effects of the infection. As of 2018, benznidazole and nifurtimox were the antiparasitic drugs of choice for treating Chagas disease, though benznidazole is the only drug available in most of Latin America. For either drug, treatment typically consists of two to three oral doses per day for 60 to 90 days. Antiparasitic treatment is most effective early in the course of infection: it eliminates from 50 to 80% of people in the acute phase (WHO: "nearly 100 %"), but only 20–60% of those in the chronic phase. Treatment of chronic disease is more effective in children than in adults, and the cure rate for congenital disease approaches 100% if treated in the first year of life. Antiparasitic treatment can also slow the progression of the disease and reduce the possibility of congenital transmission. Elimination of does not cure the cardiac and gastrointestinal damage caused by chronic Chagas disease, so these conditions must be treated separately. Antiparasitic treatment is not recommended for people who have already developed dilated cardiomyopathy. Benznidazole is usually considered the first-line treatment because it has milder adverse effects than nifurtimox, and its efficacy is better understood. Both benznidazole and nifurtimox have common side effects that can result in treatment being discontinued. The most common side effects of benznidazole are skin rash, digestive problems, decreased appetite, weakness, headache, and sleeping problems. These side effects can sometimes be treated with antihistamines or corticosteroids, and are generally reversed when treatment is stopped. However, benznidazole is discontinued in up to 29% of cases. Nifurtimox has more frequent side effects, affecting up to 97.5% of individuals taking the drug. The most common side effects are loss of appetite, weight loss, nausea and vomiting, and various neurological disorders including mood changes, insomnia, paresthesia and peripheral neuropathy. Treatment is discontinued in up to 75% of cases. Both drugs are contraindicated for use in pregnant women and people with liver or kidney failure. As of 2019, resistance to these drugs has been reported. Complications In the chronic stage, treatment involves managing the clinical manifestations of the disease. The treatment of Chagas cardiomyopathy is similar to that of other forms of heart disease. 
Beta blockers and ACE inhibitors may be prescribed, but some people with Chagas disease may not be able to take the standard dose of these drugs because they have low blood pressure or a low heart rate. To manage irregular heartbeats, people may be prescribed anti-arrhythmic drugs such as amiodarone, or have a pacemaker implanted. Blood thinners may be used to prevent thromboembolism and stroke. Chronic heart disease caused by untreated T. cruzi infection is a common reason for heart transplantation surgery. Because transplant recipients take immunosuppressive drugs to prevent organ rejection, they are monitored using PCR to detect reactivation of the disease. People with Chagas disease who undergo heart transplantation have higher survival rates than the average heart transplant recipient. Mild gastrointestinal disease may be treated symptomatically, such as by using laxatives for constipation or taking a prokinetic drug like metoclopramide before meals to relieve esophageal symptoms. Surgery to sever the muscles of the lower esophageal sphincter (cardiomyotomy) may be performed in more severe cases of esophageal disease, and surgical removal of the affected part of the organ may be required for advanced megacolon and megaesophagus. Epidemiology In 2019, an estimated 6.5 million people worldwide had Chagas disease, with approximately 173,000 new infections and 9,490 deaths each year. The disease resulted in a global annual economic burden estimated at US$7.2 billion in 2013, 86% of which is borne by endemic countries. Chagas disease results in the loss of over 800,000 disability-adjusted life years each year. The endemic area of Chagas disease stretches from the southern United States to northern Chile and Argentina, with Bolivia (6.1%), Argentina (3.6%), and Paraguay (2.1%) exhibiting the highest prevalence of the disease. Within continental Latin America, Chagas disease is endemic to 21 countries: Argentina, Belize, Bolivia, Brazil, Chile, Colombia, Costa Rica, Ecuador, El Salvador, French Guiana, Guatemala, Guyana, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Suriname, Uruguay, and Venezuela. In endemic areas, due largely to vector control efforts and screening of blood donations, annual infections and deaths have fallen by 67% and more than 73% respectively from their peaks in the 1980s to 2010. Transmission by insect vector and blood transfusion has been completely interrupted in Uruguay (1997), Chile (1999), and Brazil (2006), and in Argentina, vectorial transmission had been interrupted in 13 of the 19 endemic provinces as of 2001. During Venezuela's humanitarian crisis, vectorial transmission has begun occurring in areas where it had previously been interrupted, and Chagas disease seroprevalence rates have increased. Transmission rates have also risen in the Gran Chaco region due to insecticide resistance and in the Amazon basin due to oral transmission. While the rate of vector-transmitted Chagas disease has declined throughout most of Latin America, the rate of orally transmitted disease has risen, possibly due to increasing urbanization and deforestation bringing people into closer contact with triatomines and altering the distribution of triatomine species. Orally transmitted Chagas disease is of particular concern in Venezuela, where 16 outbreaks have been recorded between 2007 and 2018. Chagas exists in two different ecological zones. In the Southern Cone region, the main vector lives in and around human homes. 
In Central America and Mexico, the main vector species lives both inside dwellings and in uninhabited areas. In both zones, Chagas occurs almost exclusively in rural areas, where also circulates in wild and domestic animals. commonly infects more than 100 species of mammals across Latin America including opossums (Didelphis spp.), armadillos, marmosets, bats, various rodents and dogs all of which can be infected by the vectors or orally by eating triatomine bugs and other infected animals. For entomophagous animals this is a common mode. Didelphis spp. are unique in that they do not require the triatomine for transmission, completing the life cycle through their own urine and feces. Veterinary transmission also occurs through vertical transmission through the placenta, blood transfusion and organ transplants. Non-endemic countries Though Chagas is traditionally considered a disease of rural Latin America, international migration has dispersed those with the disease to numerous non-endemic countries, primarily in North America and Europe. As of 2020, approximately 300,000 infected people are living in the United States, and in 2018 it was estimated that 30,000 to 40,000 people in the United States had Chagas cardiomyopathy. The vast majority of cases in the United States occur in immigrants from Latin America, but local transmission is possible. Eleven triatomine species are native to the United States, and some southern states have persistent cycles of disease transmission between insect vectors and animal reservoirs, which include woodrats, possums, raccoons, armadillos and skunks. However, locally acquired infection is very rare: only 28 cases were documented from 1955 to 2015. As of 2013, the cost of treatment in the United States was estimated to be US$900 million annually (global cost $7 billion), which included hospitalization and medical devices such as pacemakers. Chagas disease affected approximately 68,000 to 123,000 people in Europe as of 2019. Spain, which has a high rate of immigration from Latin America, has the highest prevalence of the disease. It is estimated that 50,000 to 70,000 people in Spain are living with Chagas disease, accounting for the majority of European cases. The prevalence varies widely within European countries due to differing immigration patterns. Italy has the second highest prevalence, followed by the Netherlands, the United Kingdom, and Germany. History T. cruzi likely circulated in South American mammals long before the arrival of humans on the continent. has been detected in ancient human remains across South America, from a 9000-year-old Chinchorro mummy in the Atacama Desert, to remains of various ages in Minas Gerais, to an 1100-year-old mummy as far north as the Chihuahuan Desert near the Rio Grande. Many early written accounts describe symptoms consistent with Chagas disease, with early descriptions of the disease sometimes attributed to Miguel Diaz Pimenta (1707), (1735), and Theodoro J. H. Langgaard (1842). The formal description of Chagas disease was made by Carlos Chagas in 1909 after examining a two-year-old girl with fever, swollen lymph nodes, and an enlarged spleen and liver. Upon examination of her blood, Chagas saw trypanosomes identical to those he had recently identified from the hindgut of triatomine bugs and named Trypanosoma cruzi in honor of his mentor, Brazilian physician Oswaldo Cruz. 
He sent infected triatomine bugs to Cruz in Rio de Janeiro, who showed the bite of the infected triatomine could transmit to marmoset monkeys as well. In just two years, 1908 and 1909, Chagas published descriptions of the disease, the organism that caused it, and the insect vector required for infection. Almost immediately thereafter, at the suggestion of Miguel Couto, then professor of the , the disease was widely referred to as "Chagas disease". Chagas' discovery brought him national and international renown, but in highlighting the inadequacies of the Brazilian government's response to the disease, Chagas attracted criticism to himself and to the disease that bore his name, stifling research on his discovery and likely frustrating his nomination for the Nobel Prize in 1921. In the 1930s, Salvador Mazza rekindled Chagas disease research, describing over a thousand cases in Argentina's Chaco Province. In Argentina, the disease is known as mal de Chagas-Mazza in his honor. Serological tests for Chagas disease were introduced in the 1940s, demonstrating that infection with was widespread across Latin America. This, combined with successes eliminating the malaria vector through insecticide use, spurred the creation of public health campaigns focused on treating houses with insecticides to eradicate triatomine bugs. The 1950s saw the discovery that treating blood with crystal violet could eradicate the parasite, leading to its widespread use in transfusion screening programs in Latin America. Large-scale control programs began to take form in the 1960s, first in São Paulo, then various locations in Argentina, then national-level programs across Latin America. These programs received a major boost in the 1980s with the introduction of pyrethroid insecticides, which did not leave stains or odors after application and were longer-lasting and more cost-effective. Regional bodies dedicated to controlling Chagas disease arose through support of the Pan American Health Organization, with the Initiative of the Southern Cone for the Elimination of Chagas Diseases launching in 1991, followed by the Initiative of the Andean countries (1997), Initiative of the Central American countries (1997), and the Initiative of the Amazon countries (2004). Research Treatments Fexinidazole, an antiparasitic drug approved for treating African trypanosomiasis, has shown activity against Chagas disease in animal models. As of 2019, it is undergoing phase II clinical trials for chronic Chagas disease in Spain. Other drug candidates include GNF6702, a proteasome inhibitor that is effective against Chagas disease in mice and is undergoing preliminary toxicity studies, and AN4169, which has had promising results in animal models. Several experimental vaccines have been tested in animals. In addition to subunit vaccines, some approaches have involved vaccination with attenuated parasites or organisms that express some of the same antigens as but do not cause human disease, such as Trypanosoma rangeli or Phytomonas serpens. DNA vaccination has also been explored. As of 2019, vaccine research has mainly been limited to small animal models. Diagnostic tests As of 2018, standard diagnostic tests for Chagas disease were limited in their ability to measure the effectiveness of antiparasitic treatment, as serological tests may remain positive for years after is eliminated from the body, and PCR may give false-negative results when the parasite concentration in the blood is low. 
Several potential biomarkers of treatment response are under investigation, such as immunoassays against specific antigens, flow cytometry testing to detect antibodies against different life stages of T. cruzi, and markers of physiological changes caused by the parasite, such as alterations in coagulation and lipid metabolism. Another research area is the use of biomarkers to predict the progression of chronic disease. Serum levels of tumor necrosis factor alpha, brain and atrial natriuretic peptide, and angiotensin-converting enzyme 2 have been studied as indicators of the prognosis of Chagas cardiomyopathy. T. cruzi shed acute-phase antigen (SAPA), which can be detected in blood using ELISA or Western blot, has been used as an indicator of early acute and congenital infection. An assay for T. cruzi antigens in urine has been developed to diagnose congenital disease.
Biology and health sciences
Protozoan infections
Health
7025
https://en.wikipedia.org/wiki/Cranberry
Cranberry
Cranberries are a group of evergreen dwarf shrubs or trailing vines in the subgenus Oxycoccus of the genus Vaccinium. Cranberries are low, creeping shrubs or vines up to long and in height; they have slender stems that are not thickly woody and have small evergreen leaves. The flowers are dark pink. The fruit is a berry that is larger than the leaves of the plant; it is initially light green, turning red when ripe. It is edible, but has an acidic taste. In Britain, cranberry may refer to the native species Vaccinium oxycoccos, while in North America, cranberry may refer to V. macrocarpon. Vaccinium oxycoccos is cultivated in central and northern Europe, while V. macrocarpon is cultivated throughout the northern United States, Canada and Chile. In some methods of classification, Oxycoccus is regarded as a genus in its own right. Cranberries can be found in acidic bogs throughout the cooler regions of the Northern Hemisphere. In 2020, the U.S., Canada, and Chile accounted for 97% of the world production of cranberries. Most cranberries are processed into products such as juice, sauce, jam, and sweetened dried cranberries, with the remainder sold fresh to consumers. Cranberry sauce is a traditional accompaniment to turkey at Christmas and Thanksgiving dinners in the U.S. and Canada, and at Christmas dinner in the United Kingdom. Description and species Cranberries are low, creeping shrubs or vines up to long and in height; they have slender, wiry stems that are not thickly woody and have small evergreen leaves. The flowers are dark pink, with very distinct reflexed petals, leaving the style and stamens fully exposed and pointing forward. They are pollinated by bees. The fruit is a berry that is larger than the leaves of the plant; it is initially light green, turning red when ripe. It has an acidic taste which usually overwhelms its sweetness. There are 4–5 species of cranberry, classified by subgenus: Subgenus Oxycoccus Subgenus Oxycoccus, sect. Oxycoccoides Similar species Cranberries are related to bilberries, blueberries, and huckleberries, all in Vaccinium subgenus Vaccinium. These differ in having bell-shaped flowers, petals that are not reflexed, and woodier stems, forming taller shrubs. Etymology The name cranberry derives from the Middle Low German kraanbere (English translation, craneberry), first named as cranberry in English by the missionary John Eliot in 1647. Around 1694, German and Dutch colonists in New England used the word cranberry to represent the expanding flower, stem, calyx, and petals resembling the neck, head, and bill of a crane. The traditional English name for the species more common in Europe, Vaccinium oxycoccos, is fenberry, which originated from plants with small red berries found growing in fen (marsh) lands of England. Cultivation American Revolutionary War veteran Henry Hall first cultivated cranberries in the Cape Cod town of Dennis around 1816. In the 1820s, Hall was shipping cranberries to New York City and Boston, from which shipments were also sent to Europe. In 1843, Eli Howes planted his own crop of cranberries on Cape Cod, using the "Howes" variety. In 1847, Cyrus Cahoon planted a crop of the "Early Black" variety near Pleasant Lake, Harwich, Massachusetts. By 1900, were under cultivation in the New England region. In 2021, the total output of cranberries harvested in the United States was , with Wisconsin as the largest state producer (59% of total), followed by Massachusetts, New Jersey, and Oregon. Cranberries have had two major breeding events. 
The first occurred in the 1920s, with the aim of creating a crop more resistant to insects, specifically the blunt-nosed leafhopper (Limotettix vaccinii), the vector of cranberry false blossom disease. This resulted in cultivars such as "Stevens" and "Franklin". As a result, older cultivars like "Howes" tend to be more susceptible to insects than "Stevens". However, with the introduction of many broad-spectrum pesticides in the 1940s and 1950s, breeders eventually stopped breeding for pest resistance. Instead, beginning in the 1980s, breeding focused on high-yielding varieties, leading to cultivars such as "Crimson Queen" and "Mullica Queen". Many of these varieties were developed by Dr. Nicholi Vorsa of Rutgers University. In more recent years, there have been heavier restrictions on pesticides due to environmental safety concerns, leading to a greater emphasis on high-yield, high-resistance varieties. Geography and bog method Historically, cranberry beds were constructed in wetlands. Today's cranberry beds are constructed in upland areas with a shallow water table. The topsoil is scraped off to form dykes around the bed perimeter. Clean sand is hauled in and spread to a depth of . The surface is laser-leveled flat to provide even drainage. Beds are frequently drained with socked tile in addition to the perimeter ditch. In addition to making it possible to hold water, the dykes allow equipment to service the beds without driving on the vines. Irrigation equipment is installed in the bed to provide irrigation for vine growth and for spring and autumn frost protection. A common misconception about cranberry production is that the beds remain flooded throughout the year. During the growing season cranberry beds are not flooded, but are irrigated regularly to maintain soil moisture. Beds are flooded in the autumn to facilitate harvest and again during the winter to protect against low temperatures. In cold climates like Wisconsin, New England, and eastern Canada, the winter flood typically freezes into ice, while in warmer climates the water remains liquid. When ice forms on the beds, trucks can be driven onto the ice to spread a thin layer of sand to control pests and rejuvenate the vines. Sanding is done every three to five years. Propagation Cranberry vines are propagated by moving vines from an established bed. The vines are spread on the surface of the sand of the new bed and pushed into the sand with a blunt disk. The vines are watered frequently during the first few weeks until roots form and new shoots grow. Beds are given frequent, light applications of nitrogen fertilizer during the first year. The cost of renovating cranberry beds is estimated to be between . Ripening and harvest Cranberries are harvested in the fall when the fruit takes on its distinctive deep red color, ideally after the first frost. Berries that receive sun turn a deep red when fully ripe, while those that do not fully mature are a pale pink or white color. This is usually in September through the first part of November. To harvest cranberries, the beds are flooded with of water above the vines. A harvester is driven through the beds to remove the fruit from the vines. For the past 50 years, water-reel type harvesters have been used. Harvested cranberries float in the water and can be corralled into a corner of the bed and conveyed or pumped from the bed. 
From the farm, cranberries are taken to receiving stations where they are cleaned, sorted, and stored prior to packaging or processing. While cranberries are harvested when they take on their deep red color, they can also be harvested beforehand when they are still white, which is how white cranberry juice is made. Yields are lower on beds harvested early and the early flooding tends to damage vines, but not severely. Vines can also be trained through dry picking to help avoid damage in subsequent harvests. Although most cranberries are wet-picked as described above, 5–10% of the US crop is still dry-picked. This entails higher labor costs and lower yield, but dry-picked berries are less bruised and can be sold as fresh fruit instead of having to be immediately frozen or processed. Originally performed with two-handed comb scoops, dry picking is today accomplished by motorized, walk-behind harvesters which must be small enough to traverse beds without damaging the vines. Cranberries for fresh market are stored in shallow bins or boxes with perforated or slatted bottoms, which deter decay by allowing air to circulate. Because harvest occurs in late autumn, cranberries for fresh market are frequently stored in thick-walled barns without mechanical refrigeration. Temperatures are regulated by opening and closing vents in the barn as needed. Cranberries destined for processing are usually frozen in bulk containers shortly after arriving at a receiving station. Diseases Diseases of cranberry include cranberry fruit rot; cranberry root rot; and cranberry false blossom disease, which is caused by a phytoplasma vectored by the blunt-nosed leafhopper (Limotettix vaccinii) and prevents the plant from creating fertile flowers and thus berries. Insect pests Probably because of the plant's high phenolic content and other defenses, along with the harsh environments in which cranberries are grown (acidic, sandy soils that are flooded every year), most insect pests associated with cranberries are native to the cranberry's home range of North America. The most studied insect pests of cranberries include Sparganothis sulfureana (Sparganothis fruitworm), a leafrolling moth; Acrobasis vaccinii (cranberry fruitworm), a snout moth; Rhopobota naevana (blackheaded fireworm), a leafrolling moth; and Choristoneura parallela (spotted fireworm), a leafrolling moth. All four of these are direct pests, eating the berries. Other well-studied cranberry pests include Limotettix vaccinii (blunt-nosed leafhopper), a leafhopper; Lymantria dispar (spongy moth), an invasive moth; Dasineura oxycoccana (cranberry tipworm), a gall-forming midge; Chrysoteuchia topiarius (cranberry girdler), a snout moth; Anthonomus musculus (cranberry weevil), a weevil; Systena frontalis (red-headed flea beetle), a flea beetle; Otiorhynchus sulcatus (black vine weevil), an invasive weevil; and Anomala orientalis (oriental beetle), an invasive scarab beetle. As more pesticides are banned over environmental concerns, resurgences of secondary pests have become increasingly common. Production In 2022, world production of cranberry was 582,924 tonnes, with the United States and Canada together accounting for 99% of the total. Wisconsin (59% of US production) and Quebec (60% of Canadian production) are the largest producing regions in their respective countries. Cranberries are also a major commercial crop in Massachusetts, New Jersey, Oregon, and Washington, as well as in the Canadian province of British Columbia (33% of Canadian production). 
Possible safety concerns The anticoagulant effects of warfarin may be increased by consuming cranberry juice, resulting in adverse effects such as increased incidence of bleeding and bruising. Other safety concerns from consuming large quantities of cranberry juice or using cranberry supplements include the potential for nausea, increased stomach inflammation, increased sugar intake, or kidney stone formation. Uses Nutrition Raw cranberries are 87% water, 12% carbohydrates, and contain negligible protein and fat (table). In a 100 gram reference amount, raw cranberries supply 46 calories and moderate levels of vitamin C, dietary fiber, and the essential dietary mineral manganese, each with more than 10% of its Daily Value. Other micronutrients have low content (table). Dried cranberries are commonly processed with up to 10 times their natural sugar content. The drying process also eliminates vitamin C content. History In North America, the Narragansett people of the Algonquian nation in the regions of New England appeared to be using cranberries in pemmican for food and for dye. Calling the red berries sasemineash, the Narragansett people may have introduced cranberries to colonists in Massachusetts. In 1550, James White Norwood made the first known reference to Native Americans using American cranberries. In James Rosier's book The Land of Virginia there is an account of Europeans coming ashore and being met with Native Americans bearing bark cups full of cranberries. In Plymouth, Massachusetts, there is a 1633 account of the husband of Mary Ring auctioning her cranberry-dyed petticoat for 16 shillings. In 1643, Roger Williams's book A Key into the Language of America described cranberries, referring to them as "bearberries" because bears ate them. In 1648, preacher John Elliott was quoted in Thomas Shepard's book Clear Sunshine of the Gospel with an account of the difficulties the Pilgrims were having in using the Indians to harvest cranberries as they preferred to hunt and fish. In 1663, the Pilgrim cookbook appeared with a recipe for cranberry sauce. In 1667, New Englanders sent to King Charles ten barrels of cranberries, three barrels of codfish and some Indian corn as a means of appeasement for his anger over their local coining of the pine tree shilling minted by John Hull. In 1669, Captain Richard Cobb had a banquet in his house (to celebrate both his marriage to Mary Gorham and his election to the Convention of Assistance), serving wild turkey with sauce made from wild cranberries. In the 1672 book New England Rarities Discovered author John Josselyn described cranberries, writing: Sauce for the Pilgrims, cranberry or bearberry, is a small trayling plant that grows in salt marshes that are overgrown with moss. The berries are of a pale yellow color, afterwards red, as big as a cherry, some perfectly round, others oval, all of them hollow with sower astringent taste; they are ripe in August and September. They are excellent against the Scurvy. They are also good to allay the fervor of hoof diseases. The Indians and English use them mush, boyling them with sugar for sauce to eat with their meat; and it is a delicate sauce, especially with roasted mutton. Some make tarts with them as with gooseberries. The Compleat Cook's Guide, published in 1683, made reference to cranberry juice. In 1703, cranberries were served at the Harvard University commencement dinner. 
In 1787, James Madison wrote to Thomas Jefferson in France asking for background information on constitutional government to use at the Constitutional Convention. Jefferson sent back a number of books on the subject and in return asked for a gift of apples, pecans and cranberries. William Aiton, a Scottish botanist, included an entry for the cranberry in volume II of his 1789 work Hortus Kewensis. He noted that Vaccinium macrocarpon (American cranberry) was cultivated by James Gordon in 1760. In 1796, cranberries were served at the first celebration of the landing of the Pilgrims, and Amelia Simmons (an American orphan) wrote a book entitled American Cookery which contained a recipe for cranberry tarts. Products As fresh cranberries are hard, sour, and bitter, about 95% of cranberries are processed and used to make cranberry juice and sauce. They are also sold dried and sweetened. Cranberry juice is usually sweetened or blended with other fruit juices to reduce its natural tartness. At four teaspoons of sugar per 100 grams (one teaspoon per ounce), cranberry juice cocktail is more highly sweetened than even soda drinks that have been linked to obesity. Usually cranberries as fruit are cooked into a compote or jelly, known as cranberry sauce. Such preparations are traditionally served with roast turkey, as a staple of Thanksgiving (both in Canada and in the United States) as well as English dinners. The berry is also used in baking (muffins, scones, cakes and breads). In baking it is often combined with orange or orange zest. Less commonly, cranberries are used to add tartness to savory dishes such as soups and stews. Fresh cranberries can be frozen at home, and will keep up to nine months; they can be used directly in recipes without thawing. There are several alcoholic cocktails, including the cosmopolitan, that include cranberry juice. Urinary tract infections A 2023 Cochrane systematic review of 50 studies concluded that there is evidence that consuming cranberry products (such as juice or capsules) is effective for reducing the risk of urinary tract infections (UTIs) in women with recurrent UTIs, in children, and in people susceptible to UTIs following clinical interventions; there was little evidence of effect in elderly people, people with urination disorders, or pregnant women. Meta-analyses of the weaker evidence available on cranberry products for preventing or treating UTIs show large variation and uncertainty of effects, resulting from inconsistent clinical research designs and inadequate numbers of subjects. In 2014, the European Food Safety Authority reviewed the evidence for one brand of cranberry extract and concluded that a cause-and-effect relationship had not been established between cranberry consumption and reduced risk of UTIs. A 2022 review of international urology guidelines on UTI found that most clinical organizations felt the evidence for use of cranberry products to inhibit UTIs was conflicting, unconvincing or weak. Research Phytochemicals Raw cranberries, cranberry juice and cranberry extracts are a source of polyphenols – including proanthocyanidins, flavonols and quercetin. These phytochemical compounds are being studied in vivo and in vitro for possible effects on the cardiovascular system, immune system and cancer. However, there is no confirmation from human studies that consuming cranberry polyphenols provides anti-cancer, immune, or cardiovascular benefits. 
Their potential is limited by poor absorption and rapid excretion. Cranberry juice contains a high-molecular-weight non-dialyzable material that is under research for its potential to affect formation of plaque by Streptococcus mutans pathogens that cause tooth decay. Cranberry juice components are also being studied for possible effects on kidney stone formation. Extract quality Problems may arise from the lack of validated methods for quantifying A-type proanthocyanidins (PACs) extracted from cranberries. For instance, PAC extract quality and content can be assessed using different methods, including the European Pharmacopoeia method, liquid chromatography–mass spectrometry, or a modified 4-dimethylaminocinnamaldehyde colorimetric method. Variations in extract analysis can lead to difficulties in assessing the quality of PAC extracts from different cranberry starting material, such as regional origin, ripeness at time of harvest, and post-harvest processing. Assessments show that quality varies greatly from one commercial PAC extract product to another. Marketing and economics United States Cranberry sales in the United States have traditionally been associated with the Thanksgiving and Christmas holidays. Large-scale cranberry cultivation has developed further in the U.S. than in other countries. American cranberry growers have a long history of cooperative marketing. As early as 1904, John Gaynor, a Wisconsin grower, and A.U. Chaney, a fruit broker from Des Moines, Iowa, organized Wisconsin growers into a cooperative called the Wisconsin Cranberry Sales Company to receive a uniform price from buyers. Growers in New Jersey and Massachusetts were also organized into cooperatives, creating the National Fruit Exchange that marketed fruit under the Eatmor brand. The success of cooperative marketing almost led to its failure. With prices consistently high, area and production doubled between 1903 and 1917, and prices then fell. With surplus cranberries and changing American households, some enterprising growers began canning cranberries that were below-grade for fresh market. Competition between canners was fierce because profits were thin. The Ocean Spray cooperative was established in 1930 through a merger of three primary processing companies: Ocean Spray Preserving company, Makepeace Preserving Co, and Cranberry Products Co. The new company was called Cranberry Canners, Inc. and used the Ocean Spray label on its products. Since the new company represented over 90% of the market, it would have been illegal under American antitrust laws had attorney John Quarles not found an exemption for agricultural cooperatives. About 65% of the North American industry belongs to the Ocean Spray cooperative. In 1958, Morris April Brothers—who produced Eatmor brand cranberry sauce in Tuckahoe, New Jersey—brought an action against Ocean Spray for violation of the Sherman Antitrust Act and won $200,000 in real damages plus triple damages, just in time for the Great Cranberry Scare: on 9 November 1959, Secretary of the United States Department of Health, Education, and Welfare Arthur S. Flemming announced that some of the 1959 cranberry crop was tainted with traces of the herbicide aminotriazole. The market for cranberries collapsed and growers lost millions of dollars. However, the scare taught the industry that they could not be completely dependent on the holiday market for their products; they had to find year-round markets for their fruit. They also had to be exceedingly careful about their use of pesticides. 
After the aminotriazole scare, Ocean Spray reorganized and spent substantial sums on product development. New products such as cranberry-apple juice blends were introduced, followed by other juice blends. Prices and production increased steadily during the 1980s and 1990s. Prices peaked at about $65.00 per barrel ()—a cranberry barrel equals —in 1996, then fell to $18.00 per barrel () in 2001. The cause for the precipitous drop was classic oversupply. Production had outpaced consumption, leading to substantial inventory in freezers or as concentrate. Cranberry handlers (processors) include Ocean Spray, Cliffstar Corporation, Northland Cranberries Inc. (Sun Northland LLC), Clement Pappas & Co., and Decas Cranberry Products as well as a number of small handlers and processors. Cranberry Marketing Committee The Cranberry Marketing Committee is an organization that was established in 1962 as a Federal Marketing Order to ensure a stable, orderly supply of good-quality product. The order has been renewed and modified slightly over the years. The market order has been invoked during six crop years: 1962 (12%), 1963 (5%), 1970 (10%), 1971 (12%), 2000 (15%), and 2001 (35%). Even though supply still exceeds demand, there is little will to invoke the Federal Marketing Order out of the realization that any pullback in supply by U.S. growers would easily be filled by Canadian production. The Cranberry Marketing Committee, based in Wareham, Massachusetts, represents more than 1,100 cranberry growers and 60 cranberry handlers across Massachusetts, Rhode Island, Connecticut, New Jersey, Wisconsin, Michigan, Minnesota, Oregon, Washington and New York (Long Island). The authority for the actions taken by the Cranberry Marketing Committee is provided in Chapter IX, Title 7, Code of Federal Regulations, which is called the Federal Cranberry Marketing Order. The Order is part of the Agricultural Marketing Agreement Act of 1937, identifying cranberries as a commodity good that can be regulated by Congress. The Federal Cranberry Marketing Order has been altered over the years to expand the Cranberry Marketing Committee's ability to develop projects in the United States and around the world. The Cranberry Marketing Committee currently runs promotional programs in the United States, China, India, Mexico, Pan-Europe, and South Korea. International trade The European Union was the largest importer of American cranberries, followed individually by Canada, China, Mexico, and South Korea. From 2013 to 2017, U.S. cranberry exports to China grew exponentially, making China the second-largest importing country, with imports reaching $36 million in cranberry products. The China–United States trade war resulted in many Chinese businesses cutting off ties with their U.S. cranberry suppliers.
Biology and health sciences
Ericales
null
7034
https://en.wikipedia.org/wiki/Cruiser
Cruiser
A cruiser is a type of warship. Modern cruisers are generally the largest ships in a fleet after aircraft carriers and amphibious assault ships, and can usually perform several operational roles from search-and-destroy to ocean escort to sea denial. The term "cruiser", which has been in use for several hundred years, has changed its meaning over time. During the Age of Sail, the term cruising referred to certain kinds of missions—independent scouting, commerce protection, or raiding—usually fulfilled by frigates or sloops-of-war, which functioned as the cruising warships of a fleet. In the middle of the 19th century, cruiser came to be a classification of the ships intended for cruising distant waters, for commerce raiding, and for scouting for the battle fleet. Cruisers came in a wide variety of sizes, from the medium-sized protected cruiser to large armored cruisers that were nearly as big (although not as powerful or as well-armored) as a pre-dreadnought battleship. With the advent of the dreadnought battleship before World War I, the armored cruiser evolved into a vessel of similar scale known as the battlecruiser. The very large battlecruisers of the World War I era that succeeded armored cruisers were now classified, along with dreadnought battleships, as capital ships. By the early 20th century, after World War I, the direct successors to protected cruisers could be placed on a consistent scale of warship size, smaller than a battleship but larger than a destroyer. In 1922, the Washington Naval Treaty placed a formal limit on these cruisers, which were defined as warships of up to 10,000 tons displacement carrying guns no larger than 8 inches in calibre; the 1930 London Naval Treaty then divided them into two types: heavy cruisers, with guns larger than 6.1 inches and up to 8 inches, and light cruisers, with guns of 6.1 inches or less. Each type was limited in total and individual tonnage, which shaped cruiser design until the collapse of the treaty system just prior to the start of World War II. Some variations on the Treaty cruiser design included the German "pocket battleships", which had heavier armament at the expense of speed compared to standard heavy cruisers, and the American Alaska class, which was a scaled-up heavy cruiser design designated as a "cruiser-killer". In the later 20th century, the obsolescence of the battleship left the cruiser as the largest and most powerful surface combatant (as opposed to the aerial warfare role of aircraft carriers). The role of the cruiser varied according to ship and navy, often including air defense and shore bombardment. During the Cold War the Soviet Navy's cruisers had heavy anti-ship missile armament designed to sink NATO carrier task forces via saturation attack. The U.S. Navy built guided-missile cruisers upon destroyer-style hulls (some called "destroyer leaders" or "frigates" prior to the 1975 reclassification) primarily designed to provide air defense while often adding anti-submarine capabilities, being larger and having longer-range surface-to-air missiles (SAMs) than early Charles F. Adams guided-missile destroyers tasked with the short-range air defense role. By the end of the Cold War the line between cruisers and destroyers had blurred, with cruisers using destroyer hulls but receiving the cruiser designation due to their enhanced mission and combat systems. Only two countries operated active-duty vessels formally classed as cruisers: the United States and Russia. 
These cruisers are primarily armed with guided missiles, with the exception of the aircraft cruiser . The last gun cruiser in service served with the Peruvian Navy until 2017. Nevertheless, other classes in addition to the above may be considered cruisers due to differing classification systems. The US/NATO system includes the Type 055 from China and the Kirov and Slava from Russia. The International Institute for Strategic Studies' "The Military Balance" defines a cruiser as a surface combatant displacing at least 9750 tonnes; with respect to vessels in service as of the early 2020s, it includes the Type 055, the Sejong the Great from South Korea, the Atago and Maya from Japan, and the Flight III Arleigh Burke, Ticonderoga, and Zumwalt from the US. Early history The term "cruiser" or "cruizer" was first commonly used in the 17th century to refer to an independent warship. "Cruiser" meant the purpose or mission of a ship, rather than a category of vessel. However, the term was nonetheless used to mean a smaller, faster warship suitable for such a role. In the 17th century, the ship of the line was generally too large, inflexible, and expensive to be dispatched on long-range missions (for instance, to the Americas), and too strategically important to be put at risk of fouling and foundering by continual patrol duties. The Dutch navy was noted for its cruisers in the 17th century, while the Royal Navy—and later French and Spanish navies—subsequently caught up in terms of their numbers and deployment. The British Cruiser and Convoy Acts were an attempt by mercantile interests in Parliament to focus the Navy on commerce defence and raiding with cruisers, rather than the more scarce and expensive ships of the line. During the 18th century the frigate became the preeminent type of cruiser. A frigate was a small, fast, long-range, lightly armed (single gun-deck) ship used for scouting, carrying dispatches, and disrupting enemy trade. The other principal type of cruiser was the sloop, but many other miscellaneous types of ship were used as well. Steam cruisers During the 19th century, navies began to use steam power for their fleets. The 1840s saw the construction of experimental steam-powered frigates and sloops. By the middle of the 1850s, the British and U.S. Navies were both building steam frigates with very long hulls and a heavy gun armament, for instance or . The 1860s saw the introduction of the ironclad. The first ironclads were frigates, in the sense of having one gun deck; however, they were also clearly the most powerful ships in the navy, and were principally intended to serve in the line of battle. In spite of their great speed, they would have been wasted in a cruising role. The French constructed a number of smaller ironclads for overseas cruising duties, starting with the , commissioned 1865. These "station ironclads" were the beginning of the development of the armored cruisers, a type of ironclad specifically for the traditional cruiser missions of fast, independent raiding and patrol. The first true armored cruiser was the Russian , completed in 1874, and followed by the British a few years later. Until the 1890s armored cruisers were still built with masts for a full sailing rig, to enable them to operate far from friendly coaling stations. Unarmored cruising warships, built out of wood, iron, steel or a combination of those materials, remained popular until towards the end of the 19th century. 
The ironclads' armor often meant that they were limited to short range under steam, and many ironclads were unsuited to long-range missions or work in distant colonies. The unarmored cruiser—often a screw sloop or screw frigate—could continue in this role. Even though mid- to late-19th century cruisers typically carried up-to-date guns firing explosive shells, they were unable to face ironclads in combat. This was evidenced by the clash between , a modern British cruiser, and the Peruvian monitor Huáscar. Even though the Peruvian vessel was obsolete by the time of the encounter, it stood up well to roughly 50 hits from British shells. Steel cruisers In the 1880s, naval engineers began to use steel as a material for construction and armament. A steel cruiser could be lighter and faster than one built of iron or wood. The Jeune École school of naval doctrine suggested that a fleet of fast unprotected steel cruisers was ideal for commerce raiding, while the torpedo boat would be able to destroy an enemy battleship fleet. Steel also offered the cruiser a way of acquiring the protection needed to survive in combat. Steel armor was considerably stronger, for the same weight, than iron. By putting a relatively thin layer of steel armor above the vital parts of the ship, and by placing the coal bunkers where they might stop shellfire, a useful degree of protection could be achieved without slowing the ship too much. Protected cruisers generally had an armored deck with sloped sides, providing similar protection to a light armored belt at less weight and expense. The first protected cruiser was the Chilean ship Esmeralda, launched in 1883. Produced by a shipyard at Elswick, in Britain, owned by Armstrong, she inspired a group of protected cruisers produced in the same yard and known as the "Elswick cruisers". Her forecastle, poop deck and the wooden board deck had been removed, replaced with an armored deck. Esmeralda's armament consisted of fore and aft 10-inch (25.4 cm) guns and 6-inch (15.2 cm) guns in the midships positions. It could reach a speed of , and was propelled by steam alone. It also had a displacement of less than 3,000 tons. During the two following decades, this cruiser type came to be the inspiration for combining heavy artillery, high speed and low displacement. Torpedo cruisers The torpedo cruiser (known in the Royal Navy as the torpedo gunboat) was a smaller unarmored cruiser, which emerged in the 1880s–1890s. These ships could reach speeds up to and were armed with medium to small calibre guns as well as torpedoes. These ships were tasked with guard and reconnaissance duties, repeating signals, and other fleet duties for which smaller vessels were suited. These ships could also function as flagships of torpedo boat flotillas. After the 1900s, these ships were usually traded for faster ships with better sea-going qualities. Pre-dreadnought armored cruisers Steel also affected the construction and role of armored cruisers. Steel meant that new designs of battleship, later known as pre-dreadnought battleships, would be able to combine firepower and armor with better endurance and speed than ever before. The armored cruisers of the 1890s and early 1900s greatly resembled the battleships of the day; they tended to carry slightly smaller main armament ( rather than 12-inch) and have somewhat thinner armor in exchange for a faster speed (perhaps rather than 18). Because of their similarity, the lines between battleships and armored cruisers became blurred. 
Early 20th century Shortly after the turn of the 20th century there were difficult questions about the design of future cruisers. Modern armored cruisers, almost as powerful as battleships, were also fast enough to outrun older protected and unarmored cruisers. In the Royal Navy, Jackie Fisher cut back hugely on older vessels, including many cruisers of different sorts, calling them "a miser's hoard of useless junk" that any modern cruiser would sweep from the seas. The scout cruiser also appeared in this era; this was a small, fast, lightly armed and armored type designed primarily for reconnaissance. The Royal Navy and the Italian Navy were the primary developers of this type. Battle cruisers The growing size and power of the armored cruiser resulted in the battlecruiser, with an armament and size similar to the revolutionary new dreadnought battleship, and was the brainchild of British admiral Jackie Fisher. He believed that ensuring British naval dominance in its overseas colonial possessions required a fleet of large, fast, powerfully armed vessels able to hunt down and mop up enemy cruisers and armored cruisers with overwhelming fire superiority. They were equipped with the same gun types as battleships, though usually with fewer guns, and were intended to engage enemy capital ships as well. This type of vessel came to be known as the battlecruiser, and the first were commissioned into the Royal Navy in 1907. The British battlecruisers sacrificed protection for speed, as they were intended to "choose their range" (to the enemy) with superior speed and only engage the enemy at long range. When they were engaged at moderate ranges, their lack of protection, combined with unsafe ammunition-handling practices, proved tragic: three of them were lost at the Battle of Jutland. Germany and eventually Japan followed suit in building these vessels, replacing armored cruisers in most frontline roles. German battlecruisers were generally better protected but slower than British battlecruisers. Battlecruisers were in many cases larger and more expensive than contemporary battleships, due to their much larger propulsion plants. Light cruisers At around the same time as the battlecruiser was developed, the distinction between the armored and the unarmored cruiser finally disappeared. With the British , the first of which was launched in 1909, it became possible for a small, fast cruiser to carry both belt and deck armor, particularly when turbine engines were adopted. These light armored cruisers began to occupy the traditional cruiser role once it became clear that the battlecruiser squadrons were required to operate with the battle fleet. Flotilla leaders Some light cruisers were built specifically to act as the leaders of flotillas of destroyers. Coastguard cruisers These vessels were essentially large coastal patrol boats armed with multiple light guns. One such warship was Grivița of the Romanian Navy. She displaced 110 tons, measured 60 meters in length and was armed with four light guns. Auxiliary cruisers The auxiliary cruiser was a merchant ship hastily armed with small guns on the outbreak of war. Auxiliary cruisers were used to fill gaps in their long-range lines or to provide escort for other cargo ships, although they generally proved to be useless in this role because of their low speed, feeble firepower and lack of armor. In both world wars the Germans also used small merchant ships armed with cruiser guns to surprise Allied merchant ships. Some large liners were armed in the same way. 
In British service these were known as Armed Merchant Cruisers (AMC). The Germans and French used them in World War I as raiders because of their high speed (around 30 knots (56 km/h)), and they were used again as raiders early in World War II by the Germans and Japanese. In both the First World War and the early part of the Second, they were used as convoy escorts by the British. World War I Cruisers were one of the workhorse types of warship during World War I. By the time of World War I, cruisers had accelerated their development and improved their quality significantly, with displacements reaching 3,000–4,000 tons, speeds of 25–30 knots, and main gun calibres of 127–152 mm. Mid-20th century Naval construction in the 1920s and 1930s was limited by international treaties designed to prevent the repetition of the Dreadnought arms race of the early 20th century. The Washington Naval Treaty of 1922 placed limits on the construction of ships with a standard displacement of more than 10,000 tons and an armament of guns larger than 8-inch (203 mm). A number of navies commissioned classes of cruisers at the top end of this limit, known as "treaty cruisers". The London Naval Treaty in 1930 then formalised the distinction between these "heavy" cruisers and light cruisers: a "heavy" cruiser was one with guns of more than 6.1-inch (155 mm) calibre. The Second London Naval Treaty attempted to reduce the tonnage of new cruisers to 8,000 tons or less, but this had little effect; Japan and Germany were not signatories, and some navies had already begun to evade treaty limitations on warships. The first London treaty did touch off a period in which the major powers built 6-inch or 6.1-inch gunned cruisers, nominally of 10,000 tons and with up to fifteen guns, the treaty limit. Thus, most light cruisers ordered after 1930 were the size of heavy cruisers but with more and smaller guns. The Imperial Japanese Navy began this new race with the , launched in 1934. After building smaller light cruisers with six or eight 6-inch guns launched 1931–35, the British Royal Navy followed with the 12-gun in 1936. To match foreign developments and potential treaty violations, in the 1930s the US developed a series of new guns firing "super-heavy" armor-piercing ammunition; these included the 6-inch (152 mm)/47 caliber gun Mark 16 introduced with the 15-gun s in 1936, and the 8-inch (203 mm)/55 caliber gun Mark 12 introduced with in 1937. Heavy cruisers The heavy cruiser was a type of cruiser designed for long range, high speed and an armament of naval guns around 203 mm (8 in) in calibre. The first heavy cruisers were built in 1915, although it only became a widespread classification following the London Naval Treaty in 1930. The heavy cruiser's immediate precursors were the light cruiser designs of the 1910s and 1920s; the US lightly armored 8-inch "treaty cruisers" of the 1920s (built under the Washington Naval Treaty) were originally classed as light cruisers until the London Treaty forced their redesignation. Initially, all cruisers built under the Washington treaty had torpedo tubes, regardless of nationality. However, in 1930, results of war games caused the US Naval War College to conclude that only perhaps half of cruisers would use their torpedoes in action. In a surface engagement, long-range gunfire and destroyer torpedoes would decide the issue, and under air attack numerous cruisers would be lost before getting within torpedo range. 
Thus, beginning with launched in 1933, new cruisers were built without torpedoes, and torpedoes were removed from older heavy cruisers due to the perceived hazard of their being exploded by shell fire. The Japanese took exactly the opposite approach with cruiser torpedoes, and this proved crucial to their tactical victories in most of the numerous cruiser actions of 1942. Beginning with the launched in 1925, every Japanese heavy cruiser was armed with torpedoes, larger than any other cruisers'. By 1933 Japan had developed the Type 93 torpedo for these ships, eventually nicknamed "Long Lance" by the Allies. This type used compressed oxygen instead of compressed air, allowing it to achieve ranges and speeds unmatched by other torpedoes. It could achieve a range of at , compared with the US Mark 15 torpedo with at . The Mark 15 had a maximum range of at , still well below the "Long Lance". The Japanese were able to keep the Type 93's performance and oxygen power secret until the Allies recovered one in early 1943; thus, in 1942 the Allies faced a great threat of which they were unaware. The Type 93 was also fitted to Japanese post-1930 light cruisers and the majority of their World War II destroyers. Heavy cruisers continued in use until after World War II, with some converted to guided-missile cruisers for air defense or strategic attack and some used for shore bombardment by the United States in the Korean War and the Vietnam War. German pocket battleships The German was a series of three Panzerschiffe ("armored ships"), a form of heavily armed cruiser, designed and built by the German Reichsmarine in nominal accordance with restrictions imposed by the Treaty of Versailles. All three ships were launched between 1931 and 1934, and served with Germany's Kriegsmarine during World War II. Within the Kriegsmarine, the Panzerschiffe had the propaganda value of capital ships: heavy cruisers with battleship guns, torpedoes, and scout aircraft. (The similar Swedish Panzerschiffe were tactically used as centers of battlefleets and not as cruisers.) They were deployed by Nazi Germany in support of German interests in the Spanish Civil War. Panzerschiff Admiral Graf Spee represented Germany in the 1937 Coronation Fleet Review. The British press referred to the vessels as pocket battleships, in reference to the heavy firepower contained in the relatively small vessels; they were considerably smaller than contemporary battleships, though at 28 knots were slower than battlecruisers. At up to 16,000 tons at full load, they were not treaty-compliant 10,000-ton cruisers. Although their displacement and scale of armor protection were those of a heavy cruiser, their main armament was heavier than the guns of other nations' heavy cruisers, and the latter two members of the class also had tall conning towers resembling battleships. The Panzerschiffe were listed as Ersatz replacements for retiring Reichsmarine coastal defense battleships, which added to their propaganda status in the Kriegsmarine as Ersatz battleships; within the Royal Navy, only battlecruisers HMS Hood, HMS Repulse and HMS Renown were capable of both outrunning and outgunning the Panzerschiffe. They were seen in the 1930s as a new and serious threat by both Britain and France. While the Kriegsmarine reclassified them as heavy cruisers in 1940, Deutschland-class ships continued to be called pocket battleships in the popular press. Large cruiser The American Alaska class represented the supersized cruiser design. 
Due to the German pocket battleships, the , and rumored Japanese "super cruisers", all of which carried guns larger than the standard heavy cruiser's 8-inch size dictated by naval treaty limitations, the Alaskas were intended to be "cruiser-killers". While superficially appearing similar to a battleship/battlecruiser and mounting three triple turrets of 12-inch guns, their actual protection scheme and design resembled those of a scaled-up heavy cruiser. Their hull classification symbol of CB (cruiser, big) reflected this. Anti-aircraft cruisers A precursor to the anti-aircraft cruiser was the British-built Romanian protected cruiser Elisabeta. After the start of World War I, her four 120 mm main guns were landed and her four 75 mm (12-pounder) secondary guns were modified for anti-aircraft fire. The development of the anti-aircraft cruiser began in 1935 when the Royal Navy re-armed and . Torpedo tubes and low-angle guns were removed from these World War I light cruisers and replaced with ten high-angle guns, with appropriate fire-control equipment to provide larger warships with protection against high-altitude bombers. A tactical shortcoming was recognised after completing six additional conversions of s. Having sacrificed anti-ship weapons for anti-aircraft armament, the converted anti-aircraft cruisers might themselves need protection against surface units. New construction was undertaken to create cruisers of similar speed and displacement with dual-purpose guns, which offered good anti-aircraft protection with anti-surface capability for the traditional light cruiser role of defending capital ships from destroyers. The first purpose-built anti-aircraft cruiser was the British , completed in 1940–42. The US Navy's cruisers (CLAA: light cruiser with anti-aircraft capability) were designed to match the capabilities of the Royal Navy. Both Dido and Atlanta cruisers initially carried torpedo tubes; the Atlanta cruisers at least were originally designed as destroyer leaders, were originally designated CL (light cruiser), and did not receive the CLAA designation until 1949. The concept of the quick-firing dual-purpose gun anti-aircraft cruiser was embraced in several designs completed too late to see combat, including: , completed in 1948; , completed in 1949; two s, completed in 1947; two s, completed in 1953; , completed in 1955; , completed in 1959; and , and , all completed between 1959 and 1961. Most post-World War II cruisers were tasked with air defense roles. In the early 1950s, advances in aviation technology forced the move from anti-aircraft artillery to anti-aircraft missiles. Therefore, most modern cruisers are equipped with surface-to-air missiles as their main armament. Today's equivalent of the anti-aircraft cruiser is the guided-missile cruiser (CAG/CLG/CG/CGN). World War II Cruisers participated in a number of surface engagements in the early part of World War II, along with escorting carrier and battleship groups throughout the war. In the later part of the war, Allied cruisers primarily provided anti-aircraft (AA) escort for carrier groups and performed shore bombardment. Japanese cruisers similarly escorted carrier and battleship groups in the later part of the war, notably in the disastrous Battle of the Philippine Sea and Battle of Leyte Gulf. In 1937–41 the Japanese, having withdrawn from all naval treaties, upgraded or completed the Mogami and es as heavy cruisers by replacing their triple turrets with twin turrets. 
Torpedo refits were also made to most heavy cruisers, resulting in up to sixteen tubes per ship, plus a set of reloads. In 1941 the 1920s light cruisers and were converted to torpedo cruisers with four guns and forty torpedo tubes. In 1944 Kitakami was further converted to carry up to eight Kaiten human torpedoes in place of ordinary torpedoes. Before World War II, cruisers were mainly divided into three types: heavy cruisers, light cruisers and auxiliary cruisers. Heavy cruiser tonnage reached 20,000–30,000 tons, speed 32–34 knots, endurance of more than 10,000 nautical miles, armor thickness of 127–203 mm. Heavy cruisers were equipped with eight or nine guns with a range of more than 20 nautical miles. They were mainly used to attack enemy surface ships and shore-based targets. In addition, there were 10–16 secondary guns with a caliber of less than . Also, dozens of automatic antiaircraft guns were installed to fight aircraft and small vessels such as torpedo boats. For example, in World War II, American Alaska-class cruisers were more than 30,000 tons, equipped with nine guns. Some cruisers could also carry three or four seaplanes to correct the accuracy of gunfire and perform reconnaissance. Together with battleships, these heavy cruisers formed powerful naval task forces, which dominated the world's oceans for more than a century. After the signing of the Washington Treaty on Arms Limitation in 1922, the tonnage and quantity of battleships, aircraft carriers and cruisers were severely restricted. In order not to violate the treaty, countries began to develop light cruisers. Light cruisers of the 1920s had displacements of less than 10,000 tons and a speed of up to 35 knots. They were equipped with 6–12 main guns with a caliber of 127–133 mm (5–5.5 inches). In addition, they were equipped with 8–12 secondary guns under 127 mm (5 in) and dozens of small-caliber cannons, as well as torpedoes and mines. Some ships also carried 2–4 seaplanes, mainly for reconnaissance. In 1930 the London Naval Treaty allowed large light cruisers to be built, with the same tonnage as heavy cruisers and armed with up to fifteen guns. The Japanese Mogami class was built to this treaty's limit; the Americans and British also built similar ships. However, in 1939 the Mogamis were refitted as heavy cruisers with ten guns. 1939 to Pearl Harbor In December 1939, three British cruisers engaged the German "pocket battleship" Admiral Graf Spee (which was on a commerce raiding mission) in the Battle of the River Plate; German cruiser Admiral Graf Spee then took refuge in neutral Montevideo, Uruguay. By broadcasting messages indicating capital ships were in the area, the British caused Admiral Graf Spee's captain to think he faced a hopeless situation while low on ammunition and to order his ship scuttled. On 8 June 1940 the German capital ships and , classed as battleships but with large cruiser armament, sank the aircraft carrier with gunfire. From October 1940 through March 1941 the German heavy cruiser (also known as "pocket battleship", see above) conducted a successful commerce-raiding voyage in the Atlantic and Indian Oceans. On 27 May 1941, attempted to finish off the German battleship with torpedoes, probably causing the Germans to scuttle the ship. Bismarck (accompanied by the heavy cruiser ) had previously sunk the battlecruiser and damaged the battleship with gunfire in the Battle of the Denmark Strait. 
On 19 November 1941 sank in a mutually fatal engagement with the German raider Kormoran in the Indian Ocean near Western Australia. Atlantic, Mediterranean, and Indian Ocean operations 1942–1944 Twenty-three British cruisers were lost to enemy action, mostly to air attack and submarines, in operations in the Atlantic, Mediterranean, and Indian Ocean. Sixteen of these losses were in the Mediterranean. The British included cruisers and anti-aircraft cruisers among convoy escorts in the Mediterranean and to northern Russia due to the threat of surface and air attack. Almost all cruisers in World War II were vulnerable to submarine attack due to a lack of anti-submarine sonar and weapons. Also, until 1943–44 the light anti-aircraft armament of most cruisers was weak. In July 1942 an attempt to intercept Convoy PQ 17 with surface ships, including the heavy cruiser Admiral Scheer, failed due to multiple German warships grounding, but air and submarine attacks sank two-thirds of the convoy's ships. In August 1942 Admiral Scheer conducted Operation Wunderland, a solo raid into northern Russia's Kara Sea. She bombarded Dikson Island but otherwise had little success. On 31 December 1942 the Battle of the Barents Sea was fought, a rare action for a Murmansk run because it involved cruisers on both sides. Four British destroyers and five other vessels were escorting Convoy JW 51B from the UK to the Murmansk area. Another British force of two cruisers ( and ) and two destroyers was in the area. Two heavy cruisers (one the "pocket battleship" Lützow), accompanied by six destroyers, attempted to intercept the convoy near North Cape after it was spotted by a U-boat. Although the Germans sank a British destroyer and a minesweeper (also damaging another destroyer), they failed to damage any of the convoy's merchant ships. A German destroyer was lost and a heavy cruiser damaged. Both sides withdrew from the action for fear of the other side's torpedoes. On 26 December 1943 the German capital ship Scharnhorst was sunk while attempting to intercept a convoy in the Battle of the North Cape. The British force that sank her was led by Vice Admiral Bruce Fraser in the battleship , accompanied by four cruisers and nine destroyers. One of the cruisers was the preserved . Scharnhorst's sister Gneisenau, damaged by a mine and a submerged wreck in the Channel Dash of 13 February 1942 and repaired, was further damaged by a British air attack on 27 February 1942. She began a conversion process to mount six guns instead of nine guns, but in early 1943 Hitler (angered by the recent failure at the Battle of the Barents Sea) ordered her disarmed and her armament used as coast defence weapons. One 28 cm triple turret survives near Trondheim, Norway. Pearl Harbor through Dutch East Indies campaign The attack on Pearl Harbor on 7 December 1941 brought the United States into the war, but with eight battleships sunk or damaged by air attack. On 10 December 1941 HMS Prince of Wales and the battlecruiser were sunk by land-based torpedo bombers northeast of Singapore. It was now clear that surface ships could not operate near enemy aircraft in daylight without air cover; most surface actions of 1942–43 were fought at night as a result. Generally, both sides avoided risking their battleships until the Japanese attack at Leyte Gulf in 1944. 
Six of the battleships from Pearl Harbor were eventually returned to service, but no US battleships engaged Japanese surface units at sea until the Naval Battle of Guadalcanal in November 1942, and not thereafter until the Battle of Surigao Strait in October 1944. was on hand for the initial landings at Guadalcanal on 7 August 1942, and escorted carriers in the Battle of the Eastern Solomons later that month. However, on 15 September she was torpedoed while escorting a carrier group and had to return to the US for repairs. Generally, the Japanese held their capital ships out of all surface actions in the 1941–42 campaigns or they failed to close with the enemy; the Naval Battle of Guadalcanal in November 1942 was the sole exception. The four ships performed shore bombardment in Malaya, Singapore, and Guadalcanal and escorted the raid on Ceylon and other carrier forces in 1941–42. Japanese capital ships also participated ineffectively (due to not being engaged) in the Battle of Midway and the simultaneous Aleutian diversion; in both cases they were in battleship groups well to the rear of the carrier groups. Sources state that sat out the entire Guadalcanal Campaign due to lack of high-explosive bombardment shells, poor nautical charts of the area, and high fuel consumption. It is likely that the poor charts affected other battleships as well. Except for the Kongō class, most Japanese battleships spent the critical year of 1942, in which most of the war's surface actions occurred, in home waters or at the fortified base of Truk, far from any risk of attacking or being attacked. From 1942 through mid-1943, US and other Allied cruisers were the heavy units on their side of the numerous surface engagements of the Dutch East Indies campaign, the Guadalcanal Campaign, and subsequent Solomon Islands fighting; they were usually opposed by strong Japanese cruiser-led forces equipped with Long Lance torpedoes. Destroyers also participated heavily on both sides of these battles and provided essentially all the torpedoes on the Allied side, with some battles in these campaigns fought entirely between destroyers. Along with lack of knowledge of the capabilities of the Long Lance torpedo, the US Navy was hampered by a deficiency it was initially unaware of—the unreliability of the Mark 15 torpedo used by destroyers. This weapon shared the Mark 6 exploder and other problems with the more famously unreliable Mark 14 torpedo; the most common results of firing either of these torpedoes were a dud or a miss. The problems with these weapons were not solved until mid-1943, after almost all of the surface actions in the Solomon Islands had taken place. Another factor that shaped the early surface actions was the pre-war training of both sides. The US Navy concentrated on long-range 8-inch gunfire as their primary offensive weapon, leading to rigid battle line tactics, while the Japanese trained extensively for nighttime torpedo attacks. Since all post-1930 Japanese cruisers had 8-inch guns by 1941, almost all of the US Navy's cruisers in the South Pacific in 1942 were the 8-inch-gunned (203 mm) "treaty cruisers"; most of the 6-inch-gunned (152 mm) cruisers were deployed in the Atlantic. Dutch East Indies campaign Although their battleships were held out of surface action, Japanese cruiser-destroyer forces rapidly isolated and mopped up the Allied naval forces in the Dutch East Indies campaign of February–March 1942. 
In three separate actions, they sank five Allied cruisers (two Dutch and one each British, Australian, and American) with torpedoes and gunfire, against one Japanese cruiser damaged. With one other Allied cruiser withdrawn for repairs, the only remaining Allied cruiser in the area was itself damaged. Despite their rapid success, the Japanese proceeded methodically, never leaving their air cover and rapidly establishing new air bases as they advanced. Guadalcanal campaign After the key carrier battles of the Coral Sea and Midway in mid-1942, Japan had lost four of the six fleet carriers that launched the Pearl Harbor raid and was on the strategic defensive. On 7 August 1942 US Marines were landed on Guadalcanal and other nearby islands, beginning the Guadalcanal Campaign. This campaign proved to be a severe test for the Navy as well as the Marines. Along with two carrier battles, several major surface actions occurred, almost all at night between cruiser-destroyer forces. Battle of Savo Island On the night of 8–9 August 1942 the Japanese counterattacked near Guadalcanal in the Battle of Savo Island with a cruiser-destroyer force. In a controversial move, the US carrier task forces were withdrawn from the area on the 8th due to heavy fighter losses and low fuel. The Allied force included six heavy cruisers (two Australian), two light cruisers (one Australian), and eight US destroyers. Of the cruisers, only the Australian ships had torpedoes. The Japanese force included five heavy cruisers, two light cruisers, and one destroyer. Numerous circumstances combined to reduce Allied readiness for the battle. The results of the battle were three American heavy cruisers sunk by torpedoes and gunfire, one Australian heavy cruiser disabled by gunfire and scuttled, one heavy cruiser damaged, and two US destroyers damaged. The Japanese had three cruisers lightly damaged. This was the most lopsided outcome of the surface actions in the Solomon Islands. Along with using their superior torpedoes, the Japanese opened with gunfire that was accurate and very damaging. Subsequent analysis showed that some of the damage was due to poor housekeeping practices by US forces. Stowage of boats and aircraft in midships hangars with full gas tanks contributed to fires, along with full and unprotected ready-service ammunition lockers for the open-mount secondary armament. These practices were soon corrected, and US cruisers with similar damage sank less often thereafter. Savo was the first surface action of the war for almost all the US ships and personnel; few US cruisers and destroyers were targeted or hit at Coral Sea or Midway. Battle of the Eastern Solomons On 24–25 August 1942 the Battle of the Eastern Solomons, a major carrier action, was fought. Part of the action was a Japanese attempt to reinforce Guadalcanal with men and equipment on troop transports. The Japanese troop convoy was attacked by Allied aircraft, resulting in the Japanese subsequently reinforcing Guadalcanal with troops on fast warships at night. These convoys were called the "Tokyo Express" by the Allies. Although the Tokyo Express often ran unopposed, most surface actions in the Solomons revolved around Tokyo Express missions. Also, US air operations had commenced from Henderson Field, the airfield on Guadalcanal. Fear of air power on both sides resulted in all surface actions in the Solomons being fought at night. Battle of Cape Esperance The Battle of Cape Esperance occurred on the night of 11–12 October 1942.
A Tokyo Express mission was underway for Guadalcanal at the same time that a separate cruiser-destroyer bombardment group, loaded with high explosive shells, was heading in to bombard Henderson Field. A US cruiser-destroyer force was deployed in advance of a convoy of US Army troops for Guadalcanal that was due on 13 October. The Tokyo Express convoy was two seaplane tenders and six destroyers; the bombardment group was three heavy cruisers and two destroyers, and the US force was two heavy cruisers, two light cruisers, and five destroyers. The US force engaged the Japanese bombardment force; the Tokyo Express convoy was able to unload on Guadalcanal and evade action. The bombardment force was sighted at close range and the US force opened fire. The Japanese were surprised because their admiral was anticipating sighting the Tokyo Express force, and withheld fire while attempting to confirm the US ships' identity. One Japanese cruiser and one destroyer were sunk and one cruiser damaged, against one US destroyer sunk with one light cruiser and one destroyer damaged. The bombardment force failed to bring its torpedoes into action, and turned back. The next day US aircraft from Henderson Field attacked several of the Japanese ships, sinking two destroyers and damaging a third. The US victory resulted in overconfidence in some later battles, reflected in the initial after-action report claiming two Japanese heavy cruisers, one light cruiser, and three destroyers sunk by the gunfire of a single US cruiser alone. The battle had little effect on the overall situation, as the next night two Kongō-class battleships bombarded and severely damaged Henderson Field unopposed, and the following night another Tokyo Express convoy delivered 4,500 troops to Guadalcanal. The US convoy delivered the Army troops as scheduled on the 13th. Battle of the Santa Cruz Islands The Battle of the Santa Cruz Islands took place 25–27 October 1942. It was a pivotal battle, as it left the US and Japanese with only two large carriers each in the South Pacific (another large Japanese carrier was damaged and under repair until May 1943). Due to the high carrier attrition rate with no replacements for months, for the most part both sides stopped risking their remaining carriers until late 1943, and each side sent in a pair of battleships instead. The next major carrier operations for the US were the carrier raid on Rabaul and support for the invasion of Tarawa, both in November 1943. Naval Battle of Guadalcanal The Naval Battle of Guadalcanal occurred 12–15 November 1942 in two phases. A night surface action on 12–13 November was the first phase. The Japanese force consisted of two Kongō-class battleships with high explosive shells for bombarding Henderson Field, one small light cruiser, and 11 destroyers. Their plan was that the bombardment would neutralize Allied airpower and allow a force of 11 transport ships and 12 destroyers to reinforce Guadalcanal with a Japanese division the next day. However, US reconnaissance aircraft spotted the approaching Japanese on the 12th and the Americans made what preparations they could. The American force consisted of two heavy cruisers, one light cruiser, two anti-aircraft cruisers, and eight destroyers. The Americans were outgunned by the Japanese that night, and a lack of pre-battle orders by the US commander led to confusion.
The destroyer Laffey closed with the battleship Hiei, firing all her torpedoes (though apparently none hit or detonated) and raking the battleship's bridge with gunfire, wounding the Japanese admiral and killing his chief of staff. The Americans initially lost four destroyers including Laffey, with both heavy cruisers, most of the remaining destroyers, and both anti-aircraft cruisers damaged. The Japanese initially had one battleship and four destroyers damaged, but at this point they withdrew, possibly unaware that the US force was unable to further oppose them. At dawn US aircraft from Henderson Field, Enterprise, and Espiritu Santo found the damaged battleship and two destroyers in the area. The battleship (Hiei) was sunk by aircraft (or possibly scuttled), one destroyer was sunk by the damaged cruiser Portland, and the other destroyer was attacked by aircraft but was able to withdraw. Both of the damaged US anti-aircraft cruisers were lost on 13 November, one (Juneau) torpedoed by a Japanese submarine, and the other sank on the way to repairs. Juneau's loss was especially tragic; the submarine's presence prevented immediate rescue, over 100 survivors of a crew of nearly 700 were adrift for eight days, and all but ten died. Among the dead were the five Sullivan brothers. The Japanese transport force was rescheduled for the 14th and a new cruiser-destroyer force (belatedly joined by the surviving battleship Kirishima) was sent to bombard Henderson Field the night of 13 November. Only two cruisers actually bombarded the airfield, as Kirishima had not arrived yet and the remainder of the force was on guard for US warships. The bombardment caused little damage. The cruiser-destroyer force then withdrew, while the transport force continued towards Guadalcanal. Both forces were attacked by US aircraft on the 14th. The cruiser force lost one heavy cruiser sunk and one damaged. Although the transport force had fighter cover from a Japanese carrier, six transports were sunk and one heavily damaged. All but four of the destroyers accompanying the transport force picked up survivors and withdrew. The remaining four transports and four destroyers approached Guadalcanal at night, but stopped to await the results of the night's action. On the night of 14–15 November a Japanese force of Kirishima, two heavy and two light cruisers, and nine destroyers approached Guadalcanal. Two US battleships (Washington and South Dakota) were there to meet them, along with four destroyers. This was one of only two battleship-on-battleship encounters during the Pacific War; the other was the lopsided Battle of Surigao Strait in October 1944, part of the Battle of Leyte Gulf. The battleships had been escorting Enterprise, but were detached due to the urgency of the situation. With nine 16-inch (406 mm) guns apiece against eight 14-inch (356 mm) guns on Kirishima, the Americans had major gun and armor advantages. All four destroyers were sunk or severely damaged and withdrawn shortly after the Japanese attacked them with gunfire and torpedoes. Although her main battery remained in action for most of the battle, South Dakota spent much of the action dealing with major electrical failures that affected her radar, fire control, and radio systems. Although her armor was not penetrated, she was hit by 26 shells of various calibers and temporarily rendered, in a US admiral's words, "deaf, dumb, blind, and impotent".
Washington went undetected by the Japanese for most of the battle, but withheld shooting to avoid "friendly fire" until South Dakota was illuminated by Japanese fire, then rapidly set Kirishima ablaze, leaving her with a jammed rudder and other damage. Washington, finally spotted by the Japanese, then headed for the Russell Islands in the hope of drawing the Japanese away from Guadalcanal and South Dakota, and was successful in evading several torpedo attacks. Unusually, only a few Japanese torpedoes scored hits in this engagement. Kirishima sank or was scuttled before the night was out, along with two Japanese destroyers. The remaining Japanese ships withdrew, except for the four transports, which beached themselves in the night and started unloading. However, dawn (and US aircraft, US artillery, and a US destroyer) found them still beached, and they were destroyed. Battle of Tassafaronga The Battle of Tassafaronga took place on the night of 30 November – 1 December 1942. The US had four heavy cruisers, one light cruiser, and four destroyers. The Japanese had eight destroyers on a Tokyo Express run to deliver food and supplies in drums to Guadalcanal. The Americans achieved initial surprise, damaging one destroyer with gunfire (it later sank), but the Japanese torpedo counterattack was devastating. One American heavy cruiser was sunk and three others heavily damaged, with the bows blown off of two of them. Significantly, unlike ships hit by Long Lance torpedoes in previous battles, these two were not lost; American battle readiness and damage control had improved. Despite defeating the Americans, the Japanese withdrew without delivering the crucial supplies to Guadalcanal. Another attempt on 3 December dropped 1,500 drums of supplies near Guadalcanal, but Allied strafing aircraft sank all but 300 before the Japanese Army could recover them. On 7 December PT boats interrupted a Tokyo Express run, and the following night sank a Japanese supply submarine. The next day the Japanese Navy proposed stopping all destroyer runs to Guadalcanal, but agreed to do just one more. This was on 11 December and was also intercepted by PT boats, which sank a destroyer; only 200 of 1,200 drums dropped off the island were recovered. The next day the Japanese Navy proposed abandoning Guadalcanal; this was approved by the Imperial General Headquarters on 31 December and the Japanese left the island in early February 1943. Post-Guadalcanal After the Japanese abandoned Guadalcanal in February 1943, Allied operations in the Pacific shifted to the New Guinea campaign and isolating Rabaul. The Battle of Kula Gulf was fought on the night of 5–6 July 1943. The US had three light cruisers and four destroyers; the Japanese had ten destroyers loaded with 2,600 troops destined for Vila to oppose a recent US landing on Rendova. Although the Japanese sank a cruiser, they lost two destroyers and were able to deliver only 850 troops. On the night of 12–13 July, the Battle of Kolombangara occurred. The Allies had three light cruisers (one New Zealand) and ten destroyers; the Japanese had one small light cruiser and five destroyers, a Tokyo Express run for Vila. All three Allied cruisers were heavily damaged, with the New Zealand cruiser put out of action for 25 months by a Long Lance hit. The Allies sank only the Japanese light cruiser, and the Japanese landed 1,200 troops at Vila. Despite their tactical victory, this battle caused the Japanese to use a different route in the future, where they were more vulnerable to destroyer and PT boat attacks.
The Battle of Empress Augusta Bay was fought on the night of 1–2 November 1943, immediately after US Marines invaded Bougainville in the Solomon Islands. A Japanese heavy cruiser was damaged by a nighttime air attack shortly before the battle; it is likely that Allied airborne radar had progressed far enough to allow night operations. The Americans had four of the new Cleveland-class light cruisers and eight destroyers. The Japanese had two heavy cruisers, two small light cruisers, and six destroyers. Both sides were plagued by collisions, shells that failed to explode, and mutual skill in dodging torpedoes. The Americans suffered significant damage to three destroyers and light damage to a cruiser, but no losses. The Japanese lost one light cruiser and a destroyer, with four other ships damaged. The Japanese withdrew; the Americans pursued them until dawn, then returned to the landing area to provide anti-aircraft cover. After the Battle of the Santa Cruz Islands in October 1942, both sides were short of large aircraft carriers. The US suspended major carrier operations until sufficient carriers could be completed to destroy the entire Japanese fleet at once should it appear. The Central Pacific carrier raids and amphibious operations commenced in November 1943 with a carrier raid on Rabaul (preceded and followed by Fifth Air Force attacks) and the bloody but successful invasion of Tarawa. The air attacks on Rabaul crippled the Japanese cruiser force, with four heavy and two light cruisers damaged; they were withdrawn to Truk. The US had built up a force in the Central Pacific of six large, five light, and six escort carriers prior to commencing these operations. From this point on, US cruisers primarily served as anti-aircraft escorts for carriers and in shore bombardment. The only major Japanese carrier operation after Guadalcanal was the disastrous (for Japan) Battle of the Philippine Sea in June 1944, nicknamed the "Marianas Turkey Shoot" by the US Navy. Leyte Gulf The Imperial Japanese Navy's last major operation was the Battle of Leyte Gulf, an attempt to dislodge the American invasion of the Philippines in October 1944. The two actions at this battle in which cruisers played a significant role were the Battle off Samar and the Battle of Surigao Strait. Battle of Surigao Strait The Battle of Surigao Strait was fought on the night of 24–25 October, a few hours before the Battle off Samar. The Japanese had a small battleship group composed of the battleships Fusō and Yamashiro, one heavy cruiser, and four destroyers. They were followed at a considerable distance by another small force of two heavy cruisers, a small light cruiser, and four destroyers. Their goal was to head north through Surigao Strait and attack the invasion fleet off Leyte. The Allied force guarding the strait, known as the 7th Fleet Support Force, was overwhelming. It included six battleships (all but one previously damaged in 1941 at Pearl Harbor), four heavy cruisers (one Australian), four light cruisers, and 28 destroyers, plus a force of 39 PT boats. The only advantage to the Japanese was that most of the Allied battleships and cruisers were loaded mainly with high explosive shells, although a significant number of armor-piercing shells were also loaded. The lead Japanese force evaded the PT boats' torpedoes, but was hit hard by the destroyers' torpedoes, losing a battleship. Then they encountered the battleship and cruiser guns. Only one destroyer survived.
The engagement is notable for being one of only two occasions in which battleships fired on battleships in the Pacific Theater, the other being the Naval Battle of Guadalcanal. Due to the starting arrangement of the opposing forces, the Allied force was able to "cross the T" of the Japanese line, although this was not a planned maneuver; it was the last battle in which this occurred. The following Japanese cruiser force had several problems, including a light cruiser damaged by a PT boat and two heavy cruisers colliding, one of which fell behind and was sunk by air attack the next day. An American cruiser veteran of Surigao Strait, USS Phoenix, was transferred to Argentina in 1951 as ARA General Belgrano, becoming most famous for being sunk by the nuclear submarine HMS Conqueror in the Falklands War on 2 May 1982. She was the first ship sunk by a nuclear submarine outside of accidents, and only the second ship sunk by a submarine since World War II. Battle off Samar At the Battle off Samar, a Japanese battleship group moving towards the invasion fleet off Leyte engaged a minuscule American force known as "Taffy 3" (formally Task Unit 77.4.3), composed of six escort carriers with about 28 aircraft each, three destroyers, and four destroyer escorts. The biggest guns in the American force were 5-inch (127 mm)/38 caliber guns, while the Japanese ships mounted far heavier weapons, up to the 18.1-inch (460 mm) main battery of Yamato. Aircraft from six additional escort carriers also participated for a total of around 330 US aircraft, a mix of F6F Hellcat fighters and TBF Avenger torpedo bombers. The Japanese had four battleships including Yamato, six heavy cruisers, two small light cruisers, and 11 destroyers. The Japanese force had earlier been driven off by air attack, losing Yamato's sister Musashi. Admiral Halsey then decided to use his Third Fleet carrier force to attack the Japanese carrier group, located well to the north of Samar, which was actually a decoy group with few aircraft. The Japanese were desperately short of aircraft and pilots at this point in the war, and Leyte Gulf was the first battle in which kamikaze attacks were used. Due to a tragedy of errors, Halsey took the American battleship force with him, leaving San Bernardino Strait guarded only by the small Seventh Fleet escort carrier force. The battle commenced at dawn on 25 October 1944, shortly after the Battle of Surigao Strait. In the engagement that followed, the Americans exhibited uncanny torpedo accuracy, blowing the bows off several Japanese heavy cruisers. The escort carriers' aircraft also performed very well, attacking with machine guns after their carriers ran out of bombs and torpedoes. The unexpected level of damage, and maneuvering to avoid the torpedoes and air attacks, disorganized the Japanese and caused them to think they faced at least part of the Third Fleet's main force. They had also learned of the defeat a few hours before at Surigao Strait, and did not hear that Halsey's force was busy destroying the decoy fleet. Convinced that the rest of the Third Fleet would arrive soon if it hadn't already, the Japanese withdrew, eventually losing three heavy cruisers sunk with three damaged to air and torpedo attacks. The Americans lost two escort carriers, two destroyers, and one destroyer escort sunk, with three escort carriers, one destroyer, and two destroyer escorts damaged; they thus lost over one-third of their engaged force sunk, with nearly all of the remainder damaged. Wartime cruiser production The US built cruisers in quantity through the end of the war, notably 14 heavy cruisers and 27 Cleveland-class light cruisers, along with eight Atlanta-class anti-aircraft cruisers.
The Cleveland class was the largest cruiser class ever built in number of ships completed, with nine additional Clevelands completed as light aircraft carriers. The large number of cruisers built was probably due to the significant cruiser losses of 1942 in the Pacific theater (seven American and five other Allied) and the perceived need for several cruisers to escort each of the numerous Essex-class carriers being built. Losing four heavy and two small light cruisers in 1942, the Japanese built only five light cruisers during the war; these were small ships with six guns each. Losing 20 cruisers in 1940–42, the British completed no heavy cruisers, thirteen light cruisers, and sixteen anti-aircraft cruisers (Dido class) during the war. Late 20th century The rise of air power during World War II dramatically changed the nature of naval combat. Even the fastest cruisers could not maneuver quickly enough to evade aerial attack, and aircraft now had torpedoes, allowing moderate-range standoff capabilities. This change led to the end of independent operations by single ships or very small task groups, and for the second half of the 20th century naval operations were based on very large fleets believed able to fend off all but the largest air attacks, though this was not tested by any war in that period. The US Navy became centered around carrier groups, with cruisers and battleships primarily providing anti-aircraft defense and shore bombardment. Until the Harpoon missile entered service in the late 1970s, the US Navy was almost entirely dependent on carrier-based aircraft and submarines for conventionally attacking enemy warships. Lacking aircraft carriers, the Soviet Navy depended on anti-ship cruise missiles; in the 1950s these were primarily delivered from heavy land-based bombers. Soviet submarine-launched cruise missiles at the time were primarily for land attack, but by 1964 anti-ship missiles were deployed in quantity on cruisers, destroyers, and submarines. US cruiser development The US Navy was aware of the potential missile threat as soon as World War II ended, and had considerable related experience due to Japanese kamikaze attacks in that war. The initial response was to upgrade the light AA armament of new cruisers from 40 mm and 20 mm weapons to twin 3-inch (76 mm)/50 caliber gun mounts. For the longer term, it was thought that gun systems would be inadequate to deal with the missile threat, and by the mid-1950s three naval SAM systems were developed: Talos (long range), Terrier (medium range), and Tartar (short range). Talos and Terrier were nuclear-capable and this allowed their use in anti-ship or shore bombardment roles in the event of nuclear war. Chief of Naval Operations Admiral Arleigh Burke is credited with speeding the development of these systems. Terrier was initially deployed on two converted Baltimore-class cruisers (CAG), with conversions completed in 1955–56. Further conversions of six Cleveland-class cruisers (CLG, the Galveston and Providence classes), redesign of the Farragut class as guided-missile "frigates" (DLG), and development of the Charles F. Adams-class DDGs resulted in the completion of numerous additional guided-missile ships deploying all three systems in 1959–1962. Also completed during this period was the nuclear-powered Long Beach, with two Terrier and one Talos launchers, plus an ASROC anti-submarine launcher the World War II conversions lacked. The converted World War II cruisers up to this point retained one or two main battery turrets for shore bombardment.
However, in 1962–1964 three additional Baltimore- and Oregon City-class cruisers were more extensively converted as the Albany class. These had two Talos and two Tartar launchers plus ASROC and two 5-inch (127 mm) guns for self-defense, and were primarily built to get greater numbers of Talos launchers deployed. Of all these types, only the Farragut DLGs were selected as the design basis for further production, although their successors were significantly larger (5,670 tons standard versus 4,150 tons standard) due to a second Terrier launcher and greater endurance. An economical crew size compared with World War II conversions was probably a factor, as the Leahys required a crew of only 377 versus 1,200 for the Cleveland-class conversions. Through 1980, the ten Farraguts were joined by four additional classes and two one-off ships for a total of 36 guided-missile frigates, eight of them nuclear-powered (DLGN). In 1975 the Farraguts were reclassified as guided-missile destroyers (DDG) due to their small size, and the remaining DLG/DLGN ships became guided-missile cruisers (CG/CGN). The World War II conversions were gradually retired between 1970 and 1980; the Talos missile was withdrawn in 1980 as a cost-saving measure and the Albanys were decommissioned. Long Beach had her Talos launcher removed in a refit shortly thereafter; the deck space was used for Harpoon missiles. Around this time the Terrier ships were upgraded with the RIM-67 Standard ER missile. The guided-missile frigates and cruisers served in the Cold War and the Vietnam War; off Vietnam they performed shore bombardment and shot down enemy aircraft or, as Positive Identification Radar Advisory Zone (PIRAZ) ships, guided fighters to intercept enemy aircraft. By 1995 the former guided-missile frigates had been replaced by the Ticonderoga-class cruisers and Arleigh Burke-class destroyers. The U.S. Navy's guided-missile cruisers were built upon destroyer-style hulls (some called "destroyer leaders" or "frigates" prior to the 1975 reclassification). As the U.S. Navy's strike role was centered around aircraft carriers, cruisers were primarily designed to provide air defense while often adding anti-submarine capabilities. The U.S. cruisers built in the 1960s and 1970s were larger, often nuclear-powered for extended endurance in escorting nuclear-powered fleet carriers, and carried longer-range surface-to-air missiles (SAMs) than the early Charles F. Adams guided-missile destroyers that were tasked with the short-range air defense role. The U.S. cruisers were a major contrast to their contemporaries, Soviet "rocket cruisers" that were armed with large numbers of anti-ship cruise missiles (ASCMs) as part of the combat doctrine of saturation attack, though in the early 1980s the U.S. Navy retrofitted some of these existing cruisers to carry a small number of Harpoon anti-ship missiles and Tomahawk cruise missiles. The line between U.S. Navy cruisers and destroyers blurred with the Spruance class. While originally designed for anti-submarine warfare, a Spruance destroyer was comparable in size to existing U.S. cruisers, while having the advantage of an enclosed hangar (with space for up to two medium-lift helicopters) which was a considerable improvement over the basic aviation facilities of earlier cruisers.
The Spruance hull design was used as the basis for two classes: the Kidd class, which had anti-air capabilities comparable to cruisers of the time, and then the DDG-47-class destroyers, which were redesignated as the Ticonderoga-class guided-missile cruisers to emphasize the additional capability provided by the ships' Aegis combat systems and their flag facilities, suitable for an admiral and his staff. In addition, 24 members of the Spruance class were upgraded with the vertical launch system (VLS) for Tomahawk cruise missiles, made possible by the class's modular hull design. Along with the similarly VLS-equipped Ticonderoga class, these ships had anti-surface strike capabilities beyond those of the 1960s–1970s cruisers that received Tomahawk armored-box launchers as part of the New Threat Upgrade. Like the Ticonderoga ships with VLS, the Arleigh Burke and Zumwalt classes, despite being classified as destroyers, actually have much heavier anti-surface armament than previous U.S. ships classified as cruisers. Following the American example, three smaller light cruisers of other NATO countries were rearmed with anti-aircraft missiles installed in place of their aft armament: the Dutch De Zeven Provinciën, the Italian Giuseppe Garibaldi, and the French Colbert. Only the French ship, rebuilt last in 1972, also received Exocet anti-ship missile launchers and domestically produced Masurca anti-aircraft missiles. The others received American Terrier missiles, with Garibaldi uniquely among surface ships also being armed with Polaris strategic missile launchers, although these were never actually carried. In the Soviet Navy, only one cruiser, Dzerzhinsky, of Project 68bis, was similarly rearmed with anti-aircraft missiles. The M-2 missiles used on it, adapted from the land-based S-75, proved ineffective as a naval system, and further conversions were abandoned. Another cruiser of this project, Admiral Nakhimov, was used for testing anti-ship missiles but never entered service in this role. The British considered converting older cruisers to guided-missile cruisers with the Seaslug system but ultimately did not proceed. Several other classical cruisers from various countries were rearmed with short-range anti-aircraft systems requiring fewer modifications, such as Seacat or Osa-M, but since these were intended only for self-defense, they are not considered guided-missile cruisers (e.g., the Soviet Zhdanov and Admiral Senyavin of Project 68U). The Peruvian light cruiser Almirante Grau (formerly the Dutch De Ruyter) was rearmed with eight Otomat anti-ship missiles at the end of the 20th century, but these did not constitute its primary armament. US Navy "cruiser gap" Prior to the introduction of the Ticonderogas, the US Navy used odd naming conventions that left its fleet seemingly without many cruisers, although a number of their ships were cruisers in all but name. From the 1950s to the 1970s, US Navy cruisers were large vessels equipped with heavy, specialized missiles (mostly surface-to-air, but for several years including the Regulus nuclear cruise missile) for wide-ranging combat against land-based and sea-based targets. Naming conventions changed, and some guided-missile cruisers were classified as frigates or destroyers during certain periods or at the construction stage. All save one—USS Long Beach—were converted from World War II cruisers of the Oregon City, Baltimore and Cleveland classes.
Long Beach was also the last cruiser built with a World War II-era cruiser style hull (characterized by a long lean hull); later new-build cruisers were actually converted frigates (the DLG/CG Bainbridge, Truxtun, and the Leahy, Belknap, California, and Virginia classes) or uprated destroyers (the DDG/CG Ticonderoga class was built on a Spruance-class destroyer hull). Literature sometimes considers ships as cruisers even if they are not officially classified as such, primarily larger representatives of the Soviet large anti-submarine ship class, which had no equivalent in global classification. Ultimately, after the 1975 classification reform in the US, larger ships were called cruisers, slightly smaller and weaker fleet escorts were called destroyers, and smaller ships for ocean escort and anti-submarine warfare were called frigates. However, the size and qualitative differences between cruisers and destroyers were vague and arbitrary. With the development of destroyers, this distinction has blurred even further (for example, the American Arleigh Burke-class destroyers, complementing the Ticonderoga-class cruisers as the core of US Navy air defense, have displacements up to 9,700 tons and nearly equal combat capabilities, carrying the Aegis system and similar missiles, albeit in smaller numbers; similarly for Japanese destroyers). Frigates under the older scheme had been almost as large as the cruisers and optimized for anti-aircraft warfare, although they were capable anti-surface warfare combatants as well. In the late 1960s, the US government perceived a "cruiser gap"—at the time, the US Navy possessed six ships designated as cruisers, compared to 19 for the Soviet Union, even though the USN had 21 ships designated as frigates with equal or superior capabilities to the Soviet cruisers at the time. Because of this, in 1975 the Navy performed a massive redesignation of its forces: CVA/CVAN (Attack Aircraft Carrier/Nuclear-powered Attack Aircraft Carrier) were redesignated CV/CVN (although Midway and Coral Sea never embarked anti-submarine squadrons). DLG/DLGN (Frigates/Nuclear-powered Frigates) of the Leahy, Belknap, and California classes along with USS Bainbridge and USS Truxtun were redesignated CG/CGN (Guided-Missile Cruiser/Nuclear-powered Guided-Missile Cruiser). Farragut-class guided-missile frigates (DLG), being smaller and less capable than the others, were redesignated to DDGs (Coontz was the first ship of this class to be re-numbered; because of this the class is sometimes called the Coontz class); DE/DEG (Ocean Escort/Guided-Missile Ocean Escort) were redesignated to FF/FFG (Frigate/Guided-Missile Frigate), bringing the US "Frigate" designation into line with the rest of the world. Also, a series of Patrol Frigates of the Oliver Hazard Perry class, originally designated PFG, were redesignated into the FFG line. The cruiser-destroyer-frigate realignment and the deletion of the Ocean Escort type brought the US Navy's ship designations into line with the rest of the world's, eliminating confusion with foreign navies. In 1980, the Navy's then-building DDG-47-class destroyers were redesignated as cruisers (Ticonderoga guided-missile cruisers) to emphasize the additional capability provided by the ships' Aegis combat systems, and their flag facilities suitable for an admiral and his staff. Soviet cruiser development In the Soviet Navy, cruisers formed the basis of combat groups.
In the immediate post-war era it built a fleet of gun-armed light cruisers, but replaced these beginning in the early 1960s with large ships called "rocket cruisers", carrying large numbers of anti-ship cruise missiles (ASCMs) and anti-aircraft missiles. The Soviet combat doctrine of saturation attack meant that their cruisers (as well as destroyers and even missile boats) mounted multiple missiles in large container/launch tube housings and carried far more ASCMs than their NATO counterparts, while NATO combatants instead used individually smaller and lighter missiles (while appearing under-armed when compared to Soviet ships). In 1962–1965 the four Kynda-class cruisers entered service; these had launchers for eight long-range SS-N-3 Shaddock ASCMs with a full set of reloads, missiles with long range and mid-course guidance. The four more modest Kresta I-class ships, with launchers for four SS-N-3 ASCMs and no reloads, entered service in 1967–69. In 1969–79 Soviet cruiser numbers more than tripled with ten Kresta II-class and seven Kara-class ships entering service. These had launchers for eight large-diameter missiles whose purpose was initially unclear to NATO. This was the SS-N-14 Silex, an over/under rocket-delivered heavyweight torpedo primarily for the anti-submarine role, but capable of anti-surface action as well. Soviet doctrine had shifted; powerful anti-submarine vessels (these were designated "Large Anti-Submarine Ships", but were listed as cruisers in most references) were needed to destroy NATO submarines to allow Soviet ballistic missile submarines to get within range of the United States in the event of nuclear war. By this time Long Range Aviation and the Soviet submarine force could deploy numerous ASCMs. Doctrine later shifted back to overwhelming carrier group defenses with ASCMs, with the Slava and Kirov classes. After the dissolution of the Soviet Union, the Russian cruiser Moskva of Project 1164 became the flagship of the Black Sea Fleet and in 2022 participated in the invasion of Ukraine, shelling and blockading the coast, but was subsequently sunk by anti-ship missiles. Current cruisers The end of the Cold War and the subsequent reduction of military rivalry led to significant reductions in naval forces. This reduction was more pronounced in the Soviet Navy, which was mostly taken over by Russia. Faced with severe financial difficulties, Russia was forced to decommission most of its ships in the 1990s or send them for extended overhauls. The most recent Soviet/Russian rocket cruisers, the four Kirov-class ships, were built in the 1970s and 1980s. One of the Kirov class is in refit, and 2 are being scrapped, with Pyotr Velikiy in active service. Russia also operates two Slava-class cruisers and one Admiral Kuznetsov-class carrier, which is officially designated as a cruiser, specifically a "heavy aviation cruiser", due to her complement of 12 P-700 Granit supersonic AShMs. In 2022, the cruiser Moskva of Project 1164 sank after being hit by a Ukrainian missile. Currently, the Kirov-class heavy missile cruisers are used for command purposes, as Pyotr Velikiy is the flagship of the Northern Fleet. However, their air defense capabilities are still powerful, as shown by the array of point defense missiles they carry, from 44 OSA-MA missiles to 196 9K311 Tor missiles. For longer range targets, the S-300 is used. For closer range targets, AK-630 or Kashtan CIWSs are used. Aside from that, Kirovs have 20 P-700 Granit missiles for anti-ship warfare. For target acquisition beyond the radar horizon, three helicopters can be used.
Besides a vast array of armament, Kirov-class cruisers are also outfitted with many sensors and communications equipment, allowing them to lead the fleet. The United States Navy has centered on the aircraft carrier since World War II. The Ticonderoga-class cruisers, built in the 1980s, were originally designed and designated as a class of destroyer, intended to provide very powerful air defense in these carrier-centered fleets. As of 2020, the US Navy still had the 22 newest Ticonderoga-class cruisers in service. These ships were continuously upgraded, enhancing their value and versatility. Some were equipped with ballistic missile defense capabilities (Aegis BMD system). However, no new cruisers of this class were being built. In the 21st century, there were design efforts for futuristic large cruisers provisionally designated as CG(X), but the program was canceled in 2010 due to budget constraints. Formally, only the aforementioned ships are classified as cruisers globally. The latest American futuristic large destroyers of the Zumwalt class, despite their displacement of approximately 16,000 tons and armament with two large-caliber (155 mm) guns traditionally associated with cruisers, are classified as destroyers. Literature often emphasizes that these ships are essentially large cruisers. Similarly, Japanese large missile destroyers of the Kongō class, with a displacement of 9,485 tons and equipped with the Aegis system (derived from the Arleigh Burke-class destroyers), are sometimes referred to as cruisers. Their improved versions, the Atago and Maya classes, exceed 10,000 tons. Japan, for political reasons, does not use the term "cruiser" or even "destroyer", formally classifying these ships as missile escorts with hull numbers prefixed by DDG, corresponding to guided-missile destroyers. These Japanese destroyers also provide ballistic missile defense. Outside the US and Soviet navies, new cruisers were rare following World War II. Most navies use guided-missile destroyers for fleet air defense, and destroyers and frigates for cruise missiles. The need to operate in task forces has led most navies to change to fleets designed around ships dedicated to a single role, anti-submarine or anti-aircraft typically, and the large "generalist" ship has disappeared from most forces. The United States Navy and the Russian Navy are the only remaining navies which operate active duty ships formally classed as cruisers. Italy used the helicopter cruiser Vittorio Veneto until 2003 (decommissioned in 2006) and the aircraft cruiser Giuseppe Garibaldi until 2024; France operated a single helicopter cruiser, Jeanne d'Arc, until May 2010, for training purposes only. While Type 055 of the Chinese Navy is classified as a cruiser by the U.S. Department of Defense, the Chinese consider it a guided-missile destroyer. In the years since the launch of Ticonderoga in 1981, the class has received a number of upgrades that have dramatically improved its members' capabilities for anti-submarine and land attack (using the Tomahawk missile). Like their Soviet counterparts, the modern Ticonderogas can also be used as the basis for an entire battle group. Their cruiser designation was almost certainly deserved when first built, as their sensors and combat management systems enable them to act as flagships for a surface warship flotilla if no carrier is present, but newer ships rated as destroyers and also equipped with Aegis approach them very closely in capability, and once more blur the line between the two classes.
Aircraft cruisers From time to time, some navies have experimented with aircraft-carrying cruisers. One example is the Swedish Gotland. Another was the Japanese Mogami, which was converted to carry a large floatplane group in 1942. Another variant is the helicopter cruiser. The further development of helicopter cruisers led to the creation of ships formally classified only as cruisers but significantly larger and effectively light aircraft carriers. In the Soviet Union, a series of unusual hybrid ships of Project 1143 (Kiev class) were built in the late 1970s and early 1980s. Initially classified as anti-submarine cruisers, they were ultimately designated as "heavy aircraft cruisers". These ships combined the architecture of cruisers and aircraft carriers and were armed with long-range anti-ship and anti-aircraft missiles along with a deck for vertical take-off and landing aircraft. Their full displacement of approximately 43,000 tons is typical for aircraft carriers. Hosting several helicopters, they also retained anti-submarine warfare as a primary mission. The last example in service was the Soviet Navy's Kiev class, whose last unit was converted to a pure aircraft carrier and sold to India as INS Vikramaditya. The Russian Navy's Admiral Kuznetsov is nominally designated as an aviation cruiser but otherwise resembles a standard medium aircraft carrier, albeit with a surface-to-surface missile battery. The Royal Navy's aircraft-carrying Invincible class and the Italian Navy's aircraft-carrying Giuseppe Garibaldi were originally designated "through-deck cruisers", but have since been designated as small aircraft carriers (although the "C" in the pennant for Giuseppe Garibaldi indicated it retained some status as an aircraft-carrying cruiser). It was armed with missiles, but these were short-range self-defense missiles (anti-aircraft Aspide and anti-ship Otomat) and did not match the significance of its aviation capabilities. Similarly, the Japan Maritime Self-Defense Force's "helicopter destroyers" are really more along the lines of helicopter cruisers in function and aircraft complement, but due to the Treaty of San Francisco, must be designated as destroyers. One cruiser alternative studied in the late 1980s by the United States was variously entitled a Mission Essential Unit (MEU) or CG V/STOL. In a return to the thoughts of the independent operations cruiser-carriers of the 1930s and the Soviet Kiev class, the ship was to be fitted with a hangar, elevators, and a flight deck. The mission systems were Aegis, SQS-53 sonar, 12 SV-22 ASW aircraft and 200 VLS cells. The resulting ship would have had a waterline length of 700 feet, a waterline beam of 97 feet, and a displacement of about 25,000 tons. Other features included an integrated electric drive and advanced computer systems, both stand-alone and networked. It was part of the U.S. Navy's "Revolution at Sea" effort. The project was curtailed by the sudden end of the Cold War and its aftermath; otherwise the first of the class would likely have been ordered in the early 1990s. Strike cruisers An alternative development path for guided-missile cruisers was represented by ships armed with heavy long-range anti-ship missiles, primarily developed in the Soviet Union with a focus on combating aircraft carriers. Starting in 1962, four ships of Project 58 (NATO designation: Kynda) entered service. They were armed with eight P-35 missile launchers with a range of 250 km and a twin launcher for M-1 Volna anti-aircraft missiles.
With a moderate full displacement of 5,350 tons, they were initially intended to be classified as destroyers but ultimately entered service as guided-missile cruisers. During this period, designs for larger cruisers, such as Project 64 and the nuclear-powered Project 63 (with 24 anti-ship missiles), were also developed. However, their construction was abandoned because of high costs and vulnerability to air attack, given the shortcomings of available anti-aircraft missiles. The next type built was the four ships of Project 1134 (NATO designation: Kresta I) with a displacement of 7,500 tons, equipped with four P-35 anti-ship missile launchers and two Volna anti-aircraft missile launchers. These were transitional types with lesser strike capabilities and were initially classified as large anti-submarine ships but were reclassified as guided-missile cruisers in 1977. In the 1980s, before the dissolution of the Soviet Union, only three guided-missile cruisers of the new generation Project 1164 (Slava class) with a full displacement of 11,300 tons were completed out of a longer planned series. They carried 16 Bazalt anti-ship missile launchers and eight vertical launchers for long-range Fort anti-aircraft missiles. The pinnacle of development for cruisers designed to engage surface ships, while also protecting fleet formations from aircraft and submarines, was the four large nuclear-powered cruisers of Project 1144 (Kirov class) from the 1980s. These were officially classified as "heavy nuclear guided-missile cruisers". With a full displacement of up to 25,000 tons, they were armed with 20 Granit heavy anti-ship missile launchers, 12 vertical launchers for long-range Fort anti-aircraft missiles, and short-range missiles. For anti-submarine warfare, they were equipped with rocket-torpedo launchers and three helicopters, and their crew numbered up to 744 people. In English-language literature, they are sometimes referred to as "battlecruisers", although this designation lacks official justification. The ship Muntenia, with a displacement of 5,790 tons, was designed and built in Romania in the 1980s. It was initially somewhat ambitiously designated as a light helicopter cruiser but was reclassified as a destroyer in 1990, along with a name change. The ship and its classification reflected the ambitions of dictator Nicolae Ceaușescu amid limited industrial capabilities. It carried eight Soviet P-20M medium-range anti-ship missiles but lacked anti-aircraft missile armament and was equipped with two light helicopters without means for long-range anti-submarine warfare. Operators Few cruisers are still operational in the world's navies. Those that remain in service today are: Greece: The cruiser Georgios Averof is kept in ceremonial commission as the flagship of the Hellenic Navy due to her historical significance. Russia: 2 Slava-class and 2 Kirov-class guided-missile cruisers and the heavy aviation cruiser Admiral Kuznetsov; the cruiser Aurora was ceremonially recommissioned as the flagship of the Russian Navy due to her historical significance. United States: 9 Ticonderoga-class guided-missile cruisers in service. 13 more in the Reserve Fleet. The following is laid up: Ukraine: The cruiser Ukrayina is a Slava-class cruiser that was under construction during the breakup of the Soviet Union. Ukraine inherited the ship following its independence. Progress on completing the ship has been slow, and she has remained at about 95% complete since circa 1995. It is estimated that an additional US$30 million are needed to complete the ship, and in 2019 Ukroboronprom announced that the ship would be sold.
The cruiser sits docked and unfinished at the harbor of Mykolaiv in southern Ukraine. It was reported that the Ukrainian government invested ₴6.08 million into the ship's maintenance in 2012. On 26 March 2017, it was announced that the Ukrainian Government would be scrapping the vessel, which has been laid up, incomplete, for nearly 30 years in Mykolaiv. Maintenance and construction were costing the country US$225,000 per month. On 19 September 2019, the new director of Ukroboronprom, Aivaras Abromavičius, announced that the ship would be sold. Her current status is unknown due to the 2022 Russian invasion of Ukraine. The following are classified as destroyers by their respective operators, but, due to their size and capabilities, are considered to be cruisers by some, all having full load displacements of at least 10,000 tons: China: The first Type 055 destroyer was launched in June 2017 and was commissioned on 12 January 2020 (as of 2023, 8 are in service). Despite being classified as a destroyer by its operator, many naval analysts believe that it is far too large and too well equipped to be considered a destroyer, and it is classified as a cruiser by the United States Department of Defense. Japan: two ships each of three classes. Despite the official classification of these ships as destroyers, these vessels are of a displacement greater than most of the world's destroyer classes. The Maya-class ships incorporate a level of armament more akin to cruisers. The Hyūga-class ships incorporate a level of armament more akin to helicopter cruisers than helicopter carriers. Another navy operates 4 ships of a single class; despite their classification as destroyers, many naval analysts feel they are in fact cruisers due to their size and armament, which are both greater than most of the world's destroyer classes. United States: 2 Zumwalt-class ships. Even if considered destroyers, they remain significantly larger and more capable than the only definitive cruisers in USN service, the Ticonderoga class. Future development China will add 8 more Type 055 destroyers to its fleet for a total of 16. As of 2024, 4 of these are under construction. Germany has ordered 6 F126 frigates, displacing some 10,383 tons. As of 2024, 1 of these is under construction. It is also developing the Type F127 frigate and plans to procure 6 of these vessels. The design is expected to be a MEKO A-400 AMD, which has a displacement of 10,000 tons. Another navy has announced that between 8 and 10 ships would be built under a planned class; the destroyers will displace from 10,000 to 13,000 tons. Italy is developing its DDX destroyer project. The 2 ships will displace 10,000 tons each, making them the largest surface combatants Italy has built since World War II. Japan announced that it would build 2 guided-missile warships, nominally called Aegis system equipped vessels, each displacing 20,000 tons, easily placing them into the cruiser classification. Russia is to build an unknown number of Lider-class destroyers. At 19,000 tons of displacement they will more than double the displacement of existing Russian destroyers. South Korea will add 2 more Sejong the Great-class destroyers to its fleet. The United Kingdom is developing the Type 83 destroyer, likely to be larger than the Type 45 destroyer it is set to replace and, in terms of displacement, possibly in the 10,000 tonne range. Analysis at 'DefenceConnect' of a BAE concept for the class stated that the proposal displaced at least 11,810 tons. The United States currently has 1 Zumwalt-class destroyer undergoing sea trials and is developing its DDG(X) project to replace the aging Ticonderoga-class cruisers.
Displacing 12,000 tons, much greater than their predecessors, the DDG(X) ships will be cruisers in all but name. Museum ships As of 2019, several decommissioned cruisers have been saved from scrapping and exist worldwide as museum ships. They are: a floating replica of the cruiser Zhiyuan on display in Dandong, China; Georgios Averof in Athens, Greece, still active as the flagship of the Hellenic Navy; Aurora in St. Petersburg, Russia, still active as the flagship of the Russian Navy; Mikhail Kutuzov in Novorossiysk, Russia, the last surviving Sverdlov-class cruiser; HMS Belfast in London, England; HMS Caroline in Belfast, Northern Ireland, the last surviving ship from the Battle of Jutland; USS Olympia in Philadelphia, Pennsylvania, the world's oldest steel-hulled warship afloat; USS Little Rock in Buffalo, New York; USS Salem in Quincy, Massachusetts, the world's last heavy cruiser; and the bow section of a cruiser preserved in La Spezia, Italy. Former museums The French cruiser Colbert was on display in Bordeaux, France until 2006, when she was forced to close due to financial difficulties. She sat in the French Navy's mothball fleet in Landevennec until she was sold for scrap in 2014. Former operators Argentina's last cruiser, the General Belgrano, was sunk in action during the Falklands War in 1982. Austria-Hungary lost its entire navy with the Empire's collapse following World War I. One navy decommissioned both of its surviving cruisers in 1949, and another returned its only cruiser to France following the abolition of its navy in 1920. Brazil decommissioned its last cruiser, Almirante Tamandaré, in 1976. Canada decommissioned HMCS Quebec in 1961. Chile decommissioned its last cruiser, O'Higgins, in 1991. Taiwan's last cruiser was decommissioned in 1958 and sold for scrapping in 1959; this light cruiser was akin to pre-WW1 light cruisers at the time of commissioning and its contemporaries were gunboats. Taiwan's penultimate cruiser was ROCS Chung King, their lone vessel in the Arethusa class; she defected to the People's Liberation Army Navy during the Chinese Civil War in 1949. Another former operator's only cruiser, Znaim, was handed over to Germany in 1943. Other navies decommissioned their last cruisers in 1923 and 1990. France decommissioned its last cruiser, the helicopter cruiser Jeanne d'Arc, in 2010. Greece decommissioned its last active duty cruiser, Elli, in 1965. The Haitian Navy's only cruiser, Consul Gostrück, sank due to the inexperience of her crew in 1910. India decommissioned its last cruiser in 1985. Indonesia decommissioned its only cruiser, the RI Irian, in 1972. Italy decommissioned the helicopter cruiser Vittorio Veneto in 2006 and the aircraft cruiser Giuseppe Garibaldi in 2024. Japan surrendered all its remaining cruisers to the Allies following World War II. New Zealand decommissioned its last cruiser, HMNZS Royalist, in 1966. Another country decommissioned its last cruiser in 1975, and another decommissioned its only cruiser between 1982 and 1985; that ship was scrapped in 1985. Peru decommissioned its last cruiser, Almirante Grau, in 2017. Poland returned its lone surviving cruiser, ORP Conrad, to the United Kingdom in 1946. Portugal decommissioned its last cruiser, NRP Vasco da Gama, in 1935. Another navy decommissioned its only cruiser in 1929. From 1985 to 1990, the Romanian People's Navy (and subsequently, the Romanian Naval Forces) classified Muntenia as a light helicopter cruiser, but it was refitted and redesignated as a destroyer (ultimately, it was redesignated as a frigate in 2001). South Africa decommissioned its only cruiser, SATS General Botha, in 1947. Two other navies decommissioned their last cruisers in 1977 and 1971 respectively. Turkey decommissioned its last cruiser, TCG Mecidiye, in 1948; it retained a battlecruiser, TCG Yavuz, which was decommissioned in 1950 and stricken from the Naval Register in 1954. The United Kingdom decommissioned its last cruiser in 1979. Ukraine lost its entire fleet upon its reintegration into the Soviet Union in 1921. Uruguay decommissioned its only cruiser, ROU Montevideo, in 1932.
Venezuela decommissioned its only cruiser, FNV Mariscal Sucre, in 1940. Yugoslavia's only cruiser, KB Dalmacija, was captured by Germany during the Invasion of Yugoslavia in 1941.
Chlamydia
Chlamydia, or more specifically a chlamydia infection, is a sexually transmitted infection caused by the bacterium Chlamydia trachomatis. Most people who are infected have no symptoms. When symptoms do appear, they may occur only several weeks after infection; the incubation period between exposure and being able to infect others is thought to be on the order of two to six weeks. Symptoms in women may include vaginal discharge or burning with urination. Symptoms in men may include discharge from the penis, burning with urination, or pain and swelling of one or both testicles. The infection can spread to the upper genital tract in women, causing pelvic inflammatory disease, which may result in future infertility or ectopic pregnancy. Chlamydia infections can occur in other areas besides the genitals, including the anus, eyes, throat, and lymph nodes. Repeated chlamydia infections of the eyes that go without treatment can result in trachoma, a common cause of blindness in the developing world. Chlamydia can be spread during vaginal, anal, oral, or manual sex and can be passed from an infected mother to her baby during childbirth. The eye infections may also be spread by personal contact, flies, and contaminated towels in areas with poor sanitation. Infection by the bacterium Chlamydia trachomatis only occurs in humans. Diagnosis is often by screening, which is recommended yearly in sexually active women under the age of 25, others at higher risk, and at the first prenatal visit. Testing can be done on the urine or a swab of the cervix, vagina, or urethra. Rectal or mouth swabs are required to diagnose infections in those areas. Prevention is by not having sex, the use of condoms, or having sex with only one other person, who is not infected. Chlamydia can be cured by antibiotics, with typically either azithromycin or doxycycline being used. Erythromycin or azithromycin is recommended in babies and during pregnancy. Sexual partners should also be treated, and infected people should be advised not to have sex for seven days and until symptom free. Gonorrhea, syphilis, and HIV should be tested for in those who have been infected. Following treatment, people should be tested again after three months. Chlamydia is one of the most common sexually transmitted infections, affecting about 4.2% of women and 2.7% of men worldwide. In 2015, about 61 million new cases occurred globally. In the United States, about 1.4 million cases were reported in 2014. Infections are most common among those between the ages of 15 and 25 and are more common in women than men. In 2015, infections resulted in about 200 deaths. The word chlamydia is from the Greek , meaning 'cloak'. Signs and symptoms Genital disease Women Chlamydial infection of the cervix (neck of the womb) is a sexually transmitted infection which has no symptoms for around 70% of women infected. The infection can be passed through vaginal, anal, oral, or manual sex. Of those who have an asymptomatic infection that is not detected by their doctor, approximately half will develop pelvic inflammatory disease (PID), a generic term for infection of the uterus, fallopian tubes, and/or ovaries. PID can cause scarring inside the reproductive organs, which can later cause serious complications, including chronic pelvic pain, difficulty becoming pregnant, ectopic (tubal) pregnancy, and other dangerous complications of pregnancy. Chlamydia is known as the "silent epidemic", as at least 70% of genital C. 
trachomatis infections in women (and 50% in men) are asymptomatic at the time of diagnosis, and can linger for months or years before being discovered. Signs and symptoms may include abnormal vaginal bleeding or discharge, abdominal pain, painful sexual intercourse, fever, painful urination or the urge to urinate more often than usual (urinary urgency). For sexually active women who are not pregnant, screening is recommended in those under 25 and others at risk of infection. Risk factors include a history of chlamydial or other sexually transmitted infection, new or multiple sexual partners, and inconsistent condom use. Guidelines recommend that all women attending for emergency contraception be offered chlamydia testing, with studies showing up to 9% of women aged under 25 years had chlamydia. Men In men, those with a chlamydial infection show symptoms of infectious inflammation of the urethra in about 50% of cases. Symptoms that may occur include: a painful or burning sensation when urinating, an unusual discharge from the penis, testicular pain or swelling, or fever. If left untreated, chlamydia in men can spread to the testicles causing epididymitis, which in rare cases can lead to sterility. Chlamydia is also a potential cause of prostatic inflammation in men, although the exact relevance in prostatitis is difficult to ascertain due to possible contamination from urethritis. Eye disease Trachoma is a chronic conjunctivitis caused by Chlamydia trachomatis. It was once the leading cause of blindness worldwide, but its role diminished from causing 15% of blindness cases in 1995 to 3.6% in 2002. The infection can be spread from eye to eye by fingers, shared towels or cloths, coughing and sneezing, and eye-seeking flies. Symptoms include mucopurulent ocular discharge, irritation, redness, and lid swelling. Newborns can also develop chlamydia eye infection through childbirth (see below). Using the SAFE strategy (acronym for surgery for in-growing or in-turned lashes, antibiotics, facial cleanliness, and environmental improvements), the World Health Organization aimed (unsuccessfully) for the global elimination of trachoma by 2020 (GET 2020 initiative). The updated World Health Assembly neglected tropical diseases road map (2021–2030) sets 2030 as the new timeline for global elimination. Joints Chlamydia may also cause reactive arthritis—the triad of arthritis, conjunctivitis and urethral inflammation—especially in young men. About 15,000 men develop reactive arthritis due to chlamydia infection each year in the U.S., and about 5,000 are permanently affected by it. It can occur in both sexes, though it is more common in men. Infants As many as half of all infants born to mothers with chlamydia will be born with the disease. Chlamydia can affect infants by causing spontaneous abortion; premature birth; conjunctivitis, which may lead to blindness; and pneumonia. Conjunctivitis due to chlamydia typically occurs one week after birth (compared with chemical causes (within hours) or gonorrhea (2–5 days)). Other conditions A different serovar of Chlamydia trachomatis is also the cause of lymphogranuloma venereum, an infection of the lymph nodes and lymphatics. It usually presents with genital ulceration and swollen lymph nodes in the groin, but it may also manifest as rectal inflammation, fever or swollen lymph nodes in other regions of the body. Transmission Chlamydia can be transmitted during vaginal, anal, oral, or manual sex or direct contact with infected tissue such as conjunctiva. 
Chlamydia can also be passed from an infected mother to her baby during vaginal childbirth. It is assumed that the probability of becoming infected is proportionate to the number of bacteria one is exposed to. Pathophysiology Chlamydia bacteria have the ability to establish long-term associations with host cells. When an infected host cell is starved for various nutrients such as amino acids (for example, tryptophan), iron, or vitamins, this has a negative consequence for chlamydia bacteria since the organism is dependent on the host cell for these nutrients. Long-term cohort studies indicate that approximately 50% of those infected clear the infection within a year, 80% within two years, and 90% within three years. The starved chlamydia bacteria can enter a persistent growth state where they stop cell division and become morphologically aberrant by increasing in size. Persistent organisms remain viable as they are capable of returning to a normal growth state once conditions in the host cell improve. There is debate as to whether persistence has clinical relevance: some believe that persistent chlamydia bacteria are the cause of chronic chlamydial diseases. Some antibiotics such as β-lactams have been found to induce a persistent-like growth state. Diagnosis The diagnosis of genital chlamydial infections evolved rapidly from the 1990s through 2006. Nucleic acid amplification tests (NAAT), such as polymerase chain reaction (PCR), transcription mediated amplification (TMA), and DNA strand displacement amplification (SDA), are now the mainstays. NAAT for chlamydia may be performed on swab specimens sampled from the cervix (women) or urethra (men), on self-collected vaginal swabs, or on voided urine. NAAT has been estimated to have a sensitivity of approximately 90% and a specificity of approximately 99%, regardless of whether sampling is by cervical swab or urine specimen. In women seeking treatment in a sexually transmitted infection clinic where a urine test is negative, a subsequent cervical swab has been estimated to be positive approximately 2% of the time. At present, the NAATs have regulatory approval only for testing urogenital specimens, although rapidly evolving research indicates that they may give reliable results on rectal specimens. Because of improved test accuracy, ease and convenience of specimen management, and ease of screening sexually active men and women, the NAATs have largely replaced culture, the historic gold standard for chlamydia diagnosis, and the non-amplified probe tests. The latter test is relatively insensitive, successfully detecting only 60–80% of infections in asymptomatic women, and often giving false-positive results. Culture remains useful in selected circumstances and is currently the only assay approved for testing non-genital specimens. Other methods also exist, including ligase chain reaction (LCR), direct fluorescent antibody testing, enzyme immunoassay, and cell culture. Whether the swab sample is collected at home or in a clinic does not appear to make a difference in the number of patients treated. The implications for cure rates, reinfection, partner management, and safety are unknown. Rapid point-of-care tests are, as of 2020, not thought to be effective for diagnosing chlamydia in men of reproductive age and non-pregnant women because of high false-negative rates. Prevention Prevention is by not having sex, the use of condoms, or having sex with only one other person, who is not infected. 
Screening For sexually active women who are not pregnant, screening is recommended in those under 25 and others at risk of infection. Risk factors include a history of chlamydial or other sexually transmitted infection, new or multiple sexual partners, and inconsistent condom use. For pregnant women, guidelines vary: screening women with age or other risk factors is recommended by the U.S. Preventive Services Task Force (USPSTF) (which recommends screening women under 25) and the American Academy of Family Physicians (which recommends screening women aged 25 or younger). The American College of Obstetricians and Gynecologists recommends screening all at risk, while the Centers for Disease Control and Prevention recommend universal screening of pregnant women. The USPSTF acknowledges that in some communities there may be other risk factors for infection, such as ethnicity. Evidence-based recommendations for screening initiation, intervals and termination are currently not possible. For men, the USPSTF concludes evidence is currently insufficient to determine if regular screening of men for chlamydia is beneficial. They recommend regular screening of men who are at increased risk for HIV or syphilis infection. A Cochrane review found that the effects of screening are uncertain in terms of chlamydia transmission but that screening probably reduces the risk of pelvic inflammatory disease in women. In the United Kingdom the National Health Service (NHS) aims to: Prevent and control chlamydia infection through early detection and treatment of asymptomatic infection; Reduce onward transmission to sexual partners; Prevent the consequences of untreated infection; Test at least 25 percent of the sexually active under 25 population annually. Retest after treatment. Treatment C. trachomatis infection can be effectively cured with antibiotics. Guidelines recommend azithromycin, doxycycline, erythromycin, levofloxacin or ofloxacin. In men, doxycycline (100 mg twice a day for 7 days) is probably more effective than azithromycin (1 g single dose) but evidence for the relative effectiveness of antibiotics in women is very uncertain. Agents recommended during pregnancy include erythromycin or amoxicillin. An option for treating sexual partners of those with chlamydia or gonorrhea includes patient-delivered partner therapy (PDT or PDPT), which is the practice of treating the sex partners of index cases by providing prescriptions or medications to the patient to take to his/her partner without the health care provider first examining the partner. Following treatment people should be tested again after three months to check for reinfection. Test of cure may be false-positive due to the limitations of NAAT in a bacterial (rather than a viral) context, since targeted genetic material may persist in the absence of viable organisms. Epidemiology Globally, as of 2015, sexually transmitted chlamydia affects approximately 61 million people. It is more common in women (3.8%) than men (2.5%). In 2015 it resulted in about 200 deaths. In the United States about 1.6 million cases were reported in 2016. The CDC estimates that if one includes unreported cases there are about 2.9 million each year. It affects around 2% of young people. Chlamydial infection is the most common bacterial sexually transmitted infection in the UK. Chlamydia causes more than 250,000 cases of epididymitis in the U.S. each year. Chlamydia causes 250,000 to 500,000 cases of PID every year in the United States. 
Women infected with chlamydia are up to five times more likely to become infected with HIV, if exposed.
Biology and health sciences
Infectious disease
null
7038
https://en.wikipedia.org/wiki/Candidiasis
Candidiasis
Candidiasis is a fungal infection due to any species of the genus Candida (a yeast). When it affects the mouth, in some countries it is commonly called thrush. Signs and symptoms include white patches on the tongue or other areas of the mouth and throat. Other symptoms may include soreness and problems swallowing. When it affects the vagina, it may be referred to as a yeast infection or thrush. Signs and symptoms include genital itching, burning, and sometimes a white "cottage cheese-like" discharge from the vagina. Yeast infections of the penis are less common and typically present with an itchy rash. Very rarely, yeast infections may become invasive, spreading to other parts of the body. This may result in fevers, among other symptoms. More than 20 types of Candida may cause infection with Candida albicans being the most common. Infections of the mouth are most common among children less than one month old, the elderly, and those with weak immune systems. Conditions that result in a weak immune system include HIV/AIDS, the medications used after organ transplantation, diabetes, and the use of corticosteroids. Other risk factors include during breastfeeding, following antibiotic therapy, and the wearing of dentures. Vaginal infections occur more commonly during pregnancy, in those with weak immune systems, and following antibiotic therapy. Individuals at risk for invasive candidiasis include low birth weight babies, people recovering from surgery, people admitted to intensive care units, and those with an otherwise compromised immune system. Efforts to prevent infections of the mouth include the use of chlorhexidine mouthwash in those with poor immune function and washing out the mouth following the use of inhaled steroids. Little evidence supports probiotics for either prevention or treatment, even among those with frequent vaginal infections. For infections of the mouth, treatment with topical clotrimazole or nystatin is usually effective. Oral or intravenous fluconazole, itraconazole, or amphotericin B may be used if these do not work. A number of topical antifungal medications may be used for vaginal infections, including clotrimazole. In those with widespread disease, an echinocandin such as caspofungin or micafungin is used. A number of weeks of intravenous amphotericin B may be used as an alternative. In certain groups at very high risk, antifungal medications may be used preventively, and concomitantly with medications known to precipitate infections. Infections of the mouth occur in about 6% of babies less than a month old. About 20% of those receiving chemotherapy for cancer and 20% of those with AIDS also develop the disease. About three-quarters of women have at least one yeast infection at some time during their lives. Widespread disease is rare except in those who have risk factors. Signs and symptoms Signs and symptoms of candidiasis vary depending on the area affected. Most candidal infections result in minimal complications such as redness, itching, and discomfort, though complications may be severe or even fatal if left untreated in certain populations. In healthy (immunocompetent) persons, candidiasis is usually a localized infection of the skin, fingernails or toenails (onychomycosis), or mucosal membranes, including the oral cavity and pharynx (thrush), esophagus, and the sex organs (vagina, penis, etc.); less commonly in healthy individuals, the gastrointestinal tract, urinary tract, and respiratory tract are sites of candida infection. 
In immunocompromised individuals, Candida infections in the esophagus occur more frequently than in healthy individuals and have a higher potential of becoming systemic, causing a much more serious condition, a fungemia called candidemia. Symptoms of esophageal candidiasis include difficulty swallowing, painful swallowing, abdominal pain, nausea, and vomiting. Mouth Infection in the mouth is characterized by white discolorations in the tongue, around the mouth, and in the throat. Irritation may also occur, causing discomfort when swallowing. Thrush is commonly seen in infants. It is not considered abnormal in infants unless it lasts longer than a few weeks. Genitals Infection of the vagina or vulva may cause severe itching, burning, soreness, irritation, and a whitish or whitish-gray cottage cheese-like discharge. Symptoms of infection of the male genitalia (balanitis thrush) include red skin around the head of the penis, swelling, irritation, itchiness and soreness of the head of the penis, thick, lumpy discharge under the foreskin, unpleasant odour, difficulty retracting the foreskin (phimosis), and pain when passing urine or during sex. Skin Signs and symptoms of candidiasis in the skin include itching, irritation, and chafing or broken skin. Invasive infection Common symptoms of gastrointestinal candidiasis in healthy individuals are anal itching, belching, bloating, indigestion, nausea, diarrhea, gas, intestinal cramps, vomiting, and gastric ulcers. Perianal candidiasis can cause anal itching; the lesion can be red, papular, or ulcerative in appearance, and it is not considered to be a sexually transmitted infection. Abnormal proliferation of the candida in the gut may lead to dysbiosis. While it is not yet clear, this alteration may be the source of symptoms generally described as the irritable bowel syndrome, and other gastrointestinal diseases. Neurological symptoms Systemic candidiasis can affect the central nervous system causing a variety of neurological symptoms, with a presentation similar to meningitis. Causes Candida yeasts are generally present in healthy humans, frequently part of the human body's normal oral and intestinal flora, and particularly on the skin; however, their growth is normally limited by the human immune system and by competition of other microorganisms, such as bacteria occupying the same locations in the human body. Candida requires moisture for growth, notably on the skin. For example, wearing wet swimwear for long periods of time is believed to be a risk factor. Candida can also cause diaper rashes in babies. In extreme cases, superficial infections of the skin or mucous membranes may enter the bloodstream and cause systemic Candida infections. Factors that increase the risk of candidiasis include HIV/AIDS, mononucleosis, cancer treatments, steroids, stress, antibiotic therapy, diabetes, and nutrient deficiency. Hormone replacement therapy and infertility treatments may also be predisposing factors. Use of inhaled corticosteroids increases risk of candidiasis of the mouth. Inhaled corticosteroids with other risk factors such as antibiotics, oral glucocorticoids, not rinsing mouth after use of inhaled corticosteroids or high dose of inhaled corticosteroids put people at even higher risk. Treatment with antibiotics can lead to eliminating the yeast's natural competitors for resources in the oral and intestinal flora, thereby increasing the severity of the condition. 
A weakened or undeveloped immune system or metabolic illnesses are significant predisposing factors of candidiasis. Almost 15% of people with weakened immune systems develop a systemic illness caused by Candida species. Diets high in simple carbohydrates have been found to affect rates of oral candidiases. C. albicans was isolated from the vaginas of 19% of apparently healthy women, i.e., those who experienced few or no symptoms of infection. External use of detergents or douches or internal disturbances (hormonal or physiological) can perturb the normal vaginal flora, consisting of lactic acid bacteria, such as lactobacilli, and result in an overgrowth of Candida cells, causing symptoms of infection, such as local inflammation. Pregnancy and the use of oral contraceptives have been reported as risk factors. Diabetes mellitus and the use of antibiotics are also linked to increased rates of yeast infections. In penile candidiasis, the causes include sexual intercourse with an infected individual, low immunity, antibiotics, and diabetes. Male genital yeast infections are less common, but a yeast infection on the penis caused from direct contact via sexual intercourse with an infected partner is not uncommon. Breast-feeding mothers may also develop candidiasis on and around the nipple as a result of moisture created by excessive milk-production. Vaginal candidiasis can cause congenital candidiasis in newborns. Diagnosis In oral candidiasis, simply inspecting the person's mouth for white patches and irritation may make the diagnosis. A sample of the infected area may also be taken to determine what organism is causing the infection. Symptoms of vaginal candidiasis are also present in the more common bacterial vaginosis; aerobic vaginitis is distinct and should be excluded in the differential diagnosis. In a 2002 study, only 33% of women who were self-treating for a yeast infection were found to have such an infection, while most had either bacterial vaginosis or a mixed-type infection. Diagnosis of a yeast infection is confirmed either via microscopic examination or culturing. For identification by light microscopy, a scraping or swab of the affected area is placed on a microscope slide. A single drop of 10% potassium hydroxide (KOH) solution is then added to the specimen. The KOH dissolves the skin cells, but leaves the Candida cells intact, permitting visualization of pseudohyphae and budding yeast cells typical of many Candida species. For the culturing method, a sterile swab is rubbed on the infected skin surface. The swab is then streaked on a culture medium. The culture is incubated at 37 °C (98.6 °F) for several days, to allow development of yeast or bacterial colonies. The characteristics (such as morphology and colour) of the colonies may allow initial diagnosis of the organism causing disease symptoms. Respiratory, gastrointestinal, and esophageal candidiasis require an endoscopy to diagnose. For gastrointestinal candidiasis, it is necessary to obtain a 3–5 milliliter sample of fluid from the duodenum for fungal culture. The diagnosis of gastrointestinal candidiasis is based upon the culture containing in excess of 1,000 colony-forming units per milliliter. 
Classification Candidiasis may be divided into these types: Mucosal candidiasis Oral candidiasis (thrush, oropharyngeal candidiasis) Pseudomembranous candidiasis Erythematous candidiasis Hyperplastic candidiasis Denture-related stomatitis — Candida organisms are involved in about 90% of cases Angular cheilitis — Candida species are responsible for about 20% of cases, mixed infection of C. albicans and Staphylococcus aureus for about 60% of cases. Median rhomboid glossitis Candidal vulvovaginitis (vaginal yeast infection) Candidal balanitis — infection of the glans penis, almost exclusively occurring in uncircumcised males Esophageal candidiasis (candidal esophagitis) Gastrointestinal candidiasis Respiratory candidiasis Cutaneous candidiasis Candidal folliculitis Candidal intertrigo Candidal paronychia Perianal candidiasis, may present as pruritus ani Candidid Chronic mucocutaneous candidiasis Congenital cutaneous candidiasis Diaper candidiasis: an infection of a child's diaper area Erosio interdigitalis blastomycetica Candidal onychomycosis (nail infection) caused by Candida Systemic candidiasis Candidemia, a form of fungemia which may lead to sepsis Invasive candidiasis (disseminated candidiasis) — organ infection by Candida Chronic systemic candidiasis (hepatosplenic candidiasis) — sometimes arises during recovery from neutropenia Antibiotic candidiasis (iatrogenic candidiasis) Prevention A diet that supports the immune system and is not high in simple carbohydrates contributes to a healthy balance of the oral and intestinal flora. While yeast infections are associated with diabetes, the level of blood sugar control may not affect the risk. Wearing cotton underwear may help to reduce the risk of developing skin and vaginal yeast infections, along with not wearing wet clothes for long periods of time. For women who experience recurrent yeast infections, there is limited evidence that oral or intravaginal probiotics help to prevent future infections. This includes either as pills or as yogurt. Oral hygiene can help prevent oral candidiasis when people have a weakened immune system. For people undergoing cancer treatment, chlorhexidine mouthwash can prevent or reduce thrush. People who use inhaled corticosteroids can reduce the risk of developing oral candidiasis by rinsing the mouth with water or mouthwash after using the inhaler. People with dentures should also disinfect their dentures regularly to prevent oral candidiasis. Treatment Candidiasis is treated with antifungal medications; these include clotrimazole, nystatin, fluconazole, voriconazole, amphotericin B, and echinocandins. Intravenous fluconazole or an intravenous echinocandin such as caspofungin are commonly used to treat immunocompromised or critically ill individuals. The 2016 revision of the clinical practice guideline for the management of candidiasis lists a large number of specific treatment regimens for Candida infections that involve different Candida species, forms of antifungal drug resistance, immune statuses, and infection localization and severity. Gastrointestinal candidiasis in immunocompetent individuals is treated with 100–200 mg fluconazole per day for 2–3 weeks. Localized infection Mouth and throat candidiasis are treated with antifungal medication. Oral candidiasis usually responds to topical treatments; otherwise, systemic antifungal medication may be needed for oral infections. 
Candidal skin infections in the skin folds (candidal intertrigo) typically respond well to topical antifungal treatments (e.g., nystatin or miconazole). For breastfeeding mothers, topical miconazole is the most effective treatment for candidiasis on the breasts. Gentian violet can be used for thrush in breastfeeding babies. Systemic treatment with antifungals by mouth is reserved for severe cases or if treatment with topical therapy is unsuccessful. Candida esophagitis may be treated orally or intravenously; for severe or azole-resistant esophageal candidiasis, treatment with amphotericin B may be necessary. Vaginal yeast infections are typically treated with topical antifungal agents. Penile yeast infections are also treated with antifungal agents, but while an internal treatment may be used (such as a pessary) for vaginal yeast infections, only external treatments – such as a cream – can be recommended for penile treatment. A one-time dose of fluconazole by mouth is 90% effective in treating a vaginal yeast infection. For severe nonrecurring cases, several doses of fluconazole are recommended. Local treatment may include vaginal suppositories or medicated douches. Other types of yeast infections require different dosing. C. albicans can develop resistance to fluconazole, this being more of an issue in those with HIV/AIDS who are often treated with multiple courses of fluconazole for recurrent oral infections. For vaginal yeast infection in pregnancy, topical imidazole or triazole antifungals are considered the therapy of choice owing to available safety data. Systemic absorption of these topical formulations is minimal, posing little risk of transplacental transfer. In vaginal yeast infection in pregnancy, treatment with topical azole antifungals is recommended for seven days instead of a shorter duration. For vaginal yeast infections, many complementary treatments are proposed; however, a number have side effects. No benefit from probiotics has been found for active infections. Blood-borne infection Candidemia occurs when any Candida species infects the blood. Its treatment typically consists of oral or intravenous antifungal medications. Examples include intravenous fluconazole or an echinocandin such as caspofungin. Amphotericin B is another option. Prognosis In hospitalized patients who develop candidemia, age is an important prognostic factor. Mortality following candidemia is 50% in patients aged ≥75 years and 24% in patients aged <75 years. Among individuals being treated in intensive care units, the mortality rate is about 30–50% when systemic candidiasis develops. Epidemiology Oral candidiasis is the most common fungal infection of the mouth, and it also represents the most common opportunistic oral infection in humans. Infections of the mouth occur in about 6% of babies less than a month old. About 20% of those receiving chemotherapy for cancer and 20% of those with AIDS also develop the disease. It is estimated that 20% of women may be asymptomatically colonized by vaginal yeast. In the United States there are approximately 1.4 million doctor office visits every year for candidiasis. About three-quarters of women have at least one yeast infection at some time during their lives. Esophageal candidiasis is the most common esophageal infection in persons with AIDS and accounts for about 50% of all esophageal infections, often coexisting with other esophageal diseases. About two-thirds of people with AIDS and esophageal candidiasis also have oral candidiasis. 
Candidal sepsis is rare. Candida is the fourth most common cause of bloodstream infections among hospital patients in the United States. The incidence of bloodstream candida in intensive care units varies widely between countries. History Descriptions of what sounds like oral thrush go back to the time of Hippocrates circa 460–370 BCE. The first description of a fungus as the causative agent of an oropharyngeal and oesophageal candidosis was by Bernhard von Langenbeck in 1839. Vulvovaginal candidiasis was first described in 1849 by Wilkinson. In 1875, Haussmann demonstrated the causative organism in both vulvovaginal and oral candidiasis is the same. With the advent of antibiotics following World War II, the rates of candidiasis increased. The rates then decreased in the 1950s following the development of nystatin. The colloquial term "thrush" is of unknown origin but may stem from an unrecorded Old English word *þrusc or from a Scandinavian root. The term is not related to the bird of the same name. The term candidosis is largely used in British English, and candidiasis in American English. Candida is also pronounced differently; in American English, the stress is on the "i", whereas in British English the stress is on the first syllable. The genus Candida and species C. albicans were described by botanist Christine Marie Berkhout in her doctoral thesis at the University of Utrecht in 1923. Over the years, the classification of the genera and species has evolved. Obsolete names for this genus include Mycotorula and Torulopsis. The species has also been known in the past as Monilia albicans and Oidium albicans. The current classification is nomen conservandum, which means the name is authorized for use by the International Botanical Congress (IBC). The genus Candida includes about 150 different species. However, only a few are known to cause human infections. C. albicans is the most significant pathogenic species. Other species pathogenic in humans include C. auris, C. tropicalis, C. parapsilosis, C. dubliniensis, and C. lusitaniae. The name Candida was proposed by Berkhout. It is from the Latin word toga candida, referring to the white toga (robe) worn by candidates for the Senate of the ancient Roman republic. The specific epithet albicans also comes from Latin, albicare meaning "to whiten". These names refer to the generally white appearance of Candida species when cultured. Alternative medicine A 2005 publication noted that "a large pseudoscientific cult" has developed around the topic of Candida, with claims stating that up to one in three people are affected by yeast-related illness, particularly a condition called "Candidiasis hypersensitivity". Some practitioners of alternative medicine have promoted these purported conditions and sold dietary supplements as supposed cures; a number of them have been prosecuted. In 1990, alternative health vendor Nature's Way signed an FTC consent agreement not to misrepresent in advertising any self-diagnostic test concerning yeast conditions or to make any unsubstantiated representation concerning any food or supplement's ability to control yeast conditions, with a fine of $30,000 payable to the National Institutes of Health for research in genuine candidiasis. Research High level Candida colonization is linked to several diseases of the gastrointestinal tract including Crohn's disease. There has been an increase in resistance to antifungals worldwide over the past 30–40 years.
Biology and health sciences
Fungal infections
Health
7039
https://en.wikipedia.org/wiki/Control%20theory
Control theory
Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability, often with the aim of achieving a degree of optimality. To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV), and compares it with the reference or set point (SP). The difference between the actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation that has revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics. Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system. Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, Charles Sturm and, in 1895, Adolf Hurwitz, who all contributed to the establishment of control stability criteria; and from 1922 onwards, by the development of PID control theory by Nicolas Minorsky. Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs - thus control theory also has applications in life sciences, computer engineering, sociology and operations research. History Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled On Governors. A centrifugal governor was already used to regulate the velocity of windmills. Maxwell described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems. Independently, Adolf Hurwitz analyzed system stability using differential equations in 1895, resulting in what is now known as the Routh–Hurwitz theorem. A notable application of dynamic control was in the area of crewed flight. The Wright brothers made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds. 
By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft. Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics. Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship. The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant. Open-loop and closed-loop (feedback) control Classical control theory Linear and nonlinear control theory The field of control theory can be divided into two branches: Linear control theory – This applies to systems made of devices which obey the superposition principle, which means roughly that the output is proportional to the input. They are governed by linear differential equations. A major subclass is systems which in addition have parameters which do not change with time, called linear time invariant (LTI) systems. These systems are amenable to powerful frequency domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z transform, Bode plot, root locus, and Nyquist stability criterion. These lead to a description of the system using terms like bandwidth, frequency response, eigenvalues, gain, resonant frequencies, zeros and poles, which give solutions for system response and design techniques for most systems of interest. Nonlinear control theory – This covers a wider class of systems that do not obey the superposition principle, and applies to more real-world systems because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The few mathematical techniques which have been developed to handle them are more difficult and much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, Lyapunov stability theorem, and describing functions. Nonlinear systems are often analyzed using numerical methods on computers, for example by simulating their operation using a simulation language. 
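To make the last point concrete, here is a minimal Python sketch (not from the source; the pendulum model, parameter values, and function name are assumptions chosen purely for illustration) that numerically simulates a damped nonlinear pendulum, a system whose sin(theta) term rules out superposition-based frequency-domain tools:

```python
# Minimal sketch: simulating a nonlinear system (a damped pendulum) numerically.
# The model theta'' = -(g/L)*sin(theta) - c*theta' does not obey superposition,
# so instead of frequency-domain tools we simply integrate it step by step.
import math

def simulate_pendulum(theta0, omega0, g=9.81, L=1.0, c=0.5, dt=1e-3, t_end=10.0):
    """Forward-Euler integration of the damped pendulum equations (assumed parameters)."""
    theta, omega = theta0, omega0
    trajectory = [(0.0, theta)]
    steps = int(t_end / dt)
    for k in range(1, steps + 1):
        # Nonlinear dynamics: angular acceleration depends on sin(theta).
        alpha = -(g / L) * math.sin(theta) - c * omega
        theta = theta + omega * dt
        omega = omega + alpha * dt
        trajectory.append((k * dt, theta))
    return trajectory

if __name__ == "__main__":
    traj = simulate_pendulum(theta0=1.0, omega0=0.0)
    print("final angle (rad):", round(traj[-1][1], 4))  # decays toward 0 for c > 0
```

For small initial angles the same trajectory is closely approximated by the linearized model discussed next, which replaces sin(theta) with theta.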
If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them by a linear system using perturbation theory, and linear techniques can be used. Analysis techniques - frequency domain and time domain Mathematical techniques for analyzing and designing control systems fall into two different categories: Frequency domain – In this type the values of the state variables, the mathematical variables representing the system's input, output and feedback, are represented as functions of frequency. The input signal and the system's transfer function are converted from time functions to functions of frequency by a transform such as the Fourier transform, Laplace transform, or Z transform. The advantage of this technique is that it results in a simplification of the mathematics; the differential equations that represent the system are replaced by algebraic equations in the frequency domain, which are much simpler to solve. However, frequency domain techniques can only be used with linear systems, as mentioned above. Time-domain state space representation – In this type the values of the state variables are represented as functions of time. With this model, the system being analyzed is represented by one or more differential equations. Since frequency domain techniques are limited to linear systems, the time domain is widely used to analyze real-world nonlinear systems. Although these are more difficult to solve, modern computer simulation techniques such as simulation languages have made their analysis routine. In contrast to the frequency domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With p inputs and q outputs, we would otherwise have to write down q × p Laplace transforms to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space. System interfacing - SISO & MIMO Control systems can be divided into different categories depending on the number of inputs and outputs. Single-input single-output (SISO) – This is the simplest and most common type, in which one output is controlled by one control signal. Examples are the cruise control example above, or an audio system, in which the control input is the input audio signal and the output is the sound waves from the speaker. Multiple-input multiple-output (MIMO) – These are found in more complicated systems. For example, modern large telescopes such as the Keck and MMT have mirrors composed of many separate segments each controlled by an actuator. 
The shape of the entire mirror is constantly adjusted by a MIMO active optics control system using input from multiple sensors at the focal plane, to compensate for changes in the mirror shape due to thermal expansion, contraction, stresses as it is rotated and distortion of the wavefront due to turbulence in the atmosphere. Complicated systems such as nuclear reactors and human cells are simulated by a computer as large MIMO control systems. Classical SISO system design The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second order and single variable system response in the time domain. A controller designed using classical theory often requires on-site tuning due to incorrect design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers. A less common implementation may include either or both a Lead or Lag filter. The ultimate end goal is to meet requirements typically provided in the time-domain called the step response, or at times in the frequency domain called the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically Gain and Phase margin and bandwidth. These characteristics may be evaluated through simulation including a dynamic model of the system under control coupled with the compensation model. Modern MIMO system design Modern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Matrix methods are significantly limited for MIMO systems where linear independence cannot be assured in the relationship between inputs and outputs. Being fairly new, modern control theory has many areas yet to be explored. Scholars like Rudolf E. Kálmán and Aleksandr Lyapunov are well known among the people who have shaped modern control theory. Topics in control theory Stability The stability of a general dynamical system with no input can be described with Lyapunov stability criteria. A linear system is called bounded-input bounded-output (BIBO) stable if its output will stay bounded for any bounded input. Stability for nonlinear systems that take an input is input-to-state stability (ISS), which combines Lyapunov stability and a notion similar to BIBO stability. For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems. 
Mathematically, this means that for a causal linear system to be stable all of the poles of its transfer function must have negative real values, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the transfer function's complex poles reside in the open left half of the complex plane for continuous time, when the Laplace transform is used to obtain the transfer function, or inside the unit circle for discrete time, when the Z-transform is used. The difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions. The continuous Laplace transform is in Cartesian coordinates where the horizontal axis is the real axis, and the discrete Z-transform is in circular coordinates where the radial axis is the real axis. When the appropriate conditions above are satisfied a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or a modulus equal to one (in the discrete time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and imaginary components are zero in the continuous time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero. If a system in question has an impulse response of x[n] = 0.5^n u[n], then the Z-transform is given by X(z) = z/(z − 0.5), which has a pole at z = 0.5 (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is inside the unit circle. However, if the impulse response was x[n] = 1.5^n u[n], then the Z-transform is X(z) = z/(z − 1.5), which has a pole at z = 1.5 and is not BIBO stable since the pole has a modulus strictly greater than one. Numerous tools exist for the analysis of the poles of a system. These include graphical systems like the root locus, Bode plots or the Nyquist plots. Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll. Controllability and observability Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed stabilizable. Observability instead is related to the possibility of observing, through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable. 
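To make the controllability and observability tests concrete, the following sketch (illustrative only; the matrices A, B, C describe an assumed double-integrator example, not a system from the source) builds the standard controllability and observability matrices and checks their ranks with NumPy:

```python
# Minimal sketch: rank tests for controllability and observability of x' = Ax + Bu, y = Cx.
import numpy as np

def controllability_matrix(A, B):
    n = A.shape[0]
    # [B, AB, A^2 B, ..., A^(n-1) B]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def observability_matrix(A, C):
    n = A.shape[0]
    # [C; CA; CA^2; ...; CA^(n-1)]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Example (assumed) system: a double integrator driven by one input, with position measured.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
ctrb_rank = np.linalg.matrix_rank(controllability_matrix(A, B))
obsv_rank = np.linalg.matrix_rank(observability_matrix(A, C))
print("controllable:", ctrb_rank == n)   # True: every state can be driven by the input
print("observable:  ", obsv_rank == n)   # True: every state can be inferred from the output
```

The eigenvalues of A are the open-loop poles discussed in the stability section above; in this assumed example both lie at the origin, so the uncontrolled double integrator is not asymptotically stable even though it is fully controllable and observable.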
From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system, which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis. Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors. Control specification Several different control strategies have been devised in past years. These vary from extremely general ones (PID controller) to others devoted to very particular classes of systems (especially robotics or aircraft cruise control). A control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it would be desired to obtain particular dynamics in the closed loop: i.e. that the poles have real part less than −λ*, where λ* is a fixed value strictly greater than zero, instead of simply asking that the real part be negative. Another typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included. Other "classical" control theory specifications regard the time-response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency domain specifications are usually related to robustness (see below). Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI). Model identification and robustness A control system must always have some robustness property. A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations; otherwise, the true system dynamics can be so complicated that a complete model is impossible. System identification The process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measures from which to calculate an approximated mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations; for example, in the case of a mass-spring-damper system we know that m·x''(t) = −K·x(t) − B·x'(t). 
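Continuing the mass-spring-damper example with assumed, purely illustrative parameter values (the numbers below are not from the source), a short sketch can turn the physical equation into its characteristic polynomial and read off the poles and damping ratio:

```python
# Minimal sketch: from the mass-spring-damper equation m*x'' + B*x' + K*x = 0
# to its characteristic polynomial m*s^2 + B*s + K and the pole locations.
import numpy as np

m, B, K = 1.0, 0.8, 4.0            # assumed nominal parameters (kg, N*s/m, N/m)
poles = np.roots([m, B, K])        # roots of m*s^2 + B*s + K = 0

omega_n = np.sqrt(K / m)           # undamped natural frequency (rad/s)
zeta = B / (2.0 * np.sqrt(K * m))  # damping ratio

print("poles:", poles)             # complex pair with negative real part -> stable
print("natural frequency:", omega_n, "rad/s")
print("damping ratio:", zeta)      # zeta < 1: underdamped, oscillatory decay
```

If the true stiffness or damping drifts away from these nominal values, the poles and damping ratio shift accordingly, which is precisely the situation a robust or adaptive controller must tolerate, as discussed next.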
Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal. Some advanced control techniques include an "on-line" identification process (see later). The parameters of the model are calculated ("identified") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself consequently in order to ensure the correct performance. Analysis Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Topics include gain and phase margin and amplitude margin. For MIMO (multi-input multi output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). I.e., if particular robustness qualities are needed, the engineer must shift their attention to a control technique by including these qualities in its properties. Constraints A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: model predictive control (see later), and anti-wind up systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold. System classifications Linear systems control For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, all system states are not in general measured and so observers must be included and incorporated in pole placement design. Nonlinear systems control Processes in industries like robotics and the aerospace industry typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These, e.g., feedback linearization, backstepping, sliding mode control, trajectory linearization control normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states. Decentralized systems control When the system is controlled by multiple controllers, the problem is one of decentralized control. 
Decentralization is helpful in many ways, for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions. Deterministic and stochastic systems control A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks. Main control strategies Every control system must guarantee first the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's Theory) to ensure stability without regard to the inner dynamics of the system. The possibility to fulfill different specifications varies from the model considered and the control strategy chosen. List of the main control techniques Optimal control is a particular control technique in which the control signal optimizes a certain "cost index": for example, in the case of a satellite, the jet thrusts needed to bring it to desired trajectory that consume the least amount of fuel. Two optimal control design methods have been widely used in industrial applications, as it has been shown they can guarantee closed-loop stability. These are Model Predictive Control (MPC) and linear-quadratic-Gaussian control (LQG). The first can more explicitly take into account constraints on the signals in the system, which is an important feature in many industrial processes. However, the "optimal control" structure in MPC is only a means to achieve such a result, as it does not optimize a true performance index of the closed-loop control system. Together with PID controllers, MPC systems are the most widely used control technique in process control. Robust control deals explicitly with uncertainty in its approach to controller design. Controllers designed using robust control methods tend to be able to cope with small differences between the true system and the nominal model used for design. The early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness. Examples of modern robust control techniques include H-infinity loop-shaping developed by Duncan McFarlane and Keith Glover, Sliding mode control (SMC) developed by Vadim Utkin, and safe protocols designed for control of large heterogeneous populations of electric loads in Smart Power Grid applications. Robust methods aim to achieve robust performance and/or stability in the presence of small modeling errors. Stochastic control deals with control design with uncertainty in the model. In typical stochastic control problems, it is assumed that there exist random noise and disturbances in the model and the controller, and the control design must take into account these random deviations. Adaptive control uses on-line identification of the process parameters, or modification of controller gains, thereby obtaining strong robustness properties. Adaptive controls were applied for the first time in the aerospace industry in the 1950s, and have found particular success in that field. A hierarchical control system is a type of control system in which a set of devices and governing software is arranged in a hierarchical tree. 
When the links in the tree are implemented by a computer network, that hierarchical control system is also a form of networked control system. Intelligent control uses various AI computing approaches like artificial neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation and genetic algorithms, or a combination of these methods, such as neuro-fuzzy algorithms, to control a dynamic system. Self-organized criticality control may be defined as attempts to interfere in the processes by which the self-organized system dissipates energy. People in systems and control Many active and historical figures have made significant contributions to control theory, including the following. Pierre-Simon Laplace invented the Z-transform in his work on probability theory, now used to solve discrete-time control theory problems. The Z-transform is a discrete-time equivalent of the Laplace transform, which is named after him. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control and applied it to automatic aircraft control systems. Aleksandr Lyapunov's work in the 1890s marks the beginning of stability theory. Harold S. Black invented the concept of negative feedback amplifiers in 1927 and managed to develop stable negative feedback amplifiers in the 1930s. Harry Nyquist developed the Nyquist stability criterion for feedback systems in the 1930s. Richard Bellman developed dynamic programming in the 1940s. Warren E. Dixon is a control theorist and professor. Kyriakos G. Vamvoudakis developed synchronous reinforcement learning algorithms to solve optimal control and game-theoretic problems. Andrey Kolmogorov co-developed the Wiener–Kolmogorov filter in 1941. Norbert Wiener co-developed the Wiener–Kolmogorov filter and coined the term cybernetics in the 1940s. John R. Ragazzini introduced digital control and the use of the Z-transform (invented by Laplace) in control theory in the 1950s. Lev Pontryagin introduced the maximum principle and the bang-bang principle. Pierre-Louis Lions developed viscosity solutions into stochastic control and optimal control methods. Rudolf E. Kálmán pioneered the state-space approach to systems and control, introduced the notions of controllability and observability, and developed the Kalman filter for linear estimation. Ali H. Nayfeh was one of the main contributors to nonlinear control theory and published many books on perturbation methods. Jan C. Willems introduced the concept of dissipativity as a generalization of the Lyapunov function to input/state/output systems. The construction of the storage function, as the analogue of a Lyapunov function is called, led to the study of the linear matrix inequality (LMI) in control theory. He pioneered the behavioral approach to mathematical systems theory.
Mathematics
Other
null
7043
https://en.wikipedia.org/wiki/Chemical%20formula
Chemical formula
A chemical formula is a way of presenting information about the chemical proportions of atoms that constitute a particular chemical compound or molecule, using chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, commas and plus (+) and minus (−) signs. These are limited to a single typographic line of symbols, which may include subscripts and superscripts. A chemical formula is not a chemical name since it does not contain any words. Although a chemical formula may imply certain simple chemical structures, it is not the same as a full chemical structural formula. Chemical formulae can fully specify the structure of only the simplest of molecules and chemical substances, and are generally more limited in power than chemical names and structural formulae. The simplest types of chemical formulae are called empirical formulae, which use letters and numbers indicating the numerical proportions of atoms of each type. Molecular formulae indicate the simple numbers of each type of atom in a molecule, with no information on structure. For example, the empirical formula for glucose is CH2O (twice as many hydrogen atoms as carbon and oxygen), while its molecular formula is C6H12O6 (12 hydrogen atoms, six carbon and six oxygen atoms). Sometimes a chemical formula is complicated by being written as a condensed formula (or condensed molecular formula, occasionally called a "semi-structural formula"), which conveys additional information about the particular ways in which the atoms are chemically bonded together, either in covalent bonds, ionic bonds, or various combinations of these types. This is possible if the relevant bonding is easy to show in one dimension. An example is the condensed molecular/chemical formula for ethanol, which is CH3CH2OH or CH3-CH2-OH. However, even a condensed chemical formula is necessarily limited in its ability to show complex bonding relationships between atoms, especially atoms that have bonds to four or more different substituents. Since a chemical formula must be expressed as a single line of chemical element symbols, it often cannot be as informative as a true structural formula, which is a graphical representation of the spatial relationship between atoms in chemical compounds (see for example the figure for butane structural and chemical formulae, at right). For reasons of structural complexity, a single condensed chemical formula (or semi-structural formula) may correspond to different molecules, known as isomers. For example, glucose shares its molecular formula C6H12O6 with a number of other sugars, including fructose, galactose and mannose. Linear equivalent chemical names exist that can and do specify uniquely any complex structural formula (see chemical nomenclature), but such names must use many terms (words), rather than the simple element symbols, numbers, and simple typographical symbols that define a chemical formula. Chemical formulae may be used in chemical equations to describe chemical reactions and other chemical transformations, such as the dissolving of ionic compounds into solution. While, as noted, chemical formulae do not have the full power of structural formulae to show chemical relationships between atoms, they are sufficient to keep track of numbers of atoms and numbers of electrical charges in chemical reactions, thus balancing chemical equations so that these equations can be used in chemical problems involving conservation of atoms, and conservation of electric charge.
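To make the relation between molecular and empirical formulae concrete, here is a minimal Python sketch that reduces whole-number atom counts to their smallest ratio by dividing by their greatest common divisor; the dictionary representation and the function name are illustrative choices only.

from functools import reduce
from math import gcd

def empirical_formula(counts):
    """Reduce element counts (e.g. from a molecular formula) to their smallest whole-number ratio."""
    divisor = reduce(gcd, counts.values())
    return {element: n // divisor for element, n in counts.items()}

glucose = {"C": 6, "H": 12, "O": 6}   # molecular formula C6H12O6
print(empirical_formula(glucose))     # {'C': 1, 'H': 2, 'O': 1}, i.e. CH2O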
Overview A chemical formula identifies each constituent element by its chemical symbol and indicates the proportionate number of atoms of each element. In empirical formulae, these proportions begin with a key element and then assign numbers of atoms of the other elements in the compound, by ratios to the key element. For molecular compounds, these ratio numbers can all be expressed as whole numbers. For example, the empirical formula of ethanol may be written C2H6O because the molecules of ethanol all contain two carbon atoms, six hydrogen atoms, and one oxygen atom. Some types of ionic compounds, however, cannot be written with entirely whole-number empirical formulae. An example is boron carbide, whose formula of CBn is a variable non-whole-number ratio, with n ranging from over 4 to more than 6.5. When the chemical compound of the formula consists of simple molecules, chemical formulae often employ ways to suggest the structure of the molecule. These types of formulae are variously known as molecular formulae and condensed formulae. A molecular formula enumerates the number of atoms to reflect those in the molecule, so that the molecular formula for glucose is C6H12O6 rather than the glucose empirical formula, which is CH2O. However, except for very simple substances, molecular chemical formulae lack needed structural information, and are ambiguous. For simple molecules, a condensed (or semi-structural) formula is a type of chemical formula that may fully imply a correct structural formula. For example, ethanol may be represented by the condensed chemical formula CH3CH2OH, and dimethyl ether by the condensed formula CH3OCH3. These two molecules have the same empirical and molecular formulae (C2H6O), but may be differentiated by the condensed formulae shown, which are sufficient to represent the full structure of these simple organic compounds. Condensed chemical formulae may also be used to represent ionic compounds that do not exist as discrete molecules, but nonetheless do contain covalently bound clusters within them. These polyatomic ions are groups of atoms that are covalently bound together and have an overall ionic charge, such as the sulfate ion. Each polyatomic ion in a compound is written individually in order to illustrate the separate groupings. For example, the compound dichlorine hexoxide has an empirical formula ClO3 and molecular formula Cl2O6, but in liquid or solid forms, this compound is more correctly shown by an ionic condensed formula [ClO2]+[ClO4]−, which illustrates that this compound consists of [ClO2]+ ions and [ClO4]− ions. In such cases, the condensed formula only needs to be complex enough to show at least one of each ionic species. Chemical formulae as described here are distinct from the far more complex chemical systematic names that are used in various systems of chemical nomenclature. For example, one systematic name for glucose is (2R,3S,4R,5R)-2,3,4,5,6-pentahydroxyhexanal. This name, interpreted by the rules behind it, fully specifies glucose's structural formula, but the name is not a chemical formula as usually understood, and uses terms and words not used in chemical formulae. Such names, unlike basic formulae, may be able to represent full structural formulae without graphs. Types Empirical formula In chemistry, the empirical formula of a chemical is a simple expression of the relative number of each type of atom or ratio of the elements in the compound. Empirical formulae are the standard for ionic compounds, such as NaCl, and for macromolecules, such as SiO2.
An empirical formula makes no reference to isomerism, structure, or absolute number of atoms. The term empirical refers to the process of elemental analysis, a technique of analytical chemistry used to determine the relative percent composition of a pure chemical substance by element. For example, hexane has a molecular formula of C6H14, and (for one of its isomers, n-hexane) a structural formula CH3CH2CH2CH2CH2CH3, implying that it has a chain structure of 6 carbon atoms and 14 hydrogen atoms. However, the empirical formula for hexane is C3H7. Likewise the empirical formula for hydrogen peroxide, H2O2, is simply HO, expressing the 1:1 ratio of component elements. Formaldehyde and acetic acid have the same empirical formula, CH2O. This is also the molecular formula for formaldehyde, but acetic acid has double the number of atoms. Like the other formula types detailed below, an empirical formula shows the number of elements in a molecule, and determines whether it is a binary compound, ternary compound, quaternary compound, or has even more elements. Molecular formula Molecular formulae simply indicate the numbers of each type of atom in a molecule of a molecular substance. They are the same as empirical formulae for molecules that only have one atom of a particular type, but otherwise may have larger numbers. An example of the difference is the empirical formula for glucose, which is CH2O (ratio 1:2:1), while its molecular formula is C6H12O6 (number of atoms 6:12:6). For water, both formulae are H2O. A molecular formula provides more information about a molecule than its empirical formula, but is more difficult to establish. Structural formula In addition to indicating the number of atoms of each element in a molecule, a structural formula indicates how the atoms are organized, and shows (or implies) the chemical bonds between the atoms. There are multiple types of structural formulas focused on different aspects of the molecular structure. The two diagrams show two molecules which are structural isomers of each other, since they both have the same molecular formula C4H10, but they have different structural formulas as shown. Condensed formula The connectivity of a molecule often has a strong influence on its physical and chemical properties and behavior. Two molecules composed of the same numbers of the same types of atoms (i.e. a pair of isomers) might have completely different chemical and/or physical properties if the atoms are connected differently or in different positions. In such cases, a structural formula is useful, as it illustrates which atoms are bonded to which other ones. From the connectivity, it is often possible to deduce the approximate shape of the molecule. A condensed (or semi-structural) formula may represent the types and spatial arrangement of bonds in a simple chemical substance, though it does not necessarily specify isomers or complex structures. For example, ethane consists of two carbon atoms single-bonded to each other, with each carbon atom having three hydrogen atoms bonded to it. Its chemical formula can be rendered as CH3CH3. In ethylene there is a double bond between the carbon atoms (and thus each carbon only has two hydrogens), therefore the chemical formula may be written CH2CH2, and the fact that there is a double bond between the carbons is implicit because carbon has a valence of four. However, a more explicit method is to write H2C=CH2 or, less commonly, H2C::CH2. The two lines (or two pairs of dots) indicate that a double bond connects the atoms on either side of them.
A triple bond may be expressed with three lines (as in HC≡CH) or three pairs of dots (HC:::CH), and if there may be ambiguity, a single line or pair of dots may be used to indicate a single bond. Molecules with multiple functional groups that are the same may be expressed by enclosing the repeated group in round brackets. For example, isobutane may be written (CH3)3CH. This condensed structural formula implies a different connectivity from other molecules that can be formed using the same atoms in the same proportions (isomers). The formula (CH3)3CH implies a central carbon atom connected to one hydrogen atom and three methyl groups (CH3). The same number of atoms of each element (10 hydrogens and 4 carbons, or C4H10) may be used to make a straight chain molecule, n-butane: CH3CH2CH2CH3. Chemical names in answer to limitations of chemical formulae The alkene called but-2-ene has two isomers, which the chemical formula CH3CH=CHCH3 does not identify. The relative position of the two methyl groups must be indicated by additional notation denoting whether the methyl groups are on the same side of the double bond (cis or Z) or on the opposite sides from each other (trans or E). As noted above, in order to represent the full structural formulae of many complex organic and inorganic compounds, chemical nomenclature may be needed which goes well beyond the available resources used above in simple condensed formulae. See IUPAC nomenclature of organic chemistry and IUPAC nomenclature of inorganic chemistry 2005 for examples. In addition, linear naming systems such as International Chemical Identifier (InChI) allow a computer to construct a structural formula, and simplified molecular-input line-entry system (SMILES) allows a more human-readable ASCII input. However, all these nomenclature systems go beyond the standards of chemical formulae, and technically are chemical naming systems, not formula systems. Polymers in condensed formulae For polymers in condensed chemical formulae, parentheses are placed around the repeating unit. For example, a hydrocarbon molecule that is described as CH3(CH2)50CH3 is a molecule with fifty repeating units. If the number of repeating units is unknown or variable, the letter n may be used to indicate this formula: CH3(CH2)nCH3. Ions in condensed formulae For ions, the charge on a particular atom may be denoted with a right-hand superscript, for example Na+ or Cu2+. The total charge on a charged molecule or a polyatomic ion may also be shown in this way, such as for hydronium, H3O+, or sulfate, SO4 2−. Here + and − are used in place of +1 and −1, respectively. For more complex ions, brackets [ ] are often used to enclose the ionic formula, as in [B12H12]2−, which is found in compounds such as caesium dodecaborate, Cs2[B12H12]. Parentheses ( ) can be nested inside brackets to indicate a repeating unit, as in hexamminecobalt(III) chloride, [Co(NH3)6]Cl3. Here, (NH3)6 indicates that the ion contains six ammine groups (NH3) bonded to cobalt, and [ ] encloses the entire formula of the ion with charge +3. This is strictly optional; a chemical formula is valid with or without ionization information, and hexamminecobalt(III) chloride may be written as [Co(NH3)6]Cl3 or Co(NH3)6Cl3. Brackets, like parentheses, behave in chemistry as they do in mathematics, grouping terms together; they are not specifically employed only for ionization states. In the latter case here, the parentheses indicate 6 groups all of the same shape, bonded to another group of size 1 (the cobalt atom), and then the entire bundle, as a group, is bonded to 3 chlorine atoms. In the former case, it is clearer that the bond connecting the chlorines is ionic, rather than covalent.
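As a concrete illustration of how parentheses and brackets group atoms in a condensed formula, the following Python sketch expands a formula string into total atom counts. It is a simplified, assumed implementation: charge annotations and the @ notation discussed below are ignored, and the regular expression only recognizes one- or two-letter element symbols.

import re
from collections import Counter

def atom_counts(formula):
    """Count atoms in a condensed formula such as '[Co(NH3)6]Cl3' (charges ignored)."""
    tokens = re.findall(r"[A-Z][a-z]?|\d+|[\[\]\(\)]", formula)
    stack = [Counter()]
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok in "([":
            stack.append(Counter())               # open a new group
        elif tok in ")]":
            group = stack.pop()
            multiplier = 1
            if i + 1 < len(tokens) and tokens[i + 1].isdigit():
                multiplier = int(tokens[i + 1])
                i += 1
            for element, n in group.items():      # fold the group into its parent
                stack[-1][element] += n * multiplier
        elif not tok.isdigit():                   # an element symbol
            multiplier = 1
            if i + 1 < len(tokens) and tokens[i + 1].isdigit():
                multiplier = int(tokens[i + 1])
                i += 1
            stack[-1][tok] += multiplier
        i += 1
    return dict(stack[0])

print(atom_counts("[Co(NH3)6]Cl3"))   # {'Co': 1, 'N': 6, 'H': 18, 'Cl': 3}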
Isotopes Although isotopes are more relevant to nuclear chemistry or stable isotope chemistry than to conventional chemistry, different isotopes may be indicated with a prefixed superscript in a chemical formula. For example, the phosphate ion containing radioactive phosphorus-32 is written 32PO4 3−. Also a study involving stable isotope ratios might include the molecule 18O16O. A left-hand subscript is sometimes used redundantly to indicate the atomic number, for example 8O2 for dioxygen, and 16 8O2 for the most abundant isotopic species of dioxygen. This is convenient when writing equations for nuclear reactions, in order to show the balance of charge more clearly. Trapped atoms The @ symbol (at sign) indicates an atom or molecule trapped inside a cage but not chemically bound to it. For example, a buckminsterfullerene (C60) with an atom (M) would simply be represented as MC60 regardless of whether M was inside the fullerene without chemical bonding or outside, bound to one of the carbon atoms. Using the @ symbol, this would be denoted M@C60 if M was inside the carbon network. A non-fullerene example is [As@Ni12As20]3−, an ion in which one arsenic (As) atom is trapped in a cage formed by the other 32 atoms. This notation was proposed in 1991 with the discovery of fullerene cages (endohedral fullerenes), which can trap atoms such as La to form, for example, La@C60 or La@C82. The choice of the symbol has been explained by the authors as being concise, readily printed and transmitted electronically (the at sign is included in ASCII, which most modern character encoding schemes are based on), and the visual aspects suggesting the structure of an endohedral fullerene. Non-stoichiometric chemical formulae Chemical formulae most often use integers for each element. However, there is a class of compounds, called non-stoichiometric compounds, that cannot be represented by small integers. Such a formula might be written using decimal fractions, as in Fe0.95O, or it might include a variable part represented by a letter, as in Fe1−xO, where x is normally much less than 1. General forms for organic compounds A chemical formula used for a series of compounds that differ from each other by a constant unit is called a general formula. It generates a homologous series of chemical formulae. For example, alcohols may be represented by the formula CnH2n+1OH (n ≥ 1), giving the homologs methanol, ethanol and propanol for 1 ≤ n ≤ 3. Hill system The Hill system (or Hill notation) is a system of writing empirical chemical formulae, molecular chemical formulae and components of a condensed formula such that the number of carbon atoms in a molecule is indicated first, the number of hydrogen atoms next, and then the number of all other chemical elements subsequently, in alphabetical order of the chemical symbols. When the formula contains no carbon, all the elements, including hydrogen, are listed alphabetically. By sorting formulae according to the number of atoms of each element present in the formula according to these rules, with differences in earlier elements or numbers being treated as more significant than differences in any later element or number (like sorting text strings into lexicographical order), it is possible to collate chemical formulae into what is known as Hill system order. The Hill system was first published by Edwin A. Hill of the United States Patent and Trademark Office in 1900. It is the most commonly used system in chemical databases and printed indexes to sort lists of compounds.
A list of formulae in Hill system order is arranged alphabetically, as above, with single-letter elements coming before two-letter symbols when the symbols begin with the same letter (so "B" comes before "Be", which comes before "Br"). The following example formulae are written using the Hill system, and listed in Hill order: BrClH2Si, BrI, CCl4, CH3I, C2H5Br, H2O4S.
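A minimal Python sketch of how such a Hill-order collation could be implemented is shown below; it assumes formulae are already parsed into element-count mappings, and the function name and data layout are illustrative only.

def hill_key(counts):
    """Sort key for Hill order: carbon first, hydrogen second, the rest alphabetical;
    with no carbon present, every element (including hydrogen) is alphabetical."""
    if "C" in counts:
        order = ["C"] + (["H"] if "H" in counts else []) + sorted(
            e for e in counts if e not in ("C", "H"))
    else:
        order = sorted(counts)
    # Earlier elements and their counts are more significant than later ones.
    return [(element, counts[element]) for element in order]

examples = {
    "BrI":    {"Br": 1, "I": 1},
    "CCl4":   {"C": 1, "Cl": 4},
    "CH3I":   {"C": 1, "H": 3, "I": 1},
    "C2H5Br": {"C": 2, "H": 5, "Br": 1},
}
print(sorted(examples, key=lambda name: hill_key(examples[name])))
# ['BrI', 'CCl4', 'CH3I', 'C2H5Br'], matching the ordering shown above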
Physical sciences
Chemical reactions
null
7044
https://en.wikipedia.org/wiki/Beetle
Beetle
Beetles are insects that form the order Coleoptera (), in the superorder Holometabola. Their front pair of wings are hardened into wing-cases, elytra, distinguishing them from most other insects. The Coleoptera, with about 400,000 described species, is the largest of all orders, constituting almost 40% of described insects and 25% of all known animal species; new species are discovered frequently, with estimates suggesting that there are between 0.9 and 2.1 million total species. However, the number of beetle species is challenged by the number of species in dipterans (flies) and hymenopterans (wasps). Found in almost every habitat except the sea and the polar regions, they interact with their ecosystems in several ways: beetles often feed on plants and fungi, break down animal and plant debris, and eat other invertebrates. Some species are serious agricultural pests, such as the Colorado potato beetle, while others such as Coccinellidae (ladybirds or ladybugs) eat aphids, scale insects, thrips, and other plant-sucking insects that damage crops. Some others also have unusual characteristics, such as fireflies, which use a light-emitting organ for mating and communication purposes. Beetles typically have a particularly hard exoskeleton including the elytra, though some such as the rove beetles have very short elytra while blister beetles have softer elytra. The general anatomy of a beetle is quite uniform and typical of insects, although there are several examples of novelty, such as adaptations in water beetles which trap air bubbles under the elytra for use while diving. Beetles are holometabolans, which means that they undergo complete metamorphosis, with a series of conspicuous and relatively abrupt changes in body structure between hatching and becoming adult after a relatively immobile pupal stage. Some, such as stag beetles, have a marked sexual dimorphism, the males possessing enormously enlarged mandibles which they use to fight other males. Many beetles are aposematic, with bright colors and patterns warning of their toxicity, while others are harmless Batesian mimics of such insects. Many beetles, including those that live in sandy places, have effective camouflage. Beetles are prominent in human culture, from the sacred scarabs of ancient Egypt to beetlewing art and use as pets or fighting insects for entertainment and gambling. Many beetle groups are brightly and attractively colored making them objects of collection and decorative displays. Over 300 species are used as food, mostly as larvae; species widely consumed include mealworms and rhinoceros beetle larvae. However, the major impact of beetles on human life is as agricultural, forestry, and horticultural pests. Serious pest species include the boll weevil of cotton, the Colorado potato beetle, the coconut hispine beetle, the mountain pine beetle, and many others. Most beetles, however, do not cause economic damage and some, such as numerous species of lady beetles, are beneficial by helping to control insect pests. The scientific study of beetles is known as coleopterology. Etymology The name of the taxonomic order, Coleoptera, comes from the Greek koleopteros (κολεόπτερος), given to the group by Aristotle for their elytra, hardened shield-like forewings, from koleos, sheath, and pteron, wing. The English name beetle comes from the Old English word bitela, little biter, related to bītan (to bite), leading to Middle English betylle. 
Another Old English name for beetle is ċeafor, chafer, used in names such as cockchafer, from the Proto-Germanic *kebrô ("beetle"; compare German Käfer, Dutch kever, Afrikaans kewer). Distribution and diversity Beetles are by far the largest order of insects: the roughly 400,000 species make up about 40% of all insect species so far described, and about 25% of all animal species. A 2015 study provided four independent estimates of the total number of beetle species, giving a mean estimate of some 1.5 million with a "surprisingly narrow range" spanning all four estimates from a minimum of 0.9 to a maximum of 2.1 million beetle species. The four estimates made use of host-specificity relationships (1.5 to 1.9 million), ratios with other taxa (0.9 to 1.2 million), plant:beetle ratios (1.2 to 1.3), and extrapolations based on body size by year of description (1.7 to 2.1 million). This immense diversity led the evolutionary biologist J. B. S. Haldane to quip, when some theologians asked him what could be inferred about the mind of the Christian God from the works of His Creation, "An inordinate fondness for beetles". However, the ranking of beetles as most diverse has been challenged. Multiple studies posit that Diptera (flies) and/or Hymenoptera (sawflies, wasps, ants and bees) may have more species. Beetles are found in nearly all habitats, including freshwater and coastal habitats, wherever vegetative foliage is found, from trees and their bark to flowers, leaves, and underground near roots - even inside plants in galls, in every plant tissue, including dead or decaying ones. Tropical forest canopies have a large and diverse fauna of beetles, including Carabidae, Chrysomelidae, and Scarabaeidae. The heaviest beetle, indeed the heaviest insect stage, is the larva of the goliath beetle, Goliathus goliatus, which can attain a mass of at least and a length of . Adult male goliath beetles are the heaviest beetle in its adult stage, weighing and measuring up to . Adult elephant beetles, Megasoma elephas and Megasoma actaeon often reach and . The longest beetle is the Hercules beetle Dynastes hercules, with a maximum overall length of at least 16.7 cm (6.6 in) including the very long pronotal horn. The smallest recorded beetle and the smallest free-living insect (), is the featherwing beetle Scydosella musawasensis which may measure as little as 325 μm in length. Evolution Late Paleozoic and Triassic The oldest known beetle is Coleopsis, from the earliest Permian (Asselian) of Germany, around 295 million years ago. Early beetles from the Permian, which are collectively grouped into the "Protocoleoptera" are thought to have been xylophagous (wood eating) and wood boring. Fossils from this time have been found in Siberia and Europe, for instance in the red slate fossil beds of Niedermoschel near Mainz, Germany. Further fossils have been found in Obora, Czech Republic and Tshekarda in the Ural mountains, Russia. However, there are only a few fossils from North America before the middle Permian, although both Asia and North America had been united to Euramerica. The first discoveries from North America made in the Wellington Formation of Oklahoma were published in 2005 and 2008. The earliest members of modern beetle lineages appeared during the Late Permian. In the Permian–Triassic extinction event at the end of the Permian, most "protocoleopteran" lineages became extinct. Beetle diversity did not recover to pre-extinction levels until the Middle Triassic. 
Jurassic During the Jurassic (), there was a dramatic increase in the diversity of beetle families, including the development and growth of carnivorous and herbivorous species. The Chrysomeloidea diversified around the same time, feeding on a wide array of plant hosts from cycads and conifers to angiosperms. Close to the Upper Jurassic, the Cupedidae decreased, but the diversity of the early plant-eating species increased. Most recent plant-eating beetles feed on flowering plants or angiosperms, whose success contributed to a doubling of plant-eating species during the Middle Jurassic. However, the increase of the number of beetle families during the Cretaceous does not correlate with the increase of the number of angiosperm species. Around the same time, numerous primitive weevils (e.g. Curculionoidea) and click beetles (e.g. Elateroidea) appeared. The first jewel beetles (e.g. Buprestidae) are present, but they remained rare until the Cretaceous. The first scarab beetles were not coprophagous but presumably fed on rotting wood with the help of fungus; they are an early example of a mutualistic relationship. There are more than 150 important fossil sites from the Jurassic, the majority in Eastern Europe and North Asia. Outstanding sites include Solnhofen in Upper Bavaria, Germany, Karatau in South Kazakhstan, the Yixian formation in Liaoning, North China, as well as the Jiulongshan formation and further fossil sites in Mongolia. In North America there are only a few sites with fossil records of insects from the Jurassic, namely the shell limestone deposits in the Hartford basin, the Deerfield basin and the Newark basin. Cretaceous The Cretaceous saw the fragmenting of the southern landmass, with the opening of the southern Atlantic Ocean and the isolation of New Zealand, while South America, Antarctica, and Australia grew more distant. The diversity of Cupedidae and Archostemata decreased considerably. Predatory ground beetles (Carabidae) and rove beetles (Staphylinidae) began to distribute into different patterns; the Carabidae predominantly occurred in the warm regions, while the Staphylinidae and click beetles (Elateridae) preferred temperate climates. Likewise, predatory species of Cleroidea and Cucujoidea hunted their prey under the bark of trees together with the jewel beetles (Buprestidae). The diversity of jewel beetles increased rapidly, as they were the primary consumers of wood, while longhorn beetles (Cerambycidae) were rather rare: their diversity increased only towards the end of the Upper Cretaceous. The first coprophagous beetles are from the Upper Cretaceous and may have lived on the excrement of herbivorous dinosaurs. The first species where both larvae and adults are adapted to an aquatic lifestyle are found. Whirligig beetles (Gyrinidae) were moderately diverse, although other early beetles (e.g. Dytiscidae) were less, with the most widespread being the species of Coptoclavidae, which preyed on aquatic fly larvae. A 2020 review of the palaeoecological interpretations of fossil beetles from Cretaceous ambers has suggested that saproxylicity was the most common feeding strategy, with fungivorous species in particular appearing to dominate. Many fossil sites worldwide contain beetles from the Cretaceous. Most are in Europe and Asia and belong to the temperate climate zone during the Cretaceous. 
Lower Cretaceous sites include the Crato fossil beds in the Araripe basin in the Ceará, North Brazil, as well as overlying Santana formation; the latter was near the equator at that time. In Spain, important sites are near Montsec and Las Hoyas. In Australia, the Koonwarra fossil beds of the Korumburra group, South Gippsland, Victoria, are noteworthy. Major sites from the Upper Cretaceous include Kzyl-Dzhar in South Kazakhstan and Arkagala in Russia. Cenozoic Beetle fossils are abundant in the Cenozoic; by the Quaternary (up to 1.6 mya), fossil species are identical to living ones, while from the Late Miocene (5.7 mya) the fossils are still so close to modern forms that they are most likely the ancestors of living species. The large oscillations in climate during the Quaternary caused beetles to change their geographic distributions so much that current location gives little clue to the biogeographical history of a species. It is evident that geographic isolation of populations must often have been broken as insects moved under the influence of changing climate, causing mixing of gene pools, rapid evolution, and extinctions, especially in middle latitudes. Phylogeny The very large number of beetle species poses special problems for classification. Some families contain tens of thousands of species, and need to be divided into subfamilies and tribes. Polyphaga is the largest suborder, containing more than 300,000 described species in more than 170 families, including rove beetles (Staphylinidae), scarab beetles (Scarabaeidae), blister beetles (Meloidae), stag beetles (Lucanidae) and true weevils (Curculionidae). These polyphagan beetle groups can be identified by the presence of cervical sclerites (hardened parts of the head used as points of attachment for muscles) absent in the other suborders. Adephaga contains about 10 families of largely predatory beetles, includes ground beetles (Carabidae), water beetles (Dytiscidae) and whirligig beetles (Gyrinidae). In these insects, the testes are tubular and the first abdominal sternum (a plate of the exoskeleton) is divided by the hind coxae (the basal joints of the beetle's legs). Archostemata contains four families of mainly wood-eating beetles, including reticulated beetles (Cupedidae) and the telephone-pole beetle. The Archostemata have an exposed plate called the metatrochantin in front of the basal segment or coxa of the hind leg. Myxophaga contains about 65 described species in four families, mostly very small, including Hydroscaphidae and the genus Sphaerius. The myxophagan beetles are small and mostly alga-feeders. Their mouthparts are characteristic in lacking galeae and having a mobile tooth on their left mandible. The consistency of beetle morphology, in particular their possession of elytra, has long suggested that Coleoptera is monophyletic, though there have been doubts about the arrangement of the suborders, namely the Adephaga, Archostemata, Myxophaga and Polyphaga within that clade. The twisted-wing parasites, Strepsiptera, are thought to be a sister group to the beetles, having split from them in the Early Permian. Molecular phylogenetic analysis confirms that the Coleoptera are monophyletic. Duane McKenna et al. (2015) used eight nuclear genes for 367 species from 172 of 183 Coleopteran families. They split the Adephaga into 2 clades, Hydradephaga and Geadephaga, broke up the Cucujoidea into 3 clades, and placed the Lymexyloidea within the Tenebrionoidea. The Polyphaga appear to date from the Triassic. 
Most extant beetle families appear to have arisen in the Cretaceous. The cladogram is based on McKenna (2015). The number of species in each group (mainly superfamilies) is shown in parentheses, and boldface if over 10,000. English common names are given where possible. Dates of origin of major groups are shown in italics in millions of years ago (mya). External morphology Beetles are generally characterized by a particularly hard exoskeleton and hard forewings (elytra) not usable for flying. Almost all beetles have mandibles that move in a horizontal plane. The mouthparts are rarely suctorial, though they are sometimes reduced; the maxillae always bear palps. The antennae usually have 11 or fewer segments, except in some groups like the Cerambycidae (longhorn beetles) and the Rhipiceridae (cicada parasite beetles). The coxae of the legs are usually located recessed within a coxal cavity. The genitalic structures are telescoped into the last abdominal segment in all extant beetles. Beetle larvae can often be confused with those of other holometabolan groups. The beetle's exoskeleton is made up of numerous plates, called sclerites, separated by thin sutures. This design provides armored defenses while maintaining flexibility. The general anatomy of a beetle is quite uniform, although specific organs and appendages vary greatly in appearance and function between the many families in the order. Like all insects, beetles' bodies are divided into three sections: the head, the thorax, and the abdomen. Because there are so many species, identification is quite difficult, and relies on attributes including the shape of the antennae, the tarsal formulae and shapes of these small segments on the legs, the mouthparts, and the ventral plates (sterna, pleura, coxae). In many species accurate identification can only be made by examination of the unique male genitalic structures. Head The head, having mouthparts projecting forward or sometimes downturned, is usually heavily sclerotized and is sometimes very large. The eyes are compound and may display remarkable adaptability, as in the case of the aquatic whirligig beetles (Gyrinidae), where they are split to allow a view both above and below the waterline. A few Longhorn beetles (Cerambycidae) and weevils as well as some fireflies (Rhagophthalmidae) have divided eyes, while many have eyes that are notched, and a few have ocelli, small, simple eyes usually farther back on the head (on the vertex); these are more common in larvae than in adults. The anatomical organization of the compound eyes may be modified and depends on whether a species is primarily crepuscular, or diurnally or nocturnally active. Ocelli are found in the adult carpet beetle (as a single central ocellus in Dermestidae), some rove beetles (Omaliinae), and the Derodontidae. Beetle antennae are primarily organs of sensory perception and can detect motion, odor and chemical substances, but may also be used to physically feel a beetle's environment. Beetle families may use antennae in different ways. For example, when moving quickly, tiger beetles may not be able to see very well and instead hold their antennae rigidly in front of them in order to avoid obstacles. Certain Cerambycidae use antennae to balance, and blister beetles may use them for grasping. Some aquatic beetle species may use antennae for gathering air and passing it under the body whilst submerged. Equally, some families use antennae during mating, and a few species use them for defense. 
In the cerambycid Onychocerus albitarsis, the antennae have venom injecting structures used in defense, which is unique among arthropods. Antennae vary greatly in form, sometimes between the sexes, but are often similar within any given family. Antennae may be clubbed, threadlike, angled, shaped like a string of beads, comb-like (either on one side or both, bipectinate), or toothed. The physical variation of antennae is important for the identification of many beetle groups. The Curculionidae have elbowed or geniculate antennae. Feather like flabellate antennae are a restricted form found in the Rhipiceridae and a few other families. The Silphidae have a capitate antennae with a spherical head at the tip. The Scarabaeidae typically have lamellate antennae with the terminal segments extended into long flat structures stacked together. The Carabidae typically have thread-like antennae. The antennae arises between the eye and the mandibles and in the Tenebrionidae, the antennae rise in front of a notch that breaks the usually circular outline of the compound eye. They are segmented and usually consist of 11 parts, the first part is called the scape and the second part is the pedicel. The other segments are jointly called the flagellum. Beetles have mouthparts like those of grasshoppers. The mandibles appear as large pincers on the front of some beetles. The mandibles are a pair of hard, often tooth-like structures that move horizontally to grasp, crush, or cut food or enemies (see defence, below). Two pairs of finger-like appendages, the maxillary and labial palpi, are found around the mouth in most beetles, serving to move food into the mouth. In many species, the mandibles are sexually dimorphic, with those of the males enlarged enormously compared with those of females of the same species. Thorax The thorax is segmented into the two discernible parts, the pro- and pterothorax. The pterothorax is the fused meso- and metathorax, which are commonly separated in other insect species, although flexibly articulate from the prothorax. When viewed from below, the thorax is that part from which all three pairs of legs and both pairs of wings arise. The abdomen is everything posterior to the thorax. When viewed from above, most beetles appear to have three clear sections, but this is deceptive: on the beetle's upper surface, the middle section is a hard plate called the pronotum, which is only the front part of the thorax; the back part of the thorax is concealed by the beetle's wings. This further segmentation is usually best seen on the abdomen. Legs The multisegmented legs end in two to five small segments called tarsi. Like many other insect orders, beetles have claws, usually one pair, on the end of the last tarsal segment of each leg. While most beetles use their legs for walking, legs have been variously adapted for other uses. Aquatic beetles including the Dytiscidae (diving beetles), Haliplidae, and many species of Hydrophilidae, the legs, often the last pair, are modified for swimming, typically with rows of long hairs. Male diving beetles have suctorial cups on their forelegs that they use to grasp females. Other beetles have fossorial legs widened and often spined for digging. Species with such adaptations are found among the scarabs, ground beetles, and clown beetles (Histeridae). The hind legs of some beetles, such as flea beetles (within Chrysomelidae) and flea weevils (within Curculionidae), have enlarged femurs that help them leap. 
Wings The forewings of beetles are not used for flight, but form elytra which cover the hind part of the body and protect the hindwings. The elytra are usually hard shell-like structures which must be raised to allow the hindwings to move for flight. However, in the soldier beetles (Cantharidae), the elytra are soft, earning this family the name of leatherwings. Other soft wing beetles include the net-winged beetle Calopteron discrepans, which has brittle wings that rupture easily in order to release chemicals for defense. Beetles' flight wings are crossed with veins and are folded after landing, often along these veins, and stored below the elytra. A fold (jugum) of the membrane at the base of each wing is characteristic. Some beetles have lost the ability to fly. These include some ground beetles (Carabidae) and some true weevils (Curculionidae), as well as desert- and cave-dwelling species of other families. Many have the two elytra fused together, forming a solid shield over the abdomen. In a few families, both the ability to fly and the elytra have been lost, as in the glow-worms (Phengodidae), where the females resemble larvae throughout their lives. The presence of elytra and wings does not always indicate that the beetle will fly. For example, the tansy beetle walks between habitats despite being physically capable of flight. Abdomen The abdomen is the section behind the metathorax, made up of a series of rings, each with a hole for breathing and respiration, called a spiracle, composing three different segmented sclerites: the tergum, pleura, and the sternum. The tergum in almost all species is membranous, or usually soft and concealed by the wings and elytra when not in flight. The pleura are usually small or hidden in some species, with each pleuron having a single spiracle. The sternum is the most widely visible part of the abdomen, being a more or less sclerotized segment. The abdomen itself does not have any appendages, but some (for example, Mordellidae) have articulating sternal lobes. Anatomy and physiology Digestive system The digestive system of beetles is primarily adapted for a herbivorous diet. Digestion takes place mostly in the anterior midgut, although in predatory groups like the Carabidae, most digestion occurs in the crop by means of midgut enzymes. In the Elateridae, the larvae are liquid feeders that extraorally digest their food by secreting enzymes. The alimentary canal basically consists of a short, narrow pharynx, a widened expansion, the crop, and a poorly developed gizzard. This is followed by the midgut, that varies in dimensions between species, with a large amount of cecum, and the hindgut, with varying lengths. There are typically four to six Malpighian tubules. Nervous system The nervous system in beetles contains all the types found in insects, varying between different species, from three thoracic and seven or eight abdominal ganglia which can be distinguished to that in which all the thoracic and abdominal ganglia are fused to form a composite structure. Respiratory system Like most insects, beetles inhale air, for the oxygen it contains, and exhale carbon dioxide, via a tracheal system. Air enters the body through spiracles, and circulates within the haemocoel in a system of tracheae and tracheoles, through whose walls the gases can diffuse. Diving beetles, such as the Dytiscidae, carry a bubble of air with them when they dive. Such a bubble may be contained under the elytra or against the body by specialized hydrophobic hairs. 
The bubble covers at least some of the spiracles, permitting air to enter the tracheae. The function of the bubble is not only to contain a store of air but to act as a physical gill. The air that it traps is in contact with oxygenated water, so as the animal's consumption depletes the oxygen in the bubble, more oxygen can diffuse in to replenish it. Carbon dioxide is more soluble in water than either oxygen or nitrogen, so it readily diffuses out faster than in. Nitrogen is the most plentiful gas in the bubble, and the least soluble, so it constitutes a relatively static component of the bubble and acts as a stable medium for respiratory gases to accumulate in and pass through. Occasional visits to the surface are sufficient for the beetle to re-establish the constitution of the bubble. Circulatory system Like other insects, beetles have open circulatory systems, based on hemolymph rather than blood. As in other insects, a segmented tube-like heart is attached to the dorsal wall of the hemocoel. It has paired inlets or ostia at intervals down its length, and circulates the hemolymph from the main cavity of the haemocoel and out through the anterior cavity in the head. Specialized organs Different glands are specialized for different pheromones to attract mates. Pheromones from species of Rutelinae are produced from epithelial cells lining the inner surface of the apical abdominal segments; amino acid-based pheromones of Melolonthinae are produced from eversible glands on the abdominal apex. Other species produce different types of pheromones. Dermestids produce esters, and species of Elateridae produce fatty acid-derived aldehydes and acetates. To attract a mate, fireflies (Lampyridae) use modified fat body cells with transparent surfaces backed with reflective uric acid crystals to produce light by bioluminescence. Light production is highly efficient, by oxidation of luciferin catalyzed by enzymes (luciferases) in the presence of adenosine triphosphate (ATP) and oxygen, producing oxyluciferin, carbon dioxide, and light. Tympanal organs or hearing organs consist of a membrane (tympanum) stretched across a frame backed by an air sac and associated sensory neurons, are found in two families. Several species of the genus Cicindela (Carabidae) have hearing organs on the dorsal surfaces of their first abdominal segments beneath the wings; two tribes in the Dynastinae (within the Scarabaeidae) have hearing organs just beneath their pronotal shields or neck membranes. Both families are sensitive to ultrasonic frequencies, with strong evidence indicating they function to detect the presence of bats by their ultrasonic echolocation. Reproduction and development Beetles are members of the superorder Holometabola, and accordingly most of them undergo complete metamorphosis. The typical form of metamorphosis in beetles passes through four main stages: the egg, the larva, the pupa, and the imago or adult. The larvae are commonly called grubs and the pupa sometimes is called the chrysalis. In some species, the pupa may be enclosed in a cocoon constructed by the larva towards the end of its final instar. Some beetles, such as typical members of the families Meloidae and Rhipiphoridae, go further, undergoing hypermetamorphosis in which the first instar takes the form of a triungulin. Mating Some beetles have intricate mating behaviour. Pheromone communication is often important in locating a mate. Different species use different pheromones. 
Scarab beetles such as the Rutelinae use pheromones derived from fatty acid synthesis and others use pheromones from organic compounds, while other scarabs such as the Melolonthinae use amino acids and terpenoids. Another way beetles find mates is seen in the fireflies (Lampyridae) which are bioluminescent, with abdominal light-producing organs. The males and females engage in a complex dialog before mating; each species has a unique combination of flight patterns, duration, composition, and intensity of the light produced. Before mating, males and females may stridulate, or vibrate the objects they are on. In the Meloidae, the male climbs onto the dorsum of the female and strokes his antennae on her head, palps, and antennae. In Eupompha, the male draws his antennae along his longitudinal vertex. They may not mate at all if they do not perform the precopulatory ritual. This mating behavior may be different amongst dispersed populations of the same species. For example, the mating of a Russian population of tansy beetle (Chrysolina graminis) is preceded by an elaborate ritual involving the male tapping the female's eyes, pronotum and antennae with its antennae, which is not evident in the population of this species in the United Kingdom. In another example, the intromittent organ of male thistle tortoise beetles is a long, tube-like structure called the flagellum which is thin and curved. When not in use, the flagellum is stored inside the abdomen of the male and can extend out to be longer than the male when needed. During mating, this organ bends to the complex shape of the female reproductive organ, which includes a coiled duct that the male must penetrate with the organ. Furthermore, these physical properties of the thistle tortioise beetle have been studied because the ability of a thin, flexible structure to harden without buckling or rupturing is mechanically challenging and may have important implications for the development of microscopic catheters in modern medicine. Competition can play a part in the mating rituals of species such as burying beetles (Nicrophorus), the insects fighting to determine which can mate. Many male beetles are territorial and fiercely defend their territories from intruding males. In such species, the male often has horns on the head or thorax, making its body length greater than that of a female. Copulation is generally quick, but in some cases lasts for several hours. During copulation, sperm cells are transferred to the female to fertilize the egg. Life cycle Egg Essentially all beetles lay eggs, though some myrmecophilous Aleocharinae and some Chrysomelinae which live in mountains or the subarctic are ovoviviparous, laying eggs which hatch almost immediately. Beetle eggs generally have smooth surfaces and are soft, though the Cupedidae have hard eggs. Eggs vary widely between species: the eggs tend to be small in species with many instars (larval stages), and in those that lay large numbers of eggs. A female may lay from several dozen to several thousand eggs during her lifetime, depending on the extent of parental care. This ranges from the simple laying of eggs under a leaf, to the parental care provided by scarab beetles, which house, feed and protect their young. The Attelabidae roll leaves and lay their eggs inside the roll for protection. Larva The larva is usually the principal feeding stage of the beetle life cycle. Larvae tend to feed voraciously once they emerge from their eggs. 
Some feed externally on plants, such as those of certain leaf beetles, while others feed within their food sources. Examples of internal feeders are most Buprestidae and longhorn beetles. The larvae of many beetle families are predatory like the adults (ground beetles, ladybirds, rove beetles). The larval period varies between species, but can be as long as several years. The larvae of skin beetles undergo a degree of reversed development when starved, and later grow back to the previously attained level of maturity. The cycle can be repeated many times (see Biological immortality). Larval morphology is highly varied amongst species, with well-developed and sclerotized heads, distinguishable thoracic and abdominal segments (usually the tenth, though sometimes the eighth or ninth). Beetle larvae can be differentiated from other insect larvae by their hardened, often darkened heads, the presence of chewing mouthparts, and spiracles along the sides of their bodies. Like adult beetles, the larvae are varied in appearance, particularly between beetle families. Beetles with somewhat flattened, highly mobile larvae include the ground beetles and rove beetles; their larvae are described as campodeiform. Some beetle larvae resemble hardened worms with dark head capsules and minute legs. These are elateriform larvae, and are found in the click beetle (Elateridae) and darkling beetle (Tenebrionidae) families. Some elateriform larvae of click beetles are known as wireworms. Beetles in the Scarabaeoidea have short, thick larvae described as scarabaeiform, more commonly known as grubs. All beetle larvae go through several instars, which are the developmental stages between each moult. In many species, the larvae simply increase in size with each successive instar as more food is consumed. In some cases, however, more dramatic changes occur. Among certain beetle families or genera, particularly those that exhibit parasitic lifestyles, the first instar (the planidium) is highly mobile to search out a host, while the following instars are more sedentary and remain on or within their host. This is known as hypermetamorphosis; it occurs in the Meloidae, Micromalthidae, and Ripiphoridae. The blister beetle Epicauta vittata (Meloidae), for example, has three distinct larval stages. Its first stage, the triungulin, has longer legs to go in search of the eggs of grasshoppers. After feeding for a week it moults to the second stage, called the caraboid stage, which resembles the larva of a carabid beetle. In another week it moults and assumes the appearance of a scarabaeid larva—the scarabaeidoid stage. Its penultimate larval stage is the pseudo-pupa or the coarcate larva, which will overwinter and pupate until the next spring. The larval period can vary widely. A fungus feeding staphylinid Phanerota fasciata undergoes three moults in 3.2 days at room temperature while Anisotoma sp. (Leiodidae) completes its larval stage in the fruiting body of slime mold in 2 days and possibly represents the fastest growing beetles. Dermestid beetles, Trogoderma inclusum can remain in an extended larval state under unfavourable conditions, even reducing their size between moults. A larva is reported to have survived for 3.5 years in an enclosed container. Pupa and adult As with all holometabolans, beetle larvae pupate, and from these pupae emerge fully formed, sexually mature adult beetles, or imagos. Pupae never have mandibles (they are adecticous). 
In most pupae, the appendages are not attached to the body and are said to be exarate; in a few beetles (Staphylinidae, Ptiliidae etc.) the appendages are fused with the body (termed as obtect pupae). Adults have extremely variable lifespans, from weeks to years, depending on the species. Some wood-boring beetles can have extremely long life-cycles. It is believed that when furniture or house timbers are infested by beetle larvae, the timber already contained the larvae when it was first sawn up. A birch bookcase 40 years old released adult Eburia quadrigeminata (Cerambycidae), while Buprestis aurulenta and other Buprestidae have been documented as emerging as much as 51 years after manufacture of wooden items. Behaviour Locomotion The elytra allow beetles to both fly and move through confined spaces, doing so by folding the delicate wings under the elytra while not flying, and folding their wings out just before takeoff. The unfolding and folding of the wings is operated by muscles attached to the wing base; as long as the tension on the radial and cubital veins remains, the wings remain straight. Some beetle species (many Cetoniinae; some Scarabaeinae, Curculionidae and Buprestidae) fly with the elytra closed, with the metathoracic wings extended under the lateral elytra margins. The altitude reached by beetles in flight varies. One study investigating the flight altitude of the ladybird species Coccinella septempunctata and Harmonia axyridis using radar showed that, whilst the majority in flight over a single location were at 150–195 m above ground level, some reached altitudes of over 1100 m. Many rove beetles have greatly reduced elytra, and while they are capable of flight, they most often move on the ground: their soft bodies and strong abdominal muscles make them flexible, easily able to wriggle into small cracks. Aquatic beetles use several techniques for retaining air beneath the water's surface. Diving beetles (Dytiscidae) hold air between the abdomen and the elytra when diving. Hydrophilidae have hairs on their under surface that retain a layer of air against their bodies. Adult crawling water beetles use both their elytra and their hind coxae (the basal segment of the back legs) in air retention, while whirligig beetles simply carry an air bubble down with them whenever they dive. Communication Beetles have a variety of ways to communicate, including the use of pheromones. The mountain pine beetle emits a pheromone to attract other beetles to a tree. The mass of beetles are able to overcome the chemical defenses of the tree. After the tree's defenses have been exhausted, the beetles emit an anti-aggregation pheromone. This species can stridulate to communicate, but others may use sound to defend themselves when attacked. Parental care Parental care is found in a few families of beetle, perhaps for protection against adverse conditions and predators. The rove beetle Bledius spectabilis lives in salt marshes, so the eggs and larvae are endangered by the rising tide. The maternal beetle patrols the eggs and larvae, burrowing to keep them from flooding and asphyxiating, and protects them from the predatory carabid beetle Dicheirotrichus gustavii and from the parasitoidal wasp Barycnemis blediator, which kills some 15% of the larvae. Burying beetles are attentive parents, and participate in cooperative care and feeding of their offspring. Both parents work to bury small animal carcass to serve as a food resource for their young and build a brood chamber around it. 
The parents prepare the carcass and protect it from competitors and from early decomposition. After their eggs hatch, the parents keep the larvae clean of fungus and bacteria and help the larvae feed by regurgitating food for them. Some dung beetles provide parental care, collecting herbivore dung and laying eggs within that food supply, an instance of mass provisioning. Some species do not leave after this stage, but remain to safeguard their offspring. Most species of beetles do not display parental care behaviors after the eggs have been laid. Subsociality, where females guard their offspring, is well-documented in two subfamilies of Chrysomelidae, the Cassidinae and Chrysomelinae. Eusociality Eusociality involves cooperative brood care (including brood care of offspring from other individuals), overlapping generations within a colony of adults, and a division of labor into reproductive and non-reproductive groups. Few organisms outside Hymenoptera exhibit this behavior; the only beetle to do so is the weevil Austroplatypus incompertus. This Australian species lives in horizontal networks of tunnels, in the heartwood of Eucalyptus trees. It is one of more than 300 species of wood-boring ambrosia beetles which distribute the spores of ambrosia fungi. The fungi grow in the beetles' tunnels, providing food for the beetles and their larvae; female offspring remain in the tunnels and maintain the fungal growth, probably never reproducing. Cooperative brood care is also found in the bess beetles (Passalidae) where the larvae feed on the semi-digested faeces of the adults. Feeding Beetles are able to exploit a wide diversity of food sources available in their many habitats. Some are omnivores, eating both plants and animals. Other beetles are highly specialized in their diet. Many species of leaf beetles, longhorn beetles, and weevils are very host-specific, feeding on only a single species of plant. Ground beetles and rove beetles (Staphylinidae), among others, are primarily carnivorous and catch and consume many other arthropods and small prey, such as earthworms and snails. While most predatory beetles are generalists, a few species have more specific prey requirements or preferences. In some species, digestive ability relies upon a symbiotic relationship with fungi: some beetles have yeasts living in their guts, including some yeasts found nowhere else. Decaying organic matter is a primary diet for many species. This can range from dung, which is consumed by coprophagous species (such as certain scarab beetles in the Scarabaeidae), to dead animals, which are eaten by necrophagous species (such as the carrion beetles, Silphidae). Some beetles found in dung and carrion are in fact predatory. These include members of the Histeridae and Silphidae, preying on the larvae of coprophagous and necrophagous insects. Many beetles feed under bark, some feed on wood while others feed on fungi growing on wood or leaf-litter. Some beetles have special mycangia, structures for the transport of fungal spores. Ecology Anti-predator adaptations Beetles, both adults and larvae, are the prey of many animal predators including mammals from bats to rodents, birds, lizards, amphibians, fishes, dragonflies, robberflies, reduviid bugs, ants, other beetles, and spiders. Beetles use a variety of anti-predator adaptations to defend themselves. These include camouflage and mimicry against predators that hunt by sight, toxicity, and defensive behaviour. 
Camouflage Camouflage is common and widespread among beetle families, especially those that feed on wood or vegetation, such as leaf beetles (Chrysomelidae, which are often green) and weevils. In some species, sculpturing or various colored scales or hairs cause beetles such as the avocado weevil Heilipus apiatus to resemble bird dung or other inedible objects. Many beetles that live in sandy environments blend in with the coloration of that substrate. Mimicry and aposematism Some longhorn beetles (Cerambycidae) are effective Batesian mimics of wasps. Beetles may combine coloration with behavioural mimicry, acting like the wasps they already closely resemble. Many other beetles, including ladybirds, blister beetles, and lycid beetles, secrete distasteful or toxic substances to make them unpalatable or poisonous, and are often aposematic, with bright or contrasting coloration warning off predators; many beetles and other insects mimic these chemically protected species. Chemical defense is important in some species, usually being advertised by bright aposematic colors. Some Tenebrionidae use their posture for releasing noxious chemicals to warn off predators. Chemical defenses may serve purposes other than just protection from vertebrates, such as protection from a wide range of microbes. Some species sequester chemicals from the plants they feed on, incorporating them into their own defenses. Other species have special glands to produce deterrent chemicals. The defensive glands of carabid ground beetles produce a variety of hydrocarbons, aldehydes, phenols, quinones, esters, and acids released from an opening at the end of the abdomen. African carabid beetles (for example, Anthia) employ the same chemicals as ants: formic acid. Bombardier beetles have well-developed pygidial glands that empty from the sides of the intersegment membranes between the seventh and eighth abdominal segments. The gland consists of two chambers, one holding hydroquinones and hydrogen peroxide, the other holding catalase and peroxidase enzymes. When the contents mix, the hydroquinones are oxidised to quinones and the hydrogen peroxide is broken down into water and oxygen, producing an explosive ejection that reaches a temperature of around . The oxygen propels the noxious chemical spray as a jet that can be aimed accurately at predators. Other defenses Large ground-dwelling beetles such as Carabidae, the rhinoceros beetle and the longhorn beetles defend themselves using strong mandibles, or heavily sclerotised (armored) spines or horns to deter or fight off predators. Many species of weevil that feed out in the open on leaves of plants react to attack by employing a drop-off reflex. Some combine it with thanatosis, in which they close up their appendages and "play dead". The click beetles (Elateridae) can suddenly catapult themselves out of danger by releasing the energy stored by a click mechanism, which consists of a stout spine on the prosternum and a matching groove in the mesosternum. Some species startle an attacker by producing sounds through a process known as stridulation. Parasitism A few species of beetles are ectoparasitic on mammals. One such species, Platypsyllus castoris, parasitises beavers (Castor spp.). This beetle lives as a parasite both as a larva and as an adult, feeding on epidermal tissue and possibly on skin secretions and wound exudates. They are strikingly flattened dorsoventrally, no doubt as an adaptation for slipping between the beavers' hairs. They are wingless and eyeless, as are many other ectoparasites. 
Others are kleptoparasites of other invertebrates, such as the small hive beetle (Aethina tumida) that infests honey bee nests, while many species are parasitic inquilines or commensal in the nests of ants. A few groups of beetles are primary parasitoids of other insects, feeding on, and eventually killing, their hosts. Pollination Beetle-pollinated flowers are usually large, greenish or off-white in color, and heavily scented. Scents may be spicy, fruity, or similar to decaying organic material. Beetles were most likely the first insects to pollinate flowers. Most beetle-pollinated flowers are flattened or dish-shaped, with pollen easily accessible, although they may include traps to keep the beetle longer. The plants' ovaries are usually well protected from the biting mouthparts of their pollinators. The beetle families that habitually pollinate flowers are the Buprestidae, Cantharidae, Cerambycidae, Cleridae, Dermestidae, Lycidae, Melyridae, Mordellidae, Nitidulidae and Scarabaeidae. Beetles may be particularly important pollinators in some parts of the world such as semiarid areas of southern Africa and southern California and the montane grasslands of KwaZulu-Natal in South Africa. Mutualism Mutualism is well known in a few beetles, such as the ambrosia beetle, which partners with fungi to digest the wood of dead trees. The beetles excavate tunnels in dead trees in which they cultivate fungal gardens, their sole source of nutrition. After landing on a suitable tree, an ambrosia beetle excavates a tunnel in which it releases spores of its fungal symbiont. The fungus penetrates the plant's xylem tissue, digests it, and concentrates the nutrients on and near the surface of the beetle gallery, so the beetles and the fungus both benefit. The beetles cannot eat the wood itself because of its toxins, and instead use their relationship with the fungus to overcome the defenses of the host tree and provide nutrition for their larvae. Chemically mediated by a bacterially produced polyunsaturated peroxide, this mutualistic relationship between the beetle and the fungus is coevolved. Tolerance of extreme environments About 90% of beetle species enter a period of adult diapause, a quiet phase with reduced metabolism to tide over unfavourable environmental conditions. Adult diapause is the most common form of diapause in Coleoptera. To endure the period without food (often lasting many months), adults prepare by accumulating reserves of lipids, glycogen, proteins and other substances needed for resistance to future hazardous changes of environmental conditions. This diapause is induced by signals heralding the arrival of the unfavourable season; usually the cue is photoperiodic. Short (decreasing) day length serves as a signal of approaching winter and induces winter diapause (hibernation). A study of hibernation in the Arctic beetle Pterostichus brevicornis showed that the body fat levels of adults were highest in autumn with the alimentary canal filled with food, but empty by the end of January. This loss of body fat was a gradual process, occurring in combination with dehydration. All insects are poikilothermic, so the ability of a few beetles to live in extreme environments depends on their resilience to unusually high or low temperatures. The bark beetle Pityogenes chalcographus can survive whilst overwintering beneath tree bark; the Alaskan beetle Cucujus clavipes puniceus is able to withstand ; its larvae may survive . 
At these low temperatures, the formation of ice crystals in internal fluids is the biggest threat to the survival of beetles, but this is prevented through the production of antifreeze proteins that stop water molecules from grouping together. The low temperatures experienced by Cucujus clavipes can be survived through their deliberate dehydration in conjunction with the antifreeze proteins. This concentrates the antifreezes several fold. The hemolymph of the mealworm beetle Tenebrio molitor contains several antifreeze proteins. The Alaskan beetle Upis ceramboides can survive −60 °C: its cryoprotectants are xylomannan, a molecule consisting of a sugar bound to a fatty acid, and the sugar-alcohol, threitol. Conversely, desert dwelling beetles are adapted to tolerate high temperatures. For example, the Tenebrionid beetle Onymacris rugatipennis can withstand . Tiger beetles in hot, sandy areas are often whitish (for example, Habroscelimorpha dorsalis), to reflect more heat than a darker color would. These beetles also exhibit behavioural adaptations to tolerate the heat: they are able to stand erect on their tarsi to hold their bodies away from the hot ground, seek shade, and turn to face the sun so that only the front parts of their heads are directly exposed. The fogstand beetle of the Namib Desert, Stenocara gracilipes, is able to collect water from fog, as its elytra have a textured surface combining hydrophilic (water-loving) bumps and waxy, hydrophobic troughs. The beetle faces the early morning breeze, holding up its abdomen; droplets condense on the elytra and run along ridges towards its mouthparts. Similar adaptations are found in several other Namib desert beetles such as Onymacris unguicularis. Some terrestrial beetles that exploit shoreline and floodplain habitats have physiological adaptations for surviving floods. In the event of flooding, adult beetles may be mobile enough to escape, but larvae and pupae often cannot. Adults of Cicindela togata are unable to survive immersion in water, but larvae are able to survive a prolonged period, up to 6 days, of anoxia during floods. Anoxia tolerance in the larvae may be sustained by switching to anaerobic metabolic pathways or by reducing metabolic rate. Anoxia tolerance in the adult carabid beetle Pelophila borealis was tested in laboratory conditions and it was found that they could survive a continuous period of up to 127 days in an atmosphere of 99.9% nitrogen at 0 °C. Migration Many beetle species undertake annual mass movements which are termed migrations. These include the pollen beetle Meligethes aeneus and many species of coccinellids. These mass movements may also be opportunistic, in search of food, rather than seasonal. A 2008 study of an unusually large outbreak of mountain pine beetle (Dendroctonus ponderosae) in British Columbia found that beetles were capable of flying 30–110 km per day in densities of up to 18,600 beetles per hectare. Relationship to humans In ancient cultures Several species of dung beetle, especially the sacred scarab, Scarabaeus sacer, were revered in Ancient Egypt. The hieroglyphic image of the beetle may have had existential, fictional, or ontologic significance. Images of the scarab in bone, ivory, stone, Egyptian faience, and precious metals are known from the Sixth Dynasty and up to the period of Roman rule. The scarab was of prime significance in the funerary cult of ancient Egypt. 
The scarab was linked to Khepri, the god of the rising sun, from the supposed resemblance of the rolling of the dung ball by the beetle to the rolling of the sun by the god. Some of ancient Egypt's neighbors adopted the scarab motif for seals of varying types. The best-known of these are the Judean LMLK seals, where eight of 21 designs contained scarab beetles, which were used exclusively to stamp impressions on storage jars during the reign of Hezekiah. Beetles are mentioned as a symbol of the sun, as in ancient Egypt, in Plutarch's 1st century Moralia. The Greek Magical Papyri of the 2nd century BC to the 5th century AD describe scarabs as an ingredient in a spell. Pliny the Elder discusses beetles in his Natural History, describing the stag beetle: "Some insects, for the preservation of their wings, are covered with (elytra)—the beetle, for instance, the wing of which is peculiarly fine and frail. To these insects a sting has been denied by Nature; but in one large kind we find horns of a remarkable length, two-pronged at the extremities, and forming pincers, which the animal closes when it is its intention to bite." The stag beetle is recorded in a Greek myth by Nicander and recalled by Antoninus Liberalis in which Cerambus is turned into a beetle: "He can be seen on trunks and has hook-teeth, ever moving his jaws together. He is black, long and has hard wings like a great dung beetle". The story concludes with the comment that the beetles were used as toys by young boys, and that the head was removed and worn as a pendant. As pests About 75% of beetle species are phytophagous in both the larval and adult stages. Many feed on economically important plants and stored plant products, including trees, cereals, tobacco, and dried fruits. Some, such as the boll weevil, which feeds on cotton buds and flowers, can cause extremely serious damage to agriculture. The boll weevil crossed the Rio Grande near Brownsville, Texas, to enter the United States from Mexico around 1892, and had reached southeastern Alabama by 1915. By the mid-1920s, it had entered all cotton-growing regions in the US, traveling per year. It remains the most destructive cotton pest in North America. Mississippi State University has estimated, since the boll weevil entered the United States, it has cost cotton producers about $13 billion, and in recent times about $300 million per year. The bark beetle, elm leaf beetle and the Asian longhorned beetle (Anoplophora glabripennis) are among the species that attack elm trees. Bark beetles (Scolytidae) carry Dutch elm disease as they move from infected breeding sites to healthy trees. The disease has devastated elm trees across Europe and North America. Some species of beetle have evolved immunity to insecticides. For example, the Colorado potato beetle, Leptinotarsa decemlineata, is a destructive pest of potato plants. Its hosts include other members of the Solanaceae, such as nightshade, tomato, eggplant and capsicum, as well as the potato. Different populations have between them developed resistance to all major classes of insecticide. The Colorado potato beetle was evaluated as a tool of entomological warfare during World War II, the idea being to use the beetle and its larvae to damage the crops of enemy nations. Germany tested its Colorado potato beetle weaponisation program south of Frankfurt, releasing 54,000 beetles. The death watch beetle, Xestobium rufovillosum (Ptinidae), is a serious pest of older wooden buildings in Europe. 
It attacks hardwoods such as oak and chestnut, always where some fungal decay has taken or is taking place. The actual introduction of the pest into buildings is thought to take place at the time of construction. Other pests include the coconut hispine beetle, Brontispa longissima, which feeds on young leaves, seedlings and mature coconut trees, causing serious economic damage in the Philippines. The mountain pine beetle is a destructive pest of mature or weakened lodgepole pine, sometimes affecting large areas of Canada. As beneficial resources Beetles can be beneficial to human economies by controlling the populations of pests. The larvae and adults of some species of lady beetles (Coccinellidae) feed on aphids that are pests. Other lady beetles feed on scale insects, whitefly and mealybugs. If normal food sources are scarce, they may feed on small caterpillars, young plant bugs, or honeydew and nectar. Ground beetles (Carabidae) are common predators of many insect pests, including fly eggs, caterpillars, and wireworms. Ground beetles can help to control weeds by eating their seeds in the soil, reducing the need for herbicides to protect crops. The effectiveness of some species in reducing certain plant populations has resulted in the deliberate introduction of beetles in order to control weeds. For example, the genus Calligrapha is native to North America but has been used to control Parthenium hysterophorus in India and Ambrosia artemisiifolia in Russia. Dung beetles (Scarabaeidae) have been successfully used to reduce the populations of pestilent flies, such as Musca vetustissima and Haematobia exigua, which are serious pests of cattle in Australia. The beetles make the dung unavailable to breeding pests by quickly rolling and burying it in the soil, with the added effect of improving soil fertility, tilth, and nutrient cycling. The Australian Dung Beetle Project (1965–1985) introduced species of dung beetle to Australia from South Africa and Europe to reduce populations of Musca vetustissima, following successful trials of this technique in Hawaii. The American Institute of Biological Sciences reports that dung beetles, such as Euoniticellus intermedius, save the United States cattle industry an estimated US$380 million annually through burying above-ground livestock feces. The Dermestidae are often used in taxidermy and in the preparation of scientific specimens, to clean soft tissue from bones. Larvae feed on and remove cartilage along with other soft tissue. As food and medicine Beetles are the most widely eaten insects, with about 344 species used as food, usually at the larval stage. The mealworm (the larva of the darkling beetle) and the rhinoceros beetle are among the species commonly eaten. A wide range of species is also used in folk medicine to treat those suffering from a variety of disorders and illnesses, though this is done without clinical studies supporting the efficacy of such treatments. As biodiversity indicators Due to their habitat specificity, many species of beetles have been suggested as suitable indicators, their presence, numbers, or absence providing a measure of habitat quality. Predatory beetles such as the tiger beetles (Cicindelidae) have found scientific use as an indicator taxon for measuring regional patterns of biodiversity. 
They are suitable for this as their taxonomy is stable; their life history is well described; they are large and simple to observe when visiting a site; they occur around the world in many habitats, with species specialised to particular habitats; and their occurrence by species accurately indicates other species, both vertebrate and invertebrate. Depending on the habitat, many other groups, such as rove beetles in human-modified habitats, dung beetles in savannas and saproxylic beetles in forests, have been suggested as potential indicator species. In art and adornment Many beetles have durable elytra that have been used as material in art, with beetlewing the best example. Sometimes, they are incorporated into ritual objects for their religious significance. Whole beetles, either as-is or encased in clear plastic, are made into objects ranging from cheap souvenirs such as key chains to expensive fine-art jewellery. In parts of Mexico, beetles of the genus Zopherus are made into living brooches by attaching costume jewelry and golden chains, which is made possible by the incredibly hard elytra and sedentary habits of the genus. In entertainment Fighting beetles are used for entertainment and gambling. This sport exploits the territorial behavior and mating competition of certain species of large beetles. In the Chiang Mai district of northern Thailand, male Xylotrupes rhinoceros beetles are caught in the wild and trained for fighting. Females are held inside a log to stimulate the fighting males with their pheromones. These fights may be competitive and involve gambling both money and property. In South Korea the Dytiscidae species Cybister tripunctatus is used in a roulette-like game. Beetles are sometimes used as instruments: the Onabasulu of Papua New Guinea historically used the "hugu" weevil Rhynchophorus ferrugineus as a musical instrument by letting the human mouth serve as a variable resonance chamber for the wing vibrations of the live adult beetle. As pets Some species of beetle are kept as pets, for example diving beetles (Dytiscidae) may be kept in a domestic fresh water tank. In Japan the practice of keeping horned rhinoceros beetles (Dynastinae) and stag beetles (Lucanidae) is particularly popular amongst young boys. Such is the popularity in Japan that vending machines dispensing live beetles were developed in 1999, each holding up to 100 stag beetles. As things to collect Beetle collecting became extremely popular in the Victorian era. The naturalist Alfred Russel Wallace collected (by his own count) a total of 83,200 beetles during the eight years described in his 1869 book The Malay Archipelago, including 2,000 species new to science. As inspiration for technologies Several coleopteran adaptations have attracted interest in biomimetics with possible commercial applications. The bombardier beetle's powerful repellent spray has inspired the development of a fine mist spray technology, claimed to have a low carbon impact compared to aerosol sprays. Moisture harvesting behavior by the Namib desert beetle (Stenocara gracilipes) has inspired a self-filling water bottle which utilises hydrophilic and hydrophobic materials to benefit people living in dry regions with no regular rainfall. Living beetles have been used as cyborgs. A Defense Advanced Research Projects Agency funded project implanted electrodes into Mecynorhina torquata beetles, allowing them to be remotely controlled via radio receivers carried on their backs, as a proof of concept for surveillance work. 
Similar technology has been applied to enable a human operator to control the free-flight steering and walking gaits of Mecynorhina torquata as well as graded turning, backward walking and feedback control of Zophobas morio. Research published in 2020 sought to create a robotic camera backpack for beetles. Miniature cameras weighing 248 mg were attached to live beetles of the Tenebrionid genera Asbolus and Eleodes. The cameras filmed over a 60° range for up to 6 hours. In conservation Since beetles form such a large part of the world's biodiversity, their conservation is important, and equally, loss of habitat and biodiversity is essentially certain to impact on beetles. Many species of beetles have very specific habitats and long life cycles that make them vulnerable. Some species are highly threatened while others are already feared extinct. Island species tend to be more susceptible as in the case of Helictopleurus undatus of Madagascar which is thought to have gone extinct during the late 20th century. Conservationists have attempted to arouse a liking for beetles with flagship species like the stag beetle, Lucanus cervus, and tiger beetles (Cicindelidae). In Japan the Genji firefly, Luciola cruciata, is extremely popular, and in South Africa the Addo elephant dung beetle offers promise for broadening ecotourism beyond the big five tourist mammal species. Popular dislike of pest beetles, too, can be turned into public interest in insects, as can unusual ecological adaptations of species like the fairy shrimp hunting beetle, Cicinis bruchi.
Concorde
Concorde () is a retired Anglo-French supersonic airliner jointly developed and manufactured by Sud Aviation (later Aérospatiale) and the British Aircraft Corporation (BAC). Studies started in 1954, and France and the United Kingdom signed a treaty establishing the development project on 29 November 1962, as the programme cost was estimated at £70 million (£ in ). Construction of the six prototypes began in February 1965, and the first flight took off from Toulouse on 2 March 1969. A market for 350 aircraft was predicted, and the manufacturers received up to 100 option orders from many major airlines. On 9 October 1975, it received its French Certificate of Airworthiness, followed by its UK certificate from the CAA on 5 December. Concorde is a tailless aircraft design with a narrow fuselage permitting 4-abreast seating for 92 to 128 passengers, an ogival delta wing and a droop nose for landing visibility. It is powered by four Rolls-Royce/Snecma Olympus 593 turbojets with variable engine intake ramps, and reheat for take-off and acceleration to supersonic speed. Constructed out of aluminium, it was the first airliner to have analogue fly-by-wire flight controls. The airliner had transatlantic range while supercruising at twice the speed of sound for 75% of the distance. Delays and cost overruns increased the programme cost to £1.5–2.1 billion in 1976 (£– in ). Concorde entered service on 21 January 1976 with Air France from Paris-Roissy and British Airways from London Heathrow. Transatlantic flights were the main market, to Washington Dulles from 24 May, and to New York JFK from 17 October 1977. Air France and British Airways remained the sole customers with seven airframes each, for a total production of twenty. Supersonic flight more than halved travel times, but sonic booms over the ground limited it to transoceanic flights only. Its only competitor was the Tupolev Tu-144, carrying passengers from November 1977 until a May 1978 crash, while a potential competitor, the Boeing 2707, was cancelled in 1971 before any prototypes were built. On 25 July 2000, Air France Flight 4590 crashed shortly after take-off, killing all 109 occupants and four people on the ground. This was the only fatal incident involving Concorde; commercial service was suspended until November 2001. The surviving aircraft were retired in 2003, 27 years after commercial operations had begun. All but two of the 20 aircraft built have been preserved and are on display across Europe and North America. Development Early studies In the early 1950s, Arnold Hall, director of the Royal Aircraft Establishment (RAE), asked Morien Morgan to form a committee to study supersonic transport. The group met in February 1954 and delivered their first report in April 1955. Robert T. Jones' work at NACA had demonstrated that the drag at supersonic speeds was strongly related to the span of the wing. This led to the use of short-span, thin trapezoidal wings such as those seen on the control surfaces of many missiles, or aircraft such as the Lockheed F-104 Starfighter interceptor or the planned Avro 730 strategic bomber that the team studied. The team outlined a baseline configuration that resembled an enlarged Avro 730. This short wingspan produced little lift at low speed, resulting in long take-off runs and high landing speeds. In an SST design, lifting off from existing runways would have required enormous engine power, and carrying the fuel needed meant that "some horribly large aeroplanes" resulted. 
Based on this, the group considered the concept of an SST infeasible, and instead suggested continued low-level studies into supersonic aerodynamics. Slender deltas Soon after, Johanna Weber and Dietrich Küchemann at the RAE published a series of reports on a new wing planform, known in the UK as the "slender delta". The team, including Eric Maskell whose report "Flow Separation in Three Dimensions" contributed to an understanding of separated flow, worked with the fact that delta wings can produce strong vortices on their upper surfaces at high angles of attack. The vortex will lower the air pressure and cause lift. This had been noticed by Chuck Yeager in the Convair XF-92, but its qualities had not been fully appreciated. Weber suggested that the effect could be used to improve low speed performance. Küchemann's and Weber's papers changed the entire nature of supersonic design. The delta had already been used on aircraft, but these designs used planforms that were not much different from a swept wing of the same span. Weber noted that the lift from the vortex was increased by the length of the wing it had to operate over, which suggested that the effect would be maximised by extending the wing along the fuselage as far as possible. Such a layout would still have good supersonic performance, but also have reasonable take-off and landing speeds using vortex generation. The aircraft would have to take off and land very "nose high" to generate the required vortex lift, which led to questions about the low speed handling qualities of such a design. Küchemann presented the idea at a meeting where Morgan was also present. Test pilot Eric Brown recalls Morgan's reaction to the presentation, saying that he immediately seized on it as the solution to the SST problem. Brown considers this moment as being the birth of the Concorde project. Supersonic Transport Aircraft Committee On 1 October 1956 the Ministry of Supply asked Morgan to form a new study group, the Supersonic Transport Aircraft Committee (STAC) (sometimes referred to as the Supersonic Transport Advisory Committee), to develop a practical SST design and find industry partners to build it. At the first meeting, on 5 November 1956, the decision was made to fund the development of a test-bed aircraft to examine the low-speed performance of the slender delta, a contract that eventually produced the Handley Page HP.115. This aircraft demonstrated safe control at speeds as low as , about one third that of the F-104 Starfighter. STAC stated that an SST would have economic performance similar to existing subsonic types. Lift is not generated the same way at supersonic and subsonic speeds, with the lift-to-drag ratio for supersonic designs being about half that of subsonic designs. The aircraft would need more thrust than a subsonic design of the same size. But although they would use more fuel in cruise, they would be able to fly more revenue-earning flights in a given time, so fewer aircraft would be needed to service a particular route. This would remain economically advantageous as long as fuel represented a small percentage of operational costs. STAC suggested that two designs naturally fell out of their work, a transatlantic model flying at about Mach 2, and a shorter-range version flying at Mach 1.2. Morgan suggested that a 150-passenger transatlantic SST would cost about £75 to £90 million to develop, and be in service in 1970. 
The smaller 100-passenger short-range version would cost perhaps £50 to £80 million, and be ready for service in 1968. To meet this schedule, development would need to begin in 1960, with production contracts let in 1962. Morgan suggested that the US was already involved in a similar project, and that if the UK failed to respond it would be locked out of an airliner market that he believed would be dominated by SST aircraft. In 1959, a study contract was awarded to Hawker Siddeley and Bristol for preliminary designs based on the slender delta, which developed as the HSA.1000 and Bristol 198. Armstrong Whitworth also responded with an internal design, the M-Wing, for the lower-speed shorter-range category. Both the STAC group and the government were looking for partners to develop the designs. In September 1959, Hawker approached Lockheed, and after the creation of British Aircraft Corporation in 1960, the former Bristol team immediately started talks with Boeing, General Dynamics, Douglas Aircraft, and Sud Aviation. Ogee planform selected Küchemann and others at the RAE continued their work on the slender delta throughout this period, considering three basic shapes; the classic straight-edge delta, the "gothic delta" that was rounded outward to appear like a gothic arch, and the "ogival wing" that was compound-rounded into the shape of an ogee. Each of these planforms had advantages and disadvantages. As they worked with these shapes, a practical concern grew to become so important that it forced selection of one of these designs. Generally the wing's centre of pressure (CP, or "lift point") should be close to the aircraft's centre of gravity (CG, or "balance point") to reduce the amount of control force required to pitch the aircraft. As the aircraft layout changes during the design phase, it is common for the CG to move fore or aft. With a normal wing design this can be addressed by moving the wing slightly fore or aft to account for this. With a delta wing running most of the length of the fuselage, this was no longer easy; moving the wing would leave it in front of the nose or behind the tail. Studying the various layouts in terms of CG changes, both during design and changes due to fuel use during flight, the ogee planform immediately came to the fore. To test the new wing, NASA assisted the team by modifying a Douglas F5D Skylancer to mimic the wing selection. In 1965 the NASA test aircraft successfully tested the wing, and found that it reduced landing speeds noticeably over the standard delta wing. NASA also ran simulations at Ames that showed the aircraft would exhibit a sudden change in pitch when entering ground effect. Ames test pilots later participated in a joint cooperative test with the French and British test pilots and found that the simulations had been correct, and this information was added to pilot training. Partnership with Sud Aviation France had its own SST plans. In the late 1950s, the government requested designs from the government-owned Sud Aviation and Nord Aviation, as well as Dassault. All three returned designs based on Küchemann and Weber's slender delta; Nord suggested a ramjet powered design flying at Mach 3, and the other two were jet-powered Mach 2 designs that were similar to each other. Of the three, the Sud Aviation Super-Caravelle won the design contest with a medium-range design deliberately sized to avoid competition with transatlantic US designs they assumed were already on the drawing board. 
As soon as the design was complete, in April 1960, Pierre Satre, the company's technical director, was sent to Bristol to discuss a partnership. Bristol was surprised to find that the Sud team had designed a similar aircraft after considering the SST problem and coming to the same conclusions as the Bristol and STAC teams in terms of economics. It was later revealed that the original STAC report, marked "For UK Eyes Only", had secretly been passed to France to win political favour. Sud made minor changes to the paper and presented it as their own work. France had no modern large jet engines and had already decided to buy a British design (as they had on the earlier subsonic Caravelle). As neither company had experience in the use of heat-resistant metals for airframes, a maximum speed of around Mach 2 was selected so aluminium could be used – above this speed, the friction with the air heats the metal so much that it begins to soften. This lower speed would also speed development and allow their design to fly before the Americans. Everyone involved agreed that Küchemann's ogee-shaped wing was the right one. The British team was still focused on a 150-passenger design serving transatlantic routes, while France was deliberately avoiding these. Common components could be used in both designs, with the shorter range version using a clipped fuselage and four engines, and the longer one a stretched fuselage and six engines, leaving only the wing to be extensively re-designed. The teams continued to meet in 1961, and by this time it was clear that the two aircraft would be very similar in spite of different ranges and seating arrangements. A single design emerged that differed mainly in fuel load. More powerful Bristol Siddeley Olympus engines, being developed for the TSR-2, allowed either design to be powered by only four engines. Cabinet response, treaty While the development teams met, the French Minister of Public Works and Transport Robert Buron was meeting with the UK Minister of Aviation Peter Thorneycroft, and Thorneycroft told the cabinet that France was much more serious about a partnership than any of the US companies. The various US companies had proved uninterested, likely due to the belief that the government would be funding development and would frown on any partnership with a European company, and the risk of "giving away" US technological leadership to a European partner. When the STAC plans were presented to the UK cabinet, the economic case was judged highly questionable, especially as it rested on development costs, now estimated to be , of the kind that were repeatedly overrun in the industry. The Treasury Ministry presented a negative view, suggesting that there was no way the project would have any positive financial returns for the government, especially given that "the industry's past record of over-optimistic estimating (including the recent history of the TSR.2) suggests that it would be prudent to consider" the cost as likely "to turn out much too low." This led to an independent review of the project by the Committee on Civil Scientific Research and Development, which met on the topic between July and September 1962. The committee rejected the economic arguments, including considerations of supporting the industry made by Thorneycroft. 
Their report in October stated that it was unlikely there would be any direct positive economic outcome, but that the project should still be considered because everyone else was going supersonic, and they were concerned they would be locked out of future markets. It appeared the project would not be likely to significantly affect other, more important, research efforts. At the time, the UK was pressing for admission to the European Economic Community, and this became the main rationale for moving ahead with the aircraft. The development project was negotiated as an international treaty between the two countries rather than a commercial agreement between companies and included a clause, originally asked for by the UK government, imposing heavy penalties for cancellation. This treaty was signed on 29 November 1962. Charles de Gaulle vetoed the UK's entry into the European Community in a speech on 25 January 1963. Naming At Charles de Gaulle's January 1963 press conference the aircraft was first called 'Concorde'. The name was suggested by the eighteen-year-old son of F.G. Clark, the publicity manager at BAC's Filton plant. Reflecting the treaty between the British and French governments that led to Concorde's construction, the name Concorde is from the French word concorde (), which has an English equivalent, concord. Both words mean agreement, harmony, or union. The name was changed to Concord by Harold Macmillan in response to a perceived slight by de Gaulle. At the French roll-out in Toulouse in late 1967, the British Minister of Technology, Tony Benn, announced that he would change the spelling back to Concorde. This created a nationalist uproar that died down when Benn stated that the suffixed "e" represented "Excellence, England, Europe, and Entente (Cordiale)". In his memoirs, he recounted a letter from a Scotsman claiming, "you talk about 'E' for England, but part of it is made in Scotland." Given Scotland's contribution of providing the nose cone for the aircraft, Benn replied, "it was also 'E' for 'Écosse' (the French name for Scotland) – and I might have added 'e' for extravagance and 'e' for escalation as well!" In common usage in the United Kingdom, the type is known as "Concorde" without an article, rather than "the Concorde" or "a Concorde". Sales efforts Advertisements for Concorde during the late 1960s placed in publications such as Aviation Week & Space Technology predicted a market for 350 aircraft by 1980. The new consortium intended to produce one long-range and one short-range version, but prospective customers showed no interest in the short-range version, so it was later dropped. Concorde's costs spiralled during development to more than six times the original projections, arriving at a unit cost of £23 million in 1977 (equivalent to £ million in ). Its sonic boom made travelling supersonically over land impossible without causing complaints from citizens. World events also dampened Concorde sales prospects; the 1973–74 stock market crash and the 1973 oil crisis had made airlines cautious about aircraft with high fuel consumption, and new wide-body aircraft, such as the Boeing 747, had recently made subsonic aircraft significantly more efficient and presented a low-risk option for airlines. Carrying a full load, Concorde achieved 15.8 passenger-miles per gallon of fuel, whereas the Boeing 707 reached 33.3 pm/g, the Boeing 747 46.4 pm/g, and the McDonnell Douglas DC-10 53.6 pm/g. 
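To put these figures in context, the quoted passenger-miles-per-gallon values can be converted into fuel burned per passenger for a single crossing. The short Python sketch below does this for an assumed transatlantic sector of roughly 3,460 statute miles; the sector length is an illustrative assumption, not a figure from the sources above.

```python
# Illustrative only: convert the passenger-miles-per-gallon figures quoted
# above into fuel burned per passenger on one transatlantic sector.
# SECTOR_MILES is an assumed London-New York distance, not a sourced value.

SECTOR_MILES = 3_460  # assumed sector length in statute miles

pmpg = {
    "Concorde": 15.8,
    "Boeing 707": 33.3,
    "Boeing 747": 46.4,
    "McDonnell Douglas DC-10": 53.6,
}

for aircraft, efficiency in pmpg.items():
    gallons_per_passenger = SECTOR_MILES / efficiency
    print(f"{aircraft}: ~{gallons_per_passenger:.0f} US gallons per passenger")

# Approximate output: Concorde ~219, 707 ~104, 747 ~75, DC-10 ~65,
# i.e. Concorde burned roughly three times as much fuel per passenger
# as the contemporary wide-bodies on the same sector.
```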
A trend in favour of cheaper airline tickets also caused airlines such as Qantas to question Concorde's market suitability. During the early 2000s, Flight International described Concorde as being "one of aerospace's most ambitious but commercially flawed projects". The consortium received orders (non-binding options) for more than 100 of the long-range version from the major airlines of the day: Pan Am, BOAC, and Air France were the launch customers, with six aircraft each. Other airlines in the order book included Panair do Brasil, Continental Airlines, Japan Airlines, Lufthansa, American Airlines, United Airlines, Air India, Air Canada, Braniff, Singapore Airlines, Iran Air, Olympic Airways, Qantas, CAAC Airlines, Middle East Airlines, and TWA. At the time of the first flight, the options list contained 74 options from 16 airlines. Testing The design work was supported by a research programme studying the flight characteristics of low-aspect-ratio delta wings. A supersonic Fairey Delta 2 was modified to carry the ogee planform and, renamed the BAC 221, was used for tests of the high-speed flight envelope; the Handley Page HP.115 also provided valuable information on low-speed performance. Construction of two prototypes began in February 1965: 001, built by Aérospatiale at Toulouse, and 002, by BAC at Filton, Bristol. 001 made its first test flight from Toulouse on 2 March 1969, piloted by André Turcat, and first went supersonic on 1 October. The first UK-built Concorde flew from Filton to RAF Fairford on 9 April 1969, piloted by Brian Trubshaw. Both prototypes were presented to the public on 7–8 June 1969 at the Paris Air Show. As the flight programme progressed, 001 embarked on a sales and demonstration tour on 4 September 1971, which was also the first transatlantic crossing of Concorde. Concorde 002 followed on 2 June 1972 with a tour of the Middle and Far East. Concorde 002 made the first visit to the United States in 1973, landing at Dallas/Fort Worth Regional Airport to mark the airport's opening. Concorde had initially held a great deal of customer interest, but the project was hit by order cancellations. The Paris Le Bourget air show crash of the competing Soviet Tupolev Tu-144 had shocked potential buyers, and public concern over the environmental issues of supersonic aircraft (the sonic boom, take-off noise and pollution) had produced a change in the public opinion of SSTs. By 1976 the remaining buyers were from four countries: Britain, France, China, and Iran. Only Air France and British Airways (the successor to BOAC) took up their orders, with the two governments taking a cut of any profits. The US government cut federal funding for the Boeing 2707, its supersonic transport programme, in 1971; Boeing did not complete its two 2707 prototypes. The US, India, and Malaysia all ruled out Concorde supersonic flights over the noise concern, although some of these restrictions were later relaxed. Professor Douglas Ross characterised restrictions placed upon Concorde operations by President Jimmy Carter's administration as having been an act of protectionism of American aircraft manufacturers. Programme cost The original programme cost estimate was £70 million in 1962 (£ in ). After cost overruns and delays the programme eventually cost between £1.5 and £2.1 billion in 1976 (£ – in ). This cost was the main reason the production run was much smaller than expected. 
Design General features Concorde is an ogival delta winged aircraft with four Olympus engines based on those employed in the RAF's Avro Vulcan strategic bomber. It has an unusual tailless configuration for a commercial aircraft, as does the Tupolev Tu-144. Concorde was the first airliner to have a fly-by-wire flight-control system (in this case, analogue); the avionics system Concorde used was unique because it was the first commercial aircraft to employ hybrid circuits. The principal designer for the project was Pierre Satre, with Sir Archibald Russell as his deputy. Concorde pioneered the following technologies:

For high speed and optimisation of flight:
- Double delta (ogee/ogival) shaped wings
- Variable engine air intake ramp system controlled by digital computers
- Supercruise capability

For weight-saving and enhanced performance:
- Mach 2.02 (~) cruising speed for optimum fuel consumption (supersonic drag minimum, and turbojet engines are more efficient at higher speed); fuel consumption at and at altitude of was .
- Mainly aluminium construction using a high-temperature alloy similar to that developed for aero-engine pistons; this material gave low weight and allowed conventional manufacture (higher speeds would have ruled out aluminium).
- Full-regime autopilot and autothrottle allowing "hands off" control of the aircraft from climb out to landing.
- Fully electrically controlled analogue fly-by-wire flight control systems.
- High-pressure hydraulic system using for lighter hydraulic components.
- Air data computer (ADC) for the automated monitoring and transmission of aerodynamic measurements (total pressure, static pressure, angle of attack, side-slip).
- Fully electrically controlled analogue brake-by-wire system.
- No auxiliary power unit, as Concorde would only visit large airports where ground air start carts were available.

Powerplant A symposium titled "Supersonic-Transport Implications" was hosted by the Royal Aeronautical Society on 8 December 1960. Various views were put forward on the likely type of powerplant for a supersonic transport, such as podded or buried installation and turbojet or ducted-fan engines. Concorde needed to fly long distances to be economically viable; this required high efficiency from the powerplant. Turbofan engines were rejected due to their larger cross-section producing excessive drag (but would be studied for future SSTs). Olympus turbojet technology was already available for development to meet the design requirements. Rolls-Royce proposed developing the RB.169 to power Concorde during its initial design phase, but developing a wholly new engine for a single aircraft would have been extremely costly, so the existing BSEL Olympus Mk 320 turbojet engine, which was already flying in the BAC TSR-2 supersonic strike bomber prototype, was chosen instead. Boundary layer management in the podded installation was put forward as simpler, requiring only an inlet cone; however, Dr. Seddon of the RAE favoured a more integrated buried installation. One concern of placing two or more engines behind a single intake was that an intake failure could lead to a double or triple engine failure. While a ducted fan would have been quieter than a turbojet, its larger cross-section also incurred more drag. Acoustics specialists were confident that a turbojet's noise could be reduced and SNECMA made advances in silencer design during the programme. The Olympus Mk.622 with reduced jet velocity was proposed to reduce the noise but was not pursued. 
By 1974, the spade silencers which projected into the exhaust were reported to be ineffective but "entry-into-service aircraft are likely to meet their noise guarantees". The powerplant configuration selected for Concorde highlighted airfield noise, boundary layer management and interactions between adjacent engines and the requirement that the powerplant, at Mach 2, tolerate pushovers, sideslips, pull-ups and throttle slamming without surging. Extensive development testing with design changes and changes to intake and engine control laws addressed most of the issues except airfield noise and the interaction between adjacent powerplants at speeds above Mach 1.6 which meant Concorde "had to be certified aerodynamically as a twin-engined aircraft above Mach 1.6". Situated behind the wing leading edge, the engine intake had a wing boundary layer ahead of it. Two-thirds were diverted and the remaining third which entered the intake did not adversely affect the intake efficiency except during pushovers when the boundary layer thickened and caused surging. Wind tunnel testing helped define leading-edge modifications ahead of the intakes which solved the problem. Each engine had its own intake and the nacelles were paired with a splitter plate between them to minimise the chance of one powerplant influencing the other. Only above was an engine surge likely to affect the adjacent engine. The air intake design for Concorde's engines was especially critical. The intakes had to slow down supersonic inlet air to subsonic speeds with high-pressure recovery to ensure efficient operation at cruising speed while providing low distortion levels (to prevent engine surge) and maintaining high efficiency for all likely ambient temperatures in cruise. They had to provide adequate subsonic performance for diversion cruise and low engine-face distortion at take-off. They also had to provide an alternative path for excess intake of air during engine throttling or shutdowns. The variable intake features required to meet all these requirements consisted of front and rear ramps, a dump door, an auxiliary inlet and a ramp bleed to the exhaust nozzle. As well as supplying air to the engine, the intake also supplied air through the ramp bleed to the propelling nozzle. The nozzle ejector (or aerodynamic) design, with variable exit area and secondary flow from the intake, contributed to good expansion efficiency from take-off to cruise. Concorde's Air Intake Control Units (AICUs) made use of a digital processor for intake control. It was the first use of a digital processor with full authority control of an essential system in a passenger aircraft. It was developed by BAC's Electronics and Space Systems division after the analogue AICUs (developed by Ultra Electronics) fitted to the prototype aircraft were found to lack sufficient accuracy. Ultra Electronics also developed Concorde's thrust-by-wire engine control system. Engine failure causes problems on conventional subsonic aircraft; not only does the aircraft lose thrust on that side but the engine creates drag, causing the aircraft to yaw and bank in the direction of the failed engine. If this had happened to Concorde at supersonic speeds, it theoretically could have caused a catastrophic failure of the airframe. Although computer simulations predicted considerable problems, in practice Concorde could shut down both engines on the same side of the aircraft at Mach 2 without difficulties. During an engine failure the required air intake is virtually zero. 
So, on Concorde, engine failure was countered by the opening of the auxiliary spill door and the full extension of the ramps, which deflected the air downwards past the engine, gaining lift and minimising drag. Concorde pilots were routinely trained to handle double-engine failure. Concorde used reheat (afterburners) only at take-off and to pass through the transonic speed range, between Mach 0.95 and 1.7. Heating problems Kinetic heating from the high speed boundary layer caused the skin to heat up during supersonic flight. Every surface, such as windows and panels, was warm to the touch by the end of the flight. Apart from the engine bay, the hottest part of any supersonic aircraft's structure is the nose, due to aerodynamic heating. Hiduminium R.R. 58, an aluminium alloy, was used throughout the aircraft because it was relatively cheap and easy to work with. The highest temperature it could sustain over the life of the aircraft was , which limited the top speed to Mach 2.02. Concorde went through two cycles of cooling and heating during a flight, first cooling down as it gained altitude at subsonic speed, then heating up accelerating to cruise speed, finally cooling again when descending and slowing down before heating again in low altitude air before landing. This had to be factored into the metallurgical and fatigue modelling. A test rig was built that repeatedly heated up a full-size section of the wing, and then cooled it, and periodically samples of metal were taken for testing. The airframe was designed for a life of 45,000 flying hours. As the fuselage heated up it expanded by as much as . The most obvious manifestation of this was a gap that opened up on the flight deck between the flight engineer's console and the bulkhead. On some aircraft that conducted a retiring supersonic flight, the flight engineers placed their caps in this expanded gap, wedging the cap when the airframe shrank again. To keep the cabin cool, Concorde used the fuel as a heat sink for the heat from the air conditioning. The same method also cooled the hydraulics. During supersonic flight a visor was used to keep high temperature air from flowing over the cockpit skin. Concorde had livery restrictions; the majority of the surface had to be covered with a highly reflective white paint to avoid overheating the aluminium structure due to heating effects. The white finish reduced the skin temperature by . In 1996, Air France briefly painted F-BTSD in a predominantly blue livery, with the exception of the wings, in a promotional deal with Pepsi. In this paint scheme, Air France was advised to remain at for no more than 20 minutes at a time, but there was no restriction at speeds under Mach 1.7. F-BTSD was used because it was not scheduled for any long flights that required extended Mach 2 operations. Structural issues Due to its high speeds, large forces were applied to the aircraft during turns, causing distortion of the aircraft's structure. There were concerns over maintaining precise control at supersonic speeds. Both of these issues were resolved by ratio changes between the inboard and outboard elevon deflections, varying at differing speeds including supersonic. Only the innermost elevons, attached to the stiffest area of the wings, were used at higher speeds. The narrow fuselage flexed, which was apparent to rear passengers looking along the length of the cabin. When any aircraft passes the critical mach of its airframe, the centre of pressure shifts rearwards. 
This causes a pitch-down moment on the aircraft if the centre of gravity remains where it was. The wings were designed to reduce this, but there was still a shift of about . This could have been countered by the use of trim controls, but at such high speeds this would have increased drag, which would have been unacceptable. Instead, the distribution of fuel along the aircraft was shifted during acceleration and deceleration to move the centre of gravity, effectively acting as an auxiliary trim control. Range To fly non-stop across the Atlantic Ocean, Concorde required the greatest supersonic range of any aircraft. This was achieved by a combination of powerplants which were efficient at twice the speed of sound, a slender fuselage with a high fineness ratio, and a complex wing shape for a high lift-to-drag ratio. Only a modest payload could be carried, and the aircraft was trimmed without using deflected control surfaces, to avoid the drag they would have incurred. Nevertheless, soon after Concorde began flying, a Concorde "B" model was designed with slightly larger fuel capacity and slightly larger wings with leading-edge slats to improve aerodynamic performance at all speeds, with the objective of expanding the range to reach markets in new regions. It would have had higher-thrust engines with noise-reducing features and no environmentally objectionable afterburner. Preliminary design studies showed that an engine with a 25% gain in efficiency over the Rolls-Royce/Snecma Olympus 593 could be produced. This would have given additional range and a greater payload, making new commercial routes possible. The project was cancelled due in part to poor sales of Concorde, but also to the rising cost of aviation fuel in the 1970s. Radiation concerns Concorde's high cruising altitude meant people on board received almost twice the flux of extraterrestrial ionising radiation as those travelling on a conventional long-haul flight. Upon Concorde's introduction, it was speculated that this exposure during supersonic travel would increase the likelihood of skin cancer. Due to the proportionally reduced flight time, the overall equivalent dose would normally be less than that of a conventional flight over the same distance. Unusual solar activity might lead to an increase in incident radiation. To prevent incidents of excessive radiation exposure, the flight deck had a radiometer and an instrument to measure the rate of increase or decrease of radiation. If the radiation level became too high, Concorde would descend below . Cabin pressurisation Airliner cabins were usually maintained at a pressure equivalent to elevation. Concorde's pressurisation was set to an altitude at the lower end of this range, . Concorde's maximum cruising altitude was ; subsonic airliners typically cruise below . A sudden reduction in cabin pressure is hazardous to all passengers and crew. Above , a sudden cabin depressurisation would leave a "time of useful consciousness" of up to 10–15 seconds for a conditioned athlete. At Concorde's altitude, the air density is very low; a breach of cabin integrity would result in a loss of pressure severe enough that the plastic emergency oxygen masks installed on other passenger jets would not be effective, and passengers would soon suffer from hypoxia despite quickly donning them. Concorde was equipped with smaller windows to reduce the rate of loss in the event of a breach, a reserve air supply system to augment cabin air pressure, and a rapid descent procedure to bring the aircraft to a safe altitude.
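The severity of a depressurisation at Concorde's cruising altitude can be made concrete with a rough sketch based on the International Standard Atmosphere. The altitude figures in the example below are illustrative assumptions chosen only to show the order of magnitude, not the specific values referred to above.

```python
# Rough illustration of why a cabin breach at supersonic-cruise altitude is so serious,
# using the International Standard Atmosphere. The altitudes are example values only.
import math

G, R = 9.80665, 287.053                     # gravity (m/s^2), gas constant for air (J/(kg*K))
T0, P0, LAPSE = 288.15, 101325.0, 0.0065    # sea-level temperature (K), pressure (Pa), lapse rate (K/m)

def isa_pressure(h_m: float) -> float:
    """Approximate static pressure (Pa) at altitude h_m, valid up to about 20 km."""
    if h_m <= 11000:                                            # troposphere: linear lapse
        return P0 * ((T0 - LAPSE * h_m) / T0) ** (G / (R * LAPSE))
    p11 = P0 * ((T0 - LAPSE * 11000) / T0) ** (G / (R * LAPSE))
    return p11 * math.exp(-G * (h_m - 11000) / (R * 216.65))    # isothermal lower stratosphere

cabin = isa_pressure(1800)       # illustrative cabin altitude of roughly 1,800 m
cruise = isa_pressure(18000)     # illustrative supersonic cruise altitude of roughly 18,000 m
print(f"cabin ~ {cabin/1000:.0f} kPa, ambient at cruise ~ {cruise/1000:.0f} kPa "
      f"(about {cruise/cabin:.0%} of cabin pressure)")
```

With the outside pressure only a small fraction of the cabin pressure, simple continuous-flow masks cannot keep passengers adequately oxygenated, which is why a rapid descent was the primary protection.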
The FAA enforces minimum emergency descent rates for aircraft and noting Concorde's higher operating altitude, concluded that the best response to pressure loss would be a rapid descent. Continuous positive airway pressure would have delivered pressurised oxygen directly to the pilots through masks. Flight characteristics While subsonic commercial jets took eight hours to fly from Paris to New York (seven hours from New York to Paris), the average supersonic flight time on the transatlantic routes was just under 3.5 hours. Concorde had a maximum cruising altitude of and an average cruise speed of , more than twice the speed of conventional aircraft. With no other civil traffic operating at its cruising altitude of about , Concorde had exclusive use of dedicated oceanic airways, or "tracks", separate from the North Atlantic Tracks, the routes used by other aircraft to cross the Atlantic. Due to the significantly less variable nature of high altitude winds compared to those at standard cruising altitudes, these dedicated SST tracks had fixed co-ordinates, unlike the standard routes at lower altitudes, whose co-ordinates are replotted twice daily based on forecast weather patterns (jetstreams). Concorde would also be cleared in a block, allowing for a slow climb from during the oceanic crossing as the fuel load gradually decreased. In regular service, Concorde employed an efficient cruise-climb flight profile following take-off. The delta-shaped wings required Concorde to adopt a higher angle of attack at low speeds than conventional aircraft, but it allowed the formation of large low-pressure vortices over the entire upper wing surface, maintaining lift. The normal landing speed was . Because of this high angle, during a landing approach Concorde was on the backside of the drag force curve, where raising the nose would increase the rate of descent; the aircraft was thus largely flown on the throttle and was fitted with an autothrottle to reduce the pilot's workload. Brakes and undercarriage Because of the way Concorde's delta-wing generated lift, the undercarriage had to be unusually strong and tall to allow for the angle of attack at low speed. At rotation, Concorde would rise to a high angle of attack, about 18 degrees. Prior to rotation, the wing generated almost no lift, unlike typical aircraft wings. Combined with the high airspeed at rotation ( indicated airspeed), this increased the stresses on the main undercarriage in a way that was initially unexpected during the development and required a major redesign. Due to the high angle needed at rotation, a small set of wheels was added aft to prevent tailstrikes. The main undercarriage units swing towards each other to be stowed but due to their great height also needed to contract in length telescopically before swinging to clear each other when stowed. The four main wheel tyres on each bogie unit are inflated to . The twin-wheel nose undercarriage retracts forwards and its tyres are inflated to a pressure of , and the wheel assembly carries a spray deflector to prevent standing water from being thrown up into the engine intakes. The tyres are rated to a maximum speed on the runway of . The high take-off speed of required Concorde to have upgraded brakes. Like most airliners, Concorde has anti-skid braking  to prevent the tyres from losing traction when the brakes are applied. The brakes, developed by Dunlop, were the first carbon-based brakes used on an airliner. 
The use of carbon over equivalent steel brakes provided a weight-saving of . Each wheel has multiple discs which are cooled by electric fans. Wheel sensors include brake overload, brake temperature, and tyre deflation. After a typical landing at Heathrow, brake temperatures were around . Landing Concorde required a minimum of runway length; the shortest runway on which Concorde ever landed carrying commercial passengers was at Cardiff Airport. Concorde G-AXDN (101) made its final landing at Duxford Aerodrome, which at the time had a runway length of just , on 20 August 1977. This was the last aircraft to land at Duxford before the runway was shortened later that year. Droop nose Concorde's drooping nose, developed by Marshall's of Cambridge, enabled the aircraft to switch between a streamlined configuration, which reduced drag and achieved optimal aerodynamic efficiency in flight, and one that did not obstruct the pilot's view during taxi, take-off, and landing. Due to the high angle of attack, the long pointed nose obstructed the view and necessitated the ability to droop. The droop nose was accompanied by a moving visor that retracted into the nose prior to being lowered. When the nose was raised to horizontal, the visor would rise in front of the cockpit windscreen for aerodynamic streamlining. A controller in the cockpit allowed the visor to be retracted and the nose to be lowered to 5° below the standard horizontal position for taxiing and take-off. Following take-off and after clearing the airport, the nose and visor were raised. Prior to landing, the visor was again retracted and the nose lowered to 12.5° below horizontal for maximal visibility. Upon landing, the nose was raised to the 5° position to avoid the possibility of damage from collision with ground vehicles, and then raised fully before engine shutdown to prevent internal condensation pooling in the radome and seeping down into the aircraft's pitot/ADC system probes. The US Federal Aviation Administration had objected to the restrictive visibility of the visor used on the first two prototype Concordes, which had been designed before a suitable high-temperature window glass had become available; the visor thus required alteration before the FAA would permit Concorde to serve US airports. This led to the redesigned visor used on the production aircraft and the four pre-production aircraft (101, 102, 201, and 202). The nose window and visor glass, needed to endure temperatures in excess of in supersonic flight, were developed by Triplex. Operational history Concorde began scheduled flights with British Airways (BA) and Air France (AF) on 21 January 1976. AF flew its last commercial flight on 30 May 2003, with BA retiring its Concorde fleet on 24 October 2003. Operators: Air France and British Airways. Braniff International Airways operated Concordes at subsonic speed between Dulles International Airport and Dallas Fort Worth International Airport, from January 1979 until May 1980, utilizing its own flight and cabin crew, under its own insurance and operator's license. Stickers containing a US registration were placed over the French and British registrations of the aircraft during each rotation, and a placard was temporarily placed behind the cockpit to signify the operator and operator's license in command.
Singapore Airlines had its livery placed on the left side of Concorde G-BOAD, and held a joint marketing agreement which saw Singapore insignias on the cabin fittings, as well as the airline's "Singapore Girl" stewardesses jointly sharing cabin duty with British Airways flight attendants. All flight crew, operations, and insurances remained solely under British Airways however, and at no point did Singapore Airlines operate Concorde services under its own operator's certification, nor wet-lease an aircraft. This arrangement initially only lasted for three flights, conducted between 9–13 December 1977; it later resumed on 24 January 1979, and operated until 1 November 1980. The Singapore livery was used on G-BOAD from 1977 to 1980. Accidents and incidents Air France Flight 4590 On 25 July 2000, Air France Flight 4590, registration F-BTSC, crashed in Gonesse, France, after departing from Charles de Gaulle Airport en route to John F. Kennedy International Airport in New York City, killing all 100 passengers and nine crew members on board as well as four people on the ground. It was the only fatal accident involving Concorde. This crash also damaged Concorde's reputation and caused both British Airways and Air France to temporarily ground their fleets. According to the official investigation conducted by the Bureau of Enquiry and Analysis for Civil Aviation Safety (BEA), the crash was caused by a metallic strip that had fallen from a Continental Airlines DC-10 that had taken off minutes earlier. This fragment punctured a tyre on Concorde's left main wheel bogie during take-off. The tyre exploded, and a piece of rubber hit the fuel tank, which caused a fuel leak and led to a fire. The crew shut down engine number 2 in response to a fire warning, and with engine number 1 surging and producing little power, the aircraft was unable to gain altitude or speed. The aircraft entered a rapid pitch-up then a sudden descent, rolling left and crashing tail-low into the Hôtelissimo Les Relais Bleus Hotel in Gonesse. Before the accident, Concorde had been arguably the safest operational passenger airliner in the world with zero passenger deaths, but there had been two prior non-fatal accidents and a rate of tyre damage 30 times higher than subsonic airliners from 1995 to 2000. Safety improvements made after the crash included more secure electrical controls, Kevlar lining on the fuel tanks and specially developed burst-resistant tyres. The first flight with the modifications departed from London Heathrow on 17 July 2001, piloted by BA Chief Concorde Pilot Mike Bannister. In a flight of 3 hours 20 minutes over the mid-Atlantic towards Iceland, Bannister attained Mach 2.02 and then returned to RAF Brize Norton. The test flight, intended to resemble the London–New York route, was declared a success and was watched on live TV, and by crowds on the ground at both locations. The first flight with passengers after the 2000 grounding landed shortly before the World Trade Center attacks in the United States. This was not a commercial flight: all the passengers were BA employees. Normal commercial operations resumed on 7 November 2001 by BA and AF (aircraft G-BOAE and F-BTSD), with service to New York JFK, where Mayor Rudy Giuliani greeted the passengers. Other accidents and incidents On 12 April 1989, Concorde G-BOAF, on a chartered flight from Christchurch, New Zealand, to Sydney, Australia, suffered a structural failure at supersonic speed. 
As the aircraft was climbing and accelerating through Mach 1.7, a "thud" was heard. The crew did not notice any handling problems, and they assumed the thud they heard was a minor engine surge. No further difficulty was encountered until descent through at Mach 1.3, when a vibration was felt throughout the aircraft, lasting two to three minutes. Most of the upper rudder had separated from the aircraft at this point. Aircraft handling was unaffected, and the aircraft made a safe landing at Sydney. The UK's Air Accidents Investigation Branch (AAIB) concluded that the skin of the rudder had been separating from the rudder structure over a period before the accident due to moisture seepage past the rivets in the rudder. Production staff had not followed proper procedures during an earlier modification of the rudder; the procedures were difficult to adhere to. The aircraft was repaired and returned to service. On 21 March 1992, G-BOAB while flying British Airways Flight 001 from London to New York, also suffered a structural failure at supersonic speed. While cruising at Mach 2, at approximately , the crew heard a "thump". No difficulties in handling were noticed, and no instruments gave any irregular indications. This crew also suspected there had been a minor engine surge. One hour later, during descent and while decelerating below Mach 1.4, a sudden "severe" vibration began throughout the aircraft. The vibration worsened when power was added to the No 2 engine. The crew shut down the No 2 engine and made a successful landing in New York, noting that increased rudder control was needed to keep the aircraft on its intended approach course. Again, the skin had separated from the structure of the rudder, which led to most of the upper rudder detaching in flight. The AAIB concluded that repair materials had leaked into the structure of the rudder during a recent repair, weakening the bond between the skin and the structure of the rudder, leading to it breaking up in flight. The large size of the repair had made it difficult to keep repair materials out of the structure, and prior to this accident, the severity of the effect of these repair materials on the structure and skin of the rudder was not appreciated. The 2010 trial involving Continental Airlines over the crash of Flight 4590 established that from 1976 until Flight 4590 there had been 57 tyre failures involving Concordes during takeoffs, including a near-crash at Dulles International Airport on 14 June 1979 involving Air France Flight 54 where a tyre blowout pierced the plane's fuel tank and damaged a left engine and electrical cables, with the loss of two of the craft's hydraulic systems. Aircraft on display Twenty Concorde aircraft were built: two prototypes, two pre-production aircraft, two development aircraft and 14 production aircraft for commercial service. With the exception of two of the production aircraft, all are preserved, mostly in museums. One aircraft was scrapped in 1994, and another was destroyed in the Air France Flight 4590 crash in 2000. Comparable aircraft Tu-144 Concorde was one of only two supersonic jetliner models to operate commercially; the other was the Soviet-built Tupolev Tu-144, which operated in the late 1970s. The Tu-144 was nicknamed "Concordski" by Western European journalists for its outward similarity to Concorde. Soviet espionage efforts allegedly stole Concorde blueprints to assist in the design of the Tu-144. 
As a result of a rushed development programme, the first Tu-144 prototype was substantially different from the preproduction machines, but both were cruder than Concorde. The Tu-144S had a significantly shorter range than Concorde. Jean Rech, Sud Aviation, attributed this to two things, a very heavy powerplant with an intake twice as long as that on Concorde, and low-bypass turbofan engines with too high a bypass ratio which needed afterburning for cruise. The aircraft had poor control at low speeds because of a simpler wing design. The Tu-144 required braking parachutes to land. The Tu-144 had two crashes, one at the 1973 Paris Air Show, and another during a pre-delivery test flight in May 1978. Passenger service commenced in November 1977, but after the 1978 crash the aircraft was taken out of passenger service after only 55 flights, which carried an average of 58 passengers. The Tu-144 had an inherently unsafe structural design as a consequence of an automated production method chosen to simplify and speed up manufacturing. The Tu-144 program was cancelled by the Soviet government on 1 July 1983. SST and others The main competing designs for the US government-funded supersonic transport (SST) were the swing-wing Boeing 2707 and the compound delta wing Lockheed L-2000. These were to have been larger, with seating for up to 300 people. The Boeing 2707 was selected for development. Concorde first flew in 1969, the year Boeing began building 2707 mockups after changing the design to a cropped delta wing; the cost of this and other changes helped to kill the project. The operation of US military aircraft such as the Mach 3+ North American XB-70 Valkyrie prototypes and Convair B-58 Hustler strategic nuclear bomber had shown that sonic booms were capable of reaching the ground, and the experience from the Oklahoma City sonic boom tests led to the same environmental concerns that hindered the commercial success of Concorde. The American government cancelled its SST project in 1971 having spent more than $1 billion without any aircraft being built. Impact Environmental Before Concorde's flight trials, developments in the civil aviation industry were largely accepted by governments and their respective electorates. Opposition to Concorde's noise, particularly on the east coast of the United States, forged a new political agenda on both sides of the Atlantic, with scientists and technology experts across a multitude of industries beginning to take the environmental and social impact more seriously. Although Concorde led directly to the introduction of a general noise abatement programme for aircraft flying out of John F. Kennedy Airport, many found that Concorde was quieter than expected, partly due to the pilots temporarily throttling back their engines to reduce noise during overflight of residential areas. Even before commercial flights started, it had been claimed that Concorde was quieter than many other aircraft. In 1971, BAC's technical director stated, "It is certain on present evidence and calculations that in the airport context, production Concordes will be no worse than aircraft now in service and will in fact be better than many of them." Concorde produced nitrogen oxides in its exhaust, which, despite complicated interactions with other ozone-depleting chemicals, are understood to result in degradation to the ozone layer at the stratospheric altitudes it cruised. 
It has been pointed out that other, lower-flying, airliners produce ozone during their flights in the troposphere, but vertical transit of gases between the layers is restricted. The small fleet meant overall ozone-layer degradation caused by Concorde was negligible. In 1995, David Fahey, of the National Oceanic and Atmospheric Administration in the United States, warned that a fleet of 500 supersonic aircraft with exhausts similar to Concorde might produce a 2 per cent drop in global ozone levels, much higher than previously thought. Each 1 per cent drop in ozone is estimated to increase the incidence of non-melanoma skin cancer worldwide by 2 per cent. Dr Fahey said if these particles are produced by highly oxidised sulphur in the fuel, as he believed, then removing sulphur in the fuel will reduce the ozone-destroying impact of supersonic transport. Concorde's technical leap forward boosted the public's understanding of conflicts between technology and the environment as well as awareness of the complex decision analysis processes that surround such conflicts. In France, the use of acoustic fencing alongside TGV tracks might not have been achieved without the 1970s controversy over aircraft noise. In the UK, the CPRE has issued tranquillity maps since 1990. Public perception Concorde was normally perceived as a privilege of the rich, but special circular or one-way (with return by other flight or ship) charter flights were arranged to bring a trip within the means of moderately well-off enthusiasts. As a symbol of national pride, an example from the BA fleet made occasional flypasts at selected Royal events, major air shows and other special occasions, sometimes in formation with the Red Arrows. On the final day of commercial service, public interest was so great that grandstands were erected at Heathrow Airport. Significant numbers of people attended the final landings; the event received widespread media coverage. The aircraft was usually referred to by the British as simply "Concorde". In France it was known as "le Concorde" due to "le", the definite article, used in French grammar to introduce the name of a ship or aircraft, and the capital being used to distinguish a proper name from a common noun of the same spelling. In French, the common noun concorde means "agreement, harmony, or peace". Concorde's pilots and British Airways in official publications often refer to Concorde both in the singular and plural as "she" or "her". In 2006, 37 years after its first test flight, Concorde was announced the winner of the Great British Design Quest organised by the BBC (through The Culture Show) and the Design Museum. A total of 212,000 votes were cast with Concorde beating other British design icons such as the Mini, mini skirt, Jaguar E-Type car, the Tube map, the World Wide Web, the K2 red telephone box and the Supermarine Spitfire. Special missions The heads of France and the United Kingdom flew in Concorde many times. Presidents Georges Pompidou, Valéry Giscard d'Estaing and François Mitterrand regularly used Concorde as French flagship aircraft on foreign visits. Elizabeth II and Prime Ministers Edward Heath, Jim Callaghan, Margaret Thatcher, John Major and Tony Blair took Concorde in some charter flights such as the Queen's trips to Barbados on her Silver Jubilee in 1977, in 1987 and in 2003, to the Middle East in 1984 and to the United States in 1991. Pope John Paul II flew on Concorde in May 1989. 
Concorde sometimes made special flights for demonstrations, air shows (such as the Farnborough, Paris-Le Bourget, Oshkosh AirVenture and MAKS air shows) as well as parades and celebrations (for example, of Zurich Airport's anniversary in 1998). The aircraft were also used for private charters (including by the President of Zaire Mobutu Sese Seko on multiple occasions), for advertising companies (including for the firm OKI), for Olympic torch relays (1992 Winter Olympics in Albertville) and for observing solar eclipses, including the solar eclipse of 30 June 1973 and again for the total solar eclipse on 11 August 1999. Records The fastest transatlantic airliner flight was from New York JFK to London Heathrow on 7 February 1996 by the British Airways G-BOAD in 2 hours, 52 minutes, 59 seconds from take-off to touchdown aided by a 175 mph (282 km/h) tailwind. On 13 February 1985, a Concorde charter flight flew from London Heathrow to Sydney in a time of 17 hours, 3 minutes and 45 seconds, including refuelling stops. Concorde set the FAI "Westbound Around the World" and "Eastbound Around the World" world air speed records. On 12–13 October 1992, in commemoration of the 500th anniversary of Columbus' first voyage to the New World, Concorde Spirit Tours (US) chartered Air France Concorde F-BTSD and circumnavigated the world in 32 hours 49 minutes and 3 seconds, from Lisbon, Portugal, including six refuelling stops at Santo Domingo, Acapulco, Honolulu, Guam, Bangkok, and Bahrain. The eastbound record was set by the same Air France Concorde (F-BTSD) under charter to Concorde Spirit Tours in the US on 15–16 August 1995. This promotional flight circumnavigated the world from New York/JFK International Airport in 31 hours 27 minutes 49 seconds, including six refuelling stops at Toulouse, Dubai, Bangkok, Andersen AFB in Guam, Honolulu, and Acapulco. On its way to the Museum of Flight in November 2003, G-BOAG set a New York City-to-Seattle speed record of 3 hours, 55 minutes, and 12 seconds. Due to the restrictions on supersonic overflights within the US the flight was granted permission by the Canadian authorities for the majority of the journey to be flown supersonically over sparsely-populated Canadian territory. Specifications Notable appearances in media
Cannon
A cannon is a large-caliber gun classified as a type of artillery, which usually launches a projectile using explosive chemical propellant. Gunpowder ("black powder") was the primary propellant before the invention of smokeless powder during the late 19th century. Cannons vary in gauge, effective range, mobility, rate of fire, angle of fire and firepower; different forms of cannon combine and balance these attributes in varying degrees, depending on their intended use on the battlefield. A cannon is a type of heavy artillery weapon. The word cannon is derived from several languages, in which the original definition can usually be translated as tube, cane, or reed. In the modern era, the term cannon has fallen into decline, replaced by guns or artillery, if not a more specific term such as howitzer or mortar, except for high-caliber automatic weapons firing bigger rounds than machine guns, called autocannons. The earliest known depiction of cannons appeared in Song dynasty China as early as the 12th century; however, solid archaeological and documentary evidence of cannons does not appear until the 13th century. In 1288, Yuan dynasty troops are recorded to have used hand cannons in combat, and the earliest extant cannon bearing a date of production comes from the same period. By the early 14th century, possible mentions of cannon had appeared in the Middle East, and one was depicted in Europe by 1326. Recorded usage of cannon began appearing almost immediately afterwards. They subsequently spread to India, their usage on the subcontinent being first attested to in 1366. By the end of the 14th century, cannons were widespread throughout Eurasia. Cannons were used primarily as anti-infantry weapons until around 1374, when large cannons were recorded to have breached walls for the first time in Europe. Cannons featured prominently as siege weapons, and ever larger pieces appeared. In 1464 a cannon known as the Great Turkish Bombard was created in the Ottoman Empire. Cannons as field artillery became more important after 1453, when cannon fire broke down the walls of Constantinople, the Eastern Roman (Byzantine) capital; the introduction of the limber greatly improved cannon maneuverability and mobility. European cannons reached their longer, lighter, more accurate, and more efficient "classic form" around 1480. This classic European cannon design stayed relatively consistent in form with minor changes until the 1750s. Etymology and terminology The word cannon is derived from the Old Italian word , meaning "large tube", which came from the Latin , in turn originating from the Greek (), "reed", and then generalised to mean any hollow tube-like object. The word has been used to refer to a gun since 1326 in Italy and 1418 in England. Both of the plural forms cannons and cannon are correct. History East Asia The cannon may have appeared as early as the 12th century in China, and was probably a parallel development or evolution of the fire-lance, a short-ranged anti-personnel weapon combining a gunpowder-filled tube and a polearm. Projectiles such as iron scraps or porcelain shards, mixed together with the gunpowder ("co-viative"), were placed in fire lance barrels at some point, and eventually, the paper and bamboo materials of fire lance barrels were replaced by metal. The earliest known depiction of a cannon is a sculpture from the Dazu Rock Carvings in Sichuan dated to 1128; however, the earliest archaeological samples and textual accounts do not appear until the 13th century.
The primary extant specimens of cannon from the 13th century are the Wuwei Bronze Cannon dated to 1227, the Heilongjiang hand cannon dated to 1288, and the Xanadu Gun dated to 1298. However, only the Xanadu gun contains an inscription bearing a date of production, so it is considered the earliest confirmed extant cannon. The Xanadu Gun is in length and weighs . The other cannons are dated using contextual evidence. The Heilongjiang hand cannon is also often considered by some to be the oldest firearm since it was unearthed near the area where the History of Yuan reports a battle took place involving hand cannons. According to the History of Yuan, in 1288, a Jurchen commander by the name of Li Ting led troops armed with hand cannons into battle against the rebel prince Nayan. Chen Bingying argues there were no guns before 1259, while Dang Shoushan believes the Wuwei gun and other Western Xia era samples point to the appearance of guns by 1220, and Stephen Haw goes even further by stating that guns were developed as early as 1200. Sinologist Joseph Needham and renaissance siege expert Thomas Arnold provide a more conservative estimate of around 1280 for the appearance of the "true" cannon. Whether or not any of these are correct, it seems likely that the gun was born sometime during the 13th century.
Computer mouse
A computer mouse (plural mice, also mouses) is a hand-held pointing device that detects two-dimensional motion relative to a surface. This motion is typically translated into the motion of the pointer (called a cursor) on a display, which allows a smooth control of the graphical user interface of a computer. The first public demonstration of a mouse controlling a computer system was done by Doug Engelbart in 1968 as part of the Mother of All Demos. Mice originally used two separate wheels to directly track movement across a surface: one in the x-dimension and one in the Y. Later, the standard design shifted to use a ball rolling on a surface to detect motion, in turn connected to internal rollers. Most modern mice use optical movement detection with no moving parts. Though originally all mice were connected to a computer by a cable, many modern mice are cordless, relying on short-range radio communication with the connected system. In addition to moving a cursor, computer mice have one or more buttons to allow operations such as the selection of a menu item on a display. Mice often also feature other elements, such as touch surfaces and scroll wheels, which enable additional control and dimensional input. Etymology The earliest known written use of the term mouse or mice in reference to a computer pointing device is in Bill English's July 1965 publication, "Computer-Aided Display Control". This likely originated from its resemblance to the shape and size of a mouse, with the cord resembling its tail. The popularity of wireless mice without cords makes the resemblance less obvious. According to Roger Bates, a hardware designer under English, the term also came about because the cursor on the screen was, for an unknown reason, referred to as "CAT" and was seen by the team as if it would be chasing the new desktop device. The plural for the small rodent is always "mice" in modern usage. The plural for a computer mouse is either "mice" or "mouses" according to most dictionaries, with "mice" being more common. The first recorded plural usage is "mice"; the online Oxford Dictionaries cites a 1984 use, and earlier uses include J. C. R. Licklider's "The Computer as a Communication Device" of 1968. History Stationary trackballs The trackball, a related pointing device, was invented in 1946 by Ralph Benjamin as part of a post-World War II-era fire-control radar plotting system called the Comprehensive Display System (CDS). Benjamin was then working for the British Royal Navy Scientific Service. Benjamin's project used analog computers to calculate the future position of target aircraft based on several initial input points provided by a user with a joystick. Benjamin felt that a more elegant input device was needed and invented what they called a "roller ball" for this purpose. The device was patented in 1947, but only a prototype using a metal ball rolling on two rubber-coated wheels was ever built, and the device was kept as a military secret. Another early trackball was built by Kenyon Taylor, a British electrical engineer working in collaboration with Tom Cranston and Fred Longstaff. Taylor was part of the original Ferranti Canada, working on the Royal Canadian Navy's DATAR (Digital Automated Tracking and Resolving) system in 1952. DATAR was similar in concept to Benjamin's display. The trackball used four disks to pick up motion, two each for the X and Y directions. Several rollers provided mechanical support. 
When the ball was rolled, the pickup discs spun and contacts on their outer rim made periodic contact with wires, producing pulses of output with each movement of the ball. By counting the pulses, the physical movement of the ball could be determined. A digital computer calculated the tracks and sent the resulting data to other ships in a task force using pulse-code modulation radio signals. This trackball used a standard Canadian five-pin bowling ball. It was not patented, since it was a secret military project. Engelbart's first "mouse" Douglas Engelbart of the Stanford Research Institute (now SRI International) has been credited in published books by Thierry Bardini, Paul Ceruzzi, Howard Rheingold, and several others as the inventor of the computer mouse. Engelbart was also recognized as such in various obituary titles after his death in July 2013. By 1963, Engelbart had already established a research lab at SRI, the Augmentation Research Center (ARC), to pursue his objective of developing both hardware and software computer technology to "augment" human intelligence. That November, while attending a conference on computer graphics in Reno, Nevada, Engelbart began to ponder how to adapt the underlying principles of the planimeter to inputting X- and Y-coordinate data. On 14 November 1963, he first recorded his thoughts in his personal notebook about something he initially called a "bug", which in a "3-point" form could have a "drop point and 2 orthogonal wheels". He wrote that the "bug" would be "easier" and "more natural" to use, and unlike a stylus, it would stay still when let go, which meant it would be "much better for coordination with the keyboard". In 1964, Bill English joined ARC, where he helped Engelbart build the first mouse prototype. They christened the device the mouse, as early models had a cord attached to the rear of the device which looked like a tail and, in turn, made it resemble the common mouse. According to Roger Bates, a hardware designer under English, another reason for choosing this name was because the cursor on the screen was also referred to as "CAT" at this time. As noted above, this "mouse" was first mentioned in print in a July 1965 report, on which English was the lead author. On 9 December 1968, Engelbart publicly demonstrated the mouse at what would come to be known as The Mother of All Demos. Engelbart never received any royalties for it, as his employer SRI held the patent, which expired before the mouse became widely used in personal computers. In any event, the invention of the mouse was just a small part of Engelbart's much larger project of augmenting human intellect. Several other experimental pointing devices developed for Engelbart's oN-Line System (NLS) exploited different body movements – for example, head-mounted devices attached to the chin or nose – but ultimately the mouse won out because of its speed and convenience. The first mouse was a bulky device that used two potentiometers perpendicular to each other and connected to wheels: the rotation of each wheel translated into motion along one axis. At the time of the "Mother of All Demos", Engelbart's group had been using their second-generation, 3-button mouse for about a year.
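The wheel-and-potentiometer arrangement can be sketched in a few lines: each wheel's potentiometer reading changes as the mouse moves, and the change in each reading maps to displacement along one axis. This is only an illustrative sketch; the names and the scale factor are assumptions, not details of Engelbart's hardware.

```python
# Minimal sketch of the principle behind the wheel-based mouse: two wheels mounted at
# right angles each drive a potentiometer, and the change in each reading maps to
# motion along one screen axis. The scale factor is an illustrative assumption.
def wheel_deltas_to_cursor(prev, curr, counts_per_pixel=4):
    """prev/curr are (x_pot, y_pot) readings; returns (dx, dy) in pixels."""
    dx = (curr[0] - prev[0]) // counts_per_pixel   # x wheel -> horizontal motion
    dy = (curr[1] - prev[1]) // counts_per_pixel   # y wheel -> vertical motion
    return dx, dy

print(wheel_deltas_to_cursor((100, 200), (140, 188)))   # -> (10, -3)
```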
First rolling-ball mouse On 2 October 1968, three years after Engelbart's prototype but more than two months before his public demo, a mouse device named (German for "Trackball control") was shown in a sales brochure by the German company AEG-Telefunken as an optional input device for the SIG 100 vector graphics terminal, part of the system around their process computer TR 86 and the main frame. Based on an even earlier trackball device, the mouse device had been developed by the company in 1966 in what had been a parallel and independent discovery. As the name suggests and unlike Engelbart's mouse, the Telefunken model already had a ball (diameter 40 mm, weight 40 g) and two mechanical 4-bit rotational position transducers with Gray code-like states, allowing easy movement in any direction. The bits remained stable for at least two successive states to relax debouncing requirements. This arrangement was chosen so that the data could also be transmitted to the TR 86 front-end process computer and over longer distance telex lines with 50 baud. Weighing , the device with a total height of about came in a diameter hemispherical injection-molded thermoplastic casing featuring one central push button. As noted above, the device was based on an earlier trackball-like device (also named ) that was embedded into radar flight control desks. This trackball had been originally developed by a team led by at Telefunken for the German (Federal Air Traffic Control). It was part of the corresponding workstation system SAP 300 and the terminal SIG 3001, which had been designed and developed since 1963. Development for the TR 440 main frame began in 1965. This led to the development of the TR 86 process computer system with its SIG 100-86 terminal. Inspired by a discussion with a university customer, Mallebrein came up with the idea of "reversing" the existing trackball into a moveable mouse-like device in 1966, so that customers did not have to be bothered with mounting holes for the earlier trackball device. The device was finished in early 1968, and together with light pens and trackballs, it was commercially offered as an optional input device for their system starting later that year. Not all customers opted to buy the device, which added costs of per piece to the already up to 20-million DM deal for the main frame, of which only a total of 46 systems were sold or leased. They were installed at more than 20 German universities including RWTH Aachen, Technische Universität Berlin, University of Stuttgart and Konstanz. Several mice installed at the Leibniz Supercomputing Centre in Munich in 1972 are well preserved in a museum, two others survived in a museum at Stuttgart University, two in Hamburg, the one from Aachen at the Computer History Museum in the US, and yet another sample was recently donated to the Heinz Nixdorf MuseumsForum (HNF) in Paderborn. Anecdotal reports claim that Telefunken's attempt to patent the device was rejected by the German Patent Office due to lack of inventiveness. For the air traffic control system, the Mallebrein team had already developed a precursor to touch screens in form of an ultrasonic-curtain-based pointing device in front of the display. In 1970, they developed a device named "Touchinput-" ("touch input device") based on a conductively coated glass screen. First mice on personal computers and workstations The Xerox Alto was one of the first computers designed for individual use in 1973 and is regarded as the first modern computer to use a mouse. 
Alan Kay designed the 16-by-16 mouse cursor icon with its left edge vertical and right edge 45-degrees so it displays well on the bitmap.Inspired by PARC's Alto, the Lilith, a computer which had been developed by a team around Niklaus Wirth at ETH Zürich between 1978 and 1980, provided a mouse as well. The third marketed version of an integrated mouse shipped as a part of a computer and intended for personal computer navigation came with the Xerox 8010 Star in 1981. By 1982, the Xerox 8010 was probably the best-known computer with a mouse. The Sun-1 also came with a mouse, and the forthcoming Apple Lisa was rumored to use one, but the peripheral remained obscure; Jack Hawley of The Mouse House reported that one buyer for a large organization believed at first that his company sold lab mice. Hawley, who manufactured mice for Xerox, stated that "Practically, I have the market all to myself right now"; a Hawley mouse cost $415. In 1982, Logitech introduced the P4 Mouse at the Comdex trade show in Las Vegas, its first hardware mouse. That same year Microsoft made the decision to make the MS-DOS program Microsoft Word mouse-compatible, and developed the first PC-compatible mouse. The Microsoft Mouse shipped in 1983, thus beginning the Microsoft Hardware division of the company. However, the mouse remained relatively obscure until the appearance of the Macintosh 128K (which included an updated version of the single-button Lisa Mouse) in 1984, and of the Amiga 1000 and the Atari ST in 1985. Operation A mouse typically controls the motion of a pointer in two dimensions in a graphical user interface (GUI). The mouse turns movements of the hand backward and forward, left and right into equivalent electronic signals that in turn are used to move the pointer. The relative movements of the mouse on the surface are applied to the position of the pointer on the screen, which signals the point where actions of the user take place, so hand movements are replicated by the pointer. Clicking or pointing (stopping movement while the cursor is within the bounds of an area) can select files, programs or actions from a list of names, or (in graphical interfaces) through small images called "icons" and other elements. For example, a text file might be represented by a picture of a paper notebook and clicking while the cursor points at this icon might cause a text editing program to open the file in a window. Different ways of operating the mouse cause specific things to happen in the GUI: Point: stop the motion of the pointer while it is inside the boundaries of what the user wants to interact with. This act of pointing is what the "pointer" and "pointing device" are named after. In web design lingo, pointing is referred to as "hovering". This usage spread to web programming and Android programming, and is now found in many contexts. Click: pressing and releasing a button. (left) Single-click: clicking the main button. (left) Double-click: clicking the button two times in quick succession counts as a different gesture than two separate single clicks. (left) Triple-click: clicking the button three times in quick succession counts as a different gesture than three separate single clicks. Triple clicks are far less common in traditional navigation. Right-click: clicking the secondary button. In modern applications, this frequently opens a context menu. Middle-click: clicking the tertiary button. In most cases, this is also the scroll wheel. Clicking the fourth button. Clicking the fifth button. 
The USB standard defines up to 65535 distinct buttons for mice and other such devices, although in practice buttons above 3 are rarely implemented. Drag: pressing and holding a button, and moving the mouse before releasing the button. This is frequently used to move or copy files or other objects via drag and drop; other uses include selecting text and drawing in graphics applications. Mouse button chording or chord clicking: Clicking with more than one button simultaneously. Clicking while simultaneously typing a letter on the keyboard. Clicking and rolling the mouse wheel simultaneously. Clicking while holding down a modifier key. Moving the pointer a long distance: When a practical limit of mouse movement is reached, one lifts up the mouse, brings it to the opposite edge of the working area while it is held above the surface, and then lowers it back onto the working surface. This is often not necessary, because acceleration software detects fast movement, and moves the pointer significantly faster in proportion than for slow mouse motion. Multi-touch: this method is similar to a multi-touch touchpad on a laptop with support for tap input for multiple fingers, the most famous example being the Apple Magic Mouse. Gestures Gestural interfaces have become an integral part of modern computing, allowing users to interact with their devices in a more intuitive and natural way. In addition to traditional pointing-and-clicking actions, users can now employ gestural inputs to issue commands or perform specific actions. These stylized motions of the mouse cursor, known as "gestures", have the potential to enhance user experience and streamline workflow. To illustrate the concept of gestural interfaces, let's consider a drawing program as an example. In this scenario, a user can employ a gesture to delete a shape on the canvas. By rapidly moving the mouse cursor in an "x" motion over the shape, the user can trigger the command to delete the selected shape. This gesture-based interaction enables users to perform actions quickly and efficiently without relying solely on traditional input methods. While gestural interfaces offer a more immersive and interactive user experience, they also present challenges. One of the primary difficulties lies in the requirement of finer motor control from users. Gestures demand precise movements, which can be more challenging for individuals with limited dexterity or those who are new to this mode of interaction. However, despite these challenges, gestural interfaces have gained popularity due to their ability to simplify complex tasks and improve efficiency. Several gestural conventions have become widely adopted, making them more accessible to users. One such convention is the drag and drop gesture, which has become pervasive across various applications and platforms. The drag and drop gesture is a fundamental gestural convention that enables users to manipulate objects on the screen seamlessly. It involves a series of actions performed by the user: Pressing the mouse button while the cursor hovers over an interface object. Moving the cursor to a different location while holding the button down. Releasing the mouse button to complete the action. This gesture allows users to transfer or rearrange objects effortlessly. For instance, a user can drag and drop a picture representing a file onto an image of a trash can, indicating the intention to delete the file. 
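The press-move-release sequence that makes up drag and drop can be sketched as a small event handler. The sketch below is illustrative only; the event names, the object methods and the "trash" target are assumptions rather than any particular toolkit's API.

```python
# Illustrative sketch of the drag-and-drop gesture as a tiny state machine:
# press over an object, move while holding the button, release over a target.
class DragAndDrop:
    def __init__(self):
        self.dragged = None                        # object currently being dragged, if any

    def on_mouse_down(self, obj_under_cursor):
        self.dragged = obj_under_cursor            # 1. press while the cursor is over an object

    def on_mouse_move(self, x, y):
        if self.dragged is not None:
            self.dragged.move_to(x, y)             # 2. the object follows the cursor (assumed method)

    def on_mouse_up(self, target_under_cursor):
        if self.dragged is not None and target_under_cursor == "trash":
            self.dragged.delete()                  # 3. releasing over the trash deletes it (assumed method)
        self.dragged = None
```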
This intuitive and visual approach to interaction has become synonymous with organizing digital content and simplifying file management tasks. In addition to the drag and drop gesture, several other semantic gestures have emerged as standard conventions within the gestural interface paradigm. These gestures serve specific purposes and contribute to a more intuitive user experience. Some of the notable semantic gestures include: Crossing-based goal: This gesture involves crossing a specific boundary or threshold on the screen to trigger an action or complete a task. For example, swiping across the screen to unlock a device or confirm a selection. Menu traversal: Menu traversal gestures facilitate navigation through hierarchical menus or options. Users can perform gestures such as swiping or scrolling to explore different menu levels or activate specific commands. Pointing: Pointing gestures involve positioning the mouse cursor over an object or element to interact with it. This fundamental gesture enables users to select, click, or access contextual menus. Mouseover (pointing or hovering): Mouseover gestures occur when the cursor is positioned over an object without clicking. This action often triggers a visual change or displays additional information about the object, providing users with real-time feedback. These standard semantic gestures, along with the drag and drop convention, form the building blocks of gestural interfaces, allowing users to interact with digital content using intuitive and natural movements. Specific uses At the end of 20th century, digitizer mice (puck) with magnifying glass was used with AutoCAD for the digitizations of blueprints. Other uses of the mouse's input occur commonly in special application domains. In interactive three-dimensional graphics, the mouse's motion often translates directly into changes in the virtual objects' or camera's orientation. For example, in the first-person shooter genre of games (see below), players usually employ the mouse to control the direction in which the virtual player's "head" faces: moving the mouse up will cause the player to look up, revealing the view above the player's head. A related function makes an image of an object rotate so that all sides can be examined. 3D design and animation software often modally chord many different combinations to allow objects and cameras to be rotated and moved through space with the few axes of movement mice can detect. When mice have more than one button, the software may assign different functions to each button. Often, the primary (leftmost in a right-handed configuration) button on the mouse will select items, and the secondary (rightmost in a right-handed) button will bring up a menu of alternative actions applicable to that item. For example, on platforms with more than one button, the Mozilla web browser will follow a link in response to a primary button click, will bring up a contextual menu of alternative actions for that link in response to a secondary-button click, and will often open the link in a new tab or window in response to a click with the tertiary (middle) mouse button. Types Mechanical mice The German company Telefunken published on their early ball mouse on 2 October 1968. Telefunken's mouse was sold as optional equipment for their computer systems. Bill English, builder of Engelbart's original mouse, created a ball mouse in 1972 while working for Xerox PARC. The ball mouse replaced the external wheels with a single ball that could rotate in any direction. 
It came as part of the hardware package of the Xerox Alto computer. Perpendicular chopper wheels housed inside the mouse's body chopped beams of light on the way to light sensors, thus detecting in their turn the motion of the ball. This variant of the mouse resembled an inverted trackball and became the predominant form used with personal computers throughout the 1980s and 1990s. The Xerox PARC group also settled on the modern technique of using both hands to type on a full-size keyboard and grabbing the mouse when required. The ball mouse has two freely rotating rollers. These are located 90 degrees apart. One roller detects the forward-backward motion of the mouse and the other the left-right motion. Opposite the two rollers is a third one (white, in the photo, at 45 degrees) that is spring-loaded to push the ball against the other two rollers. Each roller is on the same shaft as an encoder wheel that has slotted edges; the slots interrupt infrared light beams to generate electrical pulses that represent wheel movement. Each wheel's disc has a pair of light beams, located so that a given beam becomes interrupted or again starts to pass light freely when the other beam of the pair is about halfway between changes. Simple logic circuits interpret the relative timing to indicate which direction the wheel is rotating. This incremental rotary encoder scheme is sometimes called quadrature encoding of the wheel rotation, as the two optical sensors produce signals that are in approximately quadrature phase. The mouse sends these signals to the computer system via the mouse cable, directly as logic signals in very old mice such as the Xerox mice, and via a data-formatting IC in modern mice. The driver software in the system converts the signals into motion of the mouse cursor along X and Y axes on the computer screen. The ball is mostly steel, with a precision spherical rubber surface. The weight of the ball, given an appropriate working surface under the mouse, provides a reliable grip so the mouse's movement is transmitted accurately. Ball mice and wheel mice were manufactured for Xerox by Jack Hawley, doing business as The Mouse House in Berkeley, California, starting in 1975. Based on another invention by Jack Hawley, proprietor of the Mouse House, Honeywell produced another type of mechanical mouse. Instead of a ball, it had two wheels rotating at off axes. Key Tronic later produced a similar product. Modern computer mice took form at the École Polytechnique Fédérale de Lausanne (EPFL) under the inspiration of Professor Jean-Daniel Nicoud and at the hands of engineer and watchmaker André Guignard. This new design incorporated a single hard rubber mouseball and three buttons, and remained a common design until the mainstream adoption of the scroll-wheel mouse during the 1990s. In 1985, René Sommer added a microprocessor to Nicoud's and Guignard's design. Through this innovation, Sommer is credited with inventing a significant component of the mouse, which made it more "intelligent"; though optical mice from Mouse Systems had incorporated microprocessors by 1984. Another type of mechanical mouse, the "analog mouse" (now generally regarded as obsolete), uses potentiometers rather than encoder wheels, and is typically designed to be plug compatible with an analog joystick. The "Color Mouse", originally marketed by RadioShack for their Color Computer (but also usable on MS-DOS machines equipped with analog joystick ports, provided the software accepted joystick input) was the best-known example. 
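The quadrature encoding described above can be illustrated with a short decoding sketch: the two light-beam signals form a two-bit sequence in which only one bit changes at a time, and the direction of each transition tells the decoder which way the wheel turned. The sketch is illustrative and not taken from any actual mouse firmware.

```python
# Sketch of quadrature decoding: the two sensors produce square waves 90 degrees out of
# phase, so the order of state transitions gives the direction of rotation.
STEP = {                     # (previous AB state, new AB state) -> count change
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(states):
    """states: sequence of 2-bit AB sensor samples; returns net wheel movement in counts."""
    position = 0
    for prev, curr in zip(states, states[1:]):
        position += STEP.get((prev, curr), 0)   # unchanged or invalid transitions add nothing
    return position

# One full forward cycle (00 -> 01 -> 11 -> 10 -> 00) gives +4 counts:
print(decode([0b00, 0b01, 0b11, 0b10, 0b00]))   # -> 4
```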
Optical and laser mice Early optical mice relied entirely on one or more light-emitting diodes (LEDs) and an imaging array of photodiodes to detect movement relative to the underlying surface, eschewing the internal moving parts a mechanical mouse uses in addition to its optics. A laser mouse is an optical mouse that uses coherent (laser) light. The earliest optical mice detected movement on pre-printed mousepad surfaces, whereas the modern LED optical mouse works on most opaque diffuse surfaces; it is usually unable to detect movement on specular surfaces like polished stone. Laser diodes provide good resolution and precision, improving performance on opaque specular surfaces. Later, more surface-independent optical mice use an optoelectronic sensor (essentially, a tiny low-resolution video camera) to take successive images of the surface on which the mouse operates. Battery powered, wireless optical mice flash the LED intermittently to save power, and only glow steadily when movement is detected. Inertial and gyroscopic mice Often called "air mice" since they do not require a surface to operate, inertial mice use a tuning fork or other accelerometer (US Patent 4787051) to detect rotary movement for every axis supported. The most common models (manufactured by Logitech and Gyration) work using 2 degrees of rotational freedom and are insensitive to spatial translation. The user requires only small wrist rotations to move the cursor, reducing user fatigue or "gorilla arm". Usually cordless, they often have a switch to deactivate the movement circuitry between use, allowing the user freedom of movement without affecting the cursor position. A patent for an inertial mouse claims that such mice consume less power than optically based mice, and offer increased sensitivity, reduced weight and increased ease-of-use. In combination with a wireless keyboard an inertial mouse can offer alternative ergonomic arrangements which do not require a flat work surface, potentially alleviating some types of repetitive motion injuries related to workstation posture. 3D mice A 3D mouse is a computer input device for viewport interaction with at least three degrees of freedom (DoF), e.g. in 3D computer graphics software for manipulating virtual objects, navigating in the viewport, defining camera paths, posing, and desktop motion capture. 3D mice can also be used as spatial controllers for video game interaction, e.g. SpaceOrb 360. To perform such different tasks the used transfer function and the device stiffness are essential for efficient interaction. Transfer function The virtual motion is connected to the 3D mouse control handle via a transfer function. Position control means that the virtual position and orientation is proportional to the mouse handle's deflection whereas velocity control means that translation and rotation velocity of the controlled object is proportional to the handle deflection. A further essential property of a transfer function is its interaction metaphor: Object-in-hand metaphor: An exterocentrical metaphor whereby the scene moves in correspondence with the input device. If the handle of the input device is twisted clockwise the scene rotates clockwise. If the handle is moved left the scene shifts left, and so on. Camera-in-hand metaphor: An egocentrical metaphor whereby the user's view is controlled by direct movement of a virtual camera. If the handle is twisted clockwise the scene rotates counter-clockwise. If the handle is moved left the scene shifts right, and so on. 
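The difference between the two transfer functions, and between the two metaphors, can be sketched for a single translation axis. The gains, time step and sign convention below are illustrative assumptions, not values from any particular device.

```python
# Illustrative sketch of position control versus velocity control for a 3D-mouse handle,
# reduced to one translation axis, plus the two interaction metaphors described above.
def position_control(deflection, gain=0.01):
    """Virtual position is proportional to the handle deflection."""
    return gain * deflection

def velocity_control(deflection, current_pos, gain=0.05, dt=0.02):
    """Handle deflection sets a velocity; position is integrated over the time step."""
    return current_pos + gain * deflection * dt

def apply_metaphor(scene_motion, metaphor="object-in-hand"):
    """Object-in-hand moves the scene with the handle; camera-in-hand moves the view,
    so the scene appears to move the opposite way."""
    return scene_motion if metaphor == "object-in-hand" else -scene_motion
```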
Ware and Osborne performed an experiment investigating these metaphors, which showed that there is no single best metaphor. For manipulation tasks the object-in-hand metaphor was superior, whereas for navigation tasks the camera-in-hand metaphor was superior. Device stiffness Zhai used the following three categories for device stiffness: Isotonic Input: An input device with zero stiffness, that is, there is no self-centering effect. Elastic Input: A device with some stiffness, that is, the forces on the handle are proportional to the deflections. Isometric Input: An elastic input device with infinite stiffness, that is, the device handle does not allow any deflection but records force and torque. Isotonic 3D mice The Logitech 3D Mouse (1990) was the first ultrasonic mouse and is an example of an isotonic 3D mouse with six degrees of freedom (6DoF). Isotonic devices have also been developed with fewer than 6DoF, e.g. the Inspector at the Technical University of Denmark (5DoF input). Other examples of isotonic 3D mice are motion controllers, i.e. a type of game controller that typically uses accelerometers to track motion. Motion tracking systems are also used for motion capture, e.g. in the film industry, although these tracking systems are not 3D mice in a strict sense, because motion capture only means recording 3D motion, not 3D interaction. Isometric 3D mice Early 3D mice for velocity control were almost ideally isometric, e.g. the SpaceBall 1003, 2003 and 3003, and a device developed at the Deutsches Zentrum für Luft- und Raumfahrt (DLR), cf. US patent US4589810A. Elastic 3D mice At DLR an elastic 6DoF sensor was developed that was used in Logitech's SpaceMouse and in the products of 3DConnexion. The SpaceBall 4000 FLX has a maximum deflection of approximately at a maximum force of approximately 10N, that is, a stiffness of approximately . The SpaceMouse has a maximum deflection of at a maximum force of , that is, a stiffness of approximately . Taking this development further, the softly elastic Sundinlabs SpaceCat was developed. SpaceCat has a maximum translational deflection of approximately and a maximum rotational deflection of approximately 30° at a maximum force of less than 2N, that is, a stiffness of approximately . With SpaceCat, Sundin and Fjeld reviewed five comparative experiments performed with different device stiffnesses and transfer functions, and performed a further study comparing 6DoF softly elastic position control with 6DoF stiffly elastic velocity control in a positioning task. They concluded that for positioning tasks, position control is to be preferred over velocity control. They further conjectured two preferred types of 3D mouse usage: Positioning, manipulation, and docking using isotonic or softly elastic position control and an object-in-hand metaphor. Navigation using softly or stiffly elastic rate control and a camera-in-hand metaphor. 3DConnexion's 3D mice have been commercially successful over decades. They are used in combination with the conventional mouse for CAD. The Space Mouse is used to orient the target object or change the viewpoint with the non-dominant hand, whereas the dominant hand operates the computer mouse for conventional CAD GUI operation. This is a kind of space-multiplexed input in which the 6DoF input device acts as a graspable user interface that is always connected to the viewport.
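The three stiffness categories above amount to classifying the handle by the ratio of restoring force to deflection. A small sketch, with purely illustrative numbers not taken from any cited device:

```python
# Rough sketch of classifying a 3D-mouse handle by its stiffness k = F / x
# (restoring force per deflection), following the three categories above.

def classify(max_force_newton: float, max_deflection_m: float) -> str:
    if max_deflection_m == 0:
        return "isometric (no deflection; force and torque are sensed directly)"
    if max_force_newton == 0:
        return "isotonic (free motion, no self-centering force)"
    k = max_force_newton / max_deflection_m          # stiffness in N/m
    return f"elastic, stiffness about {k:.0f} N/m"

print(classify(2.0, 0.015))    # a softly elastic handle, roughly 133 N/m
print(classify(10.0, 0.0))     # an isometric handle
print(classify(0.0, 0.05))     # an isotonic (free-moving) device
```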
Force feedback With force feedback, the device stiffness can be adapted dynamically to the task currently being performed by the user, e.g. performing positioning tasks with less stiffness than navigation tasks. Tactile mice In 2000, Logitech introduced a "tactile mouse" known as the "iFeel Mouse", developed by Immersion Corporation, which contained a small actuator enabling the mouse to generate simulated physical sensations. Such a mouse can augment user interfaces with haptic feedback, such as giving feedback when crossing a window boundary. Surfing the internet with a touch-enabled mouse was first developed in 1996 and first implemented commercially in the Wingman Force Feedback Mouse. It requires the user to be able to feel depth or hardness; this ability was realized with the first electrorheological tactile mice but never marketed. Pucks Tablet digitizers are sometimes used with accessories called pucks, devices which rely on absolute positioning but can be configured for sufficiently mouse-like relative tracking that they are sometimes marketed as mice. Ergonomic mice As the name suggests, this type of mouse is intended to provide optimum comfort and avoid injuries such as carpal tunnel syndrome, arthritis, and other repetitive strain injuries. It is designed to fit natural hand position and movements, to reduce discomfort. When holding a typical mouse, the ulna and radius bones of the forearm are crossed. Some designs attempt to place the palm more vertically, so the bones take a more natural, parallel position. Increasing mouse height and angling the mouse topcase can improve wrist posture without negatively affecting performance. Some designs limit wrist movement and encourage arm movement instead, which may be less precise but better from a health point of view. A mouse may be angled from the thumb downward to the opposite side – this is known to reduce wrist pronation. However, such optimizations make the mouse specific to the right or left hand, making it more difficult to switch to the other hand when one is tired. Time has criticized manufacturers for offering few or no left-handed ergonomic mice: "Oftentimes I felt like I was dealing with someone who'd never actually met a left-handed person before." Another solution is a pointing bar device. The so-called roller bar mouse is positioned snugly in front of the keyboard, thus allowing bi-manual accessibility. Gaming mice These mice are specifically designed for use in computer games. They typically employ a wider array of controls and buttons and have designs that differ radically from traditional mice. They may also have decorative monochrome or programmable RGB LED lighting. The additional buttons can often be used for changing the sensitivity of the mouse, or they can be assigned (programmed) to macros (i.e., for opening a program or for use instead of a key combination). It is also common for gaming mice, especially those designed for real-time strategy games such as StarCraft or multiplayer online battle arena games such as League of Legends, to have a relatively high sensitivity, measured in dots per inch (DPI), which can be as high as 25,600. DPI and CPI refer to the same measure of the mouse's sensitivity; DPI is a misnomer popularized in the gaming world, and many manufacturers use it when they mean CPI, counts per inch. Some advanced mice from gaming manufacturers also allow users to adjust the weight of the mouse by adding or removing weights to allow for easier control. Ergonomic quality is also an important factor in gaming mice, as extended gameplay sessions may make further use of the mouse uncomfortable.
Some mice have been designed to have adjustable features such as removable and/or elongated palm rests, horizontally adjustable thumb rests and pinky rests. Some mice may include several different rests with their products to ensure comfort for a wider range of target consumers. Gaming mice are held by gamers in three styles of grip: Palm Grip: the hand rests on the mouse, with extended fingers. Claw Grip: palm rests on the mouse, bent fingers. Finger-Tip Grip: bent fingers, palm does not touch the mouse. Connectivity and communication protocols To transmit their input, typical cabled mice use a thin electrical cord terminating in a standard connector, such as RS-232C, PS/2, ADB, or USB. Cordless mice instead transmit data via infrared radiation (see IrDA) or radio (including Bluetooth), although many such cordless interfaces are themselves connected through the aforementioned wired serial buses. While the electrical interface and the format of the data transmitted by commonly available mice is currently standardized on USB, in the past it varied between different manufacturers. A bus mouse used a dedicated interface card for connection to an IBM PC or compatible computer. Mouse use in DOS applications became more common after the introduction of the Microsoft Mouse, largely because Microsoft provided an open standard for communication between applications and mouse driver software. Thus, any application written to use the Microsoft standard could use a mouse with a driver that implements the same API, even if the mouse hardware itself was incompatible with Microsoft's. This driver provides the state of the buttons and the distance the mouse has moved in units that its documentation calls "mickeys". Early mice In the 1970s, the Xerox Alto mouse, and in the 1980s the Xerox optical mouse, used a quadrature-encoded X and Y interface. This two-bit encoding per dimension had the property that only one bit of the two would change at a time, like a Gray code or Johnson counter, so that the transitions would not be misinterpreted when asynchronously sampled. The earliest mass-market mice, such as the original Macintosh, Amiga, and Atari ST mice used a D-subminiature 9-pin connector to send the quadrature-encoded X and Y axis signals directly, plus one pin per mouse button. The mouse was a simple optomechanical device, and the decoding circuitry was all in the main computer. The DE-9 connectors were designed to be electrically compatible with the joysticks popular on numerous 8-bit systems, such as the Commodore 64 and the Atari 2600. Although the ports could be used for both purposes, the signals must be interpreted differently. As a result, plugging a mouse into a joystick port causes the "joystick" to continuously move in some direction, even if the mouse stays still, whereas plugging a joystick into a mouse port causes the "mouse" to only be able to move a single pixel in each direction. Serial interface and protocol Because the IBM PC did not have a quadrature decoder built in, early PC mice used the RS-232C serial port to communicate encoded mouse movements, as well as provide power to the mouse's circuits. The Mouse Systems Corporation (MSC) version used a five-byte protocol and supported three buttons. The Microsoft version used a three-byte protocol and supported two buttons. Due to the incompatibility between the two protocols, some manufacturers sold serial mice with a mode switch: "PC" for MSC mode, "MS" for Microsoft mode. 
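A rough sketch of decoding the commonly documented Microsoft three-byte serial packet (two buttons, signed 8-bit X and Y deltas split across the bytes); the example packet values are invented for illustration, and this shows only the framing, not a complete serial driver:

```python
def to_signed8(value: int) -> int:
    return value - 256 if value >= 128 else value

def decode_microsoft_packet(b1: int, b2: int, b3: int):
    assert b1 & 0x40, "the first byte of a packet has its sync bit (bit 6) set"
    left  = bool(b1 & 0x20)
    right = bool(b1 & 0x10)
    dx = to_signed8(((b1 & 0x03) << 6) | (b2 & 0x3F))   # X: top 2 bits + low 6 bits
    dy = to_signed8(((b1 & 0x0C) << 4) | (b3 & 0x3F))   # Y: top 2 bits + low 6 bits
    return left, right, dx, dy

# Invented example packet: left button held, dx = +3 counts, dy = -2 counts.
print(decode_microsoft_packet(0x6C, 0x03, 0x3E))   # (True, False, 3, -2)
```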
Apple Desktop Bus In 1986 Apple first implemented the Apple Desktop Bus, allowing the daisy-chaining of up to 16 devices, including mice and other devices, on the same bus with no configuration whatsoever. Featuring only a single data pin, the bus used a purely polled approach to device communications and survived as the standard on mainstream models (including a number of non-Apple workstations) until 1998, when Apple's iMac line of computers joined the industry-wide switch to USB. Beginning with the Bronze Keyboard PowerBook G3 in May 1999, Apple dropped the external ADB port in favor of USB, but retained an internal ADB connection in the PowerBook G4 for communication with its built-in keyboard and trackpad until early 2005. PS/2 interface and protocol With the arrival of the IBM PS/2 personal-computer series in 1987, IBM introduced the eponymous PS/2 port for mice and keyboards, which other manufacturers rapidly adopted. The most visible change was the use of a round 6-pin mini-DIN connector in lieu of the earlier 5-pin, MIDI-style, full-sized DIN 41524 connector. In default mode (called stream mode) a PS/2 mouse communicates motion, and the state of each button, by means of 3-byte packets. For any motion, button press or button release event, a PS/2 mouse sends, over a bi-directional serial port, a sequence of three bytes: the first byte carries the flag bits LB, RB, MB, an always-set bit, XS, YS, XV and YV (from least to most significant bit), the second byte is the X movement, and the third byte is the Y movement. Here, XS and YS represent the sign bits of the movement vectors, XV and YV indicate an overflow in the respective vector component, and LB, MB and RB indicate the status of the left, middle and right mouse buttons (1 = pressed). PS/2 mice also understand several commands for reset and self-test, switching between different operating modes, and changing the resolution of the reported motion vectors. A Microsoft IntelliMouse relies on an extension of the PS/2 protocol: the ImPS/2 or IMPS/2 protocol (the abbreviation combines the concepts of "IntelliMouse" and "PS/2"). It initially operates in standard PS/2 format, for backward compatibility. After the host sends a special command sequence, it switches to an extended format in which a fourth byte carries information about wheel movements. The IntelliMouse Explorer works analogously, with the difference that its 4-byte packets also allow for two additional buttons (for a total of five). Mouse vendors also use other extended formats, often without providing public documentation. The Typhoon mouse uses 6-byte packets which can appear as a sequence of two standard 3-byte packets, such that an ordinary PS/2 driver can handle them. For 3D (or 6-degree-of-freedom) input, vendors have made many extensions both to the hardware and to the software. In the late 1990s, Logitech created ultrasound-based tracking which gave 3D input accurate to within a few millimeters; it worked well as an input device but failed as a profitable product. In 2008, Motion4U introduced its "OptiBurst" system using IR tracking for use as a Maya (graphics software) plugin. USB Almost all wired mice today use USB and the USB human interface device class for communication. Cordless or wireless Cordless or wireless mice transmit data via radio. Some mice connect to the computer through Bluetooth or Wi-Fi, while others use a receiver that plugs into the computer, for example through a USB port. Many mice that use a USB receiver have a storage compartment for it inside the mouse. Some "nano receivers" are designed to be small enough to remain plugged into a laptop during transport, while still being large enough to remove easily.
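Returning to the stream-mode packets described above, a minimal decoding sketch using the widely documented bit layout (illustrative only, not a complete PS/2 driver):

```python
# Byte 1: YV XV YS XS 1 MB RB LB (bit 7 down to bit 0); byte 2: X movement;
# byte 3: Y movement. Sign bits extend the movement bytes to 9-bit
# two's-complement values.

def decode_ps2_packet(b1: int, b2: int, b3: int) -> dict:
    lb = bool(b1 & 0x01)          # left button
    rb = bool(b1 & 0x02)          # right button
    mb = bool(b1 & 0x04)          # middle button
    xs = bool(b1 & 0x10)          # X sign bit
    ys = bool(b1 & 0x20)          # Y sign bit
    xv = bool(b1 & 0x40)          # X overflow
    yv = bool(b1 & 0x80)          # Y overflow
    dx = b2 - 256 if xs else b2   # apply sign extension
    dy = b3 - 256 if ys else b3
    return {"left": lb, "right": rb, "middle": mb,
            "dx": dx, "dy": dy, "overflow": xv or yv}

# Example: left button pressed, 5 counts right, 3 counts away from the user
# (in the PS/2 convention, positive Y is motion away from the user).
print(decode_ps2_packet(0b00001001, 5, 3))
```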
Operating system support MS-DOS and Windows 1.0 support connecting a mouse such as a Microsoft Mouse via multiple interfaces: BallPoint, Bus (InPort), Serial port or PS/2. Windows 98 added built-in support for USB Human Interface Device class (USB HID), with native vertical scrolling support. Windows 2000 and Windows Me expanded this built-in support to 5-button mice. Windows XP Service Pack 2 introduced a Bluetooth stack, allowing Bluetooth mice to be used without any USB receivers. Windows Vista added native support for horizontal scrolling and standardized wheel movement granularity for finer scrolling. Windows 8 introduced BLE (Bluetooth Low Energy) mouse/HID support. Multiple-mouse systems Some systems allow two or more mice to be used at once as input devices. Late-1980s era home computers such as the Amiga used this to allow computer games with two players interacting on the same computer (Lemmings and The Settlers for example). The same idea is sometimes used in collaborative software, e.g. to simulate a whiteboard that multiple users can draw on without passing a single mouse around. Microsoft Windows, since Windows 98, has supported multiple simultaneous pointing devices. Because Windows only provides a single screen cursor, using more than one device at the same time requires cooperation of users or applications designed for multiple input devices. Multiple mice are often used in multi-user gaming in addition to specially designed devices that provide several input interfaces. Windows also has full support for multiple input/mouse configurations for multi-user environments. Starting with Windows XP, Microsoft introduced an SDK for developing applications that allow multiple input devices to be used at the same time with independent cursors and independent input points. However, it no longer appears to be available. The introduction of Windows Vista and Microsoft Surface (now known as Microsoft PixelSense) introduced a new set of input APIs that were adopted into Windows 7, allowing for 50 points/cursors, all controlled by independent users. The new input points provide traditional mouse input; however, they were designed with other input technologies like touch and image in mind. They inherently offer 3D coordinates along with pressure, size, tilt, angle, mask, and even an image bitmap to see and recognize the input point/object on the screen. , Linux distributions and other operating systems that use X.Org, such as OpenSolaris and FreeBSD, support 255 cursors/input points through Multi-Pointer X. However, currently no window managers support Multi-Pointer X leaving it relegated to custom software usage. There have also been propositions of having a single operator use two mice simultaneously as a more sophisticated means of controlling various graphics and multimedia applications. Buttons Mouse buttons are microswitches which can be pressed to select or interact with an element of a graphical user interface, producing a distinctive clicking sound. Since around the late 1990s, the three-button scrollmouse has become the de facto standard. Users most commonly employ the second button to invoke a contextual menu in the computer's software user interface, which contains options specifically tailored to the interface element over which the mouse cursor currently sits. By default, the primary mouse button sits located on the left-hand side of the mouse, for the benefit of right-handed users; left-handed users can usually reverse this configuration via software. 
Scrolling Nearly all mice now have an integrated input primarily intended for scrolling on top, usually a single-axis digital wheel or rocker switch which can also be depressed to act as a third button. Though less common, many mice instead have two-axis inputs such as a tiltable wheel, trackball, or touchpad. Those with a trackball may be designed to stay stationary, using the trackball instead of moving the mouse. Speed Mickeys per second is a unit of measurement for the speed and movement direction of a computer mouse, where direction is often expressed as "horizontal" versus "vertical" mickey count. However, speed can also refer to the ratio between how many pixels the cursor moves on the screen and how far the mouse moves on the mouse pad, which may be expressed as pixels per mickey, pixels per inch, or pixels per centimeter. The computer industry often measures mouse sensitivity in terms of counts per inch (CPI), commonly expressed as dots per inch (DPI)the number of steps the mouse will report when it moves one inch. In early mice, this specification was called pulses per inch (ppi). The mickey originally referred to one of these counts, or one resolvable step of motion. If the default mouse-tracking condition involves moving the cursor by one screen-pixel or dot on-screen per reported step, then the CPI does equate to DPI: dots of cursor motion per inch of mouse motion. The CPI or DPI as reported by manufacturers depends on how they make the mouse; the higher the CPI, the faster the cursor moves with mouse movement. However, software can adjust the mouse sensitivity, making the cursor move faster or slower than its CPI. software can change the speed of the cursor dynamically, taking into account the mouse's absolute speed and the movement from the last stop-point. For simple software, when the mouse starts to move, the software will count the number of "counts" or "mickeys" received from the mouse and will move the cursor across the screen by that number of pixels (or multiplied by a rate factor, typically less than 1). The cursor will move slowly on the screen, with good precision. When the movement of the mouse passes the value set for some threshold, the software will start to move the cursor faster, with a greater rate factor. Usually, the user can set the value of the second rate factor by changing the "acceleration" setting. Operating systems sometimes apply acceleration, referred to as "ballistics", to the motion reported by the mouse. For example, versions of Windows prior to Windows XP doubled reported values above a configurable threshold, and then optionally doubled them again above a second configurable threshold. These doublings applied separately in the X and Y directions, resulting in very nonlinear response. Mousepads Engelbart's original mouse did not require a mousepad; the mouse had two large wheels which could roll on virtually any surface. However, most subsequent mechanical mice starting with the steel roller ball mouse have required a mousepad for optimal performance. The mousepad, the most common mouse accessory, appears most commonly in conjunction with mechanical mice, because to roll smoothly the ball requires more friction than common desk surfaces usually provide. So-called "hard mousepads" for gamers or optical/laser mice also exist. Most optical and laser mice do not require a pad, the notable exception being early optical mice which relied on a grid on the pad to detect movement (e.g. Mouse Systems). 
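Returning to the pointer-ballistics scheme described earlier in this section, a back-of-the-envelope sketch of the two-threshold doubling idea; the threshold values below are assumptions for illustration, not the actual Windows defaults:

```python
THRESHOLD_1 = 6    # counts per report above which movement is doubled (assumed value)
THRESHOLD_2 = 10   # counts per report above which movement is doubled again (assumed value)

def accelerate(mickeys: int) -> int:
    """Apply the simple doubling scheme to one axis of reported movement."""
    magnitude = abs(mickeys)
    if magnitude > THRESHOLD_1:
        mickeys *= 2
    if magnitude > THRESHOLD_2:
        mickeys *= 2
    return mickeys

def mickeys_to_pixels(mickeys: int, pixels_per_mickey: float = 1.0) -> float:
    """Convert accelerated counts into cursor movement on screen."""
    return accelerate(mickeys) * pixels_per_mickey

print(mickeys_to_pixels(3))    # below both thresholds: moves 3 pixels
print(mickeys_to_pixels(8))    # above the first threshold: doubled to 16
print(mickeys_to_pixels(12))   # above both thresholds: quadrupled to 48
```

Because the doubling is applied to each axis separately and switches on abruptly at the thresholds, the resulting response is strongly nonlinear, which is exactly the behaviour described above.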
Whether to use a hard or soft mousepad with an optical mouse is largely a matter of personal preference. One exception occurs when the desk surface creates problems for the optical or laser tracking, for example, a transparent or reflective surface, such as glass. Some mice also come with small "pads" attached to the bottom surface, also called mouse feet or mouse skates, that help the user slide the mouse smoothly across surfaces. In the marketplace Around 1981, Xerox included mice with its Xerox Star, based on the mouse used in the 1970s on the Alto computer at Xerox PARC. Sun Microsystems, Symbolics, Lisp Machines Inc., and Tektronix also shipped workstations with mice, starting in about 1981. Later, inspired by the Star, Apple Computer released the Apple Lisa, which also used a mouse. However, none of these products achieved large-scale success. Only with the release of the Apple Macintosh in 1984 did the mouse see widespread use. The Macintosh design, commercially successful and technically influential, led many other vendors to begin producing mice or including them with their other computer products (by 1986, Atari ST, Amiga, Windows 1.0, GEOS for the Commodore 64, and the Apple IIGS). The widespread adoption of graphical user interfaces in the software of the 1980s and 1990s made mice all but indispensable for controlling computers. In November 2008, Logitech built their billionth mouse. Use in games The device often functions as an interface for PC-based computer games and sometimes for video game consoles. The Classic Mac OS Desk Accessory Puzzle in 1984 was the first game designed specifically for a mouse. First-person shooters FPSs naturally lend themselves to separate and simultaneous control of the player's movement and aim, and on computers this has traditionally been achieved with a combination of keyboard and mouse. Players use the X-axis of the mouse for looking (or turning) left and right, and the Y-axis for looking up and down; the keyboard is used for movement and supplemental inputs. Many shooting genre players prefer a mouse over a gamepad analog stick because the wide range of motion offered by a mouse allows for faster and more varied control. Although an analog stick allows the player more granular control, it is poor for certain movements, as the player's input is relayed based on a vector of both the stick's direction and magnitude. Thus, a small but fast movement (known as "flick-shotting") using a gamepad requires the player to quickly move the stick from its rest position to the edge and back again in quick succession, a difficult maneuver. In addition the stick also has a finite magnitude; if the player is currently using the stick to move at a non-zero velocity their ability to increase the rate of movement of the camera is further limited based on the position their displaced stick was already at before executing the maneuver. The effect of this is that a mouse is well suited not only to small, precise movements but also to large, quick movements and immediate, responsive movements; all of which are important in shooter gaming. This advantage also extends in varying degrees to similar game styles such as third-person shooters. Some incorrectly ported games or game engines have acceleration and interpolation curves which unintentionally produce excessive, irregular, or even negative acceleration when used with a mouse instead of their native platform's non-mouse default input device. 
Depending on how deeply hardcoded this misbehavior is, internal user patches or external 3rd-party software may be able to fix it. Individual game engines will also have their own sensitivities. This often restricts one from taking a game's existing sensitivity, transferring it to another, and acquiring the same 360 rotational measurements. A sensitivity converter is the preferred tool that FPS gamers use to translate correctly the rotational movements between different mice and between different games. Calculating the conversion values manually is also possible but it is more time-consuming and requires performing complex mathematical calculations, while using a sensitivity converter is a lot faster and easier for gamers. Due to their similarity to the WIMP desktop metaphor interface for which mice were originally designed, and to their own tabletop game origins, computer strategy games are most commonly played with mice. In particular, real-time strategy and MOBA games usually require the use of a mouse. The left button usually controls primary fire. If the game supports multiple fire modes, the right button often provides secondary fire from the selected weapon. Games with only a single fire mode will generally map secondary fire to aim down the weapon sights. In some games, the right button may also invoke accessories for a particular weapon, such as allowing access to the scope of a sniper rifle or allowing the mounting of a bayonet or silencer. Players can use a scroll wheel for changing weapons (or for controlling scope-zoom magnification, in older games). On most first person shooter games, programming may also assign more functions to additional buttons on mice with more than three controls. A keyboard usually controls movement (for example, WASD for moving forward, left, backward, and right, respectively) and other functions such as changing posture. Since the mouse serves for aiming, a mouse that tracks movement accurately and with less lag (latency) will give a player an advantage over players with less accurate or slower mice. In some cases the right mouse button may be used to move the player forward, either in lieu of, or in conjunction with the typical WASD configuration. Many games provide players with the option of mapping their own choice of a key or button to a certain control. An early technique of players, circle strafing, saw a player continuously strafing while aiming and shooting at an opponent by walking in circle around the opponent with the opponent at the center of the circle. Players could achieve this by holding down a key for strafing while continuously aiming the mouse toward the opponent. Games using mice for input are so popular that many manufacturers make mice specifically for gaming. Such mice may feature adjustable weights, high-resolution optical or laser components, additional buttons, ergonomic shape, and other features such as adjustable CPI. Mouse Bungees are typically used with gaming mice because it eliminates the annoyance of the cable. Many games, such as first- or third-person shooters, have a setting named "invert mouse" or similar (not to be confused with "button inversion", sometimes performed by left-handed users) which allows the user to look downward by moving the mouse forward and upward by moving the mouse backward (the opposite of non-inverted movement). 
This control system resembles that of aircraft control sticks, where pulling back causes pitch up and pushing forward causes pitch down; computer joysticks also typically emulate this control configuration. After id Software's commercial hit Doom, which did not support vertical aiming, competitor Bungie's Marathon became the first first-person shooter to support using the mouse to aim up and down. Games using the Build engine had an option to invert the Y-axis. The "invert" feature actually made the mouse behave in a manner that users regard as non-inverted (by default, moving the mouse forward resulted in looking down). Soon after, id Software released Quake, which introduced the invert feature as users know it today. Home consoles In 1988, the VTech Socrates educational video game console featured a wireless mouse with an attached mouse pad as an optional controller used for some games. In the early 1990s, the Super Nintendo Entertainment System video game system featured a mouse in addition to its controllers. A mouse was also made for the Nintendo 64, although it was only released in Japan. The 1992 game Mario Paint in particular used the mouse's capabilities, as did its Japanese-only successor Mario Artist on the N64 for its 64DD disk drive peripheral in 1999. Sega released official mice for its Genesis/Mega Drive, Saturn and Dreamcast consoles. NEC sold official mice for its PC Engine and PC-FX consoles. Sony released an official mouse product for the PlayStation console, included one with the Linux for PlayStation 2 kit, and allowed owners to use virtually any USB mouse with the PS2, PS3, and PS4. Nintendo's Wii also had this feature implemented in a later software update, and this support was retained on its successor, the Wii U. Microsoft's Xbox line of game consoles (which used operating systems based on modified versions of Windows NT) also had system-wide mouse support using USB.
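As a footnote to the sensitivity-converter discussion earlier in this section, the underlying arithmetic is simple; the degrees-per-count constants below are assumed example values, not properties of any particular game:

```python
# The physical distance the mouse must travel for a full 360-degree turn depends
# on the mouse CPI/DPI, the in-game sensitivity, and an engine-specific
# degrees-per-count constant. The 0.022 figure is only an illustrative assumption.

def cm_per_360(dpi: float, in_game_sensitivity: float, deg_per_count: float = 0.022) -> float:
    counts_per_360 = 360.0 / (deg_per_count * in_game_sensitivity)
    inches = counts_per_360 / dpi
    return inches * 2.54

def convert_sensitivity(dpi: float, sens_a: float, deg_a: float, deg_b: float) -> float:
    """Find the sensitivity in game B that preserves the same cm-per-360 as game A."""
    target = cm_per_360(dpi, sens_a, deg_a)
    # Solve cm_per_360(dpi, sens_b, deg_b) == target for sens_b.
    return 360.0 * 2.54 / (deg_b * dpi * target)

print(round(cm_per_360(800, 2.0), 1))                      # distance for one full turn, in cm
print(round(convert_sensitivity(800, 2.0, 0.022, 0.017), 3))
```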
Catapult
A catapult is a ballistic device used to launch a projectile a great distance without the aid of gunpowder or other propellants – particularly various types of ancient and medieval siege engines. A catapult uses the sudden release of stored potential energy to propel its payload. Most convert tension or torsion energy that was more slowly and manually built up within the device before release, via springs, bows, twisted rope, elastic, or any of numerous other materials and mechanisms. During wars in the ancient times, the catapult was usually known to be the strongest heavy weaponry. In modern times the term can apply to devices ranging from a simple hand-held implement (also called a "slingshot") to a mechanism for launching aircraft from a ship. The earliest catapults date to at least the 7th century BC, with King Uzziah of Judah recorded as equipping the walls of Jerusalem with machines that shot "great stones". Catapults are mentioned in Yajurveda under the name "Jyah" in chapter 30, verse 7. In the 5th century BC the mangonel appeared in ancient China, a type of traction trebuchet and catapult. Early uses were also attributed to Ajatashatru of Magadha in his 5th century BC war against the Licchavis. Greek catapults were invented in the early 4th century BC, being attested by Diodorus Siculus as part of the equipment of a Greek army in 399 BC, and subsequently used at the siege of Motya in 397 BC. Etymology The word 'catapult' comes from the Latin 'catapulta', which in turn comes from the Greek (katapeltēs), itself from κατά (kata), "downwards" and πάλλω (pallō), "to toss, to hurl". Catapults were invented by the ancient Greeks and in ancient India where they were used by the Magadhan King Ajatashatru around the early to mid 5th century BC. Greek and Roman catapults The catapult and crossbow in Greece are closely intertwined. Primitive catapults were essentially "the product of relatively straightforward attempts to increase the range and penetrating power of missiles by strengthening the bow which propelled them". The historian Diodorus Siculus (fl. 1st century BC), described the invention of a mechanical arrow-firing catapult (katapeltikon) by a Greek task force in 399 BC. The weapon was soon after employed against Motya (397 BC), a key Carthaginian stronghold in Sicily. Diodorus is assumed to have drawn his description from the highly rated history of Philistus, a contemporary of the events then. The introduction of crossbows however, can be dated further back: according to the inventor Hero of Alexandria (fl. 1st century AD), who referred to the now lost works of the 3rd-century BC engineer Ctesibius, this weapon was inspired by an earlier foot-held crossbow, called the gastraphetes, which could store more energy than the Greek bows. A detailed description of the gastraphetes, or the "belly-bow", along with a watercolor drawing, is found in Heron's technical treatise Belopoeica. A third Greek author, Biton (fl. 2nd century BC), whose reliability has been positively reevaluated by recent scholarship, described two advanced forms of the gastraphetes, which he credits to Zopyros, an engineer from southern Italy. Zopyrus has been plausibly equated with a Pythagorean of that name who seems to have flourished in the late 5th century BC. He probably designed his bow-machines on the occasion of the sieges of Cumae and Milet between 421 BC and 401 BC. The bows of these machines already featured a winched pull back system and could apparently throw two missiles at once. 
Philo of Byzantium provides probably the most detailed account on the establishment of a theory of belopoietics (belos = "projectile"; poietike = "(art) of making") circa 200 BC. The central principle to this theory was that "all parts of a catapult, including the weight or length of the projectile, were proportional to the size of the torsion springs". This kind of innovation is indicative of the increasing rate at which geometry and physics were being assimilated into military enterprises. From the mid-4th century BC onwards, evidence of the Greek use of arrow-shooting machines becomes more dense and varied: arrow firing machines (katapaltai) are briefly mentioned by Aeneas Tacticus in his treatise on siegecraft written around 350 BC. An extant inscription from the Athenian arsenal, dated between 338 and 326 BC, lists a number of stored catapults with shooting bolts of varying size and springs of sinews. The later entry is particularly noteworthy as it constitutes the first clear evidence for the switch to torsion catapults, which are more powerful than the more-flexible crossbows and which came to dominate Greek and Roman artillery design thereafter. This move to torsion springs was likely spurred by the engineers of Philip II of Macedonia. Another Athenian inventory from 330 to 329 BC includes catapult bolts with heads and flights. As the use of catapults became more commonplace, so did the training required to operate them. Many Greek children were instructed in catapult usage, as evidenced by "a 3rd Century B.C. inscription from the island of Ceos in the Cyclades [regulating] catapult shooting competitions for the young". Arrow firing machines in action are reported from Philip II's siege of Perinth (Thrace) in 340 BC. At the same time, Greek fortifications began to feature high towers with shuttered windows in the top, which could have been used to house anti-personnel arrow shooters, as in Aigosthena. Projectiles included both arrows and (later) stones that were sometimes lit on fire. Onomarchus of Phocis first used catapults on the battlefield against Philip II of Macedon. Philip's son, Alexander the Great, was the next commander in recorded history to make such use of catapults on the battlefield as well as to use them during sieges. The Romans started to use catapults as arms for their wars against Syracuse, Macedon, Sparta and Aetolia (3rd and 2nd centuries BC). The Roman machine known as an arcuballista was similar to a large crossbow. Later the Romans used ballista catapults on their warships. Other ancient catapults In chronological order: 19th century BC, Egypt, walls of the fortress of Buhen appear to contain platforms for siege weapons. c.750 BC, Judah, King Uzziah is documented as having overseen the construction of machines to "shoot great stones". between 484 and 468 BC, India, Ajatashatru is recorded in Jaina texts as having used catapults in his campaign against the Licchavis. between 500 and 300 BC, China, recorded use of mangonels. They were probably used by the Mohists as early as the 4th century BC, descriptions of which can be found in the Mojing (compiled in the 4th century BC). In Chapter 14 of the Mojing, the mangonel is described hurling hollowed out logs filled with burning charcoal at enemy troops. The mangonel was carried westward by the Avars and appeared next in the eastern Mediterranean by the late 6th century AD, where it replaced torsion powered siege engines such as the ballista and onager due to its simpler design and faster rate of fire. 
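The proportionality principle attributed to Philo above is often summarized, in modern reconstructions such as Marsden's, by a calibration rule for stone-throwers relating the spring-hole diameter to the weight of the shot (stated here under the assumption of Attic units):

```latex
% Reconstructed calibration rule: the spring-hole diameter D (in dactyls)
% scales with the cube root of the shot mass M (in minas); the other
% dimensions of the engine are then laid out as multiples of D.
D = 1.1\,\sqrt[3]{100\,M}
```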
The Byzantines adopted the mangonel possibly as early as 587, the Persians in the early 7th century, and the Arabs in the second half of the 7th century. The Franks and Saxons adopted the weapon in the 8th century. Medieval catapults Castles and fortified walled cities were common during this period and catapults were used as siege weapons against them. As well as their use in attempts to breach walls, incendiary missiles, or diseased carcasses or garbage could be catapulted over the walls. Defensive techniques in the Middle Ages progressed to a point that rendered catapults largely ineffective. The Viking siege of Paris (AD 885–6) "saw the employment by both sides of virtually every instrument of siege craft known to the classical world, including a variety of catapults", to little effect, resulting in failure. The most widely used catapults throughout the Middle Ages were as follows: Ballista Ballistae were similar to giant crossbows and were designed to work through torsion. The projectiles were large arrows or darts made from wood with an iron tip. These arrows were then shot "along a flat trajectory" at a target. Ballistae were accurate, but lacked firepower compared with that of a mangonel or trebuchet. Because of their immobility, most ballistae were constructed on site following a siege assessment by the commanding military officer. Springald The springald's design resembles that of the ballista, being a crossbow powered by tension. The springald's frame was more compact, allowing for use inside tighter confines, such as the inside of a castle or tower, but compromising its power. Mangonel This machine was designed to throw heavy projectiles from a "bowl-shaped bucket at the end of its arm". Mangonels were mostly used for “firing various missiles at fortresses, castles, and cities,” with a range of up to . These missiles included anything from stones to excrement to rotting carcasses. Mangonels were relatively simple to construct, and eventually wheels were added to increase mobility. Onager Mangonels are also sometimes referred to as Onagers. Onager catapults initially launched projectiles from a sling, which was later changed to a "bowl-shaped bucket". The word Onager is derived from the Greek word onagros for "wild ass", referring to the "kicking motion and force" that were recreated in the Mangonel's design. Historical records regarding onagers are scarce. The most detailed account of Mangonel use is from "Eric Marsden's translation of a text written by Ammianus Marcellius in the 4th Century AD" describing its construction and combat usage. Trebuchet Trebuchets were probably the most powerful catapult employed in the Middle Ages. The most commonly used ammunition were stones, but "darts and sharp wooden poles" could be substituted if necessary. The most effective kind of ammunition though involved fire, such as "firebrands, and deadly Greek Fire". Trebuchets came in two different designs: Traction, which were powered by people, or Counterpoise, where the people were replaced with "a weight on the short end". The most famous historical account of trebuchet use dates back to the siege of Stirling Castle in 1304, when the army of Edward I constructed a giant trebuchet known as Warwolf, which then proceeded to "level a section of [castle] wall, successfully concluding the siege". Couillard A simplified trebuchet, where the trebuchet's single counterweight is split, swinging on either side of a central support post. 
Leonardo da Vinci's catapult Leonardo da Vinci sought to improve the efficiency and range of earlier designs. His design incorporated a large wooden leaf spring as an accumulator to power the catapult. Both ends of the bow are connected by a rope, similar to the design of a bow and arrow. The leaf spring was not used to pull the catapult armature directly, rather the rope was wound around a drum. The catapult armature was attached to this drum which would be turned until enough potential energy was stored in the deformation of the spring. The drum would then be disengaged from the winding mechanism, and the catapult arm would snap around. Though no records exist of this design being built during Leonardo's lifetime, contemporary enthusiasts have reconstructed it. Modern use Military The last large scale military use of catapults was during the trench warfare of World War I. During the early stages of the war, catapults were used to throw hand grenades across no man's land into enemy trenches. They were eventually replaced by small mortars. The SPBG (Silent Projector of Bottles and Grenades) was a Soviet proposal for an anti-tank weapon that launched grenades from a spring-loaded shuttle up to . Special variants called aircraft catapults are used to launch planes from land bases and sea carriers when the takeoff runway is too short for a powered takeoff or simply impractical to extend. Ships also use them to launch torpedoes and deploy bombs against submarines. In 2024, during the Israel-Hamas war, a trebuchet created by private initiative of an IDF reserve unit was used to throw firebrands over the border into Lebanon, in order to set on fire the undergrowth which offered camouflage to Hezbollah fighters. Toys, sports, entertainment In the 1840s, the invention of vulcanized rubber allowed the making of small hand-held catapults, either improvised from Y-shaped sticks or manufactured for sale; both were popular with children and teenagers. These devices were also known as slingshots in the United States. Small catapults, referred to as "traps", are still widely used to launch clay targets into the air in the sport of clay pigeon shooting. In the 1990s and early 2000s, a powerful catapult, a trebuchet, was used by thrill-seekers first on private property and in 2001–2002 at Middlemoor Water Park, Somerset, England, to experience being catapulted through the air for . The practice has been discontinued due to a fatality at the Water Park. There had been an injury when the trebuchet was in use on private property. Injury and death occurred when those two participants failed to land onto the safety net. The operators of the trebuchet were tried, but found not guilty of manslaughter, though the jury noted that the fatality might have been avoided had the operators "imposed stricter safety measures." Human cannonball circus acts use a catapult launch mechanism, rather than gunpowder, and are risky ventures for the human cannonballs. Early launched roller coasters used a catapult system powered by a diesel engine or a dropped weight to acquire their momentum, such as Shuttle Loop installations between 1977 and 1978. The catapult system for roller coasters has been replaced by flywheels and later linear motors. Pumpkin chunking is another widely popularized use, in which people compete to see who can launch a pumpkin the farthest by mechanical means (although the world record is held by a pneumatic air cannon). 
Smuggling In January 2011, a homemade catapult was discovered that was used to smuggle cannabis into the United States from Mexico. The machine was found from the border fence with bales of cannabis ready to launch.
Computer file
A computer file is a resource for recording data on a computer storage device, primarily identified by its filename. Just as words can be written on paper, so too can data be written to a computer file. Files can be shared with and transferred between computers and mobile devices via removable media, networks, or the Internet. Different types of computer files are designed for different purposes. A file may be designed to store a written message, a document, a spreadsheet, an image, a video, a program, or any wide variety of other kinds of data. Certain files can store multiple data types at once. By using computer programs, a person can open, read, change, save, and close a computer file. Computer files may be reopened, modified, and copied an arbitrary number of times. Files are typically organized in a file system, which tracks file locations on the disk and enables user access. Etymology The word "file" derives from the Latin filum ("a thread, string"). "File" was used in the context of computer storage as early as January 1940. In Punched Card Methods in Scientific Computation, W. J. Eckert stated, "The first extensive use of the early Hollerith Tabulator in astronomy was made by Comrie. He used it for building a table from successive differences, and for adding large numbers of harmonic terms". "Tables of functions are constructed from their differences with great efficiency, either as printed tables or as a file of punched cards." In February 1950, in a Radio Corporation of America (RCA) advertisement in Popular Science magazine describing a new "memory" vacuum tube it had developed, RCA stated: "the results of countless computations can be kept 'on file' and taken out again. Such a 'file' now exists in a 'memory' tube developed at RCA Laboratories. Electronically it retains figures fed into calculating machines, holds them in storage while it memorizes new ones – speeds intelligent solutions through mazes of mathematics." In 1952, "file" denoted, among other things, information stored on punched cards. In early use, the underlying hardware, rather than the contents stored on it, was denominated a "file". For example, the IBM 350 disk drives were denominated "disk files". The introduction, , by the Burroughs MCP and the MIT Compatible Time-Sharing System of the concept of a "file system" that managed several virtual "files" on one storage device is the origin of the contemporary denotation of the word. Although the contemporary "register file" demonstrates the early concept of files, its use has greatly decreased. File contents On most modern operating systems, files are organized into one-dimensional arrays of bytes. The format of a file is defined by its content since a file is solely a container for data. On some platforms the format is indicated by its filename extension, specifying the rules for how the bytes must be organized and interpreted meaningfully. For example, the bytes of a plain text file ( in Windows) are associated with either ASCII or UTF-8 characters, while the bytes of image, video, and audio files are interpreted otherwise. Most file types also allocate a few bytes for metadata, which allows a file to carry some basic information about itself. Some file systems can store arbitrary (not interpreted by the file system) file-specific data outside of the file format, but linked to the file, for example extended attributes or forks. On other file systems this can be done via sidecar files or software-specific databases. 
All those methods, however, are more susceptible to loss of metadata than container and archive file formats. File size At any instant in time, a file has a specific size, normally expressed as a number of bytes, that indicates how much storage is occupied by the file. In most modern operating systems the size can be any non-negative whole number of bytes up to a system limit. Many older operating systems kept track only of the number of blocks or tracks occupied by a file on a physical storage device. In such systems, software employed other methods to track the exact byte count (e.g., CP/M used a special control character, Ctrl-Z, to signal the end of text files). The general definition of a file does not require that its size have any real meaning, however, unless the data within the file happens to correspond to data within a pool of persistent storage. A special case is a zero byte file; these files can be newly created files that have not yet had any data written to them, or may serve as some kind of flag in the file system, or are accidents (the results of aborted disk operations). For example, the file to which the link points in a typical Unix-like system probably has a defined size that seldom changes. Compare this with which is also a file, but as a character special file, its size is not meaningful. Organization of data in a file Information in a computer file can consist of smaller packets of information (often called "records" or "lines") that are individually different but share some common traits. For example, a payroll file might contain information concerning all the employees in a company and their payroll details; each record in the payroll file concerns just one employee, and all the records have the common trait of being related to payroll—this is very similar to placing all payroll information into a specific filing cabinet in an office that does not have a computer. A text file may contain lines of text, corresponding to printed lines on a piece of paper. Alternatively, a file may contain an arbitrary binary image (a blob) or it may contain an executable. The way information is grouped into a file is entirely up to how it is designed. This has led to a plethora of more or less standardized file structures for all imaginable purposes, from the simplest to the most complex. Most computer files are used by computer programs which create, modify or delete the files for their own use on an as-needed basis. The programmers who create the programs decide what files are needed, how they are to be used and (often) their names. In some cases, computer programs manipulate files that are made visible to the computer user. For example, in a word-processing program, the user manipulates document files that the user personally names. Although the content of the document file is arranged in a format that the word-processing program understands, the user is able to choose the name and location of the file and provide the bulk of the information (such as words and text) that will be stored in the file. Many applications pack all their data files into a single file called an archive file, using internal markers to discern the different types of information contained within. The benefits of the archive file are to lower the number of files for easier transfer, to reduce storage usage, or just to organize outdated files. The archive file must often be unpacked before next using. 
File operations The most basic operations that programs can perform on a file are: Create a new file Change the access permissions and attributes of a file Open a file, which makes the file contents available to the program Read data from a file Write data to a file Delete a file Close a file, terminating the association between it and the program Truncate a file, shortening it to a specified size within the file system without rewriting any content Allocate space to a file without writing any content. Not supported by some file systems. Files on a computer can be created, moved, modified, grown, shrunk (truncated), and deleted. In most cases, computer programs that are executed on the computer handle these operations, but the user of a computer can also manipulate files if necessary. For instance, Microsoft Word files are normally created and modified by the Microsoft Word program in response to user commands, but the user can also move, rename, or delete these files directly by using a file manager program such as Windows Explorer (on Windows computers) or by command lines (CLI). In Unix-like systems, user space programs do not operate directly, at a low level, on a file. Only the kernel deals with files, and it handles all user-space interaction with files in a manner that is transparent to the user-space programs. The operating system provides a level of abstraction, which means that interaction with a file from user-space is simply through its filename (instead of its inode). For example, rm filename will not delete the file itself, but only a link to the file. There can be many links to a file, but when they are all removed, the kernel considers that file's memory space free to be reallocated. This free space is commonly considered a security risk (due to the existence of file recovery software). Any secure-deletion program uses kernel-space (system) functions to wipe the file's data. File moves within a file system complete almost immediately because the data content does not need to be rewritten. Only the paths need to be changed. Moving methods There are two distinct implementations of file moves. When moving files between devices or partitions, some file managing software deletes each selected file from the source directory individually after being transferred, while other software deletes all files at once only after every file has been transferred. With the mv command for instance, the former method is used when selecting files individually, possibly with the use of wildcards (example: mv -n sourcePath/* targetPath, while the latter method is used when selecting entire directories (example: mv -n sourcePath targetPath). Microsoft Windows Explorer uses the former method for mass storage file moves, but the latter method using Media Transfer Protocol, as described in . The former method (individual deletion from source) has the benefit that space is released from the source device or partition imminently after the transfer has begun, meaning after the first file is finished. With the latter method, space is only freed after the transfer of the entire selection has finished. If an incomplete file transfer with the latter method is aborted unexpectedly, perhaps due to an unexpected power-off, system halt or disconnection of a device, no space will have been freed up on the source device or partition. The user would need to merge the remaining files from the source, including the incompletely written (truncated) last file. 
With the individual deletion method, the file moving software also does not need to cumulatively keep track of all files finished transferring for the case that a user manually aborts the file transfer. A file manager using the latter (afterwards deletion) method will have to only delete the files from the source directory that have already finished transferring. Identifying and organizing In modern computer systems, files are typically accessed using names (filenames). In some operating systems, the name is associated with the file itself. In others, the file is anonymous, and is pointed to by links that have names. In the latter case, a user can identify the name of the link with the file itself, but this is a false analogue, especially where there exists more than one link to the same file. Files (or links to files) can be located in directories. However, more generally, a directory can contain either a list of files or a list of links to files. Within this definition, it is of paramount importance that the term "file" includes directories. This permits the existence of directory hierarchies, i.e., directories containing sub-directories. A name that refers to a file within a directory must be typically unique. In other words, there must be no identical names within a directory. However, in some operating systems, a name may include a specification of type that means a directory can contain an identical name for more than one type of object such as a directory and a file. In environments in which a file is named, a file's name and the path to the file's directory must uniquely identify it among all other files in the computer system—no two files can have the same name and path. Where a file is anonymous, named references to it will exist within a namespace. In most cases, any name within the namespace will refer to exactly zero or one file. However, any file may be represented within any namespace by zero, one or more names. Any string of characters may be a well-formed name for a file or a link depending upon the context of application. Whether or not a name is well-formed depends on the type of computer system being used. Early computers permitted only a few letters or digits in the name of a file, but modern computers allow long names (some up to 255 characters) containing almost any combination of Unicode letters or Unicode digits, making it easier to understand the purpose of a file at a glance. Some computer systems allow file names to contain spaces; others do not. Case-sensitivity of file names is determined by the file system. Unix file systems are usually case sensitive and allow user-level applications to create files whose names differ only in the case of characters. Microsoft Windows supports multiple file systems, each with different policies regarding case-sensitivity. The common FAT file system can have multiple files whose names differ only in case if the user uses a disk editor to edit the file names in the directory entries. User applications, however, will usually not allow the user to create multiple files with the same name but differing in case. Most computers organize files into hierarchies using folders, directories, or catalogs. The concept is the same irrespective of the terminology used. Each folder can contain an arbitrary number of files, and it can also contain other folders. These other folders are referred to as subfolders. 
Subfolders can contain still more files and folders and so on, thus building a tree-like structure in which one "master folder" (or "root folder" — the name varies from one operating system to another) can contain any number of levels of other folders and files. Folders can be named just as files can (except for the root folder, which often does not have a name). The use of folders makes it easier to organize files in a logical way. When a computer allows the use of folders, each file and folder has not only a name of its own, but also a path, which identifies the folder or folders in which a file or folder resides. In the path, some sort of special character—such as a slash—is used to separate the file and folder names. For example, in the illustration shown in this article, the path uniquely identifies a file called in a folder called , which in turn is contained in a folder called . The folder and file names are separated by slashes in this example; the topmost or root folder has no name, and so the path begins with a slash (if the root folder had a name, it would precede this first slash). Many computer systems use extensions in file names to help identify what they contain, also known as the file type. On Windows computers, extensions consist of a dot (period) at the end of a file name, followed by a few letters to identify the type of file. An extension of identifies a text file; a extension identifies any type of document or documentation, commonly in the Microsoft Word file format; and so on. Even when extensions are used in a computer system, the degree to which the computer system recognizes and heeds them can vary; in some systems, they are required, while in other systems, they are completely ignored if they are presented. Protection Many modern computer systems provide methods for protecting files against accidental and deliberate damage. Computers that allow for multiple users implement file permissions to control who may or may not modify, delete, or create files and folders. For example, a given user may be granted only permission to read a file or folder, but not to modify or delete it; or a user may be given permission to read and modify files or folders, but not to execute them. Permissions may also be used to allow only certain users to see the contents of a file or folder. Permissions protect against unauthorized tampering or destruction of information in files, and keep private information confidential from unauthorized users. Another protection mechanism implemented in many computers is a read-only flag. When this flag is turned on for a file (which can be accomplished by a computer program or by a human user), the file can be examined, but it cannot be modified. This flag is useful for critical information that must not be modified or erased, such as special files that are used only by internal parts of the computer system. Some systems also include a hidden flag to make certain files invisible; this flag is used by the computer system to hide essential system files that users should not alter. Storage Any file that has any useful purpose must have some physical manifestation. That is, a file (an abstract concept) in a real computer system must have a real physical analogue if it is to exist at all. In physical terms, most computer files are stored on some type of data storage device. For example, most operating systems store files on a hard disk. Hard disks have been the ubiquitous form of non-volatile storage since the early 1960s. 
Where files contain only temporary information, they may be stored in RAM. Computer files can also be stored on other media in some cases, such as magnetic tapes, compact discs, Digital Versatile Discs, Zip drives, USB flash drives, etc. Solid-state drives are also beginning to rival the hard disk drive. In Unix-like operating systems, many files have no associated physical storage device. Examples are and most files under directories , and . These are virtual files: they exist as objects within the operating system kernel. As seen by a running user program, files are usually represented either by a file control block or by a file handle. A file control block (FCB) is an area of memory which is manipulated to establish a filename etc. and then passed to the operating system as a parameter; it was used by older IBM operating systems and early PC operating systems including CP/M and early versions of MS-DOS. A file handle is generally either an opaque data type or an integer; it was introduced in around 1961 by the ALGOL-based Burroughs MCP running on the Burroughs B5000 but is now ubiquitous. File corruption When a file is said to be corrupted, it is because its contents have been saved to the computer in such a way that they cannot be properly read, either by a human or by software. Depending on the extent of the damage, the original file can sometimes be recovered, or at least partially understood. A file may be created corrupt, or it may be corrupted at a later point through overwriting. There are many ways by which a file can become corrupted. Most commonly, the issue happens in the process of writing the file to a disk. For example, if an image-editing program unexpectedly crashes while saving an image, that file may be corrupted because the program could not save it in its entirety. The program itself might warn the user that there was an error, allowing for another attempt at saving the file. Some other examples of reasons for which files become corrupted include: the computer itself shutting down unexpectedly (for example, due to a power loss) with open files, or files in the process of being saved; a download being interrupted before it was completed; a bad sector on the hard drive; the user removing a flash drive (such as a USB stick) without properly unmounting it (commonly referred to as "safely removing"); malicious software, such as a computer virus; or a flash drive becoming too old. Although file corruption usually happens accidentally, it may also be done deliberately, for example as a delaying tactic to fool someone into thinking an assignment was finished at an earlier date (potentially gaining time to complete it), or as an experiment to document the consequences of corrupting a given file. There are services that provide on-demand file corruption, which essentially fill a given file with random data so that it cannot be opened or read, yet still appears legitimate. One of the most effective countermeasures against unintentional file corruption is backing up important files. In the event of an important file becoming corrupted, the user can simply replace it with the backed-up version. Backup When computer files contain information that is extremely important, a back-up process is used to protect against disasters that might destroy the files. Backing up files simply means making copies of the files in a separate location so that they can be restored if something happens to the computer, or if they are deleted accidentally.
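One simple and widely used way to detect the kinds of corruption described above is to record a cryptographic checksum of a file while it is known to be good (for example, when the backup is made) and compare it later. A minimal sketch using Python's hashlib follows; the file name is a placeholder.

import hashlib

def sha256_of(path, chunk_size=65536):
    # Hash the file in chunks so large files need not fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

good_digest = sha256_of("report.doc")      # stored alongside the backup copy
# ... later, when corruption is suspected ...
if sha256_of("report.doc") != good_digest:
    print("File has changed or is corrupted; restore it from the backup.")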
There are many ways to back up files. Most computer systems provide utility programs to assist in the back-up process, which can become very time-consuming if there are many files to safeguard. Files are often copied to removable media such as writable CDs or cartridge tapes. Copying files to another hard disk in the same computer protects against failure of one disk, but if it is necessary to protect against failure or destruction of the entire computer, then copies of the files must be made on other media that can be taken away from the computer and stored in a safe, distant location. The grandfather-father-son backup method automatically makes three back-ups; the grandfather file is the oldest copy of the file and the son is the current copy. File systems and file managers The way a computer organizes, names, stores and manipulates files is globally referred to as its file system. Most computers have at least one file system. Some computers allow the use of several different file systems. For instance, on newer MS Windows computers, the older FAT-type file systems of MS-DOS and old versions of Windows are supported, in addition to the NTFS file system that is the normal file system for recent versions of Windows. Each system has its own advantages and disadvantages. Standard FAT allows only eight-character file names (plus a three-character extension) with no spaces, for example, whereas NTFS allows much longer names that can contain spaces. You can call a file "" in NTFS, but in FAT you would be restricted to something like (unless you were using VFAT, a FAT extension allowing long file names). File manager programs are utility programs that allow users to manipulate files directly. They allow you to move, create, delete and rename files and folders, although they do not actually allow you to read the contents of a file or store information in it. Every computer system provides at least one file-manager program for its native file system. For example, File Explorer (formerly Windows Explorer) is commonly used in Microsoft Windows operating systems, and Nautilus is common under several distributions of Linux.
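The 8.3 restriction of standard FAT mentioned above can be expressed as a quick, simplified check. The character set below is an approximation made for this sketch (the exact set of punctuation permitted by FAT varies), so treat it as illustrative rather than authoritative.

import string

ALLOWED = set(string.ascii_uppercase + string.digits + "_^$~!#%&-{}()@'`")

def fits_8_3(name):
    # True if the name fits the classic FAT "8.3" pattern:
    # up to 8 allowed characters, optionally a dot and up to 3 more.
    name = name.upper()
    base, dot, ext = name.partition(".")
    if not base or len(base) > 8 or len(ext) > 3:
        return False
    if dot and not ext:
        return False
    return all(c in ALLOWED for c in base + ext)

print(fits_8_3("PAYROLL.DAT"))             # True
print(fits_8_3("Quarterly Report.docx"))   # False: too long, contains a space, 4-character extension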
Technology
Data storage
null
7143
https://en.wikipedia.org/wiki/Code-division%20multiple%20access
Code-division multiple access
Code-division multiple access (CDMA) is a channel access method used by various radio communication technologies. CDMA is an example of multiple access, where several transmitters can send information simultaneously over a single communication channel. This allows several users to share a band of frequencies (see bandwidth). To permit this without undue interference between the users, CDMA employs spread spectrum technology and a special coding scheme (where each transmitter is assigned a code). CDMA optimizes the use of available bandwidth as it transmits over the entire frequency range and does not limit the user's frequency range. It is used as the access method in many mobile phone standards. IS-95, also called "cdmaOne", and its 3G evolution CDMA2000, are often simply referred to as "CDMA", but UMTS, the 3G standard used by GSM carriers, also uses "wideband CDMA", or W-CDMA, as well as TD-CDMA and TD-SCDMA, as its radio technologies. Many carriers (such as AT&T, UScellular and Verizon) shut down 3G CDMA-based networks in 2022 and 2024, rendering handsets supporting only those protocols unusable for calls, even to 911. CDMA can also be used as a channel or medium access technology, like ALOHA for example, or as a permanent pilot/signalling channel to allow users to synchronize their local oscillators to a common system frequency, thereby also estimating the channel parameters permanently. In these schemes, the message is modulated on a longer spreading sequence, consisting of several chips (0s and 1s). Due to their very advantageous auto- and cross-correlation characteristics, these spreading sequences have also been used for radar applications for many decades, where they are called Barker codes (with a very short sequence length of typically 8 to 32). For space-based communication applications, CDMA has been used for many decades due to the large path loss and Doppler shift caused by satellite motion. CDMA is often used with binary phase-shift keying (BPSK) in its simplest form, but can be combined with any modulation scheme, such as (in advanced cases) quadrature amplitude modulation (QAM) or orthogonal frequency-division multiplexing (OFDM), which typically makes it very robust and efficient (and equips such systems with accurate ranging capabilities, which is difficult without CDMA). Other schemes use subcarriers based on binary offset carrier modulation (BOC modulation), which is inspired by Manchester codes and enables a larger gap between the virtual center frequency and the subcarriers, which is not the case for OFDM subcarriers. History The technology of code-division multiple access channels has long been known. United States In the US, one of the earliest descriptions of CDMA can be found in the summary report of Project Hartwell on "The Security of Overseas Transport", which was a summer research project carried out at the Massachusetts Institute of Technology from June to August 1950. Further research in the context of jamming and anti-jamming was carried out in 1952 at Lincoln Lab. Soviet Union In the Soviet Union (USSR), the first work devoted to this subject was published in 1935 by Dmitry Ageev. It was shown that through the use of linear methods, there are three types of signal separation: frequency, time and compensatory. The technology of CDMA was used in 1957, when the young military radio engineer Leonid Kupriyanovich in Moscow made an experimental model of a wearable automatic mobile phone, called LK-1 by him, with a base station.
The LK-1 had a weight of 3 kg, an operating distance of 20–30 km, and 20–30 hours of battery life. The base station, as described by the author, could serve several customers. In 1958, Kupriyanovich made a new experimental "pocket" model of the mobile phone, which weighed 0.5 kg. To serve more customers, Kupriyanovich proposed a device which he called a "correlator." In 1958, the USSR also started the development of the "Altai" national civil mobile phone service for cars, based on the Soviet MRT-1327 standard. The phone system weighed . It was placed in the trunk of the vehicles of high-ranking officials and used a standard handset in the passenger compartment. The main developers of the Altai system were VNIIS (Voronezh Science Research Institute of Communications) and GSPI (State Specialized Project Institute). In 1963 this service started in Moscow, and in 1970 the Altai service was used in 30 USSR cities. Uses Synchronous CDM (code-division 'multiplexing', an early generation of CDMA) was implemented in the Global Positioning System (GPS). This predates and is distinct from its use in mobile phones. The Qualcomm standard IS-95, marketed as cdmaOne. The Qualcomm standard IS-2000, known as CDMA2000, is used by several mobile phone companies, including the Globalstar network. The UMTS 3G mobile phone standard, which uses W-CDMA. CDMA has been used in the OmniTRACS satellite system for transportation logistics. Steps in CDMA modulation CDMA is a spread-spectrum multiple-access technique. A spread-spectrum technique spreads the bandwidth of the data uniformly for the same transmitted power. A spreading code is a pseudo-random code in the time domain that has a narrow ambiguity function in the frequency domain, unlike other narrow pulse codes. In CDMA a locally generated code runs at a much higher rate than the data to be transmitted. Data for transmission is combined by bitwise XOR (exclusive OR) with the faster code. A spread-spectrum signal is generated as follows: the data signal with pulse duration Tb (the symbol period) is XORed with the code signal with pulse duration Tc (the chip period). (Note: bandwidth is proportional to 1/T, where T = bit time.) Therefore, the bandwidth of the data signal is 1/Tb and the bandwidth of the spread-spectrum signal is 1/Tc. Since Tc is much smaller than Tb, the bandwidth of the spread-spectrum signal is much larger than the bandwidth of the original signal. The ratio Tb/Tc is called the spreading factor or processing gain and determines to a certain extent the upper limit of the total number of users supported simultaneously by a base station. Each user in a CDMA system uses a different code to modulate their signal. Choosing the codes used to modulate the signal is very important in the performance of CDMA systems. The best performance occurs when there is good separation between the signal of a desired user and the signals of other users. The separation of the signals is made by correlating the received signal with the locally generated code of the desired user. If the signal matches the desired user's code, then the correlation function will be high and the system can extract that signal. If the desired user's code has nothing in common with the signal, the correlation should be as close to zero as possible (thus eliminating the signal); this is referred to as cross-correlation. If the code is correlated with the signal at any time offset other than zero, the correlation should be as close to zero as possible.
This is referred to as auto-correlation and is used to reject multi-path interference. An analogy to the problem of multiple access is a room (channel) in which people wish to talk to each other simultaneously. To avoid confusion, people could take turns speaking (time division), speak at different pitches (frequency division), or speak in different languages (code division). CDMA is analogous to the last example where people speaking the same language can understand each other, but other languages are perceived as noise and rejected. Similarly, in radio CDMA, each group of users is given a shared code. Many codes occupy the same channel, but only users associated with a particular code can communicate. In general, CDMA belongs to two basic categories: synchronous (orthogonal codes) and asynchronous (pseudorandom codes). Code-division multiplexing (synchronous CDMA) The digital modulation method is analogous to those used in simple radio transceivers. In the analog case, a low-frequency data signal is time-multiplied with a high-frequency pure sine-wave carrier and transmitted. This is effectively a frequency convolution (Wiener–Khinchin theorem) of the two signals, resulting in a carrier with narrow sidebands. In the digital case, the sinusoidal carrier is replaced by Walsh functions. These are binary square waves that form a complete orthonormal set. The data signal is also binary and the time multiplication is achieved with a simple XOR function. This is usually a Gilbert cell mixer in the circuitry. Synchronous CDMA exploits mathematical properties of orthogonality between vectors representing the data strings. For example, the binary string 1011 is represented by the vector (1, 0, 1, 1). Vectors can be multiplied by taking their dot product, by summing the products of their respective components (for example, if u = (a, b) and v = (c, d), then their dot product u·v = ac + bd). If the dot product is zero, the two vectors are said to be orthogonal to each other. Some properties of the dot product aid understanding of how W-CDMA works. If vectors a and b are orthogonal, then and: Each user in synchronous CDMA uses a code orthogonal to the others' codes to modulate their signal. An example of 4 mutually orthogonal digital signals is shown in the figure below. Orthogonal codes have a cross-correlation equal to zero; in other words, they do not interfere with each other. In the case of IS-95, 64-bit Walsh codes are used to encode the signal to separate different users. Since each of the 64 Walsh codes is orthogonal to all other, the signals are channelized into 64 orthogonal signals. The following example demonstrates how each user's signal can be encoded and decoded. Example Start with a set of vectors that are mutually orthogonal. (Although mutual orthogonality is the only condition, these vectors are usually constructed for ease of decoding, for example columns or rows from Walsh matrices.) An example of orthogonal functions is shown in the adjacent picture. These vectors will be assigned to individual users and are called the code, chip code, or chipping code. In the interest of brevity, the rest of this example uses codes v with only two bits. Each user is associated with a different code, say v. A 1 bit is represented by transmitting a positive code v, and a 0 bit is represented by a negative code −v. 
For example, if v = (v0, v1) = (1, −1) and the data that the user wishes to transmit is (1, 0, 1, 1), then the transmitted symbols would be (v, −v, v, v) = (v0, v1, −v0, −v1, v0, v1, v0, v1) = (1, −1, −1, 1, 1, −1, 1, −1). For the purposes of this article, we call this constructed vector the transmitted vector. Each sender has a different, unique vector v chosen from that set, but the construction method of the transmitted vector is identical. Now, due to physical properties of interference, if two signals at a point are in phase, they add to give twice the amplitude of each signal, but if they are out of phase, they subtract and give a signal that is the difference of the amplitudes. Digitally, this behaviour can be modelled by the addition of the transmission vectors, component by component. If sender0 has code (1, −1) and data (1, 0, 1, 1), and sender1 has code (1, 1) and data (0, 0, 1, 1), and both senders transmit simultaneously, then this table describes the coding steps: Because signal0 and signal1 are transmitted at the same time into the air, they add to produce the raw signal (1, −1, −1, 1, 1, −1, 1, −1) + (−1, −1, −1, −1, 1, 1, 1, 1) = (0, −2, −2, 0, 2, 0, 2, 0). This raw signal is called an interference pattern. The receiver then extracts an intelligible signal for any known sender by combining the sender's code with the interference pattern. The following table explains how this works and shows that the signals do not interfere with one another: Further, after decoding, all values greater than 0 are interpreted as 1, while all values less than zero are interpreted as 0. For example, after decoding, data0 is (2, −2, 2, 2), but the receiver interprets this as (1, 0, 1, 1). Values of exactly 0 mean that the sender did not transmit any data, as in the following example: Assume signal0 = (1, −1, −1, 1, 1, −1, 1, −1) is transmitted alone. The following table shows the decode at the receiver: When the receiver attempts to decode the signal using sender1's code, the data is all zeros; therefore the cross-correlation is equal to zero and it is clear that sender1 did not transmit any data. Asynchronous CDMA When mobile-to-base links cannot be precisely coordinated, particularly due to the mobility of the handsets, a different approach is required. Since it is not mathematically possible to create signature sequences that are both orthogonal for arbitrarily random starting points and which make full use of the code space, unique "pseudo-random" or "pseudo-noise" sequences called spreading sequences are used in asynchronous CDMA systems. A spreading sequence is a binary sequence that appears random but can be reproduced in a deterministic manner by intended receivers. These spreading sequences are used to encode and decode a user's signal in asynchronous CDMA in the same manner as the orthogonal codes in synchronous CDMA (shown in the example above). These spreading sequences are statistically uncorrelated, and the sum of a large number of spreading sequences results in multiple access interference (MAI) that is approximated by a Gaussian noise process (following the central limit theorem in statistics). Gold codes are an example of a spreading sequence suitable for this purpose, as there is low correlation between the codes. If all of the users are received with the same power level, then the variance (e.g., the noise power) of the MAI increases in direct proportion to the number of users. 
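Returning to the synchronous-CDMA worked example above, the whole encode–decode cycle can be replayed in a few lines of Python. The sketch simply reuses the numbers from the text; the codes (1, −1) and (1, 1) are the two rows of a 2×2 Walsh/Hadamard matrix and are therefore mutually orthogonal.

def encode(code, data_bits):
    # A 1 bit sends +code, a 0 bit sends -code.
    signal = []
    for bit in data_bits:
        sign = 1 if bit == 1 else -1
        signal.extend(sign * c for c in code)
    return signal

def decode(code, raw_signal):
    # Correlate each code-length chunk of the raw signal with the code.
    n = len(code)
    chunks = [raw_signal[i:i + n] for i in range(0, len(raw_signal), n)]
    return [sum(c * r for c, r in zip(code, chunk)) for chunk in chunks]

code0, data0 = (1, -1), (1, 0, 1, 1)
code1, data1 = (1, 1), (0, 0, 1, 1)

signal0 = encode(code0, data0)                    # (1, -1, -1, 1, 1, -1, 1, -1)
signal1 = encode(code1, data1)                    # (-1, -1, -1, -1, 1, 1, 1, 1)
raw = [a + b for a, b in zip(signal0, signal1)]   # interference pattern (0, -2, -2, 0, 2, 0, 2, 0)

print(decode(code0, raw))        # [2, -2, 2, 2]   -> read as 1, 0, 1, 1
print(decode(code1, raw))        # [-2, -2, 2, 2]  -> read as 0, 0, 1, 1
print(decode(code1, signal0))    # [0, 0, 0, 0]    -> sender1 transmitted nothing

A second sketch illustrates the claim just made about multiple access interference. Assuming random ±1 spreading sequences of length 128, equal received powers and chip-synchronous users (all simplifications invented for this example), the interference at the desired user's correlator behaves like noise whose variance grows roughly in proportion to the number of interfering users (about k/128 here).

import random

N = 128                                   # spreading factor (chips per bit)

def pn_code(n):
    return [random.choice((-1, 1)) for _ in range(n)]

def mai_variance(num_interferers, trials=1000):
    # Variance of the normalized interference at the desired user's correlator output.
    desired = pn_code(N)
    samples = []
    for _ in range(trials):
        corr = 0.0
        for _ in range(num_interferers):
            other = pn_code(N)
            corr += sum(d * o for d, o in zip(desired, other)) / N
        samples.append(corr)
    mean = sum(samples) / trials
    return sum((s - mean) ** 2 for s in samples) / trials

for k in (1, 2, 4, 8):
    print(k, "interferers -> MAI variance ~", round(mai_variance(k), 4))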
In other words, unlike synchronous CDMA, the signals of other users will appear as noise to the signal of interest and interfere slightly with the desired signal in proportion to number of users. All forms of CDMA use the spread-spectrum spreading factor to allow receivers to partially discriminate against unwanted signals. Signals encoded with the specified spreading sequences are received, while signals with different sequences (or the same sequences but different timing offsets) appear as wideband noise reduced by the spreading factor. Since each user generates MAI, controlling the signal strength is an important issue with CDMA transmitters. A CDM (synchronous CDMA), TDMA, or FDMA receiver can in theory completely reject arbitrarily strong signals using different codes, time slots or frequency channels due to the orthogonality of these systems. This is not true for asynchronous CDMA; rejection of unwanted signals is only partial. If any or all of the unwanted signals are much stronger than the desired signal, they will overwhelm it. This leads to a general requirement in any asynchronous CDMA system to approximately match the various signal power levels as seen at the receiver. In CDMA cellular, the base station uses a fast closed-loop power-control scheme to tightly control each mobile's transmit power. In 2019, schemes to precisely estimate the required length of the codes in dependence of Doppler and delay characteristics have been developed. Soon after, machine learning based techniques that generate sequences of a desired length and spreading properties have been published as well. These are highly competitive with the classic Gold and Welch sequences. These are not generated by linear-feedback-shift-registers, but have to be stored in lookup tables. Advantages of asynchronous CDMA over other techniques Efficient practical utilization of the fixed frequency spectrum In theory CDMA, TDMA and FDMA have exactly the same spectral efficiency, but, in practice, each has its own challenges – power control in the case of CDMA, timing in the case of TDMA, and frequency generation/filtering in the case of FDMA. TDMA systems must carefully synchronize the transmission times of all the users to ensure that they are received in the correct time slot and do not cause interference. Since this cannot be perfectly controlled in a mobile environment, each time slot must have a guard time, which reduces the probability that users will interfere, but decreases the spectral efficiency. Similarly, FDMA systems must use a guard band between adjacent channels, due to the unpredictable Doppler shift of the signal spectrum because of user mobility. The guard bands will reduce the probability that adjacent channels will interfere, but decrease the utilization of the spectrum. Flexible allocation of resources Asynchronous CDMA offers a key advantage in the flexible allocation of resources i.e. allocation of spreading sequences to active users. In the case of CDM (synchronous CDMA), TDMA, and FDMA the number of simultaneous orthogonal codes, time slots, and frequency slots respectively are fixed, hence the capacity in terms of the number of simultaneous users is limited. There are a fixed number of orthogonal codes, time slots or frequency bands that can be allocated for CDM, TDMA, and FDMA systems, which remain underutilized due to the bursty nature of telephony and packetized data transmissions. 
There is no strict limit to the number of users that can be supported in an asynchronous CDMA system, only a practical limit governed by the desired bit error probability since the SIR (signal-to-interference ratio) varies inversely with the number of users. In a bursty traffic environment like mobile telephony, the advantage afforded by asynchronous CDMA is that the performance (bit error rate) is allowed to fluctuate randomly, with an average value determined by the number of users times the percentage of utilization. Suppose there are 2N users that only talk half of the time, then 2N users can be accommodated with the same average bit error probability as N users that talk all of the time. The key difference here is that the bit error probability for N users talking all of the time is constant, whereas it is a random quantity (with the same mean) for 2N users talking half of the time. In other words, asynchronous CDMA is ideally suited to a mobile network where large numbers of transmitters each generate a relatively small amount of traffic at irregular intervals. CDM (synchronous CDMA), TDMA, and FDMA systems cannot recover the underutilized resources inherent to bursty traffic due to the fixed number of orthogonal codes, time slots or frequency channels that can be assigned to individual transmitters. For instance, if there are N time slots in a TDMA system and 2N users that talk half of the time, then half of the time there will be more than N users needing to use more than N time slots. Furthermore, it would require significant overhead to continually allocate and deallocate the orthogonal-code, time-slot or frequency-channel resources. By comparison, asynchronous CDMA transmitters simply send when they have something to say and go off the air when they do not, keeping the same signature sequence as long as they are connected to the system. Spread-spectrum characteristics of CDMA Most modulation schemes try to minimize the bandwidth of this signal since bandwidth is a limited resource. However, spread-spectrum techniques use a transmission bandwidth that is several orders of magnitude greater than the minimum required signal bandwidth. One of the initial reasons for doing this was military applications including guidance and communication systems. These systems were designed using spread spectrum because of its security and resistance to jamming. Asynchronous CDMA has some level of privacy built in because the signal is spread using a pseudo-random code; this code makes the spread-spectrum signals appear random or have noise-like properties. A receiver cannot demodulate this transmission without knowledge of the pseudo-random sequence used to encode the data. CDMA is also resistant to jamming. A jamming signal only has a finite amount of power available to jam the signal. The jammer can either spread its energy over the entire bandwidth of the signal or jam only part of the entire signal. CDMA can also effectively reject narrow-band interference. Since narrow-band interference affects only a small portion of the spread-spectrum signal, it can easily be removed through notch filtering without much loss of information. Convolution encoding and interleaving can be used to assist in recovering this lost data. CDMA signals are also resistant to multipath fading. Since the spread-spectrum signal occupies a large bandwidth, only a small portion of this will undergo fading due to multipath at any given time. 
Like the narrow-band interference, this will result in only a small loss of data and can be overcome. Another reason CDMA is resistant to multipath interference is because the delayed versions of the transmitted pseudo-random codes will have poor correlation with the original pseudo-random code, and will thus appear as another user, which is ignored at the receiver. In other words, as long as the multipath channel induces at least one chip of delay, the multipath signals will arrive at the receiver such that they are shifted in time by at least one chip from the intended signal. The correlation properties of the pseudo-random codes are such that this slight delay causes the multipath to appear uncorrelated with the intended signal, and it is thus ignored. Some CDMA devices use a rake receiver, which exploits multipath delay components to improve the performance of the system. A rake receiver combines the information from several correlators, each one tuned to a different path delay, producing a stronger version of the signal than a simple receiver with a single correlation tuned to the path delay of the strongest signal. Frequency reuse is the ability to reuse the same radio channel frequency at other cell sites within a cellular system. In the FDMA and TDMA systems, frequency planning is an important consideration. The frequencies used in different cells must be planned carefully to ensure signals from different cells do not interfere with each other. In a CDMA system, the same frequency can be used in every cell, because channelization is done using the pseudo-random codes. Reusing the same frequency in every cell eliminates the need for frequency planning in a CDMA system; however, planning of the different pseudo-random sequences must be done to ensure that the received signal from one cell does not correlate with the signal from a nearby cell. Since adjacent cells use the same frequencies, CDMA systems have the ability to perform soft hand-offs. Soft hand-offs allow the mobile telephone to communicate simultaneously with two or more cells. The best signal quality is selected until the hand-off is complete. This is different from hard hand-offs utilized in other cellular systems. In a hard-hand-off situation, as the mobile telephone approaches a hand-off, signal strength may vary abruptly. In contrast, CDMA systems use the soft hand-off, which is undetectable and provides a more reliable and higher-quality signal. Collaborative CDMA A novel collaborative multi-user transmission and detection scheme called collaborative CDMA has been investigated for the uplink that exploits the differences between users' fading channel signatures to increase the user capacity well beyond the spreading length in the MAI-limited environment. The authors show that it is possible to achieve this increase at a low complexity and high bit error rate performance in flat fading channels, which is a major research challenge for overloaded CDMA systems. In this approach, instead of using one sequence per user as in conventional CDMA, the authors group a small number of users to share the same spreading sequence and enable group spreading and despreading operations. The new collaborative multi-user receiver consists of two stages: group multi-user detection (MUD) stage to suppress the MAI between the groups and a low-complexity maximum-likelihood detection stage to recover jointly the co-spread users' data using minimal Euclidean-distance measure and users' channel-gain coefficients. 
An enhanced CDMA version known as interleave-division multiple access (IDMA) uses the orthogonal interleaving as the only means of user separation in place of signature sequence used in CDMA system.
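As a final illustration of the pseudo-noise spreading sequences discussed under asynchronous CDMA above, here is a minimal linear-feedback shift register (LFSR) sketch in Python. The seed and tap positions are just one standard maximal-length choice made for this example; real systems (for instance Gold codes) combine such registers in more elaborate ways.

def lfsr_bits(seed=0b00001, length=31):
    # 5-bit Fibonacci LFSR with feedback taps at stages 5 and 3, a standard
    # maximal-length configuration: the nonzero states cycle with period 2**5 - 1 = 31.
    lfsr = seed
    out = []
    for _ in range(length):
        out.append(lfsr & 1)                 # output the low bit each step
        bit = (lfsr ^ (lfsr >> 2)) & 1       # feedback from stages 5 and 3
        lfsr = (lfsr >> 1) | (bit << 4)      # shift and insert the feedback bit
    return out

seq = lfsr_bits()
print(seq)
print("repeats with period 31:", lfsr_bits(length=62)[31:] == seq)

A receiver that knows the taps and the seed can regenerate exactly the same chip sequence, which is what allows it to despread the intended signal while other sequences appear as noise.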
Technology
Telecommunications
null
7158
https://en.wikipedia.org/wiki/Carat%20%28mass%29
Carat (mass)
The carat (ct) is a unit of mass equal to 200 mg (0.2 g), which is used for measuring gemstones and pearls. The current definition, sometimes known as the metric carat, was adopted in 1907 at the Fourth General Conference on Weights and Measures, and soon afterwards in many countries around the world. The carat is divisible into 100 points of 2 mg. Other subdivisions, and slightly different mass values, have been used in the past in different locations. In terms of diamonds, a paragon is a flawless stone of at least 100 carats (20 g). The ANSI X.12 EDI standard abbreviation for the carat is CD. Etymology First attested in English in the mid-15th century, the word carat comes from Italian carato, which comes from Arabic (qīrāṭ; قيراط), in turn borrowed from Greek kerátion κεράτιον 'carob seed', a diminutive of keras 'horn'. It was a unit of weight, equal to 1/1728 (1/12³) of a pound (see Mina (unit)). History Carob seeds have been used throughout history to measure jewelry, because it was believed that there was little variance in their mass distribution. However, this belief was mistaken, as their mass varies about as much as that of seeds of other species. In the past, each country had its own carat. It was often used for weighing gold. Beginning in the 1570s, it was used to measure weights of diamonds. Standardization An 'international carat' of 205 milligrams was proposed in 1871 by the Syndical Chamber of Jewellers, etc., in Paris, and accepted in 1877 by the Syndical Chamber of Diamond Merchants in Paris. A metric carat of 200 milligrams is exactly one-fifth of a gram and had often been suggested in various countries; it was finally proposed by the International Committee of Weights and Measures, and unanimously accepted at the fourth sexennial General Conference of the Metric Convention held in Paris in October 1907. It was soon made compulsory by law in France, but uptake of the new carat was slower in England, where its use was allowed by the Weights and Measures (Metric System) Act of 1897. Historical definitions UK Board of Trade In the United Kingdom the original Board of Trade carat was exactly grains (~3.170 grains = ~205 mg); in 1888, the Board of Trade carat was changed to exactly grains (~3.168 grains = ~205 mg). Despite it being a non-metric unit, a number of metric countries have used this unit for its limited range of application. The Board of Trade carat was divisible into four diamond grains, but measurements were typically made in multiples of carat. Refiners' carats There were also two varieties of refiners' carats once used in the United Kingdom—the pound carat and the ounce carat. The pound troy was divisible into 24 pound carats of 240 grains troy each; the pound carat was divisible into four pound grains of 60 grains troy each; and the pound grain was divisible into four pound quarters of 15 grains troy each. Likewise, the ounce troy was divisible into 24 ounce carats of 20 grains troy each; the ounce carat was divisible into four ounce grains of 5 grains troy each; and the ounce grain was divisible into four ounce quarters of grains troy each. Greco-Roman The solidus was also a Roman weight unit. There is literary evidence that the weight of 72 coins of the type called solidus was exactly 1 Roman pound, and that the weight of 1 solidus was 24 siliquae. The weight of a Roman pound is generally believed to have been 327.45 g or possibly up to 5 g less. Therefore, the metric equivalent of 1 siliqua was approximately 189 mg. The Greeks had a similar unit of the same value.
Gold fineness in carats comes from carats and grains of gold in a solidus of coin. The conversion rates 1 solidus = 24 carats, 1 carat = 4 grains still stand. Woolhouse's Measures, Weights and Moneys of All Nations gives gold fineness in carats of 4 grains, and silver in troy pounds of 12 troy ounces of 20 pennyweight each.
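Since the metric carat is exactly 200 mg divided into 100 points, and gold fineness reckons 24 carats as pure metal, the conversions are simple arithmetic. A tiny Python helper (the names are this example's own):

MG_PER_CARAT = 200             # one metric carat = 200 mg (definition adopted in 1907)
POINTS_PER_CARAT = 100         # so one point = 2 mg

def carats_to_grams(carats):
    return carats * MG_PER_CARAT / 1000

def gold_fineness(karats):
    # Fraction of gold by mass: 24 carats of fineness corresponds to pure gold.
    return karats / 24

print(carats_to_grams(100))    # 20.0 g -- the "paragon" threshold mentioned earlier
print(gold_fineness(18))       # 0.75, i.e. 18-carat gold is three-quarters gold by mass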
Physical sciences
Mass and weight
Basics and measurement
7163
https://en.wikipedia.org/wiki/Catenary
Catenary
In physics and geometry, a catenary ( , ) is the curve that an idealized hanging chain or cable assumes under its own weight when supported only at its ends in a uniform gravitational field. The catenary curve has a U-like shape, superficially similar in appearance to a parabola, which it is not. The curve appears in the design of certain types of arches and as a cross section of the catenoid—the shape assumed by a soap film bounded by two parallel circular rings. The catenary is also called the alysoid, chainette, or, particularly in the materials sciences, an example of a funicular. Rope statics describes catenaries in a classic statics problem involving a hanging rope. Mathematically, the catenary curve is the graph of the hyperbolic cosine function. The surface of revolution of the catenary curve, the catenoid, is a minimal surface, specifically a minimal surface of revolution. A hanging chain will assume a shape of least potential energy which is a catenary. Galileo Galilei in 1638 discussed the catenary in the book Two New Sciences recognizing that it was different from a parabola. The mathematical properties of the catenary curve were studied by Robert Hooke in the 1670s, and its equation was derived by Leibniz, Huygens and Johann Bernoulli in 1691. Catenaries and related curves are used in architecture and engineering (e.g., in the design of bridges and arches so that forces do not result in bending moments). In the offshore oil and gas industry, "catenary" refers to a steel catenary riser, a pipeline suspended between a production platform and the seabed that adopts an approximate catenary shape. In the rail industry it refers to the overhead wiring that transfers power to trains. (This often supports a contact wire, in which case it does not follow a true catenary curve.) In optics and electromagnetics, the hyperbolic cosine and sine functions are basic solutions to Maxwell's equations. The symmetric modes consisting of two evanescent waves would form a catenary shape. History The word "catenary" is derived from the Latin word catēna, which means "chain". The English word "catenary" is usually attributed to Thomas Jefferson, who wrote in a letter to Thomas Paine on the construction of an arch for a bridge: It is often said that Galileo thought the curve of a hanging chain was parabolic. However, in his Two New Sciences (1638), Galileo wrote that a hanging cord is only an approximate parabola, correctly observing that this approximation improves in accuracy as the curvature gets smaller and is almost exact when the elevation is less than 45°. The fact that the curve followed by a chain is not a parabola was proven by Joachim Jungius (1587–1657); this result was published posthumously in 1669. The application of the catenary to the construction of arches is attributed to Robert Hooke, whose "true mathematical and mechanical form" in the context of the rebuilding of St Paul's Cathedral alluded to a catenary. Some much older arches approximate catenaries, an example of which is the Arch of Taq-i Kisra in Ctesiphon. In 1671, Hooke announced to the Royal Society that he had solved the problem of the optimal shape of an arch, and in 1675 published an encrypted solution as a Latin anagram in an appendix to his Description of Helioscopes, where he wrote that he had found "a true mathematical and mechanical form of all manner of Arches for Building." 
He did not publish the solution to this anagram in his lifetime, but in 1705 his executor provided it as ut pendet continuum flexile, sic stabit contiguum rigidum inversum, meaning "As hangs a flexible cable so, inverted, stand the touching pieces of an arch." In 1691, Gottfried Leibniz, Christiaan Huygens, and Johann Bernoulli derived the equation in response to a challenge by Jakob Bernoulli; their solutions were published in the Acta Eruditorum for June 1691. David Gregory wrote a treatise on the catenary in 1697 in which he provided an incorrect derivation of the correct differential equation. Leonhard Euler proved in 1744 that the catenary is the curve which, when rotated about the -axis, gives the surface of minimum surface area (the catenoid) for the given bounding circles. Nicolas Fuss gave equations describing the equilibrium of a chain under any force in 1796. Inverted catenary arch Catenary arches are often used in the construction of kilns. To create the desired curve, the shape of a hanging chain of the desired dimensions is transferred to a form which is then used as a guide for the placement of bricks or other building material. The Gateway Arch in St. Louis, Missouri, United States is sometimes said to be an (inverted) catenary, but this is incorrect. It is close to a more general curve called a flattened catenary, with equation , which is a catenary if . While a catenary is the ideal shape for a freestanding arch of constant thickness, the Gateway Arch is narrower near the top. According to the U.S. National Historic Landmark nomination for the arch, it is a "weighted catenary" instead. Its shape corresponds to the shape that a weighted chain, having lighter links in the middle, would form. Catenary bridges In free-hanging chains, the force exerted is uniform with respect to length of the chain, and so the chain follows the catenary curve. The same is true of a simple suspension bridge or "catenary bridge," where the roadway follows the cable. A stressed ribbon bridge is a more sophisticated structure with the same catenary shape. However, in a suspension bridge with a suspended roadway, the chains or cables support the weight of the bridge, and so do not hang freely. In most cases the roadway is flat, so when the weight of the cable is negligible compared with the weight being supported, the force exerted is uniform with respect to horizontal distance, and the result is a parabola, as discussed below (although the term "catenary" is often still used, in an informal sense). If the cable is heavy then the resulting curve is between a catenary and a parabola. Anchoring of marine objects The catenary produced by gravity provides an advantage to heavy anchor rodes. An anchor rode (or anchor line) usually consists of chain or cable or both. Anchor rodes are used by ships, oil rigs, docks, floating wind turbines, and other marine equipment which must be anchored to the seabed. When the rope is slack, the catenary curve presents a lower angle of pull on the anchor or mooring device than would be the case if it were nearly straight. This enhances the performance of the anchor and raises the level of force it will resist before dragging. To maintain the catenary shape in the presence of wind, a heavy chain is needed, so that only larger ships in deeper water can rely on this effect. Smaller boats also rely on catenary to maintain maximum holding power. 
Cable ferries and chain boats present a special case of marine vehicles that move even though they are moored: each is held by two catenaries, each formed by one or more cables (wire ropes or chains) passing through the vehicle and moved along by motorized sheaves. The catenaries can be evaluated graphically. Mathematical description Equation The equation of a catenary in Cartesian coordinates has the form y = a cosh(x/a), where cosh is the hyperbolic cosine function and a is the distance of the lowest point above the x-axis. All catenary curves are similar to each other, since changing the parameter a is equivalent to a uniform scaling of the curve. The Whewell equation for the catenary is s = a tan φ, where φ is the tangential angle and s the arc length. Differentiating gives ds/dφ = a sec² φ, and eliminating φ gives the Cesàro equation κ = a/(s² + a²), where κ is the curvature. The radius of curvature is then ρ = a sec² φ, which is the length of the normal between the curve and the x-axis. Relation to other curves When a parabola is rolled along a straight line, the roulette curve traced by its focus is a catenary. The envelope of the directrix of the parabola is also a catenary. The involute from the vertex, that is the roulette traced by a point starting at the vertex when a line is rolled on a catenary, is the tractrix. Another roulette, formed by rolling a line on a catenary, is another line. This implies that square wheels can roll perfectly smoothly on a road made of a series of bumps in the shape of an inverted catenary curve. The wheels can be any regular polygon except a triangle, but the catenary must have parameters corresponding to the shape and dimensions of the wheels. Geometrical properties Over any horizontal interval, the ratio of the area under the catenary to its length equals a, independent of the interval selected. The catenary is the only plane curve other than a horizontal line with this property. Also, the geometric centroid of the area under a stretch of catenary is the midpoint of the perpendicular segment connecting the centroid of the curve itself and the x-axis. Science A moving charge in a uniform electric field travels along a catenary (which tends to a parabola if the charge velocity is much less than the speed of light c). The surface of revolution with fixed radii at either end that has minimum surface area is a catenary revolved about the x-axis. Analysis Model of chains and arches In the mathematical model the chain (or cord, cable, rope, string, etc.) is idealized by assuming that it is so thin that it can be regarded as a curve and that it is so flexible that any force of tension exerted by the chain is parallel to the chain. The analysis of the curve for an optimal arch is similar except that the forces of tension become forces of compression and everything is inverted. An underlying principle is that the chain may be considered a rigid body once it has attained equilibrium. Equations which define the shape of the curve and the tension of the chain at each point may be derived by a careful inspection of the various forces acting on a segment, using the fact that these forces must be in balance if the chain is in static equilibrium. Let the path followed by the chain be given parametrically by r = (x, y) = (x(s), y(s)), where s represents arc length and r is the position vector. This is the natural parameterization and has the property that dr/ds = u, where u is the unit tangent vector. A differential equation for the curve may be derived as follows. Let c be the lowest point on the chain, called the vertex of the catenary. The slope of the curve is zero at c since it is a minimum point.
Assume is to the right of since the other case is implied by symmetry. The forces acting on the section of the chain from to are the tension of the chain at , the tension of the chain at , and the weight of the chain. The tension at is tangent to the curve at and is therefore horizontal without any vertical component and it pulls the section to the left so it may be written where is the magnitude of the force. The tension at is parallel to the curve at and pulls the section to the right. The tension at can be split into two components so it may be written , where is the magnitude of the force and is the angle between the curve at and the -axis (see tangential angle). Finally, the weight of the chain is represented by where is the weight per unit length and is the length of the segment of chain between and . The chain is in equilibrium so the sum of three forces is , therefore and and dividing these gives It is convenient to write which is the length of chain whose weight is equal in magnitude to the tension at . Then is an equation defining the curve. The horizontal component of the tension, is constant and the vertical component of the tension, is proportional to the length of chain between and the vertex. Derivation of equations for the curve The differential equation , given above, can be solved to produce equations for the curve. We will solve the equation using the boundary condition that the vertex is positioned at and . First, invoke the formula for arc length to get then separate variables to obtain A reasonably straightforward approach to integrate this is to use hyperbolic substitution, which gives (where is a constant of integration), and hence But , so which integrates as (with being the constant of integration satisfying the boundary condition). Since the primary interest here is simply the shape of the curve, the placement of the coordinate axes are arbitrary; so make the convenient choice of to simplify the result to For completeness, the relation can be derived by solving each of the and relations for , giving: so which can be rewritten as Alternative derivation The differential equation can be solved using a different approach. From it follows that and Integrating gives, and As before, the and -axes can be shifted so and can be taken to be 0. Then and taking the reciprocal of both sides Adding and subtracting the last two equations then gives the solution and Determining parameters In general the parameter is the position of the axis. The equation can be determined in this case as follows: Relabel if necessary so that is to the left of and let be the horizontal and be the vertical distance from to . Translate the axes so that the vertex of the catenary lies on the -axis and its height is adjusted so the catenary satisfies the standard equation of the curve and let the coordinates of and be and respectively. The curve passes through these points, so the difference of height is and the length of the curve from to is When is expanded using these expressions the result is so This is a transcendental equation in and must be solved numerically. Since is strictly monotonic on , there is at most one solution with and so there is at most one position of equilibrium. However, if both ends of the curve ( and ) are at the same level (), it can be shown that where L is the total length of the curve between and and is the sag (vertical distance between , and the vertex of the curve). 
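As the text says, this transcendental equation has to be solved numerically. A small Python sketch of one way to do it: under the usual textbook relation sqrt(L² − v²) = 2a·sinh(H/(2a)), stated here as an assumption (with H the horizontal and v the vertical separation of the endpoints and L the chain length, this sketch's own naming) because the inline formulas are missing from this copy, the parameter a can be found by bisection, since the right-hand side decreases monotonically in a.

import math

def catenary_parameter(H, v, L, iters=200):
    # Solve sqrt(L^2 - v^2) = 2*a*sinh(H/(2*a)) for a by bisection.
    # Requires L > sqrt(H^2 + v^2): the chain must be longer than the straight chord.
    target = math.sqrt(L * L - v * v)

    def f(a):
        return 2.0 * a * math.sinh(H / (2.0 * a)) - target

    lo = hi = H                  # f(a) -> +inf as a -> 0+ and tends to H - target < 0 as a -> inf
    while f(lo) < 0:             # shrink lo until f is positive there
        lo /= 2.0
    while f(hi) > 0:             # grow hi until f is negative there
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: supports 10 m apart at equal height, chain 12 m long.
a = catenary_parameter(H=10.0, v=0.0, L=12.0)
sag = a * (math.cosh(10.0 / (2 * a)) - 1)   # sag below the supports in the equal-height case
print(round(a, 4), round(sag, 4))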
It can also be shown that and where H is the horizontal distance between and which are located at the same level (). The horizontal traction force at and is , where is the weight per unit length of the chain or cable. Tension relations There is a simple relationship between the tension in the cable at a point and its - and/or - coordinate. Begin by combining the squares of the vector components of the tension: which (recalling that ) can be rewritten as But, as shown above, (assuming that ), so we get the simple relations Variational formulation Consider a chain of length suspended from two points of equal height and at distance . The curve has to minimize its potential energy (where is the weight per unit length) and is subject to the constraint The modified Lagrangian is therefore where is the Lagrange multiplier to be determined. As the independent variable does not appear in the Lagrangian, we can use the Beltrami identity where is an integration constant, in order to obtain a first integral This is an ordinary first order differential equation that can be solved by the method of separation of variables. Its solution is the usual hyperbolic cosine where the parameters are obtained from the constraints. Generalizations with vertical force Nonuniform chains If the density of the chain is variable then the analysis above can be adapted to produce equations for the curve given the density, or given the curve to find the density. Let denote the weight per unit length of the chain, then the weight of the chain has magnitude where the limits of integration are and . Balancing forces as in the uniform chain produces and and therefore Differentiation then gives In terms of and the radius of curvature this becomes Suspension bridge curve A similar analysis can be done to find the curve followed by the cable supporting a suspension bridge with a horizontal roadway. If the weight of the roadway per unit length is and the weight of the cable and the wire supporting the bridge is negligible in comparison, then the weight on the cable (see the figure in Catenary#Model of chains and arches) from to is where is the horizontal distance between and . Proceeding as before gives the differential equation This is solved by simple integration to get and so the cable follows a parabola. If the weight of the cable and supporting wires is not negligible then the analysis is more complex. Catenary of equal strength In a catenary of equal strength, the cable is strengthened according to the magnitude of the tension at each point, so its resistance to breaking is constant along its length. Assuming that the strength of the cable is proportional to its density per unit length, the weight, , per unit length of the chain can be written , where is constant, and the analysis for nonuniform chains can be applied. In this case the equations for tension are Combining gives and by differentiation where is the radius of curvature. The solution to this is In this case, the curve has vertical asymptotes and this limits the span to . Other relations are The curve was studied 1826 by Davies Gilbert and, apparently independently, by Gaspard-Gustave Coriolis in 1836. Recently, it was shown that this type of catenary could act as a building block of electromagnetic metasurface and was known as "catenary of equal phase gradient". Elastic catenary In an elastic catenary, the chain is replaced by a spring which can stretch in response to tension. The spring is assumed to stretch in accordance with Hooke's Law. 
Specifically, if is the natural length of a section of spring, then the length of the spring with tension applied has length where is a constant equal to , where is the stiffness of the spring. In the catenary the value of is variable, but ratio remains valid at a local level, so The curve followed by an elastic spring can now be derived following a similar method as for the inelastic spring. The equations for tension of the spring are and from which where is the natural length of the segment from to and is the weight per unit length of the spring with no tension. Write so Then from which Integrating gives the parametric equations Again, the and -axes can be shifted so and can be taken to be 0. So are parametric equations for the curve. At the rigid limit where is large, the shape of the curve reduces to that of a non-elastic chain. Other generalizations Chain under a general force With no assumptions being made regarding the force acting on the chain, the following analysis can be made. First, let be the force of tension as a function of . The chain is flexible so it can only exert a force parallel to itself. Since tension is defined as the force that the chain exerts on itself, must be parallel to the chain. In other words, where is the magnitude of and is the unit tangent vector. Second, let be the external force per unit length acting on a small segment of a chain as a function of . The forces acting on the segment of the chain between and are the force of tension at one end of the segment, the nearly opposite force at the other end, and the external force acting on the segment which is approximately . These forces must balance so Divide by and take the limit as to obtain These equations can be used as the starting point in the analysis of a flexible chain acting under any external force. In the case of the standard catenary, where the chain has weight per unit length.
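The force-balance equations referred to in this closing paragraph did not survive in this copy; in standard notation (a reconstruction, not a quotation), with T the magnitude of tension, u the unit tangent and f the external force per unit length, they read

\[ \frac{d}{ds}\bigl(T\,\mathbf{u}\bigr) + \mathbf{f} = \mathbf{0}, \]

and for the standard catenary the external force is just the weight, \( \mathbf{f} = (0, -w) \) with w the weight per unit length. Integrating these also recovers the tension relations discussed earlier: the horizontal component of tension is a constant \( T_0 = wa \), and the total tension at a point of the curve \( y = a\cosh(x/a) \) is \( T = wy \).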
Mathematics
Two-dimensional space
null
7172
https://en.wikipedia.org/wiki/Chemotherapy
Chemotherapy
Chemotherapy (often abbreviated chemo, sometimes CTX and CTx) is a type of cancer treatment that uses one or more anti-cancer drugs (chemotherapeutic agents) in a standard regimen. Chemotherapy may be given with a curative intent (which almost always involves combinations of drugs), or it may aim only to prolong life or to reduce symptoms (palliative chemotherapy). Chemotherapy is one of the major categories of the medical discipline specifically devoted to pharmacotherapy for cancer, which is called medical oncology. The term chemotherapy now means the non-specific use of intracellular poisons to inhibit mitosis (cell division) or to induce DNA damage (which is why inhibition of DNA repair can augment the effect of chemotherapy). This meaning excludes the more-selective agents that block extracellular signals (signal transduction). Therapies with specific molecular or genetic targets, which inhibit growth-promoting signals from classic endocrine hormones (primarily estrogens for breast cancer and androgens for prostate cancer), are now called hormonal therapies. Inhibition of other growth-promoting signals, such as those associated with receptor tyrosine kinases, is referred to as targeted therapy. The use of drugs (whether chemotherapy, hormonal therapy, or targeted therapy) is systemic therapy for cancer: they are introduced into the blood stream (the system) and therefore can treat cancer anywhere in the body. Systemic therapy is often used with other, local therapy (treatments that work only where they are applied), such as radiation, surgery, and hyperthermia. Traditional chemotherapeutic agents are cytotoxic by means of interfering with cell division (mitosis), but cancer cells vary widely in their susceptibility to these agents. To a large extent, chemotherapy can be thought of as a way to damage or stress cells, which may then lead to cell death if apoptosis is initiated. Many of the side effects of chemotherapy can be traced to damage to normal cells that divide rapidly and are thus sensitive to anti-mitotic drugs: cells in the bone marrow, digestive tract and hair follicles. This results in the most common side-effects of chemotherapy: myelosuppression (decreased production of blood cells, and hence also immunosuppression), mucositis (inflammation of the lining of the digestive tract), and alopecia (hair loss). Because of their effect on immune cells (especially lymphocytes), chemotherapy drugs often find use in a host of diseases that result from harmful overactivity of the immune system against self (so-called autoimmunity). These include rheumatoid arthritis, systemic lupus erythematosus, multiple sclerosis, vasculitis and many others. Treatment strategies There are a number of strategies in the administration of chemotherapeutic drugs used today. Chemotherapy may be given with a curative intent or it may aim to prolong life or to palliate symptoms. Induction chemotherapy is the first-line treatment of cancer with a chemotherapeutic drug. This type of chemotherapy is used with curative intent. Combined modality chemotherapy is the use of drugs with other cancer treatments, such as surgery, radiation therapy, or hyperthermia therapy. Consolidation chemotherapy is given after remission in order to prolong the overall disease-free time and improve overall survival. The drug that is administered is the same as the drug that achieved remission. Intensification chemotherapy is identical to consolidation chemotherapy except that a different drug from the one used for induction is given. 
Combination chemotherapy involves treating a person with a number of different drugs simultaneously. The drugs differ in their mechanism and side-effects. The biggest advantage is minimising the chances of resistance developing to any one agent. Also, the drugs can often be used at lower doses, reducing toxicity. Neoadjuvant chemotherapy is given prior to a local treatment such as surgery, and is designed to shrink the primary tumor. It is also given for cancers with a high risk of micrometastatic disease. Adjuvant chemotherapy is given after a local treatment (radiotherapy or surgery). It can be used when there is little evidence of cancer present, but there is risk of recurrence. It is also useful in killing any cancerous cells that have spread to other parts of the body. These micrometastases can be treated with adjuvant chemotherapy and can reduce relapse rates caused by these disseminated cells. Maintenance chemotherapy is a repeated low-dose treatment to prolong remission. Salvage chemotherapy or palliative chemotherapy is given without curative intent, but simply to decrease tumor load and increase life expectancy. For these regimens, in general, a better toxicity profile is expected. All chemotherapy regimens require that the recipient be capable of undergoing the treatment. Performance status is often used as a measure to determine whether a person can receive chemotherapy, or whether dose reduction is required. Because only a fraction of the cells in a tumor die with each treatment (fractional kill), repeated doses must be administered to continue to reduce the size of the tumor. Current chemotherapy regimens apply drug treatment in cycles, with the frequency and duration of treatments limited by toxicity. Effectiveness The effectiveness of chemotherapy depends on the type of cancer and the stage. The overall effectiveness ranges from being curative for some cancers, such as some leukemias, to being ineffective, such as in some brain tumors, to being needless in others, like most non-melanoma skin cancers. Dosage Dosage of chemotherapy can be difficult: If the dose is too low, it will be ineffective against the tumor, whereas, at excessive doses, the toxicity (side-effects) will be intolerable to the person receiving it. The standard method of determining chemotherapy dosage is based on calculated body surface area (BSA). The BSA is usually calculated with a mathematical formula or a nomogram, using the recipient's weight and height, rather than by direct measurement of body area. This formula was originally derived in a 1916 study and attempted to translate medicinal doses established with laboratory animals to equivalent doses for humans. The study only included nine human subjects. When chemotherapy was introduced in the 1950s, the BSA formula was adopted as the official standard for chemotherapy dosing for lack of a better option. The validity of this method in calculating uniform doses has been questioned because the formula only takes into account the individual's weight and height. Drug absorption and clearance are influenced by multiple factors, including age, sex, metabolism, disease state, organ function, drug-to-drug interactions, genetics, and obesity, which have major impacts on the actual concentration of the drug in the person's bloodstream. As a result, there is high variability in the systemic chemotherapy drug concentration in people dosed by BSA, and this variability has been demonstrated to be more than ten-fold for many drugs. 
In other words, if two people receive the same dose of a given drug based on BSA, the concentration of that drug in the bloodstream of one person may be 10 times higher or lower compared to that of the other person. This variability is typical with many chemotherapy drugs dosed by BSA, and, as shown below, was demonstrated in a study of 14 common chemotherapy drugs. The result of this pharmacokinetic variability among people is that many people do not receive the right dose to achieve optimal treatment effectiveness with minimized toxic side effects. Some people are overdosed while others are underdosed. For example, in a randomized clinical trial, investigators found 85% of metastatic colorectal cancer patients treated with 5-fluorouracil (5-FU) did not receive the optimal therapeutic dose when dosed by the BSA standard—68% were underdosed and 17% were overdosed. There has been controversy over the use of BSA to calculate chemotherapy doses for people who are obese. Because of their higher BSA, clinicians often arbitrarily reduce the dose prescribed by the BSA formula for fear of overdosing. In many cases, this can result in sub-optimal treatment. Several clinical studies have demonstrated that when chemotherapy dosing is individualized to achieve optimal systemic drug exposure, treatment outcomes are improved and toxic side effects are reduced. In the 5-FU clinical study cited above, people whose dose was adjusted to achieve a pre-determined target exposure realized an 84% improvement in treatment response rate and a six-month improvement in overall survival (OS) compared with those dosed by BSA. In the same study, investigators compared the incidence of common 5-FU-associated grade 3/4 toxicities between the dose-adjusted people and people dosed per BSA. The incidence of debilitating grades of diarrhea was reduced from 18% in the BSA-dosed group to 4% in the dose-adjusted group and serious hematologic side effects were eliminated. Because of the reduced toxicity, dose-adjusted patients were able to be treated for longer periods of time. BSA-dosed people were treated for a total of 680 months while people in the dose-adjusted group were treated for a total of 791 months. Completing the course of treatment is an important factor in achieving better treatment outcomes. Similar results were found in a study involving people with colorectal cancer who have been treated with the popular FOLFOX regimen. The incidence of serious diarrhea was reduced from 12% in the BSA-dosed group of patients to 1.7% in the dose-adjusted group, and the incidence of severe mucositis was reduced from 15% to 0.8%. The FOLFOX study also demonstrated an improvement in treatment outcomes. Positive response increased from 46% in the BSA-dosed group to 70% in the dose-adjusted group. Median progression free survival (PFS) and overall survival (OS) both improved by six months in the dose adjusted group. One approach that can help clinicians individualize chemotherapy dosing is to measure the drug levels in blood plasma over time and adjust dose according to a formula or algorithm to achieve optimal exposure. With an established target exposure for optimized treatment effectiveness with minimized toxicities, dosing can be personalized to achieve target exposure and optimal results for each person. Such an algorithm was used in the clinical trials cited above and resulted in significantly improved treatment outcomes. Oncologists are already individualizing dosing of some cancer drugs based on exposure. 
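As a rough illustration of the two dosing approaches discussed above, the sketch below (Python) computes a body surface area with the Du Bois formula, derives a conventional milligrams-per-square-metre dose from it, and then rescales that dose proportionally toward a target plasma exposure (AUC). The dose level, the target AUC and the measured AUC are hypothetical values chosen for the example, and the proportional adjustment is a deliberate simplification; real dose-individualization algorithms are drug-specific and considerably more elaborate.

```python
def bsa_du_bois(weight_kg: float, height_cm: float) -> float:
    """Estimate body surface area (m^2) with the Du Bois formula."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)


def bsa_based_dose(dose_per_m2_mg: float, weight_kg: float, height_cm: float) -> float:
    """Conventional dosing: a fixed number of milligrams per m^2 of body surface area."""
    return dose_per_m2_mg * bsa_du_bois(weight_kg, height_cm)


def exposure_adjusted_dose(previous_dose_mg: float, measured_auc: float, target_auc: float) -> float:
    """Simplified exposure-guided adjustment: scale the previous dose toward a target AUC.

    This proportional rule only illustrates the idea described in the text;
    it is not the algorithm used in any particular clinical trial.
    """
    return previous_dose_mg * (target_auc / measured_auc)


# Hypothetical example: a drug dosed at 1000 mg/m^2 for a 70 kg, 170 cm patient.
initial_dose = bsa_based_dose(1000, weight_kg=70, height_cm=170)   # roughly 1800 mg
# Suppose the measured exposure in the first cycle was half of the intended target.
next_dose = exposure_adjusted_dose(initial_dose, measured_auc=10, target_auc=20)
print(f"BSA-based dose: {initial_dose:.0f} mg; exposure-adjusted next dose: {next_dose:.0f} mg")
```

The contrast between the two functions mirrors the point made in the text: the BSA-based dose depends only on height and weight, while the exposure-adjusted dose responds to the drug level actually measured in the individual patient.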
Carboplatin and busulfan dosing rely upon results from blood tests to calculate the optimal dose for each person. Simple blood tests are also available for dose optimization of methotrexate, 5-FU, paclitaxel, and docetaxel. The serum albumin level immediately prior to chemotherapy administration is an independent prognostic predictor of survival in various cancer types. Types Alkylating agents Alkylating agents are the oldest group of chemotherapeutics in use today. Originally derived from mustard gas used in World War I, there are now many types of alkylating agents in use. They are so named because of their ability to alkylate many molecules, including proteins, RNA and DNA. This ability to bind covalently to DNA via their alkyl group is the primary cause for their anti-cancer effects. DNA is made of two strands and the molecules may either bind twice to one strand of DNA (intrastrand crosslink) or may bind once to both strands (interstrand crosslink). If the cell tries to replicate crosslinked DNA during cell division, or tries to repair it, the DNA strands can break. This leads to a form of programmed cell death called apoptosis. Alkylating agents will work at any point in the cell cycle and thus are known as cell cycle-independent drugs. For this reason, the effect on the cell is dose dependent; the fraction of cells that die is directly proportional to the dose of drug. The subtypes of alkylating agents are the nitrogen mustards, nitrosoureas, tetrazines, aziridines, cisplatins and derivatives, and non-classical alkylating agents. Nitrogen mustards include mechlorethamine, cyclophosphamide, melphalan, chlorambucil, ifosfamide and busulfan. Nitrosoureas include N-Nitroso-N-methylurea (MNU), carmustine (BCNU), lomustine (CCNU) and semustine (MeCCNU), fotemustine and streptozotocin. Tetrazines include dacarbazine, mitozolomide and temozolomide. Aziridines include thiotepa, mytomycin and diaziquone (AZQ). Cisplatin and derivatives include cisplatin, carboplatin and oxaliplatin. They impair cell function by forming covalent bonds with the amino, carboxyl, sulfhydryl, and phosphate groups in biologically important molecules. Non-classical alkylating agents include procarbazine and hexamethylmelamine. Antimetabolites Anti-metabolites are a group of molecules that impede DNA and RNA synthesis. Many of them have a similar structure to the building blocks of DNA and RNA. The building blocks are nucleotides; a molecule comprising a nucleobase, a sugar and a phosphate group. The nucleobases are divided into purines (guanine and adenine) and pyrimidines (cytosine, thymine and uracil). Anti-metabolites resemble either nucleobases or nucleosides (a nucleotide without the phosphate group), but have altered chemical groups. These drugs exert their effect by either blocking the enzymes required for DNA synthesis or becoming incorporated into DNA or RNA. By inhibiting the enzymes involved in DNA synthesis, they prevent mitosis because the DNA cannot duplicate itself. Also, after misincorporation of the molecules into DNA, DNA damage can occur and programmed cell death (apoptosis) is induced. Unlike alkylating agents, anti-metabolites are cell cycle dependent. This means that they only work during a specific part of the cell cycle, in this case S-phase (the DNA synthesis phase). For this reason, at a certain dose, the effect plateaus and proportionally no more cell death occurs with increased doses. 
Subtypes of the anti-metabolites are the anti-folates, fluoropyrimidines, deoxynucleoside analogues and thiopurines. The anti-folates include methotrexate and pemetrexed. Methotrexate inhibits dihydrofolate reductase (DHFR), an enzyme that regenerates tetrahydrofolate from dihydrofolate. When the enzyme is inhibited by methotrexate, the cellular levels of folate coenzymes diminish. These are required for thymidylate and purine production, which are both essential for DNA synthesis and cell division. Pemetrexed is another anti-metabolite that affects purine and pyrimidine production, and therefore also inhibits DNA synthesis. It primarily inhibits the enzyme thymidylate synthase, but also has effects on DHFR, aminoimidazole carboxamide ribonucleotide formyltransferase and glycinamide ribonucleotide formyltransferase. The fluoropyrimidines include fluorouracil and capecitabine. Fluorouracil is a nucleobase analogue that is metabolised in cells to form at least two active products; 5-fluourouridine monophosphate (FUMP) and 5-fluoro-2'-deoxyuridine 5'-phosphate (fdUMP). FUMP becomes incorporated into RNA and fdUMP inhibits the enzyme thymidylate synthase; both of which lead to cell death. Capecitabine is a prodrug of 5-fluorouracil that is broken down in cells to produce the active drug. The deoxynucleoside analogues include cytarabine, gemcitabine, decitabine, azacitidine, fludarabine, nelarabine, cladribine, clofarabine, and pentostatin. The thiopurines include thioguanine and mercaptopurine. Anti-microtubule agents Anti-microtubule agents are plant-derived chemicals that block cell division by preventing microtubule function. Microtubules are an important cellular structure composed of two proteins, α-tubulin and β-tubulin. They are hollow, rod-shaped structures that are required for cell division, among other cellular functions. Microtubules are dynamic structures, which means that they are permanently in a state of assembly and disassembly. Vinca alkaloids and taxanes are the two main groups of anti-microtubule agents, and although both of these groups of drugs cause microtubule dysfunction, their mechanisms of action are completely opposite: Vinca alkaloids prevent the assembly of microtubules, whereas taxanes prevent their disassembly. By doing so, they can induce mitotic catastrophe in the cancer cells. Following this, cell cycle arrest occurs, which induces programmed cell death (apoptosis). These drugs can also affect blood vessel growth, an essential process that tumours utilise in order to grow and metastasise. Vinca alkaloids are derived from the Madagascar periwinkle, Catharanthus roseus, formerly known as Vinca rosea. They bind to specific sites on tubulin, inhibiting the assembly of tubulin into microtubules. The original vinca alkaloids are natural products that include vincristine and vinblastine. Following the success of these drugs, semi-synthetic vinca alkaloids were produced: vinorelbine (used in the treatment of non-small-cell lung cancer), vindesine, and vinflunine. These drugs are cell cycle-specific. They bind to the tubulin molecules in S-phase and prevent proper microtubule formation required for M-phase. Taxanes are natural and semi-synthetic drugs. The first drug of their class, paclitaxel, was originally extracted from Taxus brevifolia, the Pacific yew. Now this drug and another in this class, docetaxel, are produced semi-synthetically from a chemical found in the bark of another yew tree, Taxus baccata. 
Podophyllotoxin is an antineoplastic lignan obtained primarily from the American mayapple (Podophyllum peltatum) and Himalayan mayapple (Sinopodophyllum hexandrum). It has anti-microtubule activity, and its mechanism is similar to that of the vinca alkaloids in that it binds to tubulin, inhibiting microtubule formation. Podophyllotoxin is used to produce two other drugs with different mechanisms of action: etoposide and teniposide. Topoisomerase inhibitors Topoisomerase inhibitors are drugs that affect the activity of two enzymes: topoisomerase I and topoisomerase II. When the DNA double-strand helix is unwound, during DNA replication or transcription, for example, the adjacent unopened DNA winds tighter (supercoils), like opening the middle of a twisted rope. The stress caused by this effect is relieved in part by the topoisomerase enzymes. They introduce single- or double-strand breaks into DNA, reducing the tension in the DNA strand. This allows the normal unwinding of DNA to occur during replication or transcription. Inhibition of topoisomerase I or II interferes with both of these processes. Two topoisomerase I inhibitors, irinotecan and topotecan, are semi-synthetically derived from camptothecin, which is obtained from the Chinese ornamental tree Camptotheca acuminata. Drugs that target topoisomerase II can be divided into two groups. The topoisomerase II poisons cause increased levels of the enzyme bound to DNA. This prevents DNA replication and transcription, causes DNA strand breaks, and leads to programmed cell death (apoptosis). These agents include etoposide, doxorubicin, mitoxantrone and teniposide. The second group, catalytic inhibitors, are drugs that block the activity of topoisomerase II, and therefore prevent DNA synthesis and transcription because the DNA cannot unwind properly. This group includes novobiocin, merbarone, and aclarubicin, which also have other significant mechanisms of action. Cytotoxic antibiotics The cytotoxic antibiotics are a varied group of drugs that have various mechanisms of action. The common theme that they share in their chemotherapy indication is that they interrupt cell division. The most important subgroups are the anthracyclines and the bleomycins; other prominent examples include mitomycin C and actinomycin. Among the anthracyclines, doxorubicin and daunorubicin were the first, and were obtained from the bacterium Streptomyces peucetius. Derivatives of these compounds include epirubicin and idarubicin. Other clinically used drugs in the anthracycline group are pirarubicin, aclarubicin, and mitoxantrone. The mechanisms of anthracyclines include DNA intercalation (molecules insert between the two strands of DNA), generation of highly reactive free radicals that damage intracellular molecules, and topoisomerase inhibition. Actinomycin is a complex molecule that intercalates DNA and prevents RNA synthesis. Bleomycin, a glycopeptide isolated from Streptomyces verticillus, also intercalates DNA, but produces free radicals that damage DNA. This occurs when bleomycin binds to a metal ion, becomes chemically reduced and reacts with oxygen. Mitomycin is a cytotoxic antibiotic with the ability to alkylate DNA. Delivery Most chemotherapy is delivered intravenously, although a number of agents can be administered orally (e.g., melphalan, busulfan, capecitabine). According to a 2016 systematic review, oral therapies present additional challenges for patients and care teams to maintain and support adherence to treatment plans. 
There are many intravenous methods of drug delivery, known as vascular access devices. These include the winged infusion device, peripheral venous catheter, midline catheter, peripherally inserted central catheter (PICC), central venous catheter and implantable port. The devices have different applications regarding duration of chemotherapy treatment, method of delivery and types of chemotherapeutic agent. Depending on the person, the cancer, the stage of cancer, the type of chemotherapy, and the dosage, intravenous chemotherapy may be given on either an inpatient or an outpatient basis. For continuous, frequent or prolonged intravenous chemotherapy administration, various systems may be surgically inserted into the vasculature to maintain access. Commonly used systems are the Hickman line, the Port-a-Cath, and the PICC line. These have a lower infection risk, are much less prone to phlebitis or extravasation, and eliminate the need for repeated insertion of peripheral cannulae. Isolated limb perfusion (often used in melanoma), or isolated infusion of chemotherapy into the liver or the lung have been used to treat some tumors. The main purpose of these approaches is to deliver a very high dose of chemotherapy to tumor sites without causing overwhelming systemic damage. These approaches can help control solitary or limited metastases, but they are by definition not systemic, and, therefore, do not treat distributed metastases or micrometastases. Topical chemotherapies, such as 5-fluorouracil, are used to treat some cases of non-melanoma skin cancer. If the cancer has central nervous system involvement, or with meningeal disease, intrathecal chemotherapy may be administered. Adverse effects Chemotherapeutic techniques have a range of side effects that depend on the type of medications used. The most common medications affect mainly the fast-dividing cells of the body, such as blood cells and the cells lining the mouth, stomach, and intestines. Chemotherapy-related iatrogenic toxicities can occur acutely after administration, within hours or days, or chronically, from weeks to years. Immunosuppression and myelosuppression Virtually all chemotherapeutic regimens can cause depression of the immune system, often by paralysing the bone marrow and leading to a decrease of white blood cells, red blood cells, and platelets. Anemia and thrombocytopenia may require blood transfusion. Neutropenia (a decrease of the neutrophil granulocyte count below 0.5 x 109/litre) can be improved with synthetic G-CSF (granulocyte-colony-stimulating factor, e.g., filgrastim, lenograstim, efbemalenograstim alfa). In very severe myelosuppression, which occurs in some regimens, almost all the bone marrow stem cells (cells that produce white and red blood cells) are destroyed, meaning allogenic or autologous bone marrow cell transplants are necessary. (In autologous BMTs, cells are removed from the person before the treatment, multiplied and then re-injected afterward; in allogenic BMTs, the source is a donor.) However, some people still develop diseases because of this interference with bone marrow. Although people receiving chemotherapy are encouraged to wash their hands, avoid sick people, and take other infection-reducing steps, about 85% of infections are due to naturally occurring microorganisms in the person's own gastrointestinal tract (including oral cavity) and skin. 
This may manifest as systemic infections, such as sepsis, or as localized outbreaks, such as Herpes simplex, shingles, or other members of the Herpesviridea. The risk of illness and death can be reduced by taking common antibiotics such as quinolones or trimethoprim/sulfamethoxazole before any fever or sign of infection appears. Quinolones show effective prophylaxis mainly with hematological cancer. However, in general, for every five people who are immunosuppressed following chemotherapy who take an antibiotic, one fever can be prevented; for every 34 who take an antibiotic, one death can be prevented. Sometimes, chemotherapy treatments are postponed because the immune system is suppressed to a critically low level. In Japan, the government has approved the use of some medicinal mushrooms like Trametes versicolor, to counteract depression of the immune system in people undergoing chemotherapy. Trilaciclib is an inhibitor of cyclin-dependent kinase 4/6 approved for the prevention of myelosuppression caused by chemotherapy. The drug is given before chemotherapy to protect bone marrow function. Neutropenic enterocolitis Due to immune system suppression, neutropenic enterocolitis (typhlitis) is a "life-threatening gastrointestinal complication of chemotherapy." Typhlitis is an intestinal infection which may manifest itself through symptoms including nausea, vomiting, diarrhea, a distended abdomen, fever, chills, or abdominal pain and tenderness. Typhlitis is a medical emergency. It has a very poor prognosis and is often fatal unless promptly recognized and aggressively treated. Successful treatment hinges on early diagnosis provided by a high index of suspicion and the use of CT scanning, nonoperative treatment for uncomplicated cases, and sometimes elective right hemicolectomy to prevent recurrence. Gastrointestinal distress Nausea, vomiting, anorexia, diarrhea, abdominal cramps, and constipation are common side-effects of chemotherapeutic medications that kill fast-dividing cells. Malnutrition and dehydration can result when the recipient does not eat or drink enough, or when the person vomits frequently, because of gastrointestinal damage. This can result in rapid weight loss, or occasionally in weight gain, if the person eats too much in an effort to allay nausea or heartburn. Weight gain can also be caused by some steroid medications. These side-effects can frequently be reduced or eliminated with antiemetic drugs. Low-certainty evidence also suggests that probiotics may have a preventative and treatment effect of diarrhoea related to chemotherapy alone and with radiotherapy. However, a high index of suspicion is appropriate, since diarrhoea and bloating are also symptoms of typhlitis, a very serious and potentially life-threatening medical emergency that requires immediate treatment. Anemia Anemia can be a combined outcome caused by myelosuppressive chemotherapy, and possible cancer-related causes such as bleeding, blood cell destruction (hemolysis), hereditary disease, kidney dysfunction, nutritional deficiencies or anemia of chronic disease. Treatments to mitigate anemia include hormones to boost blood production (erythropoietin), iron supplements, and blood transfusions. Myelosuppressive therapy can cause a tendency to bleed easily, leading to anemia. Medications that kill rapidly dividing cells or blood cells can reduce the number of platelets in the blood, which can result in bruises and bleeding. 
Extremely low platelet counts may be temporarily boosted through platelet transfusions and new drugs to increase platelet counts during chemotherapy are being developed. Sometimes, chemotherapy treatments are postponed to allow platelet counts to recover. Fatigue may be a consequence of the cancer or its treatment, and can last for months to years after treatment. One physiological cause of fatigue is anemia, which can be caused by chemotherapy, surgery, radiotherapy, primary and metastatic disease or nutritional depletion. Aerobic exercise has been found to be beneficial in reducing fatigue in people with solid tumours. Nausea and vomiting Nausea and vomiting are two of the most feared cancer treatment-related side-effects for people with cancer and their families. In 1983, Coates et al. found that people receiving chemotherapy ranked nausea and vomiting as the first and second most severe side-effects, respectively. Up to 20% of people receiving highly emetogenic agents in this era postponed, or even refused potentially curative treatments. Chemotherapy-induced nausea and vomiting (CINV) are common with many treatments and some forms of cancer. Since the 1990s, several novel classes of antiemetics have been developed and commercialized, becoming a nearly universal standard in chemotherapy regimens, and helping to successfully manage these symptoms in many people. Effective mediation of these unpleasant and sometimes debilitating symptoms results in increased quality of life for the recipient and more efficient treatment cycles, as patients are less likely to avoid or refuse treatment. Hair loss Hair loss (alopecia) can be caused by chemotherapy that kills rapidly dividing cells; other medications may cause hair to thin. These are most often temporary effects: hair usually starts to regrow a few weeks after the last treatment, but sometimes with a change in color, texture, thickness or style. Sometimes hair has a tendency to curl after regrowth, resulting in "chemo curls." Severe hair loss occurs most often with drugs such as doxorubicin, daunorubicin, paclitaxel, docetaxel, cyclophosphamide, ifosfamide and etoposide. Permanent thinning or hair loss can result from some standard chemotherapy regimens. Chemotherapy induced hair loss occurs by a non-androgenic mechanism, and can manifest as alopecia totalis, telogen effluvium, or less often alopecia areata. It is usually associated with systemic treatment due to the high mitotic rate of hair follicles, and more reversible than androgenic hair loss, although permanent cases can occur. Chemotherapy induces hair loss in women more often than men. Scalp cooling offers a means of preventing both permanent and temporary hair loss; however, concerns about this method have been raised. Secondary neoplasm Development of secondary neoplasia after successful chemotherapy or radiotherapy treatment can occur. The most common secondary neoplasm is secondary acute myeloid leukemia, which develops primarily after treatment with alkylating agents or topoisomerase inhibitors. Survivors of childhood cancer are more than 13 times as likely to get a secondary neoplasm during the 30 years after treatment than the general population. Not all of this increase can be attributed to chemotherapy. Infertility Some types of chemotherapy are gonadotoxic and may cause infertility. Chemotherapies with high risk include procarbazine and other alkylating drugs such as cyclophosphamide, ifosfamide, busulfan, melphalan, chlorambucil, and chlormethine. 
Drugs with medium risk include doxorubicin and platinum analogs such as cisplatin and carboplatin. On the other hand, therapies with low risk of gonadotoxicity include plant derivatives such as vincristine and vinblastine, antibiotics such as bleomycin and dactinomycin, and antimetabolites such as methotrexate, mercaptopurine, and 5-fluorouracil. Female infertility by chemotherapy appears to be secondary to premature ovarian failure by loss of primordial follicles. This loss is not necessarily a direct effect of the chemotherapeutic agents, but could be due to an increased rate of growth initiation to replace damaged developing follicles. People may choose between several methods of fertility preservation prior to chemotherapy, including cryopreservation of semen, ovarian tissue, oocytes, or embryos. As more than half of cancer patients are elderly, this adverse effect is only relevant for a minority of patients. A study in France between 1999 and 2011 came to the result that embryo freezing before administration of gonadotoxic agents to females caused a delay of treatment in 34% of cases, and a live birth in 27% of surviving cases who wanted to become pregnant, with the follow-up time varying between 1 and 13 years. Potential protective or attenuating agents include GnRH analogs, where several studies have shown a protective effect in vivo in humans, but some studies show no such effect. Sphingosine-1-phosphate (S1P) has shown similar effect, but its mechanism of inhibiting the sphingomyelin apoptotic pathway may also interfere with the apoptosis action of chemotherapy drugs. In chemotherapy as a conditioning regimen in hematopoietic stem cell transplantation, a study of people conditioned with cyclophosphamide alone for severe aplastic anemia came to the result that ovarian recovery occurred in all women younger than 26 years at time of transplantation, but only in five of 16 women older than 26 years. Teratogenicity Chemotherapy is teratogenic during pregnancy, especially during the first trimester, to the extent that abortion usually is recommended if pregnancy in this period is found during chemotherapy. Second- and third-trimester exposure does not usually increase the teratogenic risk and adverse effects on cognitive development, but it may increase the risk of various complications of pregnancy and fetal myelosuppression. Female patients of reproductive potential should use effective contraception during chemotherapy and for a few months after the last dose (e.g. 6 month for doxorubicin). In males previously having undergone chemotherapy or radiotherapy, there appears to be no increase in genetic defects or congenital malformations in their children conceived after therapy. The use of assisted reproductive technologies and micromanipulation techniques might increase this risk. In females previously having undergone chemotherapy, miscarriage and congenital malformations are not increased in subsequent conceptions. However, when in vitro fertilization and embryo cryopreservation is practised between or shortly after treatment, possible genetic risks to the growing oocytes exist, and hence it has been recommended that the babies be screened. Peripheral neuropathy Between 30 and 40 percent of people undergoing chemotherapy experience chemotherapy-induced peripheral neuropathy (CIPN), a progressive, enduring, and often irreversible condition, causing pain, tingling, numbness and sensitivity to cold, beginning in the hands and feet and sometimes progressing to the arms and legs. 
Chemotherapy drugs associated with CIPN include thalidomide, epothilones, vinca alkaloids, taxanes, proteasome inhibitors, and the platinum-based drugs. Whether CIPN arises, and to what degree, is determined by the choice of drug, duration of use, the total amount consumed and whether the person already has peripheral neuropathy. Though the symptoms are mainly sensory, in some cases motor nerves and the autonomic nervous system are affected. CIPN often follows the first chemotherapy dose and increases in severity as treatment continues, but this progression usually levels off at completion of treatment. The platinum-based drugs are the exception; with these drugs, sensation may continue to deteriorate for several months after the end of treatment. Some CIPN appears to be irreversible. Pain can often be managed with drug or other treatment but the numbness is usually resistant to treatment. Cognitive impairment Some people receiving chemotherapy report fatigue or non-specific neurocognitive problems, such as an inability to concentrate; this is sometimes called post-chemotherapy cognitive impairment, referred to as "chemo brain" in popular and social media. Tumor lysis syndrome In particularly large tumors and cancers with high white cell counts, such as lymphomas, teratomas, and some leukemias, some people develop tumor lysis syndrome. The rapid breakdown of cancer cells causes the release of chemicals from the inside of the cells. Following this, high levels of uric acid, potassium and phosphate are found in the blood. High levels of phosphate induce secondary hypoparathyroidism, resulting in low levels of calcium in the blood. This causes kidney damage and the high levels of potassium can cause cardiac arrhythmia. Although prophylaxis is available and is often initiated in people with large tumors, this is a dangerous side-effect that can lead to death if left untreated. Organ damage Cardiotoxicity (heart damage) is especially prominent with the use of anthracycline drugs (doxorubicin, epirubicin, idarubicin, and liposomal doxorubicin). The cause of this is most likely due to the production of free radicals in the cell and subsequent DNA damage. Other chemotherapeutic agents that cause cardiotoxicity, but at a lower incidence, are cyclophosphamide, docetaxel and clofarabine. Hepatotoxicity (liver damage) can be caused by many cytotoxic drugs. The susceptibility of an individual to liver damage can be altered by other factors such as the cancer itself, viral hepatitis, immunosuppression and nutritional deficiency. The liver damage can consist of damage to liver cells, hepatic sinusoidal syndrome (obstruction of the veins in the liver), cholestasis (where bile does not flow from the liver to the intestine) and liver fibrosis. Nephrotoxicity (kidney damage) can be caused by tumor lysis syndrome and also due direct effects of drug clearance by the kidneys. Different drugs will affect different parts of the kidney and the toxicity may be asymptomatic (only seen on blood or urine tests) or may cause acute kidney injury. Ototoxicity (damage to the inner ear) is a common side effect of platinum based drugs that can produce symptoms such as dizziness and vertigo. Children treated with platinum analogues have been found to be at risk for developing hearing loss. Other side-effects Less common side-effects include red skin (erythema), dry skin, damaged fingernails, a dry mouth (xerostomia), water retention, and sexual impotence. Some medications can trigger allergic or pseudoallergic reactions. 
Specific chemotherapeutic agents are associated with organ-specific toxicities, including cardiovascular disease (e.g., doxorubicin), interstitial lung disease (e.g., bleomycin) and occasionally secondary neoplasm (e.g., MOPP therapy for Hodgkin's disease). Hand-foot syndrome is another side effect to cytotoxic chemotherapy. Nutritional problems are also frequently seen in cancer patients at diagnosis and through chemotherapy treatment. Research suggests that in children and young people undergoing cancer treatment, parenteral nutrition may help with this leading to weight gain and increased calorie and protein intake, when compared to enteral nutrition. Limitations Chemotherapy does not always work, and even when it is useful, it may not completely destroy the cancer. People frequently fail to understand its limitations. In one study of people who had been newly diagnosed with incurable, stage 4 cancer, more than two-thirds of people with lung cancer and more than four-fifths of people with colorectal cancer still believed that chemotherapy was likely to cure their cancer. The blood–brain barrier poses an obstacle to delivery of chemotherapy to the brain. This is because the brain has an extensive system in place to protect it from harmful chemicals. Drug transporters can pump out drugs from the brain and brain's blood vessel cells into the cerebrospinal fluid and blood circulation. These transporters pump out most chemotherapy drugs, which reduces their efficacy for treatment of brain tumors. Only small lipophilic alkylating agents such as lomustine or temozolomide are able to cross this blood–brain barrier. Blood vessels in tumors are very different from those seen in normal tissues. As a tumor grows, tumor cells furthest away from the blood vessels become low in oxygen (hypoxic). To counteract this they then signal for new blood vessels to grow. The newly formed tumor vasculature is poorly formed and does not deliver an adequate blood supply to all areas of the tumor. This leads to issues with drug delivery because many drugs will be delivered to the tumor by the circulatory system. Resistance Resistance is a major cause of treatment failure in chemotherapeutic drugs. There are a few possible causes of resistance in cancer, one of which is the presence of small pumps on the surface of cancer cells that actively move chemotherapy from inside the cell to the outside. Cancer cells produce high amounts of these pumps, known as p-glycoprotein, in order to protect themselves from chemotherapeutics. Research on p-glycoprotein and other such chemotherapy efflux pumps is currently ongoing. Medications to inhibit the function of p-glycoprotein are undergoing investigation, but due to toxicities and interactions with anti-cancer drugs their development has been difficult. Another mechanism of resistance is gene amplification, a process in which multiple copies of a gene are produced by cancer cells. This overcomes the effect of drugs that reduce the expression of genes involved in replication. With more copies of the gene, the drug can not prevent all expression of the gene and therefore the cell can restore its proliferative ability. Cancer cells can also cause defects in the cellular pathways of apoptosis (programmed cell death). As most chemotherapy drugs kill cancer cells in this manner, defective apoptosis allows survival of these cells, making them resistant. Many chemotherapy drugs also cause DNA damage, which can be repaired by enzymes in the cell that carry out DNA repair. 
Upregulation of these genes can overcome the DNA damage and prevent the induction of apoptosis. Mutations in genes that produce drug target proteins, such as tubulin, can occur which prevent the drugs from binding to the protein, leading to resistance to these types of drugs. Drugs used in chemotherapy can induce cell stress, which can kill a cancer cell; however, under certain conditions, cells stress can induce changes in gene expression that enables resistance to several types of drugs. In lung cancer, the transcription factor NFκB is thought to play a role in resistance to chemotherapy, via inflammatory pathways. Cytotoxics and targeted therapies Targeted therapies are a relatively new class of cancer drugs that can overcome many of the issues seen with the use of cytotoxics. They are divided into two groups: small molecule and antibodies. The massive toxicity seen with the use of cytotoxics is due to the lack of cell specificity of the drugs. They will kill any rapidly dividing cell, tumor or normal. Targeted therapies are designed to affect cellular proteins or processes that are utilised by the cancer cells. This allows a high dose to cancer tissues with a relatively low dose to other tissues. Although the side effects are often less severe than that seen of cytotoxic chemotherapeutics, life-threatening effects can occur. Initially, the targeted therapeutics were supposed to be solely selective for one protein. Now it is clear that there is often a range of protein targets that the drug can bind. An example target for targeted therapy is the BCR-ABL1 protein produced from the Philadelphia chromosome, a genetic lesion found commonly in chronic myelogenous leukemia and in some patients with acute lymphoblastic leukemia. This fusion protein has enzyme activity that can be inhibited by imatinib, a small molecule drug. Mechanism of action Cancer is the uncontrolled growth of cells coupled with malignant behaviour: invasion and metastasis (among other features). It is caused by the interaction between genetic susceptibility and environmental factors. These factors lead to accumulations of genetic mutations in oncogenes (genes that control the growth rate of cells) and tumor suppressor genes (genes that help to prevent cancer), which gives cancer cells their malignant characteristics, such as uncontrolled growth. In the broad sense, most chemotherapeutic drugs work by impairing mitosis (cell division), effectively targeting fast-dividing cells. As these drugs cause damage to cells, they are termed cytotoxic. They prevent mitosis by various mechanisms including damaging DNA and inhibition of the cellular machinery involved in cell division. One theory as to why these drugs kill cancer cells is that they induce a programmed form of cell death known as apoptosis. As chemotherapy affects cell division, tumors with high growth rates (such as acute myelogenous leukemia and the aggressive lymphomas, including Hodgkin's disease) are more sensitive to chemotherapy, as a larger proportion of the targeted cells are undergoing cell division at any time. Malignancies with slower growth rates, such as indolent lymphomas, tend to respond to chemotherapy much more modestly. Heterogeneic tumours may also display varying sensitivities to chemotherapy agents, depending on the subclonal populations within the tumor. Cells from the immune system also make crucial contributions to the antitumor effects of chemotherapy. 
For example, the chemotherapeutic drugs oxaliplatin and cyclophosphamide can cause tumor cells to die in a way that is detectable by the immune system (called immunogenic cell death), which mobilizes immune cells with antitumor functions. Chemotherapeutic drugs that cause cancer immunogenic tumor cell death can make unresponsive tumors sensitive to immune checkpoint therapy. Other uses Some chemotherapy drugs are used in diseases other than cancer, such as in autoimmune disorders, and noncancerous plasma cell dyscrasia. In some cases they are often used at lower doses, which means that the side effects are minimized, while in other cases doses similar to ones used to treat cancer are used. Methotrexate is used in the treatment of rheumatoid arthritis (RA), psoriasis, ankylosing spondylitis and multiple sclerosis. The anti-inflammatory response seen in RA is thought to be due to increases in adenosine, which causes immunosuppression; effects on immuno-regulatory cyclooxygenase-2 enzyme pathways; reduction in pro-inflammatory cytokines; and anti-proliferative properties. Although methotrexate is used to treat both multiple sclerosis and ankylosing spondylitis, its efficacy in these diseases is still uncertain. Cyclophosphamide is sometimes used to treat lupus nephritis, a common symptom of systemic lupus erythematosus. Dexamethasone along with either bortezomib or melphalan is commonly used as a treatment for AL amyloidosis. Recently, bortezomid in combination with cyclophosphamide and dexamethasone has also shown promise as a treatment for AL amyloidosis. Other drugs used to treat myeloma such as lenalidomide have shown promise in treating AL amyloidosis. Chemotherapy drugs are also used in conditioning regimens prior to bone marrow transplant (hematopoietic stem cell transplant). Conditioning regimens are used to suppress the recipient's immune system in order to allow a transplant to engraft. Cyclophosphamide is a common cytotoxic drug used in this manner and is often used in conjunction with total body irradiation. Chemotherapeutic drugs may be used at high doses to permanently remove the recipient's bone marrow cells (myeloablative conditioning) or at lower doses that will prevent permanent bone marrow loss (non-myeloablative and reduced intensity conditioning). When used in non-cancer setting, the treatment is still called "chemotherapy", and is often done in the same treatment centers used for people with cancer. Occupational exposure and safe handling In the 1970s, antineoplastic (chemotherapy) drugs were identified as hazardous, and the American Society of Health-System Pharmacists (ASHP) has since then introduced the concept of hazardous drugs after publishing a recommendation in 1983 regarding handling hazardous drugs. The adaptation of federal regulations came when the U.S. Occupational Safety and Health Administration (OSHA) first released its guidelines in 1986 and then updated them in 1996, 1999, and, most recently, 2006. The National Institute for Occupational Safety and Health (NIOSH) has been conducting an assessment in the workplace since then regarding these drugs. Occupational exposure to antineoplastic drugs has been linked to multiple health effects, including infertility and possible carcinogenic effects. A few cases have been reported by the NIOSH alert report, such as one in which a female pharmacist was diagnosed with papillary transitional cell carcinoma. 
Twelve years before the pharmacist was diagnosed with the condition, she had worked for 20 months in a hospital where she was responsible for preparing multiple antineoplastic drugs. The pharmacist did not have any other risk factor for cancer, and therefore, her cancer was attributed to the exposure to the antineoplastic drugs, although a cause-and-effect relationship has not been established in the literature. Another case happened when a malfunction in biosafety cabinetry is believed to have exposed nursing personnel to antineoplastic drugs. Investigations revealed evidence of genotoxic biomarkers two and nine months after that exposure. Routes of exposure Antineoplastic drugs are usually given through intravenous, intramuscular, intrathecal, or subcutaneous administration. In most cases, before the medication is administered to the patient, it needs to be prepared and handled by several workers. Any worker who is involved in handling, preparing, or administering the drugs, or with cleaning objects that have come into contact with antineoplastic drugs, is potentially exposed to hazardous drugs. Health care workers are exposed to drugs in different circumstances, such as when pharmacists and pharmacy technicians prepare and handle antineoplastic drugs and when nurses and physicians administer the drugs to patients. Additionally, those who are responsible for disposing antineoplastic drugs in health care facilities are also at risk of exposure. Dermal exposure is thought to be the main route of exposure due to the fact that significant amounts of the antineoplastic agents have been found in the gloves worn by healthcare workers who prepare, handle, and administer the agents. Another noteworthy route of exposure is inhalation of the drugs' vapors. Multiple studies have investigated inhalation as a route of exposure, and although air sampling has not shown any dangerous levels, it is still a potential route of exposure. Ingestion by hand to mouth is a route of exposure that is less likely compared to others because of the enforced hygienic standard in the health institutions. However, it is still a potential route, especially in the workplace, outside of a health institute. One can also be exposed to these hazardous drugs through injection by needle sticks. Research conducted in this area has established that occupational exposure occurs by examining evidence in multiple urine samples from health care workers. Hazards Hazardous drugs expose health care workers to serious health risks. Many studies show that antineoplastic drugs could have many side effects on the reproductive system, such as fetal loss, congenital malformation, and infertility. Health care workers who are exposed to antineoplastic drugs on many occasions have adverse reproductive outcomes such as spontaneous abortions, stillbirths, and congenital malformations. Moreover, studies have shown that exposure to these drugs leads to menstrual cycle irregularities. Antineoplastic drugs may also increase the risk of learning disabilities among children of health care workers who are exposed to these hazardous substances. Moreover, these drugs have carcinogenic effects. In the past five decades, multiple studies have shown the carcinogenic effects of exposure to antineoplastic drugs. Similarly, there have been research studies that linked alkylating agents with humans developing leukemias. 
Studies have reported elevated risk of breast cancer, nonmelanoma skin cancer, and cancer of the rectum among nurses who are exposed to these drugs. Other investigations have revealed a potential genotoxic effect of antineoplastic drugs on workers in health care settings. Safe handling in health care settings As of 2018, there were no occupational exposure limits set for antineoplastic drugs; that is, neither OSHA nor the American Conference of Governmental Industrial Hygienists (ACGIH) had set workplace safety guidelines. Preparation NIOSH recommends using a ventilated cabinet that is designed to decrease worker exposure. Additionally, it recommends training of all staff, the use of cabinets, implementing an initial evaluation of handling technique as part of the safety program, and wearing protective gloves and gowns when opening drug packaging, handling vials, or labeling. When wearing personal protective equipment, one should inspect gloves for physical defects before use and always wear double gloves and protective gowns. Health care workers are also required to wash their hands with water and soap before and after working with antineoplastic drugs, change gloves every 30 minutes or whenever punctured, and discard them immediately in a chemotherapy waste container. The gowns used should be disposable gowns made of polyethylene-coated polypropylene. When wearing gowns, individuals should make sure that the gowns are closed and have long sleeves. When preparation is done, the final product should be completely sealed in a plastic bag. The health care worker should also wipe all waste containers inside the ventilated cabinet before removing them from the cabinet. Finally, workers should remove all protective wear and put it in a bag for disposal inside the ventilated cabinet. Administration Drugs should only be administered using protective medical devices, such as needleless and closed systems, and techniques such as priming of IV tubing by pharmacy personnel inside a ventilated cabinet. Workers should always wear personal protective equipment such as double gloves, goggles, and protective gowns when opening the outer bag and assembling the delivery system to deliver the drug to the patient, and when disposing of all material used in the administration of the drugs. Hospital workers should never remove tubing from an IV bag that contains an antineoplastic drug, and when disconnecting the tubing in the system, they should make sure the tubing has been thoroughly flushed. After removing the IV bag, the workers should place it together with other disposable items directly in the yellow chemotherapy waste container with the lid closed. Protective equipment should be removed and put into a disposable chemotherapy waste container. After this has been done, one should double-bag the chemotherapy waste before or after removing one's inner gloves. Moreover, one must always wash one's hands with soap and water before leaving the drug administration site. Employee training All employees whose jobs in health care facilities expose them to hazardous drugs must receive training. Training should include shipping and receiving personnel, housekeepers, pharmacists, assistants, and all individuals involved in the transportation and storage of antineoplastic drugs. These individuals should receive information and training to inform them of the hazards of the drugs present in their areas of work. 
They should be informed and trained on operations and procedures in their work areas where they can encounter hazards, different methods used to detect the presence of hazardous drugs and how the hazards are released, and the physical and health hazards of the drugs, including their reproductive and carcinogenic hazard potential. Additionally, they should be informed and trained on the measures they should take to avoid and protect themselves from these hazards. This information ought to be provided when health care workers come into contact with the drugs, that is, perform the initial assignment in a work area with hazardous drugs. Moreover, training should also be provided when new hazards emerge as well as when new drugs, procedures, or equipment are introduced. Housekeeping and waste disposal When performing cleaning and decontaminating the work area where antineoplastic drugs are used, one should make sure that there is sufficient ventilation to prevent the buildup of airborne drug concentrations. When cleaning the work surface, hospital workers should use deactivation and cleaning agents before and after each activity as well as at the end of their shifts. Cleaning should always be done using double protective gloves and disposable gowns. After employees finish up cleaning, they should dispose of the items used in the activity in a yellow chemotherapy waste container while still wearing protective gloves. After removing the gloves, they should thoroughly wash their hands with soap and water. Anything that comes into contact or has a trace of the antineoplastic drugs, such as needles, empty vials, syringes, gowns, and gloves, should be put in the chemotherapy waste container. Spill control A written policy needs to be in place in case of a spill of antineoplastic products. The policy should address the possibility of various sizes of spills as well as the procedure and personal protective equipment required for each size. A trained worker should handle a large spill and always dispose of all cleanup materials in the chemical waste container according to EPA regulations, not in a yellow chemotherapy waste container. Occupational monitoring A medical surveillance program must be established. In case of exposure, occupational health professionals need to ask for a detailed history and do a thorough physical exam. They should test the urine of the potentially exposed worker by doing a urine dipstick or microscopic examination, mainly looking for blood, as several antineoplastic drugs are known to cause bladder damage. Urinary mutagenicity is a marker of exposure to antineoplastic drugs that was first used by Falck and colleagues in 1979 and uses bacterial mutagenicity assays. Apart from being nonspecific, the test can be influenced by extraneous factors such as dietary intake and smoking and is, therefore, used sparingly. However, the test played a significant role in changing the use of horizontal flow cabinets to vertical flow biological safety cabinets during the preparation of antineoplastic drugs because the former exposed health care workers to high levels of drugs. This changed the handling of drugs and effectively reduced workers' exposure to antineoplastic drugs. Biomarkers of exposure to antineoplastic drugs commonly include urinary platinum, methotrexate, urinary cyclophosphamide and ifosfamide, and urinary metabolite of 5-fluorouracil. In addition to this, there are other drugs used to measure the drugs directly in the urine, although they are rarely used. 
A measurement of these drugs directly in one's urine is a sign of high exposure levels and that an uptake of the drugs is happening either through inhalation or dermally. Available agents There is an extensive list of antineoplastic agents. Several classification schemes have been used to subdivide the medicines used for cancer into several different types. History The first use of small-molecule drugs to treat cancer was in the early 20th century, although the specific chemicals first used were not originally intended for that purpose. Mustard gas was used as a chemical warfare agent during World War I and was discovered to be a potent suppressor of hematopoiesis (blood production). A similar family of compounds known as nitrogen mustards were studied further during World War II at the Yale School of Medicine. It was reasoned that an agent that damaged the rapidly growing white blood cells might have a similar effect on cancer. Therefore, in December 1942, several people with advanced lymphomas (cancers of the lymphatic system and lymph nodes) were given the drug by vein, rather than by breathing the irritating gas. Their improvement, although temporary, was remarkable. Concurrently, during a military operation in World War II, following a German air raid on the Italian harbour of Bari, several hundred people were accidentally exposed to mustard gas, which had been transported there by the Allied forces to prepare for possible retaliation in the event of German use of chemical warfare. The survivors were later found to have very low white blood cell counts. After WWII was over and the reports declassified, the experiences converged and led researchers to look for other substances that might have similar effects against cancer. The first chemotherapy drug to be developed from this line of research was mustine. Since then, many other drugs have been developed to treat cancer, and drug development has exploded into a multibillion-dollar industry, although the principles and limitations of chemotherapy discovered by the early researchers still apply. The term chemotherapy The word chemotherapy without a modifier usually refers to cancer treatment, but its historical meaning was broader. The term was coined in the early 1900s by Paul Ehrlich as meaning any use of chemicals to treat any disease (chemo- + -therapy), such as the use of antibiotics (antibacterial chemotherapy). Ehrlich was not optimistic that effective chemotherapy drugs would be found for the treatment of cancer. The first modern chemotherapeutic agent was arsphenamine, an arsenic compound discovered in 1907 and used to treat syphilis. This was later followed by sulfonamides (sulfa drugs) and penicillin. In today's usage, the sense "any treatment of disease with drugs" is often expressed with the word pharmacotherapy. Research Targeted delivery vehicles Specially targeted delivery vehicles aim to increase effective levels of chemotherapy for tumor cells while reducing effective levels for other cells. This should result in an increased tumor kill or reduced toxicity or both. Antibody-drug conjugates Antibody-drug conjugates (ADCs) comprise an antibody, drug and a linker between them. The antibody will be targeted at a preferentially expressed protein in the tumour cells (known as a tumor antigen) or on cells that the tumor can utilise, such as blood vessel endothelial cells. They bind to the tumor antigen and are internalised, where the linker releases the drug into the cell. 
These specially targeted delivery vehicles vary in their stability, selectivity, and choice of target, but, in essence, they all aim to increase the maximum effective dose that can be delivered to the tumor cells. Reduced systemic toxicity means that they can also be used in people who are sicker and that they can carry new chemotherapeutic agents that would have been far too toxic to deliver via traditional systemic approaches. The first approved drug of this type was gemtuzumab ozogamicin (Mylotarg), released by Wyeth (now Pfizer). The drug was approved to treat acute myeloid leukemia. Two other drugs, trastuzumab emtansine and brentuximab vedotin, are both in late clinical trials, and the latter has been granted accelerated approval for the treatment of refractory Hodgkin's lymphoma and systemic anaplastic large cell lymphoma. Nanoparticles Nanoparticles are 1–1000 nanometer (nm) sized particles that can promote tumor selectivity and aid in delivering low-solubility drugs. Nanoparticles can be targeted passively or actively. Passive targeting exploits the difference between tumor blood vessels and normal blood vessels. Blood vessels in tumors are "leaky" because they have gaps from 200 to 2000 nm, which allow nanoparticles to escape into the tumor. Active targeting uses biological molecules (antibodies, proteins, DNA and receptor ligands) to preferentially target the nanoparticles to the tumor cells. There are many types of nanoparticle delivery systems, such as silica, polymers, liposomes and magnetic particles. Nanoparticles made of magnetic material can also be used to concentrate agents at tumor sites using an externally applied magnetic field. They have emerged as a useful vehicle in magnetic drug delivery for poorly soluble agents such as paclitaxel. Electrochemotherapy Electrochemotherapy is the combined treatment in which injection of a chemotherapeutic drug is followed by application of high-voltage electric pulses locally to the tumor. The treatment enables the chemotherapeutic drugs, which otherwise cannot or hardly go through the membrane of cells (such as bleomycin and cisplatin), to enter the cancer cells. Hence, greater effectiveness of antitumor treatment is achieved. Clinical electrochemotherapy has been successfully used for treatment of cutaneous and subcutaneous tumors irrespective of their histological origin. The method has been reported as safe, simple and highly effective in all reports on clinical use of electrochemotherapy. According to the ESOPE project (European Standard Operating Procedures of Electrochemotherapy), the Standard Operating Procedures (SOP) for electrochemotherapy were prepared, based on the experience of the leading European cancer centres on electrochemotherapy. Recently, new electrochemotherapy modalities have been developed for treatment of internal tumors using surgical procedures, endoscopic routes or percutaneous approaches to gain access to the treatment area. Hyperthermia therapy Hyperthermia therapy is heat treatment for cancer that can be a powerful tool when used in combination with chemotherapy (thermochemotherapy) or radiation for the control of a variety of cancers. The heat can be applied locally to the tumor site, which will dilate blood vessels to the tumor, allowing more chemotherapeutic medication to enter the tumor. Additionally, the tumor cell membrane will become more porous, further allowing more of the chemotherapeutic medicine to enter the tumor cell. 
Hyperthermia has also been shown to help prevent or reverse "chemo-resistance." Chemotherapy resistance sometimes develops over time as the tumors adapt and can overcome the toxicity of the chemo medication. "Overcoming chemoresistance has been extensively studied within the past, especially using CDDP-resistant cells. In regard to the potential benefit that drug-resistant cells can be recruited for effective therapy by combining chemotherapy with hyperthermia, it was important to show that chemoresistance against several anticancer drugs (e.g. mitomycin C, anthracyclines, BCNU, melphalan) including CDDP could be reversed at least partially by the addition of heat." Other animals Chemotherapy is used in veterinary medicine similarly to how it is used in human medicine.
Biology and health sciences
Medical procedures
null
7220
https://en.wikipedia.org/wiki/Common%20Gateway%20Interface
Common Gateway Interface
In computing, Common Gateway Interface (CGI) is an interface specification that enables web servers to execute an external program to process HTTP or HTTPS user requests. Such programs are often written in a scripting language and are commonly referred to as CGI scripts, but they may include compiled programs. A typical use case occurs when a web user submits a web form on a web page that uses CGI. The form's data is sent to the web server within an HTTP request with a URL denoting a CGI script. The web server then launches the CGI script in a new computer process, passing the form data to it. The CGI script passes its output, usually in the form of HTML, to the Web server, and the server relays it back to the browser as its response to the browser's request. Developed in the early 1990s, CGI was the earliest common method available that allowed a web page to be interactive. Due to a necessity to run CGI scripts in a separate process every time the request comes in from a client, various alternatives were developed. History In 1993, the National Center for Supercomputing Applications (NCSA) team wrote the specification for calling command line executables on the www-talk mailing list. The other Web server developers adopted it, and it has been a standard for Web servers ever since. A work group chaired by Ken Coar started in November 1997 to get the NCSA definition of CGI more formally defined. This work resulted in RFC 3875, which specified CGI Version 1.1. Specifically mentioned in the RFC are the following contributors: Rob McCool (author of the NCSA HTTPd Web server) John Franks (author of the GN Web server) Ari Luotonen (the developer of the CERN httpd Web server) Tony Sanders (author of the Plexus Web server) George Phillips (Web server maintainer at the University of British Columbia) Historically CGI programs were often written using the C programming language. RFC 3875 "The Common Gateway Interface (CGI)" partially defines CGI using C, in saying that environment variables "are accessed by the C library routine getenv() or variable environ". The name CGI comes from the early days of the Web, where webmasters wanted to connect legacy information systems such as databases to their Web servers. The CGI program was executed by the server and provided a common "gateway" between the Web server and the legacy information system. Purpose Traditionally a Web server has a directory which is designated as a document collection, that is, a set of files that can be sent to Web browsers connected to the server. For example, if a web server has the fully-qualified domain name www.example.com, and its document collection is stored at /usr/local/apache/htdocs/ in the local file system (its document root), then the web server will respond to a request for http://www.example.com/index.html by sending to the browser a copy of the file /usr/local/apache/htdocs/index.html (if it exists). For pages constructed on the fly, the server software may defer requests to separate programs and relay the results to the requesting client (usually, a Web browser that displays the page to the end user). Such programs usually require some additional information to be specified with the request, such as query strings or cookies. Conversely, upon returning, the script must provide all the information required by HTTP for a response to the request: the HTTP status of the request, the document content (if available), the document type (e.g. HTML, PDF, or plain text), et cetera. 
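For illustration only, the following minimal sketch (in Python, with purely hypothetical content; CGI itself is language-agnostic) shows the shape of such a response as a script would write it to standard output:

```python
#!/usr/bin/env python3
# Minimal sketch of a CGI response (illustrative content, not from any particular server setup).
# A CGI script writes header lines, then a blank line, then the document body to standard
# output; the web server relays this output to the browser.

print("Status: 200 OK")            # optional status header; many servers assume 200 if omitted
print("Content-Type: text/html")   # the document type
print()                            # blank line separates headers from the body
print("<html><body><h1>Hello from a CGI script</h1></body></html>")
```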
Initially, there were no standardized methods for data exchange between a browser, the HTTP server with which it was communicating, and the scripts on the server that were expected to process the data and ultimately return a result to the browser. As a result, mutual incompatibilities existed between different HTTP server variants that undermined script portability. Recognition of this problem led to the specification of how data exchange was to be carried out, resulting in the development of CGI. Web page-generating programs invoked by server software that adheres to the CGI specification are known as CGI scripts, even though they may actually have been written in a non-scripting language, such as C. The CGI specification was quickly adopted and continues to be supported by all well-known HTTP server packages, such as Apache, Microsoft IIS, and (with an extension) Node.js-based servers. An early use of CGI scripts was to process forms. In the early days of HTML, forms typically had an "action" attribute and a button designated as the "submit" button. When the submit button was pushed, the browser would send a request to the URI specified in the "action" attribute, with the data from the form sent as a query string. If the "action" specified a CGI script, then the CGI script would be executed, and the script in turn would generate an HTML page. Deployment A Web server that supports CGI can be configured to interpret a URL that it serves as a reference to a CGI script. A common convention is to have a cgi-bin/ directory at the base of the directory tree and treat all executable files within this directory (and no other, for security) as CGI scripts. When a Web browser requests a URL that points to a file within the CGI directory (e.g., http://example.com/cgi-bin/printenv.pl/with/additional/path?and=a&query=string), then, instead of simply sending that file (/usr/local/apache/htdocs/cgi-bin/printenv.pl) to the Web browser, the HTTP server runs the specified script and passes the output of the script to the Web browser. That is, anything that the script sends to standard output is passed to the Web client instead of being shown in the terminal window that started the web server. Another popular convention is to use filename extensions; for instance, if CGI scripts are consistently given the extension .cgi, the Web server can be configured to interpret all such files as CGI scripts. While convenient, and required by many prepackaged scripts, it opens the server to attack if a remote user can upload executable code with the proper extension. The CGI specification defines how additional information passed with the request is passed to the script. The Web server creates a subset of the environment variables passed to it and adds details pertinent to the HTTP environment. For instance, if a slash and additional directory name(s) are appended to the URL immediately after the name of the script (in this example, /with/additional/path), then that path is stored in the PATH_INFO environment variable before the script is called. If parameters are sent to the script via an HTTP GET request (a question mark appended to the URL, followed by param=value pairs; in the example, ?and=a&query=string), then those parameters are stored in the QUERY_STRING environment variable before the script is called. The HTTP request message body, such as form parameters sent via an HTTP POST request, is passed to the script's standard input. 
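For illustration, the following sketch (in Python; the specific handling shown is an assumption about one reasonable implementation, not part of the specification) shows one way a script might pick up these values:

```python
#!/usr/bin/env python3
# Illustrative sketch of how a CGI script receives request data.
# The environment variable names follow RFC 3875; everything else here is an assumption.
import os
import sys
import urllib.parse

method = os.environ.get("REQUEST_METHOD", "GET")
extra_path = os.environ.get("PATH_INFO", "")        # e.g. "/with/additional/path"
query = os.environ.get("QUERY_STRING", "")          # e.g. "and=a&query=string"
params = urllib.parse.parse_qs(query)               # parsed GET parameters

body = ""
if method == "POST":
    # The request body (e.g. POSTed form fields) arrives on standard input;
    # CONTENT_LENGTH says how many bytes to read.
    length = int(os.environ.get("CONTENT_LENGTH") or 0)
    body = sys.stdin.read(length)

print("Content-Type: text/plain")
print()
print("method:", method)
print("path info:", extra_path)
print("query parameters:", params)
print("raw body:", body)
```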
The script can then read these environment variables or data from standard input and adapt to the Web browser's request. Uses CGI is often used to process input information from the user and produce the appropriate output. An example of a CGI program is one implementing a wiki. If the user agent requests the name of an entry, the Web server executes the CGI program. The CGI program retrieves the source of that entry's page (if one exists), transforms it into HTML, and prints the result. The Web server receives the output from the CGI program and transmits it to the user agent. Then if the user agent clicks the "Edit page" button, the CGI program populates an HTML textarea or other editing control with the page's contents. Finally if the user agent clicks the "Publish page" button, the CGI program transforms the updated HTML into the source of that entry's page and saves it. Security CGI programs run, by default, in the security context of the Web server. When first introduced a number of example scripts were provided with the reference distributions of the NCSA, Apache and CERN Web servers to show how shell scripts or C programs could be coded to make use of the new CGI. One such example script was a CGI program called PHF that implemented a simple phone book. In common with a number of other scripts at the time, this script made use of a function: escape_shell_cmd(). The function was supposed to sanitize its argument, which came from user input and then pass the input to the Unix shell, to be run in the security context of the Web server. The script did not correctly sanitize all input and allowed new lines to be passed to the shell, which effectively allowed multiple commands to be run. The results of these commands were then displayed on the Web server. If the security context of the Web server allowed it, malicious commands could be executed by attackers. This was the first widespread example of a new type of Web-based attack called code injection, where unsanitized data from Web users could lead to execution of code on a Web server. Because the example code was installed by default, attacks were widespread and led to a number of security advisories in early 1996. Alternatives For each incoming HTTP request, a Web server creates a new CGI process for handling it and destroys the CGI process after the HTTP request has been handled. Creating and destroying a process can consume more CPU time and memory resources than the actual work of generating the output of the process, especially when the CGI program still needs to be interpreted by a virtual machine. For a high number of HTTP requests, the resulting workload can quickly overwhelm the Web server. The computational overhead involved in CGI process creation and destruction can be reduced by the following techniques: CGI programs precompiled to machine code, e.g. precompiled from C or C++ programs, rather than CGI programs executed by an interpreter, e.g. Perl, PHP or Python programs. Web server extensions such as Apache modules (e.g. mod_perl, mod_php and mod_python), NSAPI plugins, and ISAPI plugins which allow long-running application processes handling more than one request and hosted within the Web server. FastCGI, SCGI, and AJP which allow long-running application processes handling more than one request to be hosted externally; i.e., separately from the Web server. 
Each application process listens on a socket; the Web server handles an HTTP request and sends it via another protocol (FastCGI, SCGI or AJP) to the socket only for dynamic content, while static content is usually handled directly by the Web server. This approach needs fewer application processes and so consumes less memory than the Web server extension approach. And unlike converting an application program to a Web server extension, FastCGI, SCGI, and AJP application programs remain independent of the Web server. Jakarta EE runs Jakarta Servlet applications in a Web container to serve dynamic content, and optionally static content, which replaces the overhead of creating and destroying processes with the much lower overhead of creating and destroying threads. It also gives the programmer access to the Java SE library on which the version of Jakarta EE in use is based. Standalone HTTP servers and interfaces such as the Web Server Gateway Interface (WSGI), a modern approach defined for the Python programming language. WSGI is defined by PEP 3333 and implemented via various methods such as mod_wsgi (an Apache module), the Gunicorn web server (typically placed between Nginx and Python frameworks or scripts such as Django), uWSGI, and others. The optimal configuration for any Web application depends on application-specific details, amount of traffic, and complexity of the transaction; these trade-offs need to be analyzed to determine the best implementation for a given task and time budget. Web frameworks offer an alternative to using CGI scripts to interact with user agents.
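For comparison with a per-request CGI script, a minimal WSGI application might look like the following sketch (the handler name, port, and served content are illustrative assumptions; only standard-library modules are used):

```python
# Minimal sketch of a WSGI application (PEP 3333). In practice it would be served by
# Gunicorn, uWSGI, mod_wsgi, etc.; the reference server below is used only for illustration.
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # environ carries CGI-style variables (PATH_INFO, QUERY_STRING, ...);
    # start_response sends the status line and the response headers.
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    path = environ.get("PATH_INFO", "/")
    return [f"Hello from WSGI, path was {path}\n".encode("utf-8")]

if __name__ == "__main__":
    # Serve a single request on a local port, purely for demonstration.
    with make_server("127.0.0.1", 8000, application) as server:
        server.handle_request()
```

Unlike a classic CGI program, the application object stays resident in a long-running process, so no new operating-system process is created for each request.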
Technology
Internet
null
7227
https://en.wikipedia.org/wiki/Comet%20Hale%E2%80%93Bopp
Comet Hale–Bopp
Comet Hale–Bopp (formally designated C/1995 O1) is a long-period comet that was one of the most widely observed of the 20th century and one of the brightest seen for many decades. Alan Hale and Thomas Bopp discovered Comet Hale–Bopp separately on July 23, 1995, before it became visible to the naked eye. It is difficult to predict the maximum brightness of new comets with any degree of certainty, but Hale–Bopp exceeded most predictions when it passed perihelion on April 1, 1997, reaching about magnitude −1.8. It was visible to the naked eye for a record 18 months, due to its massive nucleus size. This is twice as long as the Great Comet of 1811, the previous record holder. Accordingly, Hale–Bopp was dubbed the Great Comet of 1997. Discovery The comet was discovered independently on July 23, 1995, by two observers, Alan Hale and Thomas Bopp, both in the United States. Hale had spent many hundreds of hours searching for comets without success, and was tracking known comets from his driveway in New Mexico when he chanced upon Hale–Bopp just after midnight. The comet had an apparent magnitude of 10.5 and lay near the globular cluster M70 in the constellation of Sagittarius. Hale first established that there was no other deep-sky object near M70, and then consulted a directory of known comets, finding that none were known to be in this area of the sky. Once he had established that the object was moving relative to the background stars, he emailed the Central Bureau for Astronomical Telegrams, the clearing house for astronomical discoveries. Bopp did not own a telescope. He was out with friends near Stanfield, Arizona, observing star clusters and galaxies when he chanced across the comet while at the eyepiece of his friend's telescope. He realized he might have spotted something new when, like Hale, he checked his star maps to determine if any other deep-sky objects were known to be near M70, and found none. He alerted the Central Bureau for Astronomical Telegrams through a Western Union telegram. Brian G. Marsden, who had run the bureau since 1968, laughed, "Nobody sends telegrams anymore. I mean, by the time that telegram got here, Alan Hale had already e-mailed us three times with updated coordinates." The following morning, it was confirmed that this was a new comet, and it was given the designation C/1995 O1. The discovery was announced in International Astronomical Union circular 6187. Early observation Hale–Bopp's orbital position was calculated as 7.2 astronomical units (au) from the Sun, placing it between Jupiter and Saturn and by far the greatest distance from Earth at which a comet had been discovered by amateurs. Most comets at this distance are extremely faint, and show no discernible activity, but Hale–Bopp already had an observable coma. A precovery image taken at the UK Schmidt Telescope in 1993 was found to show the then-unnoticed comet some 13 au from the Sun, a distance at which most comets are essentially unobservable. (Halley's Comet was more than 100 times fainter at the same distance from the Sun.) Analysis indicated later that its comet nucleus was 60±20 kilometres in diameter, approximately six times the size of Halley's Comet. Its great distance and surprising activity indicated that comet Hale–Bopp might become very bright when it reached perihelion in 1997. However, comet scientists were wary – comets can be extremely unpredictable, and many have large outbursts at great distances only to diminish in brightness later. 
Comet Kohoutek in 1973 had been touted as a "comet of the century" and turned out to be unspectacular. Perihelion Hale–Bopp became visible to the naked eye in May 1996, and although its rate of brightening slowed considerably during the latter half of that year, scientists were still cautiously optimistic that it would become very bright. It was too closely aligned with the Sun to be observable during December 1996, but when it reappeared in January 1997 it was already bright enough to be seen by anyone who looked for it, even from large cities with light-polluted skies. The Internet was a growing phenomenon at the time, and numerous websites that tracked the comet's progress and provided daily images from around the world became extremely popular. The Internet played a large role in encouraging the unprecedented public interest in comet Hale–Bopp. As the comet approached the Sun, it continued to brighten, shining at 2nd magnitude in February, and showing a growing pair of tails, the blue gas tail pointing straight away from the Sun and the yellowish dust tail curving away along its orbit. On March 9, 1997, a solar eclipse in China, Mongolia and eastern Siberia allowed observers there to see the comet in the daytime. Hale–Bopp had its closest approach to Earth on March 22, 1997, at a distance of 1.315 au. As it passed perihelion on April 1, 1997, the comet developed into a spectacular sight. It shone brighter than any star in the sky except Sirius, and its dust tail stretched 40–45 degrees across the sky. The comet was visible well before the sky got fully dark each night, and while many great comets are very close to the Sun as they pass perihelion, comet Hale–Bopp was visible all night to Northern Hemisphere observers. After perihelion After its perihelion passage, the comet moved into the southern celestial hemisphere. The comet was much less impressive to southern hemisphere observers than it had been in the northern hemisphere, but southerners could see the comet gradually fade from view during the second half of 1997. The last naked-eye observations were reported in December 1997, which meant that the comet had remained visible without aid for 569 days, or about 18 and a half months. The previous record had been set by the Great Comet of 1811, which was visible to the naked eye for about 9 months. The comet continued to fade as it receded, but was still tracked by astronomers. In October 2007, 10 years after the perihelion and at a distance of 25.7 au from the Sun, the comet was still active, as indicated by the detection of the CO-driven coma. Herschel Space Observatory images taken in 2010 suggest comet Hale–Bopp is covered in a fresh frost layer. Hale–Bopp was again detected in December 2010 when it was 30.7 au away from the Sun, and in 2012, at 33.2 au from the Sun. The James Webb Space Telescope observed Hale–Bopp in 2022, when it was 46.2 au from the Sun. Orbital changes The comet likely made its previous perihelion approximately 4,200 years ago, in roughly 2215 BC. The estimated closest approach to Earth was 1.4 au, and it may have been observed in ancient Egypt during the 6th-dynasty reign of the Pharaoh Pepi II (reigned 2247 – c. 2216 BC). Pepi's pyramid at Saqqara contains a text referring to an "nhh-star" as a companion of the pharaoh in the heavens, where "nhh" is the hieroglyph for long hair. 
Hale–Bopp may have had a near collision with Jupiter in 2215 BC, which probably caused a dramatic change in its orbit, and 2215 BC may have been its first passage through the inner Solar System from the Oort cloud. The comet's current orbit is almost perpendicular to the plane of the ecliptic, so further close approaches to planets will be rare. However, in April 1996 the comet passed within 0.77 au of Jupiter, close enough for its orbit to be measurably affected by the planet's gravity. The comet's orbit was shortened considerably to a period of roughly 2,399 years, and it will next return to the inner Solar System around the year 4385. Its greatest distance from the Sun (aphelion) will be about 354 au, reduced from about 525 au. The estimated probability of Hale–Bopp's striking Earth in future passages through the inner Solar System is remote, about 2.5×10⁻⁹ per orbit. However, given that the comet nucleus is around 60 km in diameter, the consequences of such an impact would be apocalyptic. Weissman conservatively estimates the diameter at 35 km; an estimated density of 0.6 g/cm³ then gives a cometary mass of 1.3×10¹⁹ g. At a probable impact velocity of 52.5 km/s, the impact energy can be calculated as 1.9×10³² ergs, or 4.4×10⁹ megatons, about 44 times the estimated energy of the K-T impact event. Over many orbits, the cumulative effect of gravitational perturbations on comets with high orbital inclinations and small perihelion distances is generally to reduce the perihelion distance to very small values. Hale–Bopp has about a 15% chance of eventually becoming a sungrazing comet through this process. If such is the case, it could undergo huge mass loss, or break up into smaller pieces like the Kreutz sungrazers. It would also be extremely bright, due to a combination of closeness to the Sun and nucleus size, potentially exceeding the brightness of Halley's Comet in 837 AD. Scientific results Due to the massive size of its nucleus, Comet Hale–Bopp was observed intensively by astronomers during its perihelion passage, and several important advances in cometary science resulted from these observations. The dust production rate of the comet was very high (up to 2.0×10⁶ kg/s), which may have made the inner coma optically thick. Based on the properties of the dust grains (high temperature, high albedo, and a strong 10 μm silicate emission feature), the astronomers concluded that the dust grains are smaller than observed in any other comet. Hale–Bopp showed the highest linear polarization ever detected for any comet. Such polarization is the result of solar radiation being scattered by the dust particles in the coma of the comet and depends on the nature of the grains. It further confirms that the dust grains in the coma of comet Hale–Bopp were smaller than inferred in any other comet. Sodium tail One of the most remarkable discoveries was that the comet had a third type of tail. In addition to the well-known gas and dust tails, Hale–Bopp also exhibited a faint sodium tail, only visible with powerful instruments fitted with dedicated filters. Sodium emission had been previously observed in other comets, but had not been shown to come from a tail. Hale–Bopp's sodium tail consisted of neutral atoms (not ions), and extended to some 50 million kilometres in length. The source of the sodium appeared to be the inner coma, although not necessarily the nucleus. 
There are several possible mechanisms for generating a source of sodium atoms, including collisions between dust grains surrounding the nucleus, and "sputtering" of sodium from dust grains by ultraviolet light. It is not yet established which mechanism is primarily responsible for creating Hale–Bopp's sodium tail, and the narrow and diffuse components of the tail may have different origins. While the comet's dust tail roughly followed the path of the comet's orbit and the gas tail pointed almost directly away from the Sun, the sodium tail appeared to lie between the two. This implies that the sodium atoms are driven away from the comet's head by radiation pressure. Deuterium abundance The abundance of deuterium in comet Hale–Bopp in the form of heavy water was found to be about twice that of Earth's oceans. If Hale–Bopp's deuterium abundance is typical of all comets, this implies that although cometary impacts are thought to be the source of a significant amount of the water on Earth, they cannot be the only source. Deuterium was also detected in many other hydrogen compounds in the comet. The ratio of deuterium to normal hydrogen was found to vary from compound to compound, which astronomers believe suggests that cometary ices were formed in interstellar clouds, rather than in the solar nebula. Theoretical modelling of ice formation in interstellar clouds suggests that comet Hale–Bopp formed at temperatures of around 25–45 kelvin. Organics Spectroscopic observations of Hale–Bopp revealed the presence of many organic chemicals, several of which had never been detected in comets before. These complex molecules may exist within the cometary nucleus, or might be synthesised by reactions in the comet. Detection of argon Hale–Bopp was the first comet where the noble gas argon was detected. Noble gases are chemically inert and vary from low to high volatility. Since different noble elements have different sublimation temperatures, and don't interact with other elements, they can be used for probing the temperature histories of the cometary ices. Krypton has a sublimation temperature of 16–20 K and was found to be depleted more than 25 times relative to the solar abundance, while argon with its higher sublimation temperature was enriched relative to the solar abundance. Together these observations indicate that the interior of Hale–Bopp has always been colder than 35–40 K, but has at some point been warmer than 20 K. Unless the solar nebula was much colder and richer in argon than generally believed, this suggests that the comet formed beyond Neptune in the Kuiper belt region and then migrated outward to the Oort cloud. Rotation Comet Hale–Bopp's activity and outgassing were not spread uniformly over its nucleus, but instead came from several specific jets. Observations of the material streaming away from these jets allowed astronomers to measure the rotation period of the comet, which was found to be about 11 hours 46 minutes. Binary nucleus question In 1997 a paper was published that hypothesised the existence of a binary nucleus to fully explain the observed pattern of comet Hale–Bopp's dust emission observed in October 1995. The paper was based on theoretical analysis, and did not claim an observational detection of the proposed satellite nucleus, but estimated that it would have a diameter of about 30 km, with the main nucleus being about 70 km across, and would orbit in about three days at a distance of about 180 km. 
This analysis was confirmed by observations in 1996 using Wide-Field Planetary Camera 2 of the Hubble Space Telescope, which had taken images of the comet that revealed the satellite. Although observations using adaptive optics in late 1997 and early 1998 showed a double peak in the brightness of the nucleus, controversy still exists over whether such observations can only be explained by a binary nucleus. The discovery of the satellite was not confirmed by other observations. Also, while comets have been observed to break up before, no case had been found of a stable binary nucleus until the subsequent discovery of . UFO claims In November 1996, amateur astronomer Chuck Shramek of Houston, Texas, took a CCD image of the comet which showed a fuzzy, slightly elongated object nearby. His computer sky-viewing program did not identify the star, so Shramek called the Art Bell radio program Coast to Coast AM to announce that he had discovered a "Saturn-like object" following Hale–Bopp. UFO enthusiasts, such as remote viewing proponent and Emory University political science professor Courtney Brown, soon concluded that there was an alien spacecraft following the comet. Several astronomers, including Alan Hale, stated that the object was simply the 8.5-magnitude star SAO141894. They noted that the star did not appear in Shramek's computer program because the user preferences were set incorrectly. Art Bell claimed to have obtained an image of the object from an anonymous astrophysicist who was about to confirm its discovery. However, astronomers Olivier Hainaut and David Tholen of the University of Hawaii stated that the alleged photo was an altered copy of one of their own comet images. Thirty-nine members of the Heaven's Gate cult died in a mass suicide in March 1997, with the intention of teleporting to a spaceship which they believed was flying behind the comet. Nancy Lieder, who claims to receive messages from aliens through an implant in her brain, stated that Hale–Bopp was a fiction designed to distract the population from the coming arrival of "Nibiru" or "Planet X", a giant planet whose close passage would disrupt the Earth's rotation, causing global cataclysm. Her original date for the apocalypse was May 2003, which passed without incident, but various conspiracy websites continued to predict the coming of Nibiru, many of which tied it to the 2012 phenomenon. Lieder's and others' claims about the planet Nibiru have been repeatedly debunked by scientists. Legacy Its lengthy period of visibility and extensive coverage in the media meant that Hale–Bopp was probably the most-observed comet in history, making a far greater impact on the general public than the return of Halley's Comet in 1986, and certainly seen by a greater number of people than witnessed any of Halley's previous appearances. For instance, 69% of Americans had seen Hale–Bopp by April 9, 1997. Hale–Bopp was a record-breaking comet: the farthest comet from the Sun discovered by amateurs, with the largest well-measured cometary nucleus known after 95P/Chiron, and it was visible to the naked eye for twice as long as the previous record-holder. It was also brighter than magnitude 0 for eight weeks, longer than any other recorded comet. Carolyn Shoemaker and her husband Gene, co-discoverers of comet Shoemaker–Levy 9, were involved in a car crash after photographing the comet. 
Gene died in the crash and his ashes were sent to the Moon aboard NASA's Lunar Prospector mission along with an image of Hale–Bopp, "the last comet that the Shoemakers observed together". Composer Dmitry Kayukin created the music album “Comet 97” based on his memories of observing Comet Hale–Bopp.
Physical sciences
Notable comets
Astronomy
7251
https://en.wikipedia.org/wiki/Central%20nervous%20system
Central nervous system
The central nervous system (CNS) is the part of the nervous system consisting primarily of the brain and spinal cord. The CNS is so named because the brain integrates the received information and coordinates and influences the activity of all parts of the bodies of bilaterally symmetric and triploblastic animals—that is, all multicellular animals except sponges and diploblasts. It is a structure composed of nervous tissue positioned along the rostral (nose end) to caudal (tail end) axis of the body and may have an enlarged section at the rostral end which is a brain. Only arthropods, cephalopods and vertebrates have a true brain, though precursor structures exist in onychophorans, gastropods and lancelets. The rest of this article exclusively discusses the vertebrate central nervous system, which is radically distinct from that of all other animals. Overview In vertebrates, the brain and spinal cord are both enclosed in the meninges. The meninges provide a barrier to chemicals dissolved in the blood, protecting the brain from most neurotoxins commonly found in food. Within the meninges, the brain and spinal cord are bathed in cerebrospinal fluid, which replaces the body fluid found outside the cells of all bilateral animals. In vertebrates, the CNS is contained within the dorsal body cavity, with the brain housed in the cranial cavity within the skull. The spinal cord is housed in the spinal canal within the vertebrae. Within the CNS, the interneuronal space is filled with a large amount of supporting non-nervous cells called neuroglia or glia, from the Greek for "glue". In vertebrates, the CNS also includes the retina and the optic nerve (cranial nerve II), as well as the olfactory nerves and olfactory epithelium. As parts of the CNS, they connect directly to brain neurons without intermediate ganglia. The olfactory epithelium is the only central nervous tissue outside the meninges in direct contact with the environment, which opens up a pathway for therapeutic agents that cannot otherwise cross the meninges barrier. Structure The CNS consists of two major structures: the brain and spinal cord. The brain is encased in the skull, and protected by the cranium. The spinal cord is continuous with the brain and lies caudally to the brain. It is protected by the vertebrae. The spinal cord extends from the base of the skull, continues through or starts immediately below the foramen magnum, and terminates roughly level with the first or second lumbar vertebra, occupying the upper sections of the vertebral canal. White and gray matter Microscopically, there are differences between the neurons and tissue of the CNS and the peripheral nervous system (PNS). The CNS is composed of white and gray matter. This can also be seen macroscopically on brain tissue. The white matter consists of axons and oligodendrocytes, while the gray matter consists of neurons and unmyelinated fibers. Both tissues include a number of glial cells (although the white matter contains more), which are often referred to as supporting cells of the CNS. Different forms of glial cells have different functions, some acting almost as scaffolding for neuroblasts to climb during neurogenesis, such as Bergmann glia, while others such as microglia are a specialized form of macrophage, involved in the immune system of the brain as well as the clearance of various metabolites from the brain tissue. 
Astrocytes may be involved both in the clearance of metabolites and in the transport of fuel and various beneficial substances to neurons from the capillaries of the brain. Upon CNS injury, astrocytes will proliferate, causing gliosis, a form of neuronal scar tissue that lacks functional neurons. The brain (cerebrum as well as midbrain and hindbrain) consists of a cortex, composed of neuron bodies constituting gray matter, while internally there is more white matter that forms tracts and commissures. Apart from cortical gray matter there is also subcortical gray matter making up a large number of different nuclei. Spinal cord From and to the spinal cord are projections of the peripheral nervous system in the form of spinal nerves (sometimes segmental nerves). The nerves connect the spinal cord to skin, joints, muscles etc. and allow for the transmission of efferent motor as well as afferent sensory signals and stimuli. This allows for voluntary and involuntary motions of muscles, as well as the perception of senses. In all, 31 pairs of spinal nerves project from the spinal cord, some forming plexuses as they branch out, such as the brachial plexus, the sacral plexus, etc. Each spinal nerve will carry both sensory and motor signals, but the nerves synapse at different regions of the spinal cord, either from the periphery to sensory relay neurons that relay the information to the CNS or from the CNS to motor neurons, which relay the information out. The spinal cord relays information up to the brain through ascending spinal tracts to the thalamus and ultimately to the cortex. Cranial nerves Apart from the spinal nerves, there are also peripheral nerves of the PNS that synapse through intermediaries or ganglia directly on the CNS. These 12 pairs of nerves exist in the head and neck region and are called cranial nerves. Cranial nerves carry information between the CNS and the face, as well as to certain muscles (such as the trapezius muscle, which is innervated by accessory nerves as well as certain cervical spinal nerves). Two pairs of cranial nerves, the olfactory nerves and the optic nerves, are often considered structures of the CNS. This is because they do not synapse first on peripheral ganglia, but directly on CNS neurons. The olfactory epithelium is significant in that it consists of CNS tissue in direct contact with the environment, allowing for administration of certain pharmaceuticals and drugs. Brain At the anterior end of the spinal cord lies the brain. The brain makes up the largest portion of the CNS. It is often the main structure referred to when speaking of the nervous system in general. The brain is the major functional unit of the CNS. While the spinal cord has certain processing ability such as that of spinal locomotion and can process reflexes, the brain is the major processing unit of the nervous system. Brainstem The brainstem consists of the medulla, the pons and the midbrain. The medulla can be regarded as an extension of the spinal cord, as the two share a similar organization and similar functional properties. The tracts passing from the spinal cord to the brain pass through here. Regulatory functions of the medulla nuclei include control of blood pressure and breathing. Other nuclei are involved in balance, taste, hearing, and control of muscles of the face and neck. The next structure rostral to the medulla is the pons, which lies on the ventral anterior side of the brainstem. 
Nuclei in the pons include pontine nuclei which work with the cerebellum and transmit information between the cerebellum and the cerebral cortex. In the dorsal posterior pons lie nuclei that are involved in the functions of breathing, sleep, and taste. The midbrain, or mesencephalon, is situated above and rostral to the pons. It includes nuclei linking distinct parts of the motor system, including the cerebellum, the basal ganglia and both cerebral hemispheres, among others. Additionally, parts of the visual and auditory systems are located in the midbrain, including control of automatic eye movements. The brainstem at large provides entry and exit to the brain for a number of pathways for motor and autonomic control of the face and neck through cranial nerves. Autonomic control of the organs is mediated by the tenth cranial nerve. A large portion of the brainstem is involved in such autonomic control of the body. Such functions may engage the heart, blood vessels, and pupils, among others. The brainstem also holds the reticular formation, a group of nuclei involved in both arousal and alertness. Cerebellum The cerebellum lies behind the pons. The cerebellum is divided by several fissures into lobes. Its functions include the control of posture and the coordination of movements of parts of the body, including the eyes and head, as well as the limbs. Further, it is involved in motion that has been learned and perfected through practice, and it will adapt to newly learned movements. Despite its previous classification as a motor structure, the cerebellum also displays connections to areas of the cerebral cortex involved in language and cognition. These connections have been shown by the use of medical imaging techniques, such as functional MRI and positron emission tomography. The body of the cerebellum holds more neurons than any other structure of the brain, including that of the larger cerebrum, but is also more extensively understood than other structures of the brain, as it includes fewer distinct types of neurons. It handles and processes sensory stimuli and motor information, as well as balance information from the vestibular organ. Diencephalon The two structures of the diencephalon worth noting are the thalamus and the hypothalamus. The thalamus acts as a linkage between incoming pathways from the peripheral nervous system, as well as the optic nerve (though it does not receive input from the olfactory nerve), and the cerebral hemispheres. Previously it was considered only a "relay station", but it is engaged in the sorting of information that will reach the cerebral hemispheres (neocortex). Apart from its function of sorting information from the periphery, the thalamus also connects the cerebellum and basal ganglia with the cerebrum. In common with the aforementioned reticular system, the thalamus is involved in wakefulness and consciousness, such as through the SCN. The hypothalamus engages in functions of a number of primitive emotions or feelings such as hunger, thirst and maternal bonding. This is regulated partly through control of secretion of hormones from the pituitary gland. Additionally, the hypothalamus plays a role in motivation and many other behaviors of the individual. Cerebrum The cerebrum, made up of the cerebral hemispheres, forms the largest visible portion of the human brain. Various structures combine to form the cerebral hemispheres, among others: the cortex, basal ganglia, amygdala and hippocampus. 
The hemispheres together control a large portion of the functions of the human brain, such as emotion, memory and perception, as well as motor functions. Apart from this, the cerebral hemispheres are responsible for the cognitive capabilities of the brain. The hemispheres are connected by the corpus callosum as well as several additional commissures. One of the most important parts of the cerebral hemispheres is the cortex, made up of gray matter covering the surface of the brain. Functionally, the cerebral cortex is involved in planning and carrying out everyday tasks. The hippocampus is involved in the storage of memories, the amygdala plays a role in the perception and communication of emotion, while the basal ganglia play a major role in the coordination of voluntary movement. Difference from the peripheral nervous system The PNS consists of neurons, axons, and Schwann cells. Oligodendrocytes and Schwann cells have similar functions in the CNS and PNS, respectively. Both act to add myelin sheaths to the axons, which act as a form of insulation allowing for better and faster propagation of electrical signals along the nerves. Axons in the CNS are often very short, barely a few millimeters, and do not need the same degree of insulation as peripheral nerves. Some peripheral nerves can be over 1 meter in length, such as the nerves to the big toe. To ensure signals move at sufficient speed, myelination is needed. The way in which Schwann cells and oligodendrocytes myelinate nerves differs. A Schwann cell usually myelinates a single axon, completely surrounding it. Sometimes, a Schwann cell may myelinate many axons, especially in areas of short axons. Oligodendrocytes usually myelinate several axons. They do this by sending out thin projections of their cell membrane, which envelop and enclose the axon. Development During early development of the vertebrate embryo, a longitudinal groove on the neural plate gradually deepens and the ridges on either side of the groove (the neural folds) become elevated, and ultimately meet, transforming the groove into a closed tube called the neural tube. The formation of the neural tube is called neurulation. At this stage, the walls of the neural tube contain proliferating neural stem cells in a region called the ventricular zone. The neural stem cells, principally radial glial cells, multiply and generate neurons through the process of neurogenesis, forming the rudiment of the CNS. The neural tube gives rise to both the brain and the spinal cord. The anterior (or 'rostral') portion of the neural tube initially differentiates into three brain vesicles (pockets): the prosencephalon at the front, the mesencephalon, and, between the mesencephalon and the spinal cord, the rhombencephalon. By six weeks in the human embryo, the prosencephalon then divides further into the telencephalon and diencephalon, and the rhombencephalon divides into the metencephalon and myelencephalon. The spinal cord is derived from the posterior or 'caudal' portion of the neural tube. As a vertebrate grows, these vesicles differentiate further still. The telencephalon differentiates into, among other things, the striatum, the hippocampus and the neocortex, and its cavity becomes the first and second ventricles (lateral ventricles). Diencephalon elaborations include the subthalamus, hypothalamus, thalamus and epithalamus, and its cavity forms the third ventricle. 
The tectum, pretectum, cerebral peduncle and other structures develop out of the mesencephalon, and its cavity grows into the mesencephalic duct (cerebral aqueduct). The metencephalon becomes, among other things, the pons and the cerebellum, the myelencephalon forms the medulla oblongata, and their cavities develop into the fourth ventricle. Evolution Planaria Planarians, members of the phylum Platyhelminthes (flatworms), have the simplest, clearly defined delineation of a nervous system into a CNS and a PNS. Their primitive brains, consisting of two fused anterior ganglia, and longitudinal nerve cords form the CNS. Like vertebrates, they have a distinct CNS and PNS. The nerves projecting laterally from the CNS form their PNS. A molecular study found that more than 95% of the 116 genes involved in the nervous system of planarians, which include genes related to the CNS, also exist in humans. Arthropoda In arthropods, the ventral nerve cord, the subesophageal ganglia and the supraesophageal ganglia are usually seen as making up the CNS. Arthropoda, unlike vertebrates, have inhibitory motor neurons due to their small size. Chordata The CNS of chordates differs from that of other animals in being placed dorsally in the body, above the gut and notochord/spine. The basic pattern of the CNS is highly conserved throughout the different species of vertebrates and during evolution. The major trend that can be observed is towards a progressive telencephalisation: the telencephalon of reptiles is only an appendix to the large olfactory bulb, while in mammals it makes up most of the volume of the CNS. In the human brain, the telencephalon covers most of the diencephalon and the entire mesencephalon. Indeed, the allometric study of brain size among different species shows a striking continuity from rats to whales, and allows us to complete the knowledge about the evolution of the CNS obtained through cranial endocasts. Mammals Mammals – which appear in the fossil record after the first fishes, amphibians, and reptiles – are the only vertebrates to possess the evolutionarily recent, outermost part of the cerebral cortex (the main part of the telencephalon excluding the olfactory bulb) known as the neocortex. This part of the brain is, in mammals, involved in higher thinking and further processing of all senses in the sensory cortices (processing of smell was previously done only by the olfactory bulb, while that of the other senses was done only by the tectum). The neocortex of monotremes (the duck-billed platypus and several species of spiny anteaters) and of marsupials (such as kangaroos, koalas, opossums, wombats, and Tasmanian devils) lacks the convolutions – gyri and sulci – found in the neocortex of most placental mammals (eutherians). Within placental mammals, the size and complexity of the neocortex increased over time. The area of the neocortex of mice is only about 1/100 that of monkeys, and that of monkeys is only about 1/10 that of humans. In addition, rats lack convolutions in their neocortex (possibly also because rats are small mammals), whereas cats have a moderate degree of convolutions, and humans have quite extensive convolutions. Extreme convolution of the neocortex is found in dolphins, possibly related to their complex echolocation. 
Clinical significance Diseases There are many CNS diseases and conditions, including infections such as encephalitis and poliomyelitis, early-onset neurological disorders including ADHD and autism, seizure disorders such as epilepsy, headache disorders such as migraine, late-onset neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, and essential tremor, autoimmune and inflammatory diseases such as multiple sclerosis and acute disseminated encephalomyelitis, genetic disorders such as Krabbe's disease and Huntington's disease, as well as amyotrophic lateral sclerosis and adrenoleukodystrophy. Lastly, cancers of the central nervous system can cause severe illness and, when malignant, can have very high mortality rates. Symptoms depend on the size, growth rate, location and malignancy of tumors and can include alterations in motor control, hearing loss, headaches and changes in cognitive ability and autonomic functioning. Specialty professional organizations recommend that neurological imaging of the brain be done only to answer a specific clinical question and not as routine screening.
Biology and health sciences
Nervous system
null
7252
https://en.wikipedia.org/wiki/Cell%20cycle
Cell cycle
The cell cycle, or cell-division cycle, is the sequential series of events that take place in a cell that causes it to divide into two daughter cells. These events include the growth of the cell, duplication of its DNA (DNA replication) and some of its organelles, and subsequently the partitioning of its cytoplasm, chromosomes and other components into two daughter cells in a process called cell division. In eukaryotic cells (having a cell nucleus) including animal, plant, fungal, and protist cells, the cell cycle is divided into two main stages: interphase, and the M phase that includes mitosis and cytokinesis. During interphase, the cell grows, accumulating nutrients needed for mitosis, and replicates its DNA and some of its organelles. During the M phase, the replicated chromosomes, organelles, and cytoplasm separate into two new daughter cells. To ensure the proper replication of cellular components and division, there are control mechanisms known as cell cycle checkpoints after each of the key steps of the cycle that determine if the cell can progress to the next phase. In cells without nuclei the prokaryotes, bacteria and archaea, the cell cycle is divided into the B, C, and D periods. The B period extends from the end of cell division to the beginning of DNA replication. DNA replication occurs during the C period. The D period refers to the stage between the end of DNA replication and the splitting of the bacterial cell into two daughter cells. In single-celled organisms, a single cell-division cycle is how the organism reproduces to ensure its survival. In multicellular organisms such as plants and animals, a series of cell-division cycles is how the organism develops from a single-celled fertilized egg into a mature organism, and is also the process by which hair, skin, blood cells, and some internal organs are regenerated and healed (with possible exception of nerves; see nerve damage). After cell division, each of the daughter cells begin the interphase of a new cell cycle. Although the various stages of interphase are not usually morphologically distinguishable, each phase of the cell cycle has a distinct set of specialized biochemical processes that prepare the cell for initiation of the cell division. Phases The eukaryotic cell cycle consists of four distinct phases: G1 phase, S phase (synthesis), G2 phase (collectively known as interphase) and M phase (mitosis and cytokinesis). M phase is itself composed of two tightly coupled processes: mitosis, in which the cell's nucleus divides, and cytokinesis, in which the cell's cytoplasm and cell membrane divides forming two daughter cells. Activation of each phase is dependent on the proper progression and completion of the previous one. Cells that have temporarily or reversibly stopped dividing are said to have entered a state of quiescence known as G0 phase or resting phase. G0 phase (quiescence) G0 is a resting phase where the cell has left the cycle and has stopped dividing. The cell cycle starts with this phase. Non-proliferative (non-dividing) cells in multicellular eukaryotes generally enter the quiescent G0 state from G1 and may remain quiescent for long periods of time, possibly indefinitely (as is often the case for neurons). This is very common for cells that are fully differentiated. Some cells enter the G0 phase semi-permanently and are considered post-mitotic, e.g., some liver, kidney, and stomach cells. Many cells do not enter G0 and continue to divide throughout an organism's life, e.g., epithelial cells. 
The word "post-mitotic" is sometimes used to refer to both quiescent and senescent cells. Cellular senescence occurs in response to DNA damage and external stress and usually constitutes an arrest in G1. Cellular senescence may make a cell's progeny nonviable; it is often a biochemical alternative to the self-destruction of such a damaged cell by apoptosis. Interphase Interphase represents the phase between two successive M phases. Interphase is a series of changes that takes place in a newly formed cell and its nucleus before it becomes capable of division again. It is also called preparatory phase or intermitosis. Typically interphase lasts for at least 91% of the total time required for the cell cycle. Interphase proceeds in three stages, G1, S, and G2, followed by the cycle of mitosis and cytokinesis. The cell's nuclear DNA contents are duplicated during S phase. G1 phase (First growth phase or Post mitotic gap phase) The first phase within interphase, from the end of the previous M phase until the beginning of DNA synthesis, is called G1 (G indicating gap). It is also called the growth phase. During this phase, the biosynthetic activities of the cell, which are considerably slowed down during M phase, resume at a high rate. The duration of G1 is highly variable, even among different cells of the same species. In this phase, the cell increases its supply of proteins, increases the number of organelles (such as mitochondria, ribosomes), and grows in size. In G1 phase, a cell has three options. To continue cell cycle and enter S phase Stop cell cycle and enter G0 phase for undergoing differentiation. Become arrested in G1 phase hence it may enter G0 phase or re-enter cell cycle. The deciding point is called check point (Restriction point). This check point is called the restriction point or START and is regulated by G1/S cyclins, which cause transition from G1 to S phase. Passage through the G1 check point commits the cell to division. S phase (DNA replication) The ensuing S phase starts when DNA synthesis commences; when it is complete, all of the chromosomes have been replicated, i.e., each chromosome consists of two sister chromatids. Thus, during this phase, the amount of DNA in the cell has doubled, though the ploidy and number of chromosomes are unchanged. Rates of RNA transcription and protein synthesis are very low during this phase. An exception to this is histone production, most of which occurs during the S phase. G2 phase (growth) G2 phase occurs after DNA replication and is a period of protein synthesis and rapid cell growth to prepare the cell for mitosis. During this phase microtubules begin to reorganize to form a spindle (preprophase). Before proceeding to mitotic phase, cells must be checked at the G2 checkpoint for any DNA damage within the chromosomes. The G2 checkpoint is mainly regulated by the tumor protein p53. If the DNA is damaged, p53 will either repair the DNA or trigger the apoptosis of the cell. If p53 is dysfunctional or mutated, cells with damaged DNA may continue through the cell cycle, leading to the development of cancer. Mitotic phase (chromosome separation) The relatively brief M phase consists of nuclear division (karyokinesis) and division of cytoplasm (cytokinesis). M phase is complex and highly regulated. The sequence of events is divided into phases, corresponding to the completion of one set of activities and the start of the next. 
These phases are sequentially known as prophase, prometaphase, metaphase, anaphase, and telophase. Mitosis is the process by which a eukaryotic cell separates the chromosomes in its cell nucleus into two identical sets in two nuclei. During the process of mitosis the pairs of chromosomes condense and attach to microtubules that pull the sister chromatids to opposite sides of the cell. Mitosis occurs exclusively in eukaryotic cells, but occurs in different ways in different species. For example, animal cells undergo an "open" mitosis, where the nuclear envelope breaks down before the chromosomes separate, while fungi such as Aspergillus nidulans and Saccharomyces cerevisiae (yeast) undergo a "closed" mitosis, where chromosomes divide within an intact cell nucleus.

Cytokinesis phase (separation of all cell components)

Mitosis is immediately followed by cytokinesis, which divides the nuclei, cytoplasm, organelles and cell membrane into two cells containing roughly equal shares of these cellular components. Cytokinesis occurs differently in plant and animal cells. While the cell membrane forms a groove that gradually deepens to separate the cytoplasm in animal cells, a cell plate is formed to separate it in plant cells. The position of the cell plate is determined by the position of a preprophase band of microtubules and actin filaments. Mitosis and cytokinesis together define the division of the parent cell into two daughter cells, genetically identical to each other and to their parent cell. This accounts for approximately 10% of the cell cycle. Because cytokinesis usually occurs in conjunction with mitosis, "mitosis" is often used interchangeably with "M phase". However, there are many cells where mitosis and cytokinesis occur separately, forming single cells with multiple nuclei in a process called endoreplication. This occurs most notably among the fungi and slime molds, but is found in various groups. Even in animals, cytokinesis and mitosis may occur independently, for instance during certain stages of fruit fly embryonic development. Errors in mitosis can result in cell death through apoptosis or cause mutations that may lead to cancer.

Regulation of eukaryotic cell cycle

Regulation of the cell cycle involves processes crucial to the survival of a cell, including the detection and repair of genetic damage as well as the prevention of uncontrolled cell division. The molecular events that control the cell cycle are ordered and directional; that is, each process occurs in a sequential fashion and it is impossible to "reverse" the cycle.

Role of cyclins and CDKs

Two key classes of regulatory molecules, cyclins and cyclin-dependent kinases (CDKs), determine a cell's progress through the cell cycle. Leland H. Hartwell, R. Timothy Hunt, and Paul M. Nurse won the 2001 Nobel Prize in Physiology or Medicine for their discovery of these central molecules. Many of the genes encoding cyclins and CDKs are conserved among all eukaryotes, but in general, more complex organisms have more elaborate cell cycle control systems that incorporate more individual components. Many of the relevant genes were first identified by studying yeast, especially Saccharomyces cerevisiae; genetic nomenclature in yeast dubs many of these genes cdc (for "cell division cycle") followed by an identifying number, e.g. cdc25 or cdc20. Cyclins form the regulatory subunits and CDKs the catalytic subunits of an activated heterodimer; cyclins have no catalytic activity and CDKs are inactive in the absence of a partner cyclin.
When activated by a bound cyclin, CDKs perform a common biochemical reaction called phosphorylation that activates or inactivates target proteins to orchestrate coordinated entry into the next phase of the cell cycle. Different cyclin-CDK combinations determine the downstream proteins targeted. CDKs are constitutively expressed in cells whereas cyclins are synthesised at specific stages of the cell cycle, in response to various molecular signals.

General mechanism of cyclin-CDK interaction

Upon receiving a pro-mitotic extracellular signal, G1 cyclin-CDK complexes become active to prepare the cell for S phase, promoting the expression of transcription factors that in turn promote the expression of S cyclins and of enzymes required for DNA replication. The G1 cyclin-CDK complexes also promote the degradation of molecules that function as S phase inhibitors by targeting them for ubiquitination. Once a protein has been ubiquitinated, it is targeted for proteolytic degradation by the proteasome. Results from a study of E2F transcriptional dynamics at the single-cell level argue that the role of G1 cyclin-CDK activities, in particular cyclin D-CDK4/6, is to tune the timing rather than the commitment of cell cycle entry. Active S cyclin-CDK complexes phosphorylate proteins that make up the pre-replication complexes assembled during G1 phase on DNA replication origins. The phosphorylation serves two purposes: to activate each already-assembled pre-replication complex, and to prevent new complexes from forming. This ensures that every portion of the cell's genome will be replicated once and only once. The reason for prevention of gaps in replication is fairly clear, because daughter cells that are missing all or part of crucial genes will die. However, for reasons related to gene copy number effects, possession of extra copies of certain genes is also deleterious to the daughter cells. Mitotic cyclin-CDK complexes, which are synthesized but inactivated during S and G2 phases, promote the initiation of mitosis by stimulating downstream proteins involved in chromosome condensation and mitotic spindle assembly. A critical complex activated during this process is a ubiquitin ligase known as the anaphase-promoting complex (APC), which promotes degradation of structural proteins associated with the chromosomal kinetochore. APC also targets the mitotic cyclins for degradation, ensuring that telophase and cytokinesis can proceed.

Specific action of cyclin-CDK complexes

Cyclin D is the first cyclin produced in cells that enter the cell cycle, in response to extracellular signals (e.g. growth factors). Cyclin D levels stay low in resting cells that are not proliferating. Additionally, CDK4/6 and CDK2 are inactive because CDK4/6 are bound by INK4 family members (e.g., p16), limiting kinase activity, while CDK2 complexes are inhibited by the CIP/KIP proteins such as p21 and p27. When it is time for a cell to enter the cell cycle, triggered by a mitogenic stimulus, levels of cyclin D increase. In response to this trigger, cyclin D binds to existing CDK4/6, forming the active cyclin D-CDK4/6 complex. Cyclin D-CDK4/6 complexes in turn mono-phosphorylate the retinoblastoma susceptibility protein (Rb). The un-phosphorylated Rb tumour suppressor functions in inducing cell cycle exit and maintaining G0 arrest (senescence). In the last few decades, a model has been widely accepted whereby pRB proteins are inactivated by cyclin D-Cdk4/6-mediated phosphorylation.
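Before turning to Rb in more detail, the core oscillation described under the general mechanism above (mitotic cyclins accumulate, activate their CDK partner, and are then destroyed via APC-mediated degradation) can be caricatured as a relaxation oscillator. The sketch below is a deliberately simplified toy with invented rates and thresholds, not a published model of any specific cyclin-CDK pair.

```python
# A toy relaxation-oscillator sketch of the mitotic cyclin / CDK / APC loop
# described above: cyclin is made at a constant rate, high cyclin-CDK
# activity switches the APC on, and the APC then degrades cyclin until
# activity falls again. All rates and thresholds are invented, illustrative
# numbers; this is a cartoon of the logic, not a published model.
def simulate(steps=2000, dt=0.01):
    cyclin, apc_on = 0.0, False
    synthesis, degradation = 1.0, 8.0       # arbitrary units per unit time
    on_threshold, off_threshold = 1.0, 0.2  # hysteresis in the APC switch
    peaks = 0
    for _ in range(steps):
        # cyclin-CDK activity is taken as proportional to the cyclin level
        if not apc_on and cyclin > on_threshold:
            apc_on = True          # "mitosis": APC activated
            peaks += 1
        elif apc_on and cyclin < off_threshold:
            apc_on = False         # "mitotic exit": APC switches off
        rate = synthesis - (degradation * cyclin if apc_on else 0.0)
        cyclin += rate * dt
    return peaks

print("cyclin peaks (cycles) in the simulated window:", simulate())
```

The gap between the on and off thresholds is what keeps the toy system cycling instead of settling at a steady level, mirroring the idea that cyclin synthesis and APC-driven destruction never balance at a fixed point.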
Rb has 14+ potential phosphorylation sites. Cyclin D-Cdk4/6 progressively phosphorylates Rb to a hyperphosphorylated state, which triggers dissociation of pRB–E2F complexes, thereby inducing G1/S cell cycle gene expression and progression into S phase. Observations from one study have shown that Rb is present as three types of isoform: (1) un-phosphorylated Rb in the G0 state; (2) mono-phosphorylated Rb, also referred to as "hypo-phosphorylated" or "partially phosphorylated" Rb, in the early G1 state; and (3) inactive hyper-phosphorylated Rb in the late G1 state. In early G1 cells, mono-phosphorylated Rb exists as 14 different isoforms, each of which has a distinct E2F binding affinity. Rb has been found to associate with hundreds of different proteins, and the idea that different mono-phosphorylated Rb isoforms have different protein partners was very appealing. A later report confirmed that mono-phosphorylation controls Rb's association with other proteins and generates functionally distinct forms of Rb. All of the different mono-phosphorylated Rb isoforms inhibit the E2F transcriptional program and are able to arrest cells in G1 phase. Different mono-phosphorylated forms of Rb have distinct transcriptional outputs that extend beyond E2F regulation. In general, the binding of pRb to E2F inhibits the E2F target gene expression of certain G1/S and S transition genes, including E-type cyclins. The partial phosphorylation of Rb relieves this Rb-mediated suppression of E2F target gene expression and begins the expression of cyclin E. The molecular mechanism that causes the cell to switch to cyclin E activation is currently not known, but as cyclin E levels rise, the active cyclin E-CDK2 complex is formed, and Rb becomes inactivated by hyper-phosphorylation. Hyperphosphorylated Rb is completely dissociated from E2F, enabling the expression of the wide range of E2F target genes required for driving cells into S phase. It has been identified that cyclin D-Cdk4/6 binds to a C-terminal alpha-helix region of Rb that is recognized only by cyclin D and not by the other cyclins, cyclin E, A and B. This observation, based on structural analysis of Rb phosphorylation, supports the idea that Rb is phosphorylated to different levels by multiple cyclin-Cdk complexes. It also makes feasible the current model of a simultaneous, switch-like inactivation of all mono-phosphorylated Rb isoforms through a single type of Rb hyper-phosphorylation mechanism. In addition, mutational analysis of the cyclin D-Cdk4/6-specific Rb C-terminal helix shows that disruptions of cyclin D-Cdk4/6 binding to Rb prevent Rb phosphorylation, arrest cells in G1, and bolster Rb's function as a tumor suppressor. This cyclin-Cdk-driven transition mechanism governs the cell's commitment to the cell cycle and thereby allows cell proliferation. Cancerous cell growth is often accompanied by deregulation of cyclin D-Cdk4/6 activity. The hyperphosphorylated Rb dissociates from the E2F/DP1/Rb complex (which was bound to the E2F-responsive genes, effectively "blocking" them from transcription), activating E2F. Activation of E2F results in transcription of various genes like cyclin E, cyclin A, DNA polymerase, thymidine kinase, etc. Cyclin E thus produced binds to CDK2, forming the cyclin E-CDK2 complex, which pushes the cell from G1 into S phase (the G1/S transition).
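As a small aside on the isoform counting above, and assuming exactly 14 CDK sites (the text says "14+"), the arithmetic below shows why "fourteen isoforms" refers specifically to the mono-phosphorylated forms: the total number of possible phosphorylation patterns is far larger.

```python
# A small counting sketch for the Rb phosphorylation states discussed above,
# assuming exactly 14 CDK phosphorylation sites: there are 14 possible
# mono-phosphorylated isoforms but 2**14 possible on/off patterns overall,
# which is why "14 isoforms" refers specifically to the mono-phosphorylated
# forms generated by cyclin D-Cdk4/6.
from math import comb

sites = 14
print("mono-phosphorylated isoforms:", comb(sites, 1))          # 14
print("all possible phosphorylation patterns:", 2 ** sites)     # 16384
print("fully (hyper-)phosphorylated pattern:", comb(sites, sites))  # 1
```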
Cyclin B-Cdk1 complex activation causes breakdown of the nuclear envelope and initiation of prophase; subsequently, its deactivation causes the cell to exit mitosis. A quantitative study of E2F transcriptional dynamics at the single-cell level, using engineered fluorescent reporter cells, provided a quantitative framework for understanding the control logic of cell cycle entry, challenging the canonical textbook model. Genes that regulate the amplitude of E2F accumulation, such as Myc, determine commitment to the cell cycle and S phase entry. G1 cyclin-CDK activities are not the driver of cell cycle entry. Instead, they primarily tune the timing of the E2F increase, thereby modulating the pace of cell cycle progression.

Inhibitors

Endogenous

Two families of genes, the cip/kip (CDK interacting protein/Kinase inhibitory protein) family and the INK4a/ARF (Inhibitor of Kinase 4/Alternative Reading Frame) family, prevent the progression of the cell cycle. Because these genes are instrumental in prevention of tumor formation, they are known as tumor suppressors. The cip/kip family includes the genes p21, p27 and p57. They halt the cell cycle in G1 phase by binding to and inactivating cyclin-CDK complexes. p21 is activated by p53 (which, in turn, is triggered by DNA damage, e.g. due to radiation). p27 is activated by Transforming Growth Factor β (TGF β), a growth inhibitor. The INK4a/ARF family includes p16INK4a, which binds to CDK4 and arrests the cell cycle in G1 phase, and p14ARF, which prevents p53 degradation.

Synthetic

Synthetic inhibitors of Cdc25 could also be useful for arresting the cell cycle and therefore be useful as antineoplastic and anticancer agents. Many human cancers possess hyper-activated Cdk4/6 activities. Given the observations of cyclin D-Cdk4/6 functions, inhibition of Cdk4/6 should prevent a malignant tumor from proliferating. Consequently, scientists have tried to develop synthetic Cdk4/6 inhibitors, as Cdk4/6 has been characterized as a therapeutic target for anti-tumor effectiveness. Three Cdk4/6 inhibitors – palbociclib, ribociclib, and abemaciclib – have received FDA approval for clinical use to treat advanced-stage or metastatic, hormone-receptor-positive (HR-positive, HR+), HER2-negative (HER2-) breast cancer. For example, palbociclib is an orally active CDK4/6 inhibitor which has demonstrated improved outcomes for ER-positive/HER2-negative advanced breast cancer. The main side effect is neutropenia, which can be managed by dose reduction. Cdk4/6-targeted therapy will only treat cancer types where Rb is expressed. Cancer cells with loss of Rb have primary resistance to Cdk4/6 inhibitors.

Transcriptional regulatory network

Current evidence suggests that a semi-autonomous transcriptional network acts in concert with the CDK-cyclin machinery to regulate the cell cycle. Several gene expression studies in Saccharomyces cerevisiae have identified 800–1200 genes that change expression over the course of the cell cycle. They are transcribed at high levels at specific points in the cell cycle, and remain at lower levels throughout the rest of the cycle. While the set of identified genes differs between studies due to the computational methods and criteria used to identify them, each study indicates that a large portion of yeast genes are temporally regulated. Many periodically expressed genes are driven by transcription factors that are also periodically expressed.
One screen of single-gene knockouts identified 48 transcription factors (about 20% of all non-essential transcription factors) that show cell cycle progression defects. Genome-wide studies using high-throughput technologies have identified the transcription factors that bind to the promoters of yeast genes, and correlating these findings with temporal expression patterns has allowed the identification of transcription factors that drive phase-specific gene expression. The expression profiles of these transcription factors are driven by the transcription factors that peak in the prior phase, and computational models have shown that a CDK-autonomous network of these transcription factors is sufficient to produce steady-state oscillations in gene expression. Experimental evidence also suggests that gene expression can oscillate with the period seen in dividing wild-type cells independently of the CDK machinery. Orlando et al. used microarrays to measure the expression of a set of 1,271 genes that they identified as periodic in both wild type cells and cells lacking all S-phase and mitotic cyclins (clb1,2,3,4,5,6). Of the 1,271 genes assayed, 882 continued to be expressed in the cyclin-deficient cells at the same time as in the wild type cells, despite the fact that the cyclin-deficient cells arrest at the border between G1 and S phase. However, 833 of the genes assayed changed behavior between the wild type and mutant cells, indicating that these genes are likely directly or indirectly regulated by the CDK-cyclin machinery. Some genes that continued to be expressed on time in the mutant cells were also expressed at different levels in the mutant and wild type cells. These findings suggest that while the transcriptional network may oscillate independently of the CDK-cyclin oscillator, they are coupled in a manner that requires both to ensure the proper timing of cell cycle events. Other work indicates that phosphorylation, a post-translational modification, of cell cycle transcription factors by Cdk1 may alter the localization or activity of the transcription factors in order to tightly control timing of target genes. While oscillatory transcription plays a key role in the progression of the yeast cell cycle, the CDK-cyclin machinery operates independently in the early embryonic cell cycle. Before the midblastula transition, zygotic transcription does not occur and all needed proteins, such as the B-type cyclins, are translated from maternally loaded mRNA.

DNA replication and DNA replication origin activity

Analyses of synchronized cultures of Saccharomyces cerevisiae under conditions that prevent DNA replication initiation without delaying cell cycle progression showed that origin licensing decreases the expression of genes with origins near their 3' ends, revealing that downstream origins can regulate the expression of upstream genes. This confirms previous predictions from mathematical modeling of a global causal coordination between DNA replication origin activity and mRNA expression, and shows that mathematical modeling of DNA microarray data can be used to correctly predict previously unknown biological modes of regulation.

Checkpoints

Cell cycle checkpoints are used by the cell to monitor and regulate the progress of the cell cycle. Checkpoints prevent cell cycle progression at specific points, allowing verification of necessary phase processes and repair of DNA damage. The cell cannot proceed to the next phase until checkpoint requirements have been met.
Checkpoints typically consist of a network of regulatory proteins that monitor and dictate the progression of the cell through the different stages of the cell cycle. It is estimated that in normal human cells about 1% of single-strand DNA damages are converted to about 50 endogenous DNA double-strand breaks per cell per cell cycle. Although such double-strand breaks are usually repaired with high fidelity, errors in their repair are considered to contribute significantly to the rate of cancer in humans. There are several checkpoints to ensure that damaged or incomplete DNA is not passed on to daughter cells. Three main checkpoints exist: the G1/S checkpoint, the G2/M checkpoint and the metaphase (mitotic) checkpoint. Another checkpoint is the G0 checkpoint, in which cells are checked for maturity; cells that are not yet ready do not pass this checkpoint and are held back from dividing. The G1/S transition is a rate-limiting step in the cell cycle and is also known as the restriction point. This is where the cell checks whether it has enough raw materials to fully replicate its DNA (nucleotide bases, DNA synthase, chromatin, etc.). An unhealthy or malnourished cell will get stuck at this checkpoint. The G2/M checkpoint is where the cell ensures that it has enough cytoplasm and phospholipids for two daughter cells. Sometimes more importantly, it checks whether it is the right time to replicate. There are some situations where many cells need to replicate simultaneously (for example, a growing embryo should have a symmetric cell distribution until it reaches the mid-blastula transition). This is done by controlling the G2/M checkpoint. The metaphase checkpoint is a fairly minor checkpoint, in that once a cell is in metaphase, it has committed to undergoing mitosis. However, that is not to say it is unimportant. In this checkpoint, the cell checks to ensure that the spindle has formed and that all of the chromosomes are aligned at the spindle equator before anaphase begins. While these are the three "main" checkpoints, not all cells have to pass through each of these checkpoints in this order to replicate. Many types of cancer are caused by mutations that allow the cells to speed through the various checkpoints or even skip them altogether, going from S phase to M phase and back to S phase almost consecutively. Because these cells have lost their checkpoints, any DNA mutations that may have occurred are disregarded and passed on to the daughter cells. This is one reason why cancer cells tend to accumulate mutations exponentially. Aside from cancer cells, many fully differentiated cell types no longer replicate, so they leave the cell cycle and stay in G0 until their death, removing the need for cellular checkpoints. An alternative model of the cell cycle response to DNA damage has also been proposed, known as the postreplication checkpoint. Checkpoint regulation plays an important role in an organism's development. In sexual reproduction, when the sperm binds to the egg at fertilization, it releases signalling factors that notify the egg that it has been fertilized. Among other things, this induces the now fertilized oocyte to return from its previously dormant, G0, state back into the cell cycle and on to mitotic replication and division. p53 plays an important role in triggering the control mechanisms at both the G1/S and G2/M checkpoints. In addition to p53, checkpoint regulators are being heavily researched for their roles in cancer growth and proliferation.
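A minimal sketch of the gating logic described in this section follows: each phase transition is guarded by a condition, and a cell that fails the check simply does not advance. The field names and the conditions themselves are simplified assumptions for illustration, not a model of the underlying signalling.

```python
# A minimal sketch of checkpoint gating: each transition is guarded by a
# predicate, and a cell that fails a check stays in its current phase.
# The fields and conditions are simplified, illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Cell:
    phase: str = "G1"
    dna_damage: bool = False
    replication_complete: bool = False
    enough_resources: bool = True
    spindle_attached: bool = False

CHECKPOINTS = {
    # current phase -> (next phase, condition that must hold to pass)
    "G1": ("S",  lambda c: c.enough_resources and not c.dna_damage),  # G1/S (restriction point)
    "S":  ("G2", lambda c: c.replication_complete),
    "G2": ("M",  lambda c: not c.dna_damage),                         # G2/M (p53-dependent arrest)
    "M":  ("G1", lambda c: c.spindle_attached),                       # metaphase (spindle) checkpoint
}

def advance(cell: Cell) -> str:
    nxt, ok = CHECKPOINTS[cell.phase]
    if ok(cell):
        cell.phase = nxt
    return cell.phase  # an unchanged phase means the cell is arrested

cell = Cell(dna_damage=True)
print(advance(cell))  # stays in "G1": arrested at the restriction point
cell.dna_damage = False
print(advance(cell))  # proceeds to "S"
```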
Fluorescence imaging of the cell cycle

Pioneering work by Atsushi Miyawaki and coworkers developed the fluorescent ubiquitination-based cell cycle indicator (FUCCI), which enables fluorescence imaging of the cell cycle. Originally, a green fluorescent protein, mAG, was fused to hGem(1/110) and an orange fluorescent protein (mKO2) was fused to hCdt1(30/120). Note that these fusions are fragments that contain a nuclear localization signal and ubiquitination sites for degradation, but are not functional proteins. The green fluorescent protein is made during the S, G2, or M phase and degraded during the G0 or G1 phase, while the orange fluorescent protein is made during the G0 or G1 phase and destroyed during the S, G2, or M phase. A far-red and near-infrared FUCCI was later developed using a cyanobacteria-derived fluorescent protein (smURFP) and a bacteriophytochrome-derived fluorescent protein. Several modifications have been made to the original FUCCI system to improve its usability in several in vitro systems and model organisms. These advancements have increased the sensitivity and accuracy of cell cycle phase detection, enabling more precise assessments of cellular proliferation.

Role in tumor formation

A dysregulation of the cell cycle components may lead to tumor formation. As mentioned above, when some genes like the cell cycle inhibitors, RB, p53 etc. mutate, they may cause the cell to multiply uncontrollably, forming a tumor. Although the duration of the cell cycle in tumor cells is equal to or longer than that of the normal cell cycle, the proportion of cells that are in active cell division (versus quiescent cells in G0 phase) in tumors is much higher than that in normal tissue. Thus there is a net increase in cell number, as the number of cells that die by apoptosis or senescence remains the same. The cells which are actively undergoing the cell cycle are targeted in cancer therapy, as the DNA is relatively exposed during cell division and hence susceptible to damage by drugs or radiation. This fact is made use of in cancer treatment; by a process known as debulking, a significant mass of the tumor is removed, which pushes a significant number of the remaining tumor cells from G0 to G1 phase (due to increased availability of nutrients, oxygen, growth factors etc.). Radiation or chemotherapy following the debulking procedure kills these cells, which have newly entered the cell cycle. The fastest-cycling mammalian cells in culture, crypt cells in the intestinal epithelium, have a cycle time as short as 9 to 10 hours. Stem cells in resting mouse skin may have a cycle time of more than 200 hours. Most of this difference is due to the varying length of G1, the most variable phase of the cycle. M and S do not vary much. In general, cells are most radiosensitive in late M and G2 phases and most resistant in late S phase. For cells with a longer cell cycle time and a significantly long G1 phase, there is a second peak of resistance late in G1. The pattern of resistance and sensitivity correlates with the level of sulfhydryl compounds in the cell. Sulfhydryls are natural substances that protect cells from radiation damage and tend to be at their highest levels in S phase and at their lowest near mitosis. Homologous recombination (HR) is an accurate process for repairing DNA double-strand breaks. HR is nearly absent in G1 phase, is most active in S phase, and declines in G2/M.
Non-homologous end joining, a less accurate and more mutagenic process for repairing double-strand breaks, is active throughout the cell cycle.

Cell cycle evolution

Evolution of the genome

The cell cycle must duplicate all cellular constituents and equally partition them into two daughter cells. Many constituents, such as proteins and ribosomes, are produced continuously throughout the cell cycle (except during M phase). However, the chromosomes and other associated elements, like MTOCs, are duplicated just once during the cell cycle. A central component of the cell cycle is its ability to coordinate the continuous and periodic duplications of different cellular elements, which evolved with the formation of the genome. The pre-cellular environment contained functional and self-replicating RNAs. All RNA concentrations depended on the concentrations of other RNAs that might be helping or hindering the gathering of resources. In this environment, growth was simply the continuous production of RNAs. These pre-cellular structures would have had to contend with parasitic RNAs, issues of inheritance, and copy-number control of specific RNAs. Partitioning "genomic" RNA from "functional" RNA helped solve these problems. The fusion of multiple RNAs into a genome gave a template from which functional RNAs were cleaved. Now, parasitic RNAs would have to incorporate themselves into the genome, a much greater barrier, in order to survive. Controlling the copy number of genomic RNA also allowed RNA concentration to be determined through synthesis rates and RNA half-lives, instead of competition. Separating the duplication of genomic RNAs from the generation of functional RNAs allowed for much greater duplication fidelity of genomic RNAs without compromising the production of functional RNAs. Finally, the replacement of genomic RNA with DNA, which is a more stable molecule, allowed for larger genomes. The transition from self-catalyzed enzyme synthesis to genome-directed enzyme synthesis was a critical step in cell evolution, and had lasting implications for the cell cycle, which must regulate functional synthesis and genomic duplication in very different ways.

Cyclin-dependent kinase and cyclin evolution

Cell-cycle progression is controlled by the oscillating concentrations of different cyclins and the resulting molecular interactions from the various cyclin-dependent kinases (CDKs). In yeast, just one CDK (Cdc28 in S. cerevisiae and Cdc2 in S. pombe) controls the cell cycle. However, in animals, whole families of CDKs have evolved. Cdk1 controls entry to mitosis and Cdk2, Cdk4, and Cdk6 regulate entry into S phase. Despite the evolution of the CDK family in animals, these proteins have related or redundant functions. For example, cdk2 cdk4 cdk6 triple knockout mouse cells can still progress through the basic cell cycle. cdk1 knockouts, however, are lethal, which suggests that an ancestral CDK1-type kinase ultimately controls the cell cycle. Arabidopsis thaliana has a Cdk1 homolog called CDKA;1; however, cdka;1 A. thaliana mutants are still viable, running counter to the opisthokont pattern of CDK1-type kinases as essential regulators controlling the cell cycle. Plants also have a unique group of B-type CDKs, whose functions may range from development-specific functions to major players in mitotic regulation.

G1/S checkpoint evolution

The G1/S checkpoint is the point at which the cell commits to division through the cell cycle. Complex regulatory networks lead to the G1/S transition decision.
Across opisthokonts, there are both highly diverged protein sequences and strikingly similar network topologies. Entry into S phase in both yeast and animals is controlled by the levels of two opposing regulators. The networks regulating these transcription factors are double-negative feedback loops and positive feedback loops in both yeast and animals. Additional regulation of the regulatory network for the G1/S checkpoint in yeast and animals includes the phosphorylation/de-phosphorylation of CDK-cyclin complexes. The sum of these regulatory networks creates a hysteretic and bistable scheme, despite the specific proteins being highly diverged. In yeast, Whi5 must be suppressed by Cln3 phosphorylation for SBF to be expressed, while in animals Rb must be suppressed by the Cdk4/6-cyclin D complex for E2F to be expressed. Both Rb and Whi5 inhibit transcription through the recruitment of histone deacetylase proteins to promoters. Both proteins additionally have multiple CDK phosphorylation sites through which they are inhibited. However, these proteins share no sequence similarity. Studies in A. thaliana extend our knowledge of the G1/S transition across eukaryotes as a whole. Plants also share a number of conserved network features with opisthokonts, and many plant regulators have direct animal homologs. For example, plants also need to suppress Rb for E2F translation in the network. These conserved elements of the plant and animal cell cycles may be ancestral in eukaryotes. While yeast share a conserved network topology with plants and animals, the highly diverged nature of yeast regulators suggests possible rapid evolution along the yeast lineage.
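The "hysteretic and bistable scheme" mentioned above can be illustrated with a deliberately generic toy model: a single activator (standing in for E2F or SBF output) that is produced at a low, mitogen-dependent basal rate and strongly self-amplified by positive feedback. All parameters below are invented for illustration; this is not the actual Rb–E2F or Whi5–SBF network.

```python
# A toy one-variable model of switch-like G1/S behaviour: an activator E is
# weakly produced at a basal, mitogen-dependent rate and strongly
# self-amplified through positive feedback. Parameters are invented for
# illustration; this is a generic bistable switch, not the real network.
def steady_state(e0, basal, steps=20000, dt=0.01):
    k_fb, K, n, decay = 1.0, 0.5, 4, 1.0
    e = e0
    for _ in range(steps):
        production = basal + k_fb * e**n / (K**n + e**n)
        e += (production - decay * e) * dt
    return e

low_mitogen = 0.05
print(round(steady_state(0.0, low_mitogen), 2))  # starts OFF, stays OFF (~0.05)
print(round(steady_state(1.0, low_mitogen), 2))  # starts ON, stays ON (~0.99)
# The same basal input supports two stable outcomes, i.e. the switch is
# bistable: once past the threshold, the ON state is self-sustaining.
```

Because the same basal input supports both a low and a high stable state, a transient input that pushes the activator past the unstable threshold produces a lasting, switch-like commitment, which is the qualitative behaviour attributed here to the G1/S networks of both yeast and animals.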
Commodore 64
The Commodore 64, also known as the C64, is an 8-bit home computer introduced in January 1982 by Commodore International (first shown at the Consumer Electronics Show, January 7–10, 1982, in Las Vegas). It has been listed in the Guinness World Records as the highest-selling single computer model of all time, with independent estimates placing the number sold between 12.5 and 17 million units. Volume production started in early 1982, with machines reaching the market in August at a list price of US$595. Preceded by the VIC-20 and Commodore PET, the C64 took its name from its 64 kilobytes of RAM. With support for multicolor sprites and a custom chip for waveform generation, the C64 could create superior visuals and audio compared to systems without such custom hardware. The C64 dominated the low-end computer market for most of the later years of the 1980s, with the exceptions of the UK, France and Japan (in Japan it lasted only about six months). For a substantial period (1983–1986), the C64 had between 30% and 40% share of the US market and two million units sold per year, outselling IBM PC compatibles, the Apple II, and Atari 8-bit computers. Sam Tramiel, a later Atari president and the son of Commodore's founder, said in a 1989 interview, "When I was at Commodore we were building C64s a month for a couple of years." In the UK, the C64 faced competition from the BBC Micro, the ZX Spectrum, and later the Amstrad CPC 464, but it was still the second-most-popular computer there after the ZX Spectrum. The Commodore 64 failed to make any impact in Japan, where the market was dominated by domestic computers such as the NEC PC-8801, Sharp X1, Fujitsu FM-7 and MSX, and in France, where the ZX Spectrum, Thomson MO5 and TO7, and Amstrad CPC 464 dominated the market. Part of the Commodore 64's success was its sale in regular retail stores instead of only electronics or computer hobbyist specialty stores. Commodore produced many of its parts in-house to control costs, including custom integrated circuit chips from MOS Technology. In the United States, it has been compared to the Ford Model T automobile for its role in bringing a new technology to middle-class households via creative and affordable mass-production. Approximately 10,000 commercial software titles have been made for the Commodore 64, including development tools, office productivity applications, and video games. C64 emulators allow anyone with a modern computer, or a compatible video game console, to run these programs today. The C64 is also credited with popularizing the computer demoscene and is still used today by some computer hobbyists. In 2011, 17 years after it was taken off the market, research showed that brand recognition for the model was still at 87%.

History

In January 1981, MOS Technology, Inc., Commodore's integrated circuit design subsidiary, initiated a project to design the graphic and audio chips for a next-generation video game console. Design work for the chips, named MOS Technology VIC-II (Video Integrated Circuit for graphics) and MOS Technology SID (Sound Interface Device for audio), was completed in November 1981. Commodore then began a game console project that would use the new chips—called the Ultimax or the MAX Machine, engineered by Yash Terakura from Commodore Japan. This project was eventually cancelled after just a few machines were manufactured for the Japanese market.
At the same time, Robert "Bob" Russell (system programmer and architect on the VIC-20) and Robert "Bob" Yannes (engineer of the SID) were critical of the current product line-up at Commodore, which was a continuation of the Commodore PET line aimed at business users. With the support of Al Charpentier (engineer of the VIC-II) and Charles Winterble (manager of MOS Technology), they proposed to Commodore CEO Jack Tramiel a low-cost sequel to the VIC-20. Tramiel dictated that the machine should have 64 KB of random-access memory (RAM). Although 64-Kbit dynamic random-access memory (DRAM) chips were still expensive at the time, he knew that 64K DRAM prices were falling and would drop to an acceptable level before full production was reached. The team was able to quickly design the computer because, unlike most other home-computer companies, Commodore had its own semiconductor fab to produce test chips; because the fab was not running at full capacity, development costs were part of existing corporate overhead. The chips were complete by November, by which time Charpentier, Winterble, and Tramiel had decided to proceed with the new computer; the latter set a final deadline for the first weekend of January, to coincide with the 1982 Consumer Electronics Show (CES). The product was code-named the VIC-40 as the successor to the popular VIC-20. The team that constructed it consisted of Yash Terakura, Shiraz Shivji, Bob Russell, Bob Yannes, and David A. Ziembicki. The design, prototypes, and some sample software were finished in time for the show, after the team had worked tirelessly over both the Thanksgiving and Christmas weekends. The machine used the same case, same-sized motherboard, and same Commodore BASIC 2.0 in ROM as the VIC-20. BASIC also served as the user interface shell and was available immediately on startup at the READY prompt. When the product was to be presented, the VIC-40 was renamed the C64. The C64 made an impressive debut at the January 1982 Consumer Electronics Show, as recalled by production engineer David A. Ziembicki: "All we saw at our booth were Atari people with their mouths dropping open, saying, 'How can you do that for $595?'" The answer was vertical integration; due to Commodore's ownership of MOS Technology's semiconductor fabrication facilities, each C64 had an estimated production cost equivalent to roughly $350 in 2022 dollars.

Reception

In July 1983, BYTE magazine, reviewing the C64 at its $595 retail price, stated that at that price "it promises to be one of the hottest contenders" in the low-priced personal computer market. It described the SID as "a true music synthesizer ... the quality of the sound has to be heard to be believed", while criticizing the use of Commodore BASIC 2.0, the floppy disk performance which is "even slower than the Atari 810 drive", and Commodore's quality control. BYTE gave more details, saying the C64 had an "inadequate Commodore BASIC 2.0. An 8K-byte interpreted BASIC", which it attributed to the expectation that "Obviously, Commodore feels that most home users will be running prepackaged software - there is no provision for using graphics (or sound as mentioned above) from within a BASIC program except by means of POKE commands." This was one of very few warnings about C64 BASIC published in any computer magazine. Creative Computing said in December 1984 that the C64 was "the overwhelming winner" in its lower-priced home computer category.
Despite criticizing its "slow disk drive, only two cursor directional keys, zero manufacturer support, non-standard interfaces, etc.", the magazine said that at the C64's price "you can't get another system with the same features: 64K, color, sprite graphics, and barrels of available software". The Tandy Color Computer was the runner-up. The Apple II was the winner in the higher-priced home computer category, the category the Commodore 64 had been in when it was first released at $595.

Market war: 1982–1983

Commodore had a reputation for announcing products that never appeared, so it sought to ship the C64 quickly. Production began in the spring of 1982, and volume shipments began in August. The C64 faced a wide range of competing home computers, but with a lower price and more flexible hardware, it quickly outsold many of its competitors. In the United States, the greatest competitors were the Atari 8-bit computers and the Apple II. The Atari 400 and 800 had been designed to accommodate previously stringent FCC emissions requirements and so were expensive to manufacture. Though similar in specifications, the C64 and Apple II represented differing design philosophies; as an open architecture system, upgrade capability for the Apple II was granted by internal expansion slots, whereas the C64's comparatively closed architecture had only a single external ROM cartridge port for bus expansion. However, the Apple II used its expansion slots for interfacing with common peripherals like disk drives, printers, and modems; the C64 had a variety of ports integrated into its motherboard, which were used for these purposes, usually leaving the cartridge port free. The C64 was not a completely closed system, however; the company had published detailed specifications for most of its models since the Commodore PET and VIC-20 days, and the C64 was no exception. C64 sales were nonetheless relatively slow at first due to a lack of software, reliability issues with early production models (particularly high failure rates of the PLA chip, which used a new production process), and a shortage of 1541 disk drives, which also suffered rather severe reliability issues. During 1983, however, a trickle of software turned into a flood and sales began rapidly climbing. Commodore sold the C64 not only through its network of authorized dealers but also through department stores, discount stores, toy stores and college bookstores. The C64 had a built-in RF modulator and thus could be plugged into any television set. This allowed it (like its predecessor, the VIC-20) to compete directly against video game consoles such as the Atari 2600. Like the Apple IIe, the C64 could also output a composite video signal, avoiding the RF modulator altogether. This allowed the C64 to be plugged into a specialized monitor for a sharper picture. Unlike the IIe, the C64's NTSC output capability also included separate luminance/chroma signal output equivalent to (and electrically compatible with) S-Video, for connection to the Commodore 1702 monitor, providing even better video quality than a composite signal. Aggressive pricing of the C64 is considered to have been a major catalyst in the video game crash of 1983. In January 1983, Commodore offered a $100 rebate in the United States on the purchase of a C64 to anyone who traded in another video game console or computer. To take advantage of this rebate, some mail-order dealers and retailers offered a Timex Sinclair 1000 (TS1000) for a token price with the purchase of a C64.
This deal meant that the consumer could send the TS1000 to Commodore, collect the rebate, and pocket the difference; Timex Corporation departed the computer market within a year. Commodore's tactics soon led to a price war with the major home computer manufacturers. The success of the VIC-20 and C64 contributed significantly to Texas Instruments and other smaller competitors exiting the field. The price war with Texas Instruments was seen as a personal battle for Commodore president Jack Tramiel. Commodore dropped the C64's list price substantially within two months of its release. In June 1983 the company lowered the price again, and some stores sold the computer for still less. At one point, the company was selling as many C64s as all computers sold by the rest of the industry combined. Meanwhile, TI lost money by selling the TI-99/4A at aggressively reduced prices. TI's subsequent demise in the home computer industry in October 1983 was seen as revenge for TI's tactics in the electronic calculator market in the mid-1970s, when Commodore was almost bankrupted by TI. All four machines had similar memory configurations, which were standard in 1982–83: 48 KB for the Apple II+ (upgraded within months of the C64's release to 64 KB with the Apple IIe) and 48 KB for the Atari 800. The Apple II was about twice as expensive as the C64, while the Atari 800 cost $899. One key to the C64's success was Commodore's aggressive marketing tactics, and the company was quick to exploit the relative price/performance divisions between its competitors with a series of television commercials after the C64's launch in late 1982. The company also published detailed documentation to help developers, while Atari initially kept technical information secret. Although many early C64 games were inferior Atari 8-bit ports, by late 1983, the growing installed base caused developers to create new software with better graphics and sound. Rumors spread in late 1983 that Commodore would discontinue the C64, but it was the only non-discontinued, widely available home computer in the US by then, with more than 500,000 sold during the Christmas season; because of production problems in Atari's supply chain, by the start of 1984 "the Commodore 64 largely has [the low-end] market to itself right now", The Washington Post reported.

1984–1987

With sales booming and the early reliability issues with the hardware addressed, software for the C64 began to grow in size and ambition during 1984. The C64 became the primary focus of most US game developers. The two holdouts were Sierra, who largely skipped over the C64 in favor of Apple and PC-compatible machines, and Broderbund, who were heavily invested in educational software and developed primarily around the Apple II. In the North American market, the disk format had become nearly universal while cassette and cartridge-based software all but disappeared. Most US-developed games by this point grew large enough to require multi-loading from disk. At a mid-1984 conference of game developers and experts at Origins Game Fair, Dan Bunten, Sid Meier, and a representative of Avalon Hill said that they were developing games for the C64 first as the most promising market. By 1985, games were an estimated 60 to 70% of Commodore 64 software. Computer Gaming World stated in January 1985 that companies such as Epyx that survived the video game crash did so because they "jumped on the Commodore bandwagon early". Over 35% of SSI's 1986 sales were for the C64, ten points higher than for the Apple II.
The C64 was even more important for other companies, which often found that more than half the sales for a title ported to six platforms came from the C64 version. That year, Computer Gaming World published a survey of ten game publishers that found that they planned to release forty-three Commodore 64 games that year, compared to nineteen for Atari and forty-eight for Apple II, and Alan Miller stated that Accolade developed first for the C64 because "it will sell the most on that system". In Europe, the primary competitors to the C64 were British-built computers: the Sinclair ZX Spectrum, the BBC Micro, and the Amstrad CPC 464. In the UK, the 48K Spectrum had not only been released a few months ahead of the C64's early 1983 debut, but it was also selling for £175, less than half the C64's £399 price. The Spectrum quickly became the market leader and Commodore had an uphill struggle against it in the marketplace. The C64 did, however, go on to rival the Spectrum in popularity in the latter half of the 1980s. Adjusted for population size, the popularity of the Commodore 64 was highest in Finland, at roughly 3 units per 100 inhabitants, where it was subsequently marketed as "the Computer of the Republic". By early 1985 the C64's price had fallen further; with its low production cost, its profitability was still within the industry-standard markup of two to three times. Commodore sold about one million C64s in 1985 and a total of 3.5 million by mid-1986. Although the company reportedly attempted to discontinue the C64 more than once in favor of more expensive computers such as the Commodore 128, demand remained strong. In 1986, Commodore introduced the 64C, a redesigned 64, which Compute! saw as evidence that—contrary to C64 owners' fears that the company would abandon them in favor of the Amiga and 128—"the 64 refuses to die". Its introduction also meant that Commodore raised the price of the C64 for the first time, which the magazine cited as the end of the home-computer price war. Software sales also remained strong; MicroProse, for example, in 1987 cited the Commodore and IBM PC markets as its top priorities.

1988–1994

By 1988, PC compatibles were the largest and fastest-growing home and entertainment software markets, displacing former leader Commodore. Commodore 64 software sales were almost unchanged in the third quarter of 1988 year over year while the overall market grew 42%, but the company was still selling 1 to 1.5 million units worldwide each year of what Computer Chronicles that year called "the Model T of personal computers". Epyx CEO Dave Morse cautioned that "there are no new 64 buyers, or very few. It's a consistent group that's not growing... it's going to shrink as part of our business." One computer gaming executive stated that the Nintendo Entertainment System's enormous popularity (seven million sold in 1988, almost as many as the number of C64s sold in its first five years) had stopped the C64's growth. Trip Hawkins reinforced that sentiment, stating that Nintendo was "the last hurrah of the 8-bit world". SSI exited the Commodore 64 market in 1991, after most competitors. Ultima VI, released in 1991, was the last major C64 game release from a North American developer, and The Simpsons, published by Ultra Games, was the last arcade conversion.
The latter was a somewhat uncommon example of a US-developed arcade port; after the early years of the C64, most arcade conversions were produced by UK developers and converted to NTSC and disk format for the US market, with American developers instead focusing on more computer-centered game genres such as RPGs and simulations. In the European market, disk software was rarer and cassettes were the most common distribution method; this led to a higher prevalence of arcade titles and smaller, lower-budget games that could fit entirely in the computer's memory without requiring multiloads. European programmers also tended to exploit advanced features of the C64's hardware more than their US counterparts. The Commodore 64 Light Fantastic pack was released in time for the 1989 Christmas holiday season. The package included a C64C, a Cheetah Defender 64 light gun and 3D glasses, along with several games compatible with the light gun, including some developed by Mindscape purely for the pack's release. In the United States, demand for 8-bit computers all but ceased as the 1990s began and PC compatibles completely dominated the computer market. However, the C64 continued to be popular in the UK and other European countries. The machine's eventual demise was not due to lack of demand or the cost of the C64 itself (still profitable at a retail price point between £44 and £50), but rather because of the cost of producing the disk drive. In March 1994, at CeBIT in Hanover, Germany, Commodore announced that the C64 would be finally discontinued in 1995, noting that the Commodore 1541 cost more than the C64 itself. However, only one month later, in April 1994, the company filed for bankruptcy. When Commodore went bankrupt, all production of its inventory, including the C64, was discontinued, ending the C64's production run of nearly twelve years. Claims of 17, 22 and 30 million C64 units sold worldwide have been made. Company sales records, however, indicate that the total number was about 12.5 million. Based on that figure, the Commodore 64 was still the third most popular computing platform into the 21st century, until 2017, when the Raspberry Pi family surpassed it. While 360,000 C64s were sold in 1982, about 1.3 million were sold in 1983, followed by a large spike in 1984 when 2.6 million were sold. After that, sales held steady at between 1.3 and 1.6 million a year for the remainder of the decade and then dropped off after 1989. North American sales peaked between 1983 and 1985 and gradually tapered off afterward, while European sales remained quite strong into the early 1990s. Commodore itself reported a robust sales figure of over 800,000 units during the 1991 fiscal year, but sales during the 1993 fiscal year had declined to fewer than 200,000 units. Throughout the early 1990s, European sales had accounted for more than 80% of Commodore's total sales revenue.

C64 family

Commodore MAX

In 1982, Commodore released the MAX Machine in Japan. It was called the Ultimax in the United States and the VC-10 in Germany. The MAX was intended to be a game console with limited computing capability and was based on a cut-down version of the hardware family later used in the C64. The MAX was discontinued months after its introduction because of poor sales in Japan.

Commodore Educator 64

1983 saw Commodore attempt to compete with the Apple II's hold on the US education market with the Educator 64, essentially a C64 and "green" monochrome monitor in a PET case.
Schools preferred the all-in-one metal construction of the PET over the standard C64's separate components, which could be easily damaged, vandalized, or stolen. However, schools did not prefer the Educator 64 to the wide range of software and hardware options the Apple IIe was able to offer, and it was produced in limited quantities.

SX-64

Also in 1983, Commodore released the SX-64, a portable version of the C64. The SX-64 has the distinction of being the first commercial full-color portable computer. While earlier computers using this form factor incorporated only monochrome ("green screen") displays, the base SX-64 unit features a color cathode-ray tube (CRT) and one integrated 1541 floppy disk drive. Even though Commodore claimed in advertisements that it would have dual 1541 drives, the SX-64 was released with only one, and the space for the second became a floppy disk storage slot. Also, unlike most other C64s, the SX-64 does not have a datasette connector, so an external cassette was not an option.

Commodore 128

Two designers at Commodore, Fred Bowen and Bil Herd, were determined to rectify the problems of the Plus/4. They intended that the eventual successors to the C64—the Commodore 128 and 128D computers (1985)—would build upon the C64, avoiding the Plus/4's flaws. The successors had many improvements, such as a BASIC with graphics and sound commands (like almost all home computers not made by Commodore), 80-column display ability, and full CP/M compatibility. The decision to make the Commodore 128 plug-compatible with the C64 was made quietly by Bowen and Herd, software and hardware designers respectively, without the knowledge or approval of management in the post-Jack Tramiel era. The designers were careful not to reveal their decision until the project was too far along to be challenged or changed and could still make the impending Consumer Electronics Show (CES) in Las Vegas. Upon learning that the C128 was designed to be compatible with the C64, Commodore's marketing department independently announced that the C128 would be 100% compatible with the C64, thereby raising the bar for C64 support. In a case of malicious compliance, the 128 design was altered to include a separate "64 mode" using a complete C64 environment to try to ensure total compatibility.

Commodore 64C

The C64's designers intended the computer to have a new, wedge-shaped case within a year of release, but the change did not occur. In 1986, Commodore released the 64C computer, which is functionally identical to the original. The exterior design was remodeled in the sleeker style of the Commodore 128, and new versions of the SID, VIC-II, and I/O chips were deployed. Models with the C64E board had the graphic symbols printed on the top of the keys, instead of the normal location on the front. The sound chip (SID) was changed to the MOS 8580, with the core voltage reduced from 12V to 9V. The most significant changes include different behavior in the filters and in the volume control, which result in some music and sound effects sounding different than intended and in digitally-sampled audio being almost inaudible, respectively (though both of these can mostly be corrected for in software). The 64 KB of RAM went from eight chips to two chips. BASIC and the KERNAL went from two separate chips into one 16 KB ROM chip. The PLA chip and some TTL chips were integrated into a 64-pin DIL chip. The "252535-01" PLA also integrated the color RAM into the same chip.
The smaller physical space made it impossible to put in some internal expansions like a floppy-speeder. In the United States, the 64C was often bundled with the third-party GEOS graphical user interface (GUI)-based operating system, as well as the software needed to access Quantum Link. The 1541 drive received a matching face-lift, resulting in the 1541C. Later, a smaller, sleeker 1541-II model was introduced, along with the 3.5-inch microfloppy 1581. Commodore 64 Games System In 1990, the C64 was repackaged in the form of a game console, called the C64 Games System (C64GS), with most external connectivity removed. A simple modification to the 64C's motherboard was made to allow cartridges to be inserted from above. A modified ROM replaced the BASIC interpreter with a boot screen to inform the user to insert a cartridge. Designed to compete with the Nintendo Entertainment System and Sega's Master System, it suffered from very low sales compared to its rivals. It was another commercial failure for Commodore, and it was never released outside Europe. The Commodore game system lacked a keyboard, so any software that required a keyboard could not be used. Commodore 65 In 1990, an advanced successor to the C64, the Commodore 65 (also known as the "C64DX"), was prototyped, but the project was canceled by Commodore's chairman Irving Gould in 1991. The C65's specifications were impressive for an 8-bit computer, bringing specs comparable to the 16-bit Apple IIGS. For example, it could display 256 colors on the screen, while OCS based Amigas could only display 64 in HalfBrite mode (32 colors and half-bright transformations). Although no specific reason was given for the C65's cancellation, it would have competed in the marketplace with Commodore's lower-end Amigas and the Commodore CDTV. Software In 1982, the C64's graphics and sound capabilities were rivaled only by the Atari 8-bit computers and appeared exceptional when compared with the popular Apple II. The C64 is often credited with starting the demoscene subculture (see Commodore 64 demos). It is still being actively used in the demoscene, especially for music (its SID sound chip even being used in special sound cards for PCs, and the Elektron SidStation synthesizer). Even though other computers quickly caught up with it, the C64 remained a strong competitor to the later video game consoles Nintendo Entertainment System (NES) and Master System, thanks in part to its by-then established software base, especially outside North America, where it comprehensively outsold the NES. Because of lower incomes and the domination of the ZX Spectrum in the UK, almost all British C64 software used cassette tapes. Few cassette C64 programs were released in the US after 1983 and, in North America, the diskette was the principal method of software distribution. The cartridge slot on the C64 was also mainly a feature used in the computer's first two years on the US market and became rapidly obsolete once the price and reliability of 1541 drives improved. A handful of PAL region games used bank switched cartridges to get around the 16 KB memory limit. BASIC As is common for home computers of the early 1980s, the C64 comes with a BASIC interpreter, in ROM. KERNAL, I/O, and tape/disk drive operations are accessed via custom BASIC language commands. The disk drive has its own interfacing microprocessor and ROM (firmware) I/O routines, much like the earlier CBM/PET systems and the Atari 400 and Atari 800. 
This means that no memory space is dedicated to running a disk operating system, as was the case with earlier systems such as the Apple II and TRS-80. Commodore BASIC 2.0 is used instead of the more advanced BASIC 4.0 from the PET series, since C64 users were not expected to need the disk-oriented enhancements of BASIC 4.0. The company did not expect many to buy a disk drive, and using BASIC 2.0 simplified VIC-20 owners' transition to the 64. "The choice of BASIC 2.0 instead of 4.0 was made with some soul-searching, not just at random. The typical user of a C64 is not expected to need the direct disk commands as much as other extensions, and the amount of memory to be committed to BASIC were to be limited. We chose to leave expansion space for color and sound extensions instead of the disk features. As a result, you will have to handle the disk in the more cumbersome manner of the 'old days'." The version of Microsoft BASIC is not very comprehensive and does not include specific commands for sound or graphics manipulation, instead requiring users to use the "PEEK and POKE" commands to access the graphics and sound chip registers directly. To provide extended commands, including graphics and sound, Commodore produced two different cartridge-based extensions to BASIC 2.0: Simons' BASIC and Super Expander 64. Other languages available for the C64 include Pascal, C, Logo, Forth, and FORTRAN. Compilers for BASIC 2.0 such as Petspeed 2 (from Commodore), Blitz (from Jason Ranheim), and Turbo Lightning (from Ocean Software) were produced. Most commercial C64 software was written in assembly language, either cross-developed on a larger computer, or directly on the C64 using a machine code monitor or an assembler. This maximized speed and minimized memory use. Some games, particularly adventures, used high-level scripting languages and sometimes mixed BASIC and machine language. Alternative operating systems Many third-party operating systems have been developed for the C64. As well as the original GEOS, two third-party GEOS-compatible systems have been written: Wheels and GEOS megapatch. Both of these require hardware upgrades to the original C64. Several other operating systems are or have been available, including WiNGS OS, the Unix-like LUnix, operated from a command-line, and the embedded systems OS Contiki, with full GUI. Other less well-known OSes include ACE, Asterix, DOS/65, and GeckOS. C64 OS is commercially available today and under active development. It features a full GUI in character mode, and many other modern features. A version of CP/M was released, but this requires the addition of an external Z80 processor to the expansion bus. Furthermore, the Z80 processor is underclocked to be compatible with the C64's memory bus, so performance is poor compared to other CP/M implementations. C64 CP/M and C128 CP/M both suffer a lack of software; although most commercial CP/M software can run on these systems, software media is incompatible between platforms. The low usage of CP/M on Commodores means that software houses saw no need to invest in mastering versions for the Commodore disk format. The C64 CP/M cartridge is also not compatible with anything except the early 326298 motherboards. Networking software During the 1980s, the Commodore 64 was used to run bulletin board systems using software packages such as Punter BBS, Bizarre 64, Blue Board, C-Net, Color 64, CMBBS, C-Base, DMBBS, Image BBS, EBBS, and The Deadlock Deluxe BBS Construction Kit, often with sysop-made modifications. 
These boards sometimes were used to distribute cracked software. As late as December 2013, there were 25 such Bulletin Board Systems in operation, reachable via the Telnet protocol. There were major commercial online services, such as Compunet (UK), CompuServe (US; later bought by America Online), The Source (US), and Minitel (France), among many others. These services usually required custom software which was often bundled with a modem and included free online time, as they were billed by the minute. Quantum Link (or Q-Link) was a US and Canadian online service for Commodore 64 and 128 personal computers that operated from November 5, 1985, to November 1, 1994. It was operated by Quantum Computer Services of Vienna, Virginia, which in October 1991 changed its name to America Online and continued to operate its AOL service for the IBM PC compatible and Apple Macintosh. Q-Link was a modified version of the PlayNET system, which Control Video Corporation (CVC, later renamed Quantum Computer Services) licensed. Online gaming The first graphical character-based interactive environment is Club Caribe. First released as Habitat in 1988, Club Caribe was introduced by LucasArts for Q-Link customers on their Commodore 64 computers. Users could interact with one another, chat and exchange items. Although the game's open world was very basic, its use of online avatars and the combination of chat and graphics was revolutionary. Online graphics in the late 1980s were severely restricted by the need to support modem data transfer rates as low as 300 bits per second. Habitat's graphics were stored locally on floppy disk, eliminating the need for network transfer. Hardware CPU and memory The C64 uses an 8-bit MOS Technology 6510 microprocessor that is almost identical to the 6502 but has three-state buses, a different pinout, slightly different clock signals and other minor changes for this application. It also has six I/O lines on otherwise-unused legs on the 40-pin IC package. These are used for two purposes in the C64: to bank-switch the machine's read-only memory (ROM) in and out of the processor's address space, and to operate the datasette tape recorder. The C64 has 64 KB of 8-bit-wide dynamic RAM and 0.5 KB of 4-bit-wide static color RAM for text mode, and 38 KB are available to the built-in Commodore BASIC 2.0 on startup. There is 20 KB of ROM, made up of the BASIC interpreter, the KERNAL, and the character ROM. Because the processor can only address 64 KB at a time, the ROM was mapped into memory and only 38 KB of RAM (plus a 4 KB block between the ROMs) were available at startup. Most "breadbin" Commodore 64s used 4164 DRAM with eight chips totaling 64K of system RAM. Later models, featuring Assy 250466 and Assy 250469 motherboards, used 41464 DRAM (64K×4) chips which stored 32 KB per chip (so only two were required). Because 4164 DRAMs are 64K×1, eight chips are needed to make an entire byte; the computer will not function without all of them present. The first chip contains Bit 0 for the memory space, the second chip contains Bit 1, and so forth. The C64 performs a RAM test on power-up and if a RAM error is detected, the amount of free BASIC memory will be lower than the normal 38,911 bytes. If the faulty chip is in lower memory, then an ?OUT OF MEMORY IN 0 error is displayed rather than the usual BASIC startup banner. The C64 uses a complicated memory-banking scheme; the normal power-on default is the BASIC ROM mapped in at $A000–$BFFF, and the screen editor (KERNAL) ROM at $E000–$FFFF. RAM under the system ROMs can be written to, but not read back, without swapping out the ROMs. 
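This write-only behaviour is easy to demonstrate from BASIC. The following two-line sketch is purely illustrative (it is not drawn from Commodore's documentation): it writes a value into the RAM hidden beneath the BASIC ROM at $A000 (decimal 40960) and immediately reads the same address back; because reads are still answered by the ROM, the value printed is the ROM byte rather than the 99 just written.

10 POKE 40960,99 : REM WRITE LANDS IN THE RAM UNDER THE BASIC ROM ($A000)
20 PRINT PEEK(40960) : REM READ IS ANSWERED BY THE ROM, SO THIS DOES NOT PRINT 99

A program that wants to use that hidden RAM must first switch the ROM out of the memory map using the banking register described next.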
Memory location $0001 contains a register with control bits for enabling or disabling the system ROMs and the I/O area at $D000–$DFFF. If the KERNAL ROM is swapped out, BASIC will be removed at the same time. BASIC is not active without the KERNAL; BASIC often calls KERNAL routines, and part of the ROM code for BASIC is in the KERNAL ROM. The character ROM is normally invisible to the CPU. The character ROM may be mapped into $D000–$DFFF, where it is then visible to the CPU. Because doing so necessitates swapping out the I/O registers, interrupts must first be disabled. By removing I/O from the memory map, $D000–$DFFF becomes free RAM. C64 cartridges map into assigned ranges in the CPU's address space. The most common cartridge auto-start scheme requires the signature string "CBM80" at $8004, preceded by vectors (at $8000 and $8002) holding the addresses where program execution begins. A few C64 cartridges released in 1982 use Ultimax mode (or MAX mode), a leftover feature of the unsuccessful MAX Machine. These cartridges map into $E000–$FFFF and displace the KERNAL ROM. If Ultimax mode is used, the programmer will have to provide code for handling system interrupts. The cartridge port has 16 address lines, which grants access to the computer's entire address space if needed. Disk and tape software normally load at the start of BASIC memory ($0801), and use a small BASIC stub (such as 10 SYS(2064)) to jump to the start of the program. Although no Commodore 8-bit machine except the C128 can automatically boot from a floppy disk, some software intentionally overwrites certain BASIC vectors in the process of loading so execution begins automatically (instead of requiring the user to type RUN at the BASIC prompt after loading). About 300 cartridges were released for the C64, primarily during the machine's first years on the market, after which most software outgrew the cartridge limit. Larger software companies, such as Ocean Software, began releasing games on bank-switched cartridges to overcome the cartridge limit during the C64's final years. Commodore did not include a reset button on its computers until the CBM-II line, but third-party cartridges had a reset button. A soft reset can be triggered by jumping to the CPU reset routine at $FCE2 (64738). A few programs use this as an exit feature, although it does not clear memory. The KERNAL ROM underwent three revisions, mainly designed to fix bugs. The initial version is only found on 326298 motherboards (used in the first production models), and cannot detect whether an NTSC or PAL VIC-II is present. The second revision is found on all C64s made from late 1982 through 1985. The final KERNAL ROM revision was introduced on the 250466 motherboard (late breadbin models with 41464 RAM), and is found in all C64Cs. The 6510 CPU is clocked at 1.023 MHz (NTSC) and 0.985 MHz (PAL), lower than some competing systems; the Atari 800, for example, is clocked at 1.79 MHz. Performance can be boosted slightly by disabling the VIC-II's video output via a register write. This feature is often used by tape and disk fast loaders and the KERNAL cassette routine to keep CPU cycle timing constant, unaffected by the VIC-II's sharing of the bus. The restore key is gated directly to the CPU's NMI line, and will generate an NMI if pressed. The KERNAL handler for the NMI checks if run/stop is also pressed; if not, it ignores the NMI and exits. Run/stop-restore is normally a soft reset in BASIC which restores all I/O registers to their power-on default state, but does not clear memory or reset pointers; any BASIC programs in memory will be left untouched. 
Machine-language software usually disables run/stop-restore by remapping the NMI vector to a dummy RTI instruction. The NMI can also be used for an extra interrupt thread by programs, but risks a system lockup or other undesirable side effects if the restore key is accidentally pressed (which activates the NMI thread). Joysticks, mice, and paddles The C64 retained the VIC-20's DE-9 Atari joystick port and added another; any Atari-specification game controller can be used on a C64. The joysticks are read from two CIA registers ($DC00 and $DC01), and most software is designed to use a joystick in port 2 for control rather than port 1; port 1 shares its register with the keyboard scan, and an I/O conflict can result. Although it is possible to use Sega gamepads on a C64, it is not recommended; their slightly different signal can damage the CIA chip. The SID chip's POTX and POTY registers, used to read the paddles, are analog inputs. A handful of games, primarily released early in the computer's life cycle, can use paddles. In 1986, Commodore released two mice for the C64 and C128: the 1350 and 1351. The 1350 is a digital device read from the joystick registers, and can be used with any program supporting joystick input. The 1351 is an analog potentiometer-based mouse, read with the SID's analog-to-digital converter. Graphics The VIC-II graphics chip features a 16-color palette, eight hardware sprites per scanline (enabling up to 112 sprites per PAL screen), scrolling capabilities, and two bitmap graphics modes. Text modes The standard text mode features 40 columns, like most Commodore PET models; the built-in character encoding is not standard ASCII but PETSCII, an extended form of ASCII-1963. The KERNAL ROM sets the VIC-II to a dark-blue background on power-up, with a light-blue border and text. Unlike the PET and VIC-20, the C64 uses double-width text; some early VIC-IIs had poor video quality which resulted in a fuzzy picture. Most screenshots show borders around the screen, a feature of the VIC-II chip. By utilizing interrupts to reset hardware registers with precise timing, it was possible to place graphics within the borders and use the full screen. The C64 has a resolution of 320×200 pixels, consisting of a 40×25 grid of 8×8 character blocks. It has 255 predefined character blocks, known as PETSCII. The character set can be copied into RAM and modified by a programmer. There are two color modes: high resolution, with two colours available per character block (one foreground and one background), and multicolour (four colors per character block: three foreground and one background). In multicolor mode, attributes are shared between pixel pairs, so the effective visible resolution is 160×200 pixels; only 16 KB of memory is visible to the VIC-II video processor at a time. Since the C64 has a bitmapped screen, it is possible (but slow) to draw each pixel individually. Most programmers used techniques developed for earlier, non-bitmapped systems like the Commodore PET and TRS-80. A programmer redraws the character set, and the video processor fills the screen block by block from the top left corner to the bottom right corner. Two types of animation are used: character block animation and hardware sprites. Character block animation The user draws a series of characters of a person walking, possibly two in the middle of the block and another two walking in and out of the block. Then the user sequences them so the character walks into the block and out again. Drawing a series of these gets a person walking across the screen. 
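As a deliberately small illustration of the technique (a sketch written for this description rather than taken from any published game, using the default screen memory at 1024 and colour memory at 55296), the following BASIC program "animates" one screen cell by flipping it between two of the built-in ball characters:

10 S=1024+500 : C=55296+500 : REM SCREEN RAM ($0400) AND COLOR RAM ($D800), SAME CELL
20 POKE C,1 : REM MAKE THE CELL'S COLOUR WHITE SO THE CHARACTER IS VISIBLE
30 POKE S,81 : GOSUB 60 : REM SCREEN CODE 81 = FILLED BALL
40 POKE S,87 : GOSUB 60 : REM SCREEN CODE 87 = OPEN CIRCLE
50 GOTO 30
60 FOR D=1 TO 200 : NEXT : RETURN

A real game would redefine the character set so that the successive "frames" are purpose-drawn images, but the principle of swapping character codes in place is the same.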
By timing the redraw to occur when the television screen blanks out to restart drawing the screen, there will be no flicker. For this to happen, a user programs the VIC-II so that it generates a raster interrupt when video flyback occurs. This technique is used in the Space Invaders arcade game. Horizontal and vertical pixel scrolling of up to one character block is supported by two hardware scroll registers. Depending on timing, hardware scrolling affects the entire screen or selected lines of character blocks. On a non-emulated C64, scrolling is glass-like and blur-free. Hardware sprites A sprite is a character which moves over an area of the screen, draws over the background, and redraws it after it moves. This differs from character block animation, where the user flips character blocks. On the C64, the VIC-II video controller handles most sprite emulation; the programmer defines the sprite and where it goes. The C64 has two types of sprites, corresponding to its two color modes. Hi-res sprites have one color (one background and one foreground), and multi-color sprites have three (one background and three foreground). Color modes can be split or windowed on a single screen. Sprites can be doubled in size vertically and horizontally (up to four times their original area), but the pixel attributes stay the same – the pixels simply become "fatter". There are eight sprites, and all eight can be shown in each horizontal line concurrently. Sprites can move with glassy smoothness in front of, and behind, screen characters and other sprites. The hardware sprites of a C64 can be displayed on a bitmapped (high-resolution) screen or a text-mode screen in conjunction with fast and smooth character block animation. Software-emulated sprites on systems without support for hardware sprites, such as the Apple II and ZX Spectrum, required a bitmapped screen. Sprite-sprite and sprite-background collisions are detected in hardware, and the VIC-II can be programmed to trigger an interrupt accordingly. Sound The SID chip has three channels, each with its own ADSR envelope generator and filter capabilities. Ring modulation makes use of channel three to work with the other two channels. Bob Yannes developed the SID chip and, later, co-founded the synthesizer company Ensoniq. Composers and programmers of game music on the C64 include Rob Hubbard, Jeroen Tel, Tim Follin, David Whittaker, Chris Hülsbeck, Ben Daglish, Martin Galway, Kjell Nordbø and David Dunn. Due to the chip's three channels, chords are often played as arpeggios. It was also possible to continuously update the master volume with sampled data to enable the playback of 4-bit digitized audio. By 2008, it was possible to play four-channel 8-bit audio samples and two SID channels and still use filtering. There are two versions of the SID chip: the 6581 and the 8580. The MOS Technology 6581 was used in the original ("breadbin") C64s, the early versions of the 64C, and the Commodore 128. The 6581 was replaced with the MOS Technology 8580 in 1987. Although the 6581 sound quality is a little crisper, it lacks the 8580's versatility; the 8580 can mix all available waveforms on each channel, but the 6581 can only mix waveforms in a channel in a limited fashion. The main difference between the 6581 and the 8580 is the supply voltage; the 6581 requires 12 V and the 8580 requires 9 V. A modification can be made to use the 6581 in a newer 64C board (which is designed for the 8580). In 1986, the Sound Expander was released for the Commodore 64. 
It was a sound module with a Yamaha YM3526 chip capable of FM synthesis, primarily intended for professional music production. Revisions Commodore made many changes to the C64's hardware, sometimes introducing compatibility issues. The computer's rapid development and Commodore and Jack Tramiel's focus on cost-cutting instead of product testing resulted in several defects which caused developers like Epyx to complain and required many revisions; Charpentier said that "not coming a little close to quality" was one of the company's mistakes. Cost reduction was the reason for most of the revisions. Reducing manufacturing costs was vitally important to Commodore's survival during the price war and lean years of the 16-bit era. The C64's original (NMOS-based) motherboard went through two major redesigns and a number of revisions, exchanging positions of the VIC-II, SID and PLA chips. Much of the cost was initially eliminated by reducing the number of discrete components, such as diodes and resistors, which enabled a smaller printed circuit board. There were 16 C64 motherboard revisions to simplify production and reduce manufacturing costs. Some board revisions were exclusive to PAL regions. All C64 motherboards were manufactured in Hong Kong. IC locations changed frequently with each motherboard revision, as did the presence (or lack) of the metal RF shield around the VIC-II; PAL boards often had aluminized cardboard instead of a metal shield. The SID and VIC-II are socketed on all boards, but the other ICs may be socketed or soldered. The first production C64s, made from 1982 to early 1983, are known as "silver label" models due to the case having a silver-colored "Commodore" logo. The power LED had a silver badge reading "64" around it. These machines have only a five-pin video cable, and cannot produce S-Video. Commodore introduced the familiar "rainbow badge" case in late 1982, but many machines produced into early 1983 also used silver-label cases until the existing stock was used up. The original 326298 board was replaced in spring 1983 by the 250407 motherboard, which had an eight-pin video connector and added S-Video support. This case design was used until the C64C appeared in 1986. All ICs switched to plastic shells, but the silver-label C64s had some ceramic ICs (notably the VIC-II). The case is made from ABS plastic, which may become brown with time; this can be reversed with retrobright. ICs The VIC-II was manufactured with 5-micrometer NMOS technology, and was clocked at 17.73 MHz (PAL) or 14.32 MHz (NTSC). Internally, the clock was divided to generate the dot clock (about 8 MHz) and the two-phase system clocks (about 1 MHz; the pixel and system clock speeds differ slightly on NTSC and PAL machines). At such high clock rates the chip generated considerable heat, forcing MOS Technology to use a ceramic dual in-line package known as a CERDIP. The ceramic package was more expensive, but dissipated heat more effectively than plastic. After a redesign in 1983, the VIC-II was encased in a plastic dual in-line package; this reduced costs substantially, but did not eliminate the heat problem. Without a ceramic package, the VIC-II required a heat sink. To avoid extra cost, the metal RF shielding doubled as the VIC's heat sink; not all units shipped with this type of shielding, however. Most C64s in Europe shipped with a cardboard RF shield coated with a layer of metal foil. 
The effectiveness of the cardboard was questionable; it acted instead as an insulator, blocking airflow and trapping heat generated by the SID, VIC, and PLA chips. The SID was originally manufactured using NMOS at 7 micrometers and, in some areas, 6 micrometers. The prototype SID and some early production models had a ceramic dual in-line package, but (unlike the VIC-II) are very rare; the SID was encased in plastic when production began in early 1982. Motherboard In 1986, Commodore released the last revision of the classic C64 motherboard. It was otherwise identical to the 1984 design, except for two 64-kilobit × 4-bit DRAM chips which replaced the original eight 64-kilobit × 1-bit ICs. After the release of the Commodore 64C, MOS Technology began to reconfigure the original C64's chipset to use HMOS technology. The main benefit of HMOS was that it required less voltage to drive the IC, generating less heat. This enhanced the reliability of the SID and VIC-II. The new chipset was renumbered 85xx to reflect the change to HMOS. In 1987, Commodore released a 64C variant with a redesigned motherboard known as a "short board". The new board used the HMOS chipset, with a new 64-pin PLA chip. The "SuperPLA", as it was called, integrated discrete components and transistor–transistor logic (TTL) chips. In the last revision of the 64C motherboard, the 2114 4-bit-wide color RAM was integrated into the SuperPLA. Power supply The C64 used an external power supply, a linear transformer with multiple taps differing from switch mode (presently used on PC power supplies). It was encased in epoxy resin gel, which discouraged tampering but increased the heat level during use. The design saved space in the computer's case, and allowed international versions to be more easily manufactured. The 1541-II and 1581 disk drives and third-party clones also have external power-supply "bricks", like most peripherals. Commodore power supplies often failed sooner than expected. The computer reportedly had a 30-percent return rate in late 1983, compared to the 5–7 percent rate considered acceptable by the industry; Creative Computing reported four working C64s, out of seven. Malfunctioning power bricks were notorious for damaging the RAM chips. Due to their higher density and single supply (+5V), they had less tolerance for over-voltage. The usually-failing voltage regulator could be replaced by piggybacking a new regulator on the board and fitting a heat sink on top. The original PSU on early-1982 and 1983 machines had a 5-pin connector which could accidentally be plugged into the computer's video output. Commodore later changed the design, omitting the resin gel to reduce costs. The following model, the Commodore 128, used a larger, improved power supply which included a fuse. The power supply for the Commodore REU was similar to that of the Commodore 128, providing an upgrade for customers purchasing the accessory. 
Specifications Internal hardware Microprocessor
CPU: MOS Technology 6510/8500 (the 6510/8500 is a modified 6502 with an integrated 6-bit I/O port)
Clock speed: 1.023 MHz (NTSC) or 0.985 MHz (PAL)
Video: MOS Technology VIC-II 6567/8562 (NTSC), 6569/8565 (PAL)
16 colors
Text mode: 40×25 characters; 256 user-defined chars (8×8 pixels, or 4×8 in multicolor mode); in extended background color mode, 64 user-defined chars with 4 background colors; 4-bit color RAM defines foreground color
Bitmap modes: 320×200 (2 unique colors in each 8×8 pixel block), 160×200 (3 unique colors + 1 common color in each 4×8 block)
8 hardware sprites of 24×21 pixels (12×21 in multicolor mode)
Smooth scrolling, raster interrupts
Sound: MOS Technology 6581/8580 SID
3-channel synthesizer with programmable ADSR envelope
8 octaves
4 waveforms per audio channel: triangle, sawtooth, variable pulse, noise
Oscillator synchronization, ring modulation
Programmable filter: high pass, low pass, band pass, notch filter
Input/Output: Two 6526 Complex Interface Adapters
16-bit parallel I/O
8-bit serial I/O
24-hour (AM/PM) Time of Day (TOD) clock, with programmable alarm clock
16-bit interval timers
RAM: 64 KB, of which 38 KB were available for BASIC programs
1024 nybbles color RAM (memory allocated for screen color data storage)
Expandable to 320 KB with the Commodore 1764 256 KB RAM Expansion Unit (REU), although only 64 KB are directly accessible; the REU was used mostly for GEOS. REUs of 128 KB and 512 KB, originally designed for the C128, were also available, but required the user to buy a stronger power supply from a third-party supplier; with the 1764 this was included. Creative Micro Designs also produced a 2 MB REU for the C64 and C128, called the 1750 XL. The technology actually supported up to 16 MB, but 2 MB was the largest one officially made. Expansions of up to 16 MB were also possible via the CMD SuperCPU.
ROM: 20 KB (Commodore BASIC 2.0; KERNAL; character generator, providing two character sets)
Input/output (I/O) ports and power supply I/O ports:
ROM cartridge expansion slot (44-pin slot for edge connector with 6510 CPU address/data bus lines and control signals, as well as GND and voltage pins; used for program modules and memory expansions, among others)
Integrated RF modulator television antenna output via an RCA connector. The channel used could be adjusted from number 36 with the potentiometer to the left.
8-pin DIN connector containing composite video output, separate Y/C outputs and sound input/output. This is a 262° horseshoe version of the plug, rather than the 270° circular version. Early C64 units (with motherboard Assy 326298) use a 5-pin DIN connector that carries composite video and luminance signals, but lacks a chroma signal.
Serial bus (proprietary serial version of IEEE-488, 6-pin DIN plug) for CBM printers and disk drives
PET-type Commodore Datasette 300 baud tape interface (edge connector with digital cassette motor/read/write/key-sense signals), ground and +5V DC lines. The cassette motor is controlled by a +5V DC signal from the 6510 CPU. The 9V AC input is transformed into unregulated 6.36V DC which is used to actually power the cassette motor.
User port (edge connector with TTL-level signals, for modems and so on; byte-parallel signals which can be used to drive third-party parallel printers, among other things; 17 logic signals, 7 ground and voltage pins, including 9V AC)
2 × screwless DE9M game controller ports (compatible with Atari 2600 controllers), each supporting five digital inputs and two analog inputs. 
Available peripherals included digital joysticks, analog paddles, a light pen, the Commodore 1351 mouse, and graphics tablets such as the KoalaPad.
Power supply: 5V DC and 9V AC from an external "power brick", attached to a 7-pin female DIN connector on the computer. The 9V AC is used to supply power via a charge pump to the SID sound generator chip, to provide power (via a rectifier) to the cassette motor, to provide a "0" pulse for every positive half wave to the time-of-day (TOD) input on the CIA chips, and is fed directly to the user port. Thus, as a minimum, a square wave is required, although a sine wave is preferred.
Memory map Note that even if an I/O chip like the VIC-II only uses 64 positions in the memory address space, it will occupy 1,024 addresses because some address bits are left undecoded. Peripherals Manufacturing cost Vertical integration was the key to keeping Commodore 64 production costs low. At the introduction in 1982, the production cost was US$135 and the retail price US$595. In 1985, the retail price went down to US$149 and the production costs were believed to be somewhere between US$35 and US$50; Commodore would not confirm this cost figure. Dougherty of Berkeley Softworks estimated the costs of the Commodore 64 parts based on his experience at Mattel and Imagic. To lower costs, TTL chips were replaced with less expensive custom chips and ways to increase the yields on the sound and graphics chips were found. The video chip 6567 had the ceramic package replaced with plastic, but heat dissipation demanded a redesign of the chip and the development of a plastic package that could dissipate heat as well as ceramic. Clones Clones are computers which imitate C64 functions. In mid-2004, after an absence from the marketplace of more than 10 years, PC manufacturer Tulip Computers (owners of the Commodore brand since 1997) announced the C64 Direct-to-TV (C64DTV): a joystick-based TV game based on the C64, with 30 games in its ROM. Designed by Jeri Ellsworth, a self-taught computer designer who had designed the C-One C64 implementation, the C64DTV was similar to other mini-consoles based on the modestly-successful Atari 2600 and Intellivision. The C64DTV was advertised on QVC in the United States for the 2004 holiday season. In 2015, a Commodore 64-compatible motherboard was produced by Individual Computers. Called the C64 Reloaded, it is a redesign of Commodore 64 motherboard revision 250466 with several new features. The motherboard is designed to be placed in an existing, empty C64 or C64C case. Produced in limited quantities, models of this Commodore 64 clone have machined or ZIF sockets in which custom C64 chips are placed. The board contains jumpers to accept revisions of the VIC-II and SID chips and the ability to switch between the PAL and NTSC video systems. It has several innovations, including selection (via the restore key) of KERNAL and character ROMs, a built-in reset toggle on the power switch, and an S-Video socket to replace the original TV modulator. The motherboard is powered by a DC-to-DC converter which takes a single DC input voltage from a mains adapter, rather than the original (and failure-prone) Commodore 64 power-supply brick. Compatible hardware C64 enthusiasts were developing new hardware in 2008, including Ethernet cards, specially-adapted hard disks and flash card interfaces (sd2iec). A-SID, which gives the C64 a wah-wah effect, was introduced in 2022. 
Brand reuse The C64 brand was reused in 1998 for the Web.it Internet Computer, a low-powered, Internet-oriented, all-in-one x86 PC running MS-DOS and Windows 3.1. It uses an AMD Élan SC400 SoC with 16 MB of RAM, a 3.5-inch floppy disk drive, a 56k modem and a PC Card slot. Despite its Commodore 64 nameplate, the Web.it looks different and is compatible with the original only via included emulation software. PC clones branded C64x, sold by Commodore USA, a company licensing the Commodore trademark, began shipping in June 2011. The C64x's case resembles the original C64 computer, but – like the Web.it – it is based on x86 architecture and is not compatible with the Commodore 64. Virtual Console Several Commodore 64 games were released on the Nintendo Wii's Virtual Console service in Europe and North America. They were delisted from the service in August 2013. THEC64 and THEC64 Mini THEC64 Mini, an unofficial Linux-based console emulating the Commodore 64, was released in 2018. It was designed and released by British company Retro Games, which licensed the name from the Dutch-based Commodore Corporation B.V., owner of the Commodore marque. The console is a decorative, half-scale Commodore 64 with two USB ports and one HDMI port, and a mini-USB connection to power the system. The console's keyboard is non-functional; the system is controlled by an included THEC64 joystick or a separate USB keyboard. New software ROMs can be loaded into the console, which uses the emulator x64 (part of VICE) to run software and has a built-in graphical operating system. The full-size THEC64 was released in 2019 in Europe and Australia, and was scheduled for release in November 2020 in North America. The console is built to scale with the original Commodore 64 and includes a functional keyboard. Enhancements include VIC-20 emulation, four USB ports, and an upgraded joystick. Neither product carries a Commodore trademark. The Commodore key on the original keyboard is replaced with a THEC64 key; Retro Games can call neither product a C64, although the system ROMs are licensed from Cloanto Corporation. The consoles can be switched between carousel mode (to access the built-in game library) and classic mode, in which they operate similarly to a traditional Commodore 64. USB storage can be used to hold disk, cartridge and tape images for use with the machine. Emulators Commodore 64 emulators include the open source VICE, Hoxs64, and CCS64. An iPhone app was also released with a compilation of C64 ports.
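All of these emulators run ordinary Commodore BASIC exactly as the original machine does, so even the smallest period examples behave as they did on real hardware. As a purely illustrative sample (the well-known one-line "maze" idiom, preceded by the border- and background-colour POKEs discussed in the hardware sections above), the following can be typed at the READY prompt of an emulator or a real C64:

10 POKE 53280,0 : POKE 53281,0 : REM VIC-II BORDER ($D020) AND BACKGROUND ($D021) TO BLACK
20 PRINT CHR$(205.5+RND(1)); : REM PRINT ONE OF THE TWO DIAGONAL PETSCII CHARACTERS AT RANDOM
30 GOTO 20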
Coordination complex
A coordination complex is a chemical compound consisting of a central atom or ion, which is usually metallic and is called the coordination centre, and a surrounding array of bound molecules or ions, that are in turn known as ligands or complexing agents. Many metal-containing compounds, especially those that include transition metals (elements like titanium that belong to the periodic table's d-block), are coordination complexes. Nomenclature and terminology Coordination complexes are so pervasive that their structures and reactions are described in many ways, sometimes confusingly. The atom within a ligand that is bonded to the central metal atom or ion is called the donor atom. In a typical complex, a metal ion is bonded to several donor atoms, which can be the same or different. A polydentate (multiple bonded) ligand is a molecule or ion that bonds to the central atom through several of the ligand's atoms; ligands with 2, 3, 4 or even 6 bonds to the central atom are common. These complexes are called chelate complexes; the formation of such complexes is called chelation, complexation, and coordination. The central atom or ion, together with all ligands, comprises the coordination sphere. The central atom or ion and the donor atoms comprise the first coordination sphere. Coordination refers to the "coordinate covalent bonds" (dipolar bonds) between the ligands and the central atom. Originally, a complex implied a reversible association of molecules, atoms, or ions through such weak chemical bonds. As applied to coordination chemistry, this meaning has evolved. Some metal complexes are formed virtually irreversibly and many are bound together by bonds that are quite strong. The number of donor atoms attached to the central atom or ion is called the coordination number. The most common coordination numbers are 2, 4, and especially 6. A hydrated ion is one kind of a complex ion (or simply a complex), a species formed between a central metal ion and one or more surrounding ligands, molecules or ions that contain at least one lone pair of electrons. If all the ligands are monodentate, then the number of donor atoms equals the number of ligands. For example, the cobalt(II) hexahydrate ion or the hexaaquacobalt(II) ion [Co(H2O)6]2+ is a hydrated-complex ion that consists of six water molecules attached to a central Co2+ ion. The oxidation state and the coordination number reflect the number of bonds formed between the metal ion and the ligands in the complex ion. However, the coordination number of [Pt(en)2]2+ is 4 (rather than 2) since it has two bidentate ligands, which contain four donor atoms in total. Any donor atom will give a pair of electrons. There are some donor atoms or groups which can offer more than one pair of electrons. These are called bidentate (offering two pairs of electrons) or polydentate (offering more than two pairs of electrons). In some cases an atom or a group offers a pair of electrons to two similar or different central metal atoms or acceptors—by division of the electron pair—into a three-center two-electron bond. These are called bridging ligands. History Coordination complexes have been known since the beginning of modern chemistry. Early well-known coordination complexes include dyes such as Prussian blue. Their properties were first well understood in the late 1800s, following the 1869 work of Christian Wilhelm Blomstrand. Blomstrand developed what has come to be known as the complex ion chain theory. 
In considering metal amine complexes, he theorized that the ammonia molecules compensated for the charge of the ion by forming chains of the type [(NH3)X]X+, where X is the coordination number of the metal ion. He compared his theoretical ammonia chains to hydrocarbons of the form (CH2)X. Following this theory, Danish scientist Sophus Mads Jørgensen made improvements to it. In his version of the theory, Jørgensen claimed that when a molecule dissociates in a solution there are two possible outcomes: the ions would bind via the ammonia chains Blomstrand had described, or the ions would bind directly to the metal. It was not until 1893 that the most widely accepted version of the theory today was published by Alfred Werner. Werner's work included two important changes to the Blomstrand theory. The first was that Werner described the two possibilities in terms of location in the coordination sphere. He claimed that if the ions were to form a chain, this would occur outside of the coordination sphere, while the ions that bound directly to the metal would do so within the coordination sphere. In one of his most important discoveries, however, Werner disproved the majority of the chain theory. Werner discovered the spatial arrangements of the ligands that were involved in the formation of the complex hexacoordinate cobalt. His theory allows one to understand the difference between a coordinated ligand and a charge-balancing ion in a compound, for example the chloride ion in the cobaltammine chlorides, and to explain many of the previously inexplicable isomers. In 1911, Werner first resolved the coordination complex hexol into optical isomers, overthrowing the theory that only carbon compounds could possess chirality. Structures The ions or molecules surrounding the central atom are called ligands. Ligands are classified as L or X (or a combination thereof), depending on how many electrons they provide for the bond between ligand and central atom. L ligands provide two electrons from a lone electron pair, resulting in a coordinate covalent bond. X ligands provide one electron, with the central atom providing the other electron, thus forming a regular covalent bond. The ligands are said to be coordinated to the atom. For alkenes, the pi bonds can coordinate to metal atoms. An example is ethylene in Zeise's salt. Geometry In coordination chemistry, a structure is first described by its coordination number, the number of ligands attached to the metal (more specifically, the number of donor atoms). Usually one can count the ligands attached, but sometimes even the counting can become ambiguous. Coordination numbers are normally between two and nine, but large numbers of ligands are not uncommon for the lanthanides and actinides. The number of bonds depends on the size, charge, and electron configuration of the metal ion and the ligands. Metal ions may have more than one coordination number. Typically the chemistry of transition metal complexes is dominated by interactions between s and p molecular orbitals of the donor atoms in the ligands and the d orbitals of the metal ions. The s, p, and d orbitals of the metal can accommodate 18 electrons (see 18-electron rule). The maximum coordination number for a certain metal is thus related to the electronic configuration of the metal ion (to be more specific, the number of empty orbitals) and to the ratio of the size of the ligands and the metal ion. Large metals and small ligands lead to high coordination numbers. 
Small metals with large ligands lead to low coordination numbers. Due to their large size, lanthanides, actinides, and early transition metals tend to have high coordination numbers. Most structures follow the points-on-a-sphere pattern (or, as if the central atom were in the middle of a polyhedron where the corners of that shape are the locations of the ligands), where orbital overlap (between ligand and metal orbitals) and ligand-ligand repulsions tend to lead to certain regular geometries. The most observed geometries are listed below, but there are many cases that deviate from a regular geometry, e.g. due to the use of ligands of diverse types (which results in irregular bond lengths; the coordination atoms do not follow a points-on-a-sphere pattern), due to the size of ligands, or due to electronic effects (see, e.g., Jahn–Teller distortion):
Linear for two-coordination
Trigonal planar for three-coordination
Tetrahedral or square planar for four-coordination
Trigonal bipyramidal for five-coordination
Octahedral for six-coordination
Pentagonal bipyramidal for seven-coordination
Square antiprismatic for eight-coordination
Tricapped trigonal prismatic for nine-coordination
The idealized descriptions of 5-, 7-, 8-, and 9-coordination are often indistinct geometrically from alternative structures with slightly differing L-M-L (ligand-metal-ligand) angles, e.g. the difference between square pyramidal and trigonal bipyramidal structures.
Square pyramidal for five-coordination
Capped octahedral or capped trigonal prismatic for seven-coordination
Dodecahedral or bicapped trigonal prismatic for eight-coordination
Capped square antiprismatic for nine-coordination
To distinguish between the alternative coordinations for five-coordinated complexes, the τ geometry index was invented by Addison et al. This index depends on the angles around the coordination centre and changes from 0 for square pyramidal to 1 for trigonal bipyramidal structures (for five-coordination, τ5 = (β − α)/60°, where β and α are the two largest ligand–metal–ligand angles), allowing the cases in between to be classified. This system was later extended to four-coordinated complexes by Houser et al. and also Okuniewski et al. In systems with low d electron count, due to special electronic effects such as (second-order) Jahn–Teller stabilization, certain geometries (in which the coordination atoms do not follow a points-on-a-sphere pattern) are stabilized relative to the other possibilities, e.g. for some compounds the trigonal prismatic geometry is stabilized relative to octahedral structures for six-coordination.
Bent for two-coordination
Trigonal pyramidal for three-coordination
Trigonal prismatic for six-coordination
Isomerism The arrangement of the ligands is fixed for a given complex, but in some cases it is mutable by a reaction that forms another stable isomer. There exist many kinds of isomerism in coordination complexes, just as in many other compounds. Stereoisomerism Stereoisomerism occurs with the same bonds in distinct orientations. Stereoisomerism can be further classified into: Cis–trans isomerism and facial–meridional isomerism Cis–trans isomerism occurs in octahedral and square planar complexes (but not tetrahedral). When two ligands are adjacent they are said to be cis, when opposite each other, trans. When three identical ligands occupy one face of an octahedron, the isomer is said to be facial, or fac. In a fac isomer, any two identical ligands are adjacent or cis to each other. If these three ligands and the metal ion are in one plane, the isomer is said to be meridional, or mer. 
A mer isomer can be considered as a combination of a trans and a cis, since it contains both trans and cis pairs of identical ligands. Optical isomerism Optical isomerism occurs when a complex is not superimposable with its mirror image. It is so called because the two isomers are each optically active, that is, they rotate the plane of polarized light in opposite directions. In the first molecule shown, the symbol Λ (lambda) is used as a prefix to describe the left-handed propeller twist formed by three bidentate ligands. The second molecule is the mirror image of the first, with the symbol Δ (delta) as a prefix for the right-handed propeller twist. The third and fourth molecules are a similar pair of Λ and Δ isomers, in this case with two bidentate ligands and two identical monodentate ligands. Structural isomerism Structural isomerism occurs when the bonds are themselves different. Four types of structural isomerism are recognized: ionisation isomerism, solvate or hydrate isomerism, linkage isomerism and coordination isomerism.
Ionisation isomerism – the isomers give different ions in solution although they have the same composition. This type of isomerism occurs when the counter ion of the complex is also a potential ligand. For example, pentaamminebromocobalt(III) sulphate is red violet and in solution gives a precipitate with barium chloride, confirming the presence of sulphate ion, while pentaamminesulphatecobalt(III) bromide is red and tests negative for sulphate ion in solution, but instead gives a precipitate of AgBr with silver nitrate.
Solvate or hydrate isomerism – the isomers have the same composition but differ with respect to the number of molecules of solvent that serve as ligand vs simply occupying sites in the crystal. Examples: [Cr(H2O)6]Cl3 is violet colored, [Cr(H2O)5Cl]Cl2·H2O is blue-green, and [Cr(H2O)4Cl2]Cl·2H2O is dark green. See water of crystallization.
Linkage isomerism occurs with ligands with more than one possible donor atom, known as ambidentate ligands. For example, nitrite can coordinate through O or N. One pair of nitrite linkage isomers has the structures [Co(NH3)5(NO2)]2+ (nitro isomer) and [Co(NH3)5(ONO)]2+ (nitrito isomer).
Coordination isomerism occurs when both positive and negative ions of a salt are complex ions and the two isomers differ in the distribution of ligands between the cation and the anion. For example, [Co(NH3)6][Cr(CN)6] and [Cr(NH3)6][Co(CN)6].
Electronic properties Many of the properties of transition metal complexes are dictated by their electronic structures. The electronic structure can be described by a relatively ionic model that ascribes formal charges to the metals and ligands. This approach is the essence of crystal field theory (CFT). Crystal field theory, introduced by Hans Bethe in 1929, gives a quantum-mechanically based attempt at understanding complexes. But crystal field theory treats all interactions in a complex as ionic and assumes that the ligands can be approximated by negative point charges. More sophisticated models embrace covalency, and this approach is described by ligand field theory (LFT) and molecular orbital (MO) theory. Ligand field theory, introduced in 1935 and built from molecular orbital theory, can handle a broader range of complexes and can explain complexes in which the interactions are covalent. The chemical applications of group theory can aid in the understanding of crystal or ligand field theory, by allowing simple, symmetry-based solutions to the formal equations. 
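For the common octahedral case, the quantitative core of crystal field theory can be summarized compactly; the relations below are standard textbook results rather than anything specific to a particular complex. The five d orbitals split into a lower t2g set and an upper eg set separated by the ligand-field splitting Δo, and, neglecting electron-pairing terms, the crystal field stabilization energy (CFSE) follows from how the d electrons populate the two sets:

\[
E(t_{2g}) = -\tfrac{2}{5}\,\Delta_o, \qquad
E(e_g) = +\tfrac{3}{5}\,\Delta_o, \qquad
\mathrm{CFSE} = \left(\tfrac{2}{5}\,n_{t_{2g}} - \tfrac{3}{5}\,n_{e_g}\right)\Delta_o
\]

For example, a d3 ion such as Cr(III) places all three electrons in the t2g set and gains a stabilization of 1.2 Δo, which contributes to the kinetic inertness of such complexes noted later in this article.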
Chemists tend to employ the simplest model required to predict the properties of interest; for this reason, CFT has been a favorite for the discussions when possible. MO and LF theories are more complicated, but provide a more realistic perspective. The electronic configuration of the complexes gives them some important properties: Color of transition metal complexes Transition metal complexes often have spectacular colors caused by electronic transitions by the absorption of light. For this reason they are often applied as pigments. Most transitions that are related to colored metal complexes are either d–d transitions or charge transfer bands. In a d–d transition, an electron in a d orbital on the metal is excited by a photon to another d orbital of higher energy, therefore d–d transitions occur only for partially-filled d-orbital complexes (d1–9). For complexes having d0 or d10 configuration, charge transfer is still possible even though d–d transitions are not. A charge transfer band entails promotion of an electron from a metal-based orbital into an empty ligand-based orbital (metal-to-ligand charge transfer or MLCT). The converse also occurs: excitation of an electron in a ligand-based orbital into an empty metal-based orbital (ligand-to-metal charge transfer or LMCT). These phenomena can be observed with the aid of electronic spectroscopy; also known as UV-Vis. For simple compounds with high symmetry, the d–d transitions can be assigned using Tanabe–Sugano diagrams. These assignments are gaining increased support with computational chemistry. Colors of lanthanide complexes Superficially lanthanide complexes are similar to those of the transition metals in that some are colored. However, for the common Ln3+ ions (Ln = lanthanide) the colors are all pale, and hardly influenced by the nature of the ligand. The colors are due to 4f electron transitions. As the 4f orbitals in lanthanides are "buried" in the xenon core and shielded from the ligand by the 5s and 5p orbitals they are therefore not influenced by the ligands to any great extent leading to a much smaller crystal field splitting than in the transition metals. The absorption spectra of an Ln3+ ion approximates to that of the free ion where the electronic states are described by spin-orbit coupling. This contrasts to the transition metals where the ground state is split by the crystal field. Absorptions for Ln3+ are weak as electric dipole transitions are parity forbidden (Laporte forbidden) but can gain intensity due to the effect of a low-symmetry ligand field or mixing with higher electronic states (e.g. d orbitals). f-f absorption bands are extremely sharp which contrasts with those observed for transition metals which generally have broad bands. This can lead to extremely unusual effects, such as significant color changes under different forms of lighting. Magnetism Metal complexes that have unpaired electrons are paramagnetic. This can be due to an odd number of electrons overall, or to destabilization of electron-pairing. Thus, monomeric Ti(III) species have one "d-electron" and must be (para)magnetic, regardless of the geometry or the nature of the ligands. Ti(II), with two d-electrons, forms some complexes that have two unpaired electrons and others with none. This effect is illustrated by the compounds TiX2[(CH3)2PCH2CH2P(CH3)2]2: when X = Cl, the complex is paramagnetic (high-spin configuration), whereas when X = CH3, it is diamagnetic (low-spin configuration). 
Ligands provide an important means of adjusting the ground state properties. In bi- and polymetallic complexes, in which the individual centres have an odd number of electrons or that are high-spin, the situation is more complicated. If there is interaction (either direct or through ligand) between the two (or more) metal centres, the electrons may couple (antiferromagnetic coupling, resulting in a diamagnetic compound), or they may enhance each other (ferromagnetic coupling). When there is no interaction, the two (or more) individual metal centers behave as if in two separate molecules. Reactivity Complexes show a variety of possible reactivities: Electron transfers Electron transfer (ET) between metal ions can occur via two distinct mechanisms, inner and outer sphere electron transfers. In an inner sphere reaction, a bridging ligand serves as a conduit for ET. (Degenerate) ligand exchange One important indicator of reactivity is the rate of degenerate exchange of ligands. For example, the rate of interchange of coordinate water in [M(H2O)6]n+ complexes varies over 20 orders of magnitude. Complexes where the ligands are released and rebound rapidly are classified as labile. Such labile complexes can be quite stable thermodynamically. Typical labile metal complexes either have low-charge (Na+), electrons in d-orbitals that are antibonding with respect to the ligands (Zn2+), or lack covalency (Ln3+, where Ln is any lanthanide). The lability of a metal complex also depends on the high-spin vs. low-spin configurations when such is possible. Thus, high-spin Fe(II) and Co(III) form labile complexes, whereas low-spin analogues are inert. Cr(III) can exist only in the low-spin state (quartet), which is inert because of its high formal oxidation state, absence of electrons in orbitals that are M–L antibonding, plus some "ligand field stabilization" associated with the d3 configuration. Associative processes Complexes that have unfilled or half-filled orbitals are often capable of reacting with substrates. Most substrates have a singlet ground-state; that is, they have lone electron pairs (e.g., water, amines, ethers), so these substrates need an empty orbital to be able to react with a metal centre. Some substrates (e.g., molecular oxygen) have a triplet ground state, which results that metals with half-filled orbitals have a tendency to react with such substrates (it must be said that the dioxygen molecule also has lone pairs, so it is also capable to react as a 'normal' Lewis base). If the ligands around the metal are carefully chosen, the metal can aid in (stoichiometric or catalytic) transformations of molecules or be used as a sensor. Classification Metal complexes, also known as coordination compounds, include virtually all metal compounds. The study of "coordination chemistry" is the study of "inorganic chemistry" of all alkali and alkaline earth metals, transition metals, lanthanides, actinides, and metalloids. Thus, coordination chemistry is the chemistry of the majority of the periodic table. Metals and metal ions exist, in the condensed phases at least, only surrounded by ligands. The areas of coordination chemistry can be classified according to the nature of the ligands, in broad terms: Classical (or "Werner Complexes"): Ligands in classical coordination chemistry bind to metals, almost exclusively, via their lone pairs of electrons residing on the main-group atoms of the ligand. Typical ligands are H2O, NH3, Cl−, CN−, en. 
Some of the simplest members of such complexes are described in the articles on metal aquo complexes and metal ammine complexes. Examples: [Co(EDTA)]−, [Co(NH3)6]3+, [Fe(C2O4)3]3− Organometallic chemistry: Ligands are organic (alkenes, alkynes, alkyls) as well as "organic-like" ligands such as phosphines, hydride, and CO. Example: (C5H5)Fe(CO)2CH3 Bioinorganic chemistry: Ligands are those provided by nature, especially including the side chains of amino acids, and many cofactors such as porphyrins. Example: hemoglobin contains heme, a porphyrin complex of iron Example: chlorophyll contains a porphyrin complex of magnesium Many natural ligands are "classical", especially including water. Cluster chemistry: Ligands include all of the above as well as other metal ions or atoms. Example: Ru3(CO)12 In some cases there are combinations of different fields: Example: [Fe4S4(Scysteinyl)4]2−, in which a cluster is embedded in a biologically active species. Mineralogy, materials science, and solid state chemistry – as they apply to metal ions – are subsets of coordination chemistry in the sense that the metals are surrounded by ligands. In many cases these ligands are oxides or sulfides, but the metals are coordinated nonetheless, and the principles and guidelines discussed below apply. In hydrates, at least some of the ligands are water molecules. It is true that the focus of mineralogy, materials science, and solid state chemistry differs from the usual focus of coordination or inorganic chemistry. The former are concerned primarily with polymeric structures and with properties arising from the collective effects of many highly interconnected metals. In contrast, coordination chemistry focuses on the reactivity and properties of complexes containing individual metal atoms or small ensembles of metal atoms. Nomenclature of coordination complexes The basic procedure for naming a complex is: When naming a complex ion, the ligands are named before the metal ion. The ligands' names are given in alphabetical order. Numerical prefixes do not affect the order. Monodentate ligands that occur more than once receive a prefix according to the number of occurrences: di-, tri-, tetra-, penta-, or hexa-. Polydentate ligands that occur more than once (e.g., ethylenediamine, oxalate) receive bis-, tris-, tetrakis-, etc. Anionic ligands end in 'o'. This replaces the final 'e' when the anion ends with '-ide', '-ate' or '-ite'; e.g. chloride becomes chlorido and sulfate becomes sulfato. Formerly, '-ide' was changed to '-o' (e.g. chloro and cyano), but this rule was modified in the 2005 IUPAC recommendations, and the correct forms for these ligands are now chlorido and cyanido. Neutral ligands are given their usual name, with some exceptions: NH3 becomes ammine; H2O becomes aqua or aquo; CO becomes carbonyl; NO becomes nitrosyl. Write the name of the central atom/ion. If the complex is an anion, the central atom's name will end in -ate, and its Latin name will be used if available (except for mercury). The oxidation state of the central atom is to be specified (when it is one of several possible, or zero), and should be written as a Roman numeral (or 0) enclosed in parentheses. The name of the cation is written before the name of the anion
(if applicable, as in the last example). Examples: [Cd(CN)2(en)2] → dicyanidobis(ethylenediamine)cadmium(II) [CoCl(NH3)5]SO4 → pentaamminechloridocobalt(III) sulfate [Cu(H2O)6]2+ → hexaaquacopper(II) ion [CuCl5NH3]3− → amminepentachloridocuprate(II) ion K4[Fe(CN)6] → potassium hexacyanidoferrate(II) [NiCl4]2− → tetrachloridonickelate(II) ion (the use of chloro- was removed from the IUPAC naming convention) The coordination number of ligands attached to more than one metal (bridging ligands) is indicated by a subscript to the Greek symbol μ placed before the ligand name. Thus the dimer of aluminium trichloride is described by Al2Cl4(μ2-Cl)2. Any anionic group can be electronically stabilized by any cation. An anionic complex can be stabilised by a hydrogen cation, becoming an acidic complex which can dissociate to release the cationic hydrogen. This kind of complex compound has a name with "ic" added after the central metal. For example, H2[Pt(CN)4] has the name tetracyanoplatinic(II) acid. Stability constant The affinity of metal ions for ligands is described by a stability constant, also called the formation constant, and represented by the symbol Kf. It is the equilibrium constant for the assembly of the complex from its constituent metal and ligands, and can be calculated accordingly, as in the following example for a simple case: xM(aq) + yL(aq) ⇌ zZ(aq), for which Kf = [Z]^z / ([M]^x [L]^y), where x, y, and z are the stoichiometric coefficients of each species, M stands for the metal or metal ion, L for the Lewis base (ligand), and Z for the complex ion. Formation constants vary widely. Large values indicate that the metal has a high affinity for the ligand, provided the system is at equilibrium. Sometimes the stability constant is given in a different form known as the instability constant (constant of destability). This constant is the inverse of the formation constant and is denoted as Kd = 1/Kf. This constant represents the reverse reaction, the decomposition of the complex ion into its individual metal and ligand components. When comparing values of Kd, the larger the value, the more unstable the complex ion is. Because complex ions form in solution, they can also play a key role in the solubility of other compounds. When a complex ion is formed, it can alter the concentrations of its components in the solution. For example: Ag+(aq) + 2 NH3(aq) ⇌ [Ag(NH3)2]+(aq) AgCl(s) ⇌ Ag+(aq) + Cl−(aq) If these reactions both occurred in the same reaction vessel, the solubility of the silver chloride would be increased by the presence of NH3, because formation of the diamminesilver(I) complex consumes a significant portion of the free silver ions from the solution. By Le Chatelier's principle, this causes the equilibrium reaction for the dissolving of the silver chloride, which has silver ion as a product, to shift to the right. The new solubility can be calculated given the values of Kf and Ksp for the original reactions. The solubility is found essentially by combining the two separate equilibria into one combined equilibrium reaction, AgCl(s) + 2 NH3(aq) ⇌ [Ag(NH3)2]+(aq) + Cl−(aq), and this combined reaction is the one that determines the new solubility. So Kc, the new solubility constant, is given by the product of the two original constants: Kc = Kf × Ksp (a numerical sketch of this calculation is given at the end of this article). Application of coordination compounds As metals only exist in solution as coordination complexes, it follows that this class of compounds is useful in a wide variety of ways. Bioinorganic chemistry In bioinorganic chemistry and bioorganometallic chemistry, coordination complexes serve either structural or catalytic functions. An estimated 30% of proteins contain metal ions.
Examples include the intensely colored vitamin B12, the heme group in hemoglobin, the cytochromes, the chlorin group in chlorophyll, and carboxypeptidase, a hydrolytic enzyme important in digestion. Another complex ion enzyme is catalase, which decomposes the cell's waste hydrogen peroxide. Synthetic coordination compounds are also used to bind to proteins and especially nucleic acids (e.g. anticancer drug cisplatin). Industry Homogeneous catalysis is a major application of coordination compounds for the production of organic substances. Processes include hydrogenation, hydroformylation, oxidation. In one example, a combination of titanium trichloride and triethylaluminium gives rise to Ziegler–Natta catalysts, used for the polymerization of ethylene and propylene to give polymers of great commercial importance as fibers, films, and plastics. Nickel, cobalt, and copper can be extracted using hydrometallurgical processes involving complex ions. They are extracted from their ores as ammine complexes. Metals can also be separated using the selective precipitation and solubility of complex ions. Cyanide is used chiefly for extraction of gold and silver from their ores. Phthalocyanine complexes are an important class of pigments. Analysis At one time, coordination compounds were used to identify the presence of metals in a sample. Qualitative inorganic analysis has largely been superseded by instrumental methods of analysis such as atomic absorption spectroscopy (AAS), inductively coupled plasma atomic emission spectroscopy (ICP-AES) and inductively coupled plasma mass spectrometry (ICP-MS).
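As a rough numerical illustration of the stability-constant discussion above, the sketch below estimates the solubility of AgCl in aqueous ammonia by combining Ksp and Kf into the single constant Kc = Kf × Ksp for the combined equilibrium AgCl(s) + 2 NH3 ⇌ [Ag(NH3)2]+ + Cl−. The constant values are typical textbook figures assumed here for illustration (Ksp ≈ 1.8 × 10^-10 for AgCl, Kf ≈ 1.1 × 10^7 for [Ag(NH3)2]+); they are not taken from the article itself.

```python
import math

# Illustrative textbook values (assumed, not from the article):
Ksp_AgCl = 1.8e-10   # AgCl(s) <=> Ag+ + Cl-
Kf_AgNH3 = 1.1e7     # Ag+ + 2 NH3 <=> [Ag(NH3)2]+

# Combined equilibrium: AgCl(s) + 2 NH3 <=> [Ag(NH3)2]+ + Cl-
Kc = Ksp_AgCl * Kf_AgNH3

def solubility_in_ammonia(c_nh3: float) -> float:
    """Molar solubility s of AgCl in ammonia of initial concentration c_nh3 (mol/L).

    At equilibrium [Ag(NH3)2+] = [Cl-] = s and [NH3] = c_nh3 - 2s, so
    Kc = s**2 / (c_nh3 - 2s)**2, which rearranges to
    s = sqrt(Kc) * c_nh3 / (1 + 2*sqrt(Kc)).
    """
    root = math.sqrt(Kc)
    return root * c_nh3 / (1 + 2 * root)

s_water = math.sqrt(Ksp_AgCl)            # solubility in pure water
s_ammonia = solubility_in_ammonia(1.0)   # solubility in 1.0 M NH3

print(f"Solubility in pure water : {s_water:.2e} mol/L")
print(f"Solubility in 1.0 M NH3  : {s_ammonia:.2e} mol/L")
```

Under these assumed constants, 1 M ammonia raises the solubility of AgCl from roughly 10^-5 mol/L to a few times 10^-2 mol/L, about three orders of magnitude, consistent with the qualitative Le Chatelier argument given in the stability constant section.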
Physical sciences
Bond structure
Chemistry
7316
https://en.wikipedia.org/wiki/Hypothetical%20types%20of%20biochemistry
Hypothetical types of biochemistry
Several forms of biochemistry are agreed to be scientifically viable but are not proven to exist at this time. The kinds of living organisms currently known on Earth all use carbon compounds for basic structural and metabolic functions, water as a solvent, and DNA or RNA to define and control their form. If life exists on other planets or moons it may be chemically similar, though it is also possible that there are organisms with quite different chemistries for instance, involving other classes of carbon compounds, compounds of another element, or another solvent in place of water. The possibility of life-forms being based on "alternative" biochemistries is the topic of an ongoing scientific discussion, informed by what is known about extraterrestrial environments and about the chemical behaviour of various elements and compounds. It is of interest in synthetic biology and is also a common subject in science fiction. The element silicon has been much discussed as a hypothetical alternative to carbon. Silicon is in the same group as carbon on the periodic table and, like carbon, it is tetravalent. Hypothetical alternatives to water include ammonia, which, like water, is a polar molecule, and cosmically abundant; and non-polar hydrocarbon solvents such as methane and ethane, which are known to exist in liquid form on the surface of Titan. Overview of hypothetical types of biochemistry Shadow biosphere A shadow biosphere is a hypothetical microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. Although life on Earth is relatively well-studied, the shadow biosphere may still remain unnoticed because the exploration of the microbial world targets primarily the biochemistry of the macro-organisms. Alternative-chirality biomolecules Perhaps the least unusual alternative biochemistry would be one with differing chirality of its biomolecules. In known Earth-based life, amino acids are almost universally of the form and sugars are of the form. Molecules using amino acids or sugars may be possible; molecules of such a chirality, however, would be incompatible with organisms using the opposing chirality molecules. Amino acids whose chirality is opposite to the norm are found on Earth, and these substances are generally thought to result from decay of organisms of normal chirality. However, physicist Paul Davies speculates that some of them might be products of "anti-chiral" life. It is questionable, however, whether such a biochemistry would be truly alien. Although it would certainly be an alternative stereochemistry, molecules that are overwhelmingly found in one enantiomer throughout the vast majority of organisms can nonetheless often be found in another enantiomer in different (often basal) organisms such as in comparisons between members of Archaea and other domains, making it an open topic whether an alternative stereochemistry is truly novel. Non-carbon-based biochemistries On Earth, all known living things have a carbon-based structure and system. Scientists have speculated about the pros and cons of using elements other than carbon to form the molecular structures necessary for life, but no one has proposed a theory employing such atoms to form all the necessary structures. However, as Carl Sagan argued, it is very difficult to be certain whether a statement that applies to all life on Earth will turn out to apply to all life throughout the universe. Sagan used the term "carbon chauvinism" for such an assumption. 
He regarded silicon and germanium as conceivable alternatives to carbon (other plausible elements include but are not limited to palladium and titanium); but, on the other hand, he noted that carbon does seem more chemically versatile and is more abundant in the cosmos. Norman Horowitz devised the experiments to determine whether life might exist on Mars that were carried out by the Viking Lander of 1976, the first U.S. mission to successfully land a probe on the surface of Mars. Horowitz argued that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival on other planets. He considered that there was only a remote possibility that non-carbon life forms could exist with genetic information systems capable of self-replication and the ability to evolve and adapt. Silicon biochemistry The silicon atom has been much discussed as the basis for an alternative biochemical system, because silicon has many chemical similarities to carbon and is in the same group of the periodic table. Like carbon, silicon can create molecules that are sufficiently large to carry biological information. However, silicon has several drawbacks as a carbon alternative. Carbon is ten times more cosmically abundant than silicon, and its chemistry appears naturally more complex. By 1998, astronomers had identified 84 carbon-containing molecules in the interstellar medium, but only 8 containing silicon, of which half also included carbon. Even though Earth and other terrestrial planets are exceptionally silicon-rich and carbon-poor (silicon is roughly 925 times more abundant in Earth's crust than carbon), terrestrial life bases itself on carbon. It may eschew silicon because silicon compounds are less varied, unstable in the presence of water, or block the flow of heat. Relative to carbon, silicon has a much larger atomic radius, and forms much weaker covalent bonds to atoms — except oxygen and fluorine, with which it forms very strong bonds. Almost no multiple bonds to silicon are stable, although silicon does exhibit varied coordination number. Silanes, silicon analogues to the alkanes, react rapidly with water, and long-chain silanes spontaneously decompose. Consequently, most terrestrial silicon is "locked up" in silica, and not a wide variety of biogenic precursors. Silicones, which alternate between silicon and oxygen atoms, are much more stable than silanes, and may even be more stable than the equivalent hydrocarbons in sulfuric acid-rich extraterrestrial environments. Alternatively, the weak bonds in silicon compounds may help maintain a rapid pace of life at cryogenic temperatures. Polysilanols, the silicon homologues to sugars, are among the few compounds soluble in liquid nitrogen. All known silicon macromolecules are artificial polymers, and so "monotonous compared with the combinatorial universe of organic macromolecules". Even so, some Earth life uses biogenic silica: diatoms' silicate skeletons. A. G. Cairns-Smith hypothesized that silicate minerals in water played a crucial role in abiogenesis, in that biogenic carbon compounds formed around their crystal structures. Although not observed in nature, carbon–silicon bonds have been added to biochemistry under directed evolution (artificial selection): a cytochrome c protein from Rhodothermus marinus has been engineered to catalyze new carbon–silicon bonds between hydrosilanes and diazo compounds. 
Other exotic element-based biochemistries Boranes are dangerously explosive in Earth's atmosphere, but would be more stable in a reducing atmosphere. However, boron's low cosmic abundance makes it less likely as a base for life than carbon. Various metals, together with oxygen, can form very complex and thermally stable structures rivaling those of organic compounds; the heteropoly acids are one such family. Some metal oxides are also similar to carbon in their ability to form both nanotube structures and diamond-like crystals (such as cubic zirconia). Titanium, aluminium, magnesium, and iron are all more abundant in the Earth's crust than carbon. Metal-oxide-based life could therefore be a possibility under certain conditions, including those (such as high temperatures) at which carbon-based life would be unlikely. The Cronin group at Glasgow University reported self-assembly of tungsten polyoxometalates into cell-like spheres. By modifying their metal oxide content, the spheres can acquire holes that act as porous membrane, selectively allowing chemicals in and out of the sphere according to size. Sulfur is also able to form long-chain molecules, but suffers from the same high-reactivity problems as phosphorus and silanes. The biological use of sulfur as an alternative to carbon is purely hypothetical, especially because sulfur usually forms only linear chains rather than branched ones. (The biological use of sulfur as an electron acceptor is widespread and can be traced back 3.5 billion years on Earth, thus predating the use of molecular oxygen. Sulfur-reducing bacteria can utilize elemental sulfur instead of oxygen, reducing sulfur to hydrogen sulfide.) Arsenic as an alternative to phosphorus Arsenic, which is chemically similar to phosphorus, while poisonous for most life forms on Earth, is incorporated into the biochemistry of some organisms. Some marine algae incorporate arsenic into complex organic molecules such as arsenosugars and arsenobetaines. Fungi and bacteria can produce volatile methylated arsenic compounds. Arsenate reduction and arsenite oxidation have been observed in microbes (Chrysiogenes arsenatis). Additionally, some prokaryotes can use arsenate as a terminal electron acceptor during anaerobic growth and some can utilize arsenite as an electron donor to generate energy. It has been speculated that the earliest life forms on Earth may have used arsenic biochemistry in place of phosphorus in the structure of their DNA. A common objection to this scenario is that arsenate esters are so much less stable to hydrolysis than corresponding phosphate esters that arsenic is poorly suited for this function. The authors of a 2010 geomicrobiology study, supported in part by NASA, have postulated that a bacterium, named GFAJ-1, collected in the sediments of Mono Lake in eastern California, can employ such 'arsenic DNA' when cultured without phosphorus. They proposed that the bacterium may employ high levels of poly-β-hydroxybutyrate or other means to reduce the effective concentration of water and stabilize its arsenate esters. This claim was heavily criticized almost immediately after publication for the perceived lack of appropriate controls. Science writer Carl Zimmer contacted several scientists for an assessment: "I reached out to a dozen experts ... Almost unanimously, they think the NASA scientists have failed to make their case". 
Other authors were unable to reproduce their results and showed that the study had issues with phosphate contamination, suggesting that the low amounts present could sustain extremophile lifeforms. Alternatively, it was suggested that GFAJ-1 cells grow by recycling phosphate from degraded ribosomes, rather than by replacing it with arsenate. Non-water solvents In addition to carbon compounds, all currently known terrestrial life also requires water as a solvent. This has led to discussions about whether water is the only liquid capable of filling that role. The idea that an extraterrestrial life-form might be based on a solvent other than water has been taken seriously in recent scientific literature by the biochemist Steven Benner, and by the astrobiological committee chaired by John A. Baross. Solvents discussed by the Baross committee include ammonia, sulfuric acid, formamide, hydrocarbons, and (at temperatures much lower than Earth's) liquid nitrogen, or hydrogen in the form of a supercritical fluid. Water as a solvent limits the forms biochemistry can take. For example, Steven Benner, proposes the polyelectrolyte theory of the gene that claims that for a genetic biopolymer such as, DNA, to function in water, it requires repeated ionic charges. If water is not required for life, these limits on genetic biopolymers are removed. Carl Sagan once described himself as both a carbon chauvinist and a water chauvinist; however, on another occasion he said that he was a carbon chauvinist but "not that much of a water chauvinist". He speculated on hydrocarbons, hydrofluoric acid, and ammonia as possible alternatives to water. Some of the properties of water that are important for life processes include: A complexity which leads to a large number of permutations of possible reaction paths including acid–base chemistry, H+ cations, OH− anions, hydrogen bonding, van der Waals bonding, dipole–dipole and other polar interactions, aqueous solvent cages, and hydrolysis. This complexity offers a large number of pathways for evolution to produce life, many other solvents have dramatically fewer possible reactions, which severely limits evolution. Thermodynamic stability: the free energy of formation of liquid water is low enough (−237.24 kJ/mol) that water undergoes few reactions. Other solvents are highly reactive, particularly with oxygen. Water does not combust in oxygen because it is already the combustion product of hydrogen with oxygen. Most alternative solvents are not stable in an oxygen-rich atmosphere, so it is highly unlikely that those liquids could support aerobic life. A large temperature range over which it is liquid. High solubility of oxygen and carbon dioxide at room temperature supporting the evolution of aerobic aquatic plant and animal life. A high heat capacity (leading to higher environmental temperature stability). Water is a room-temperature liquid leading to a large population of quantum transition states required to overcome reaction barriers. Cryogenic liquids (such as liquid methane) have exponentially lower transition state populations which are needed for life based on chemical reactions. This leads to chemical reaction rates which may be so slow as to preclude the development of any life based on chemical reactions. Spectroscopic transparency allowing solar radiation to penetrate several meters into the liquid (or solid), greatly aiding the evolution of aquatic life. A large heat of vaporization leading to stable lakes and oceans. 
The ability to dissolve a wide variety of compounds. The solid (ice) has lower density than the liquid, so ice floats on the liquid. This is why bodies of water freeze over but do not freeze solid (from the bottom up). If ice were denser than liquid water (as is true for nearly all other compounds), then large bodies of liquid would slowly freeze solid, which would not be conducive to the formation of life. Water as a compound is cosmically abundant, although much of it is in the form of vapor or ice. Subsurface liquid water is considered likely or possible on several of the outer moons: Enceladus (where geysers have been observed), Europa, Titan, and Ganymede. Earth and Titan are the only worlds currently known to have stable bodies of liquid on their surfaces. Not all properties of water are necessarily advantageous for life, however. For instance, water ice has a high albedo, meaning that it reflects a significant quantity of light and heat from the Sun. During ice ages, as reflective ice builds up over the surface of the water, the effects of global cooling are increased. There are some properties that make certain compounds and elements much more favorable than others as solvents in a successful biosphere. The solvent must be able to exist in liquid equilibrium over a range of temperatures the planetary object would normally encounter. Because boiling points vary with the pressure, the question tends not to be does the prospective solvent remain liquid, but at what pressure. For example, hydrogen cyanide has a narrow liquid-phase temperature range at 1 atmosphere, but in an atmosphere with the pressure of Venus, with of pressure, it can indeed exist in liquid form over a wide temperature range. Ammonia The ammonia molecule (NH3), like the water molecule, is abundant in the universe, being a compound of hydrogen (the simplest and most common element) with another very common element, nitrogen. The possible role of liquid ammonia as an alternative solvent for life is an idea that goes back at least to 1954, when J. B. S. Haldane raised the topic at a symposium about life's origin. Numerous chemical reactions are possible in an ammonia solution, and liquid ammonia has chemical similarities with water. Ammonia can dissolve most organic molecules at least as well as water does and, in addition, it is capable of dissolving many elemental metals. Haldane made the point that various common water-related organic compounds have ammonia-related analogs; for instance the ammonia-related amine group (−NH2) is analogous to the water-related hydroxyl group (−OH). Ammonia, like water, can either accept or donate an H+ ion. When ammonia accepts an H+, it forms the ammonium cation (NH4+), analogous to hydronium (H3O+). When it donates an H+ ion, it forms the amide anion (NH2−), analogous to the hydroxide anion (OH−). Compared to water, however, ammonia is more inclined to accept an H+ ion, and less inclined to donate one; it is a stronger nucleophile. Ammonia added to water functions as Arrhenius base: it increases the concentration of the anion hydroxide. Conversely, using a solvent system definition of acidity and basicity, water added to liquid ammonia functions as an acid, because it increases the concentration of the cation ammonium. The carbonyl group (C=O), which is much used in terrestrial biochemistry, would not be stable in ammonia solution, but the analogous imine group (C=NH) could be used instead. However, ammonia has some problems as a basis for life. 
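The point made above, that a candidate solvent's liquid range depends on the ambient pressure, can be made semi-quantitative with the Clausius–Clapeyron relation, which estimates how a boiling point shifts with pressure. The sketch below applies it to ammonia using approximate handbook-style values for its normal boiling point and enthalpy of vaporization; both the numbers and the 60 atm scenario (discussed for ammonia in the next section) are illustrative assumptions, not data from this article, and the constant-enthalpy approximation becomes rough near the critical point.

```python
import math

R = 8.314  # J / (mol K), gas constant

def boiling_point_at_pressure(t_boil_1atm_k: float,
                              dh_vap_j_per_mol: float,
                              pressure_atm: float) -> float:
    """Estimate the boiling point (K) at a given pressure using the
    integrated Clausius-Clapeyron equation with a constant enthalpy
    of vaporization:

        ln(P2/P1) = -(dHvap/R) * (1/T2 - 1/T1)
    """
    inv_t2 = 1.0 / t_boil_1atm_k - (R / dh_vap_j_per_mol) * math.log(pressure_atm)
    return 1.0 / inv_t2

# Approximate values for ammonia, assumed for illustration:
T_BOIL_NH3 = 239.8    # K, normal boiling point (about -33 C) at 1 atm
DH_VAP_NH3 = 23.3e3   # J/mol, enthalpy of vaporization near the boiling point

t2 = boiling_point_at_pressure(T_BOIL_NH3, DH_VAP_NH3, 60.0)
print(f"Estimated boiling point of NH3 at 60 atm: {t2:.0f} K ({t2 - 273.15:.0f} C)")
```

With these inputs the estimate comes out near 95–100 °C, in line with the figure quoted for ammonia at 60 atm in the following section.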
The hydrogen bonds between ammonia molecules are weaker than those in water, causing ammonia's heat of vaporization to be half that of water, its surface tension to be a third, and reducing its ability to concentrate non-polar molecules through a hydrophobic effect. Gerald Feinberg and Robert Shapiro have questioned whether ammonia could hold prebiotic molecules together well enough to allow the emergence of a self-reproducing system. Ammonia is also flammable in oxygen and could not exist sustainably in an environment suitable for aerobic metabolism. A biosphere based on ammonia would likely exist at temperatures or air pressures that are extremely unusual in relation to life on Earth. Life on Earth usually exists between the melting point and boiling point of water at normal atmospheric pressure, that is, between about 0 °C and 100 °C. When also held to normal pressure, ammonia's melting and boiling points are about −78 °C and −33 °C respectively. Because chemical reactions generally proceed more slowly at lower temperatures, ammonia-based life existing in this set of conditions might metabolize more slowly and evolve more slowly than life on Earth. On the other hand, lower temperatures could also enable living systems to use chemical species that would be too unstable at Earth temperatures to be useful. A set of conditions where ammonia is liquid at Earth-like temperatures would involve it being at a much higher pressure. For example, at 60 atm ammonia melts at about −77 °C and boils at about 98 °C. Ammonia and ammonia–water mixtures remain liquid at temperatures far below the freezing point of pure water, so such biochemistries might be well suited to planets and moons orbiting outside the water-based habitability zone. Such conditions could exist, for example, under the surface of Saturn's largest moon Titan. Methane and other hydrocarbons Methane (CH4) is a simple hydrocarbon: that is, a compound of two of the most common elements in the cosmos, hydrogen and carbon. It has a cosmic abundance comparable with ammonia. Hydrocarbons could act as a solvent over a wide range of temperatures, but would lack polarity. Isaac Asimov, the biochemist and science fiction writer, suggested in 1981 that poly-lipids could form a substitute for proteins in a non-polar solvent such as methane. Lakes composed of a mixture of hydrocarbons, including methane and ethane, have been detected on the surface of Titan by the Cassini spacecraft. There is debate about the effectiveness of methane and other hydrocarbons as a solvent for life compared to water or ammonia. Water is a stronger solvent than the hydrocarbons, enabling easier transport of substances in a cell. However, water is also more chemically reactive and can break down large organic molecules through hydrolysis. A life-form whose solvent was a hydrocarbon would not face the threat of its biomolecules being destroyed in this way. Also, the water molecule's tendency to form strong hydrogen bonds can interfere with internal hydrogen bonding in complex organic molecules. Life with a hydrocarbon solvent could make more use of hydrogen bonds within its biomolecules. Moreover, the strength of hydrogen bonds within biomolecules would be appropriate to a low-temperature biochemistry. Astrobiologist Chris McKay has argued, on thermodynamic grounds, that if life does exist on Titan's surface, using hydrocarbons as a solvent, it is likely also to use the more complex hydrocarbons as an energy source by reacting them with hydrogen, reducing ethane and acetylene to methane.
Possible evidence for this form of life on Titan was identified in 2010 by Darrell Strobel of Johns Hopkins University: a greater abundance of molecular hydrogen in the upper atmospheric layers of Titan compared to the lower layers, arguing for a downward diffusion at a rate of roughly 10^25 molecules per second and disappearance of hydrogen near Titan's surface. As Strobel noted, his findings were in line with the effects Chris McKay had predicted if methanogenic life-forms were present. The same year, another study showed low levels of acetylene on Titan's surface, which were interpreted by Chris McKay as consistent with the hypothesis of organisms reducing acetylene to methane. While restating the biological hypothesis, McKay cautioned that other explanations for the hydrogen and acetylene findings are to be considered more likely: the possibilities of yet unidentified physical or chemical processes (e.g. a non-living surface catalyst enabling acetylene to react with hydrogen), or flaws in the current models of material flow. He noted that even a non-biological catalyst effective at 95 K would in itself be a startling discovery. Azotosome A hypothetical cell membrane termed an azotosome, capable of functioning in liquid methane in Titan conditions, was computer-modeled in an article published in February 2015. Composed of acrylonitrile, a small molecule containing carbon, hydrogen, and nitrogen, it is predicted to have stability and flexibility in liquid methane comparable to that of a phospholipid bilayer (the type of cell membrane possessed by all life on Earth) in liquid water. An analysis of data obtained using the Atacama Large Millimeter/submillimeter Array (ALMA), completed in 2017, confirmed substantial amounts of acrylonitrile in Titan's atmosphere. Later studies questioned whether acrylonitrile would be able to self-assemble into azotosomes. Hydrogen fluoride Hydrogen fluoride (HF), like water, is a polar molecule, and due to its polarity it can dissolve many ionic compounds. At atmospheric pressure, its melting point is about −84 °C and its boiling point is about 20 °C; the difference between the two is a little more than 100 K. HF also makes hydrogen bonds with its neighbor molecules, as do water and ammonia. It has been considered as a possible solvent for life by scientists such as Peter Sneath and Carl Sagan. HF is dangerous to the systems of molecules that Earth-life is made of, but certain other organic compounds, such as paraffin waxes, are stable with it. Like water and ammonia, liquid hydrogen fluoride supports an acid–base chemistry. Using a solvent system definition of acidity and basicity, nitric acid functions as a base when it is added to liquid HF. However, hydrogen fluoride is cosmically rare, unlike water, ammonia, and methane. Hydrogen sulfide Hydrogen sulfide is the closest chemical analog to water, but is less polar and is a weaker inorganic solvent. Hydrogen sulfide is quite plentiful on Jupiter's moon Io and may be in liquid form a short distance below the surface; astrobiologist Dirk Schulze-Makuch has suggested it as a possible solvent for life there. On a planet with hydrogen sulfide oceans, the source of the hydrogen sulfide could be volcanoes, in which case it could be mixed in with a bit of hydrogen fluoride, which could help dissolve minerals. Hydrogen sulfide life might use a mixture of carbon monoxide and carbon dioxide as its carbon source. Such organisms might produce and live on sulfur monoxide, which is analogous to oxygen (O2).
Hydrogen sulfide, like hydrogen cyanide and ammonia, suffers from the small temperature range where it is liquid, though that, like that of hydrogen cyanide and ammonia, increases with increasing pressure. Silicon dioxide and silicates Silicon dioxide, also known as silica and quartz, is very abundant in the universe and has a large temperature range where it is liquid. However, its melting point is , so it would be impossible to make organic compounds in that temperature, because all of them would decompose. Silicates are similar to silicon dioxide and some have lower melting points than silica. Feinberg and Shapiro have suggested that molten silicate rock could serve as a liquid medium for organisms with a chemistry based on silicon, oxygen, and other elements such as aluminium. Other solvents or cosolvents Other solvents sometimes proposed: Supercritical fluids: supercritical carbon dioxide and supercritical hydrogen. Simple hydrogen compounds: hydrogen chloride. More complex compounds: sulfuric acid, formamide, methanol. Very-low-temperature fluids: liquid nitrogen and hydrogen. High-temperature liquids: sodium chloride. Sulfuric acid in liquid form is strongly polar. It remains liquid at higher temperatures than water, its liquid range being 10 °C to 337 °C at a pressure of 1 atm, although above 300 °C it slowly decomposes. Sulfuric acid is known to be abundant in the clouds of Venus, in the form of aerosol droplets. In a biochemistry that used sulfuric acid as a solvent, the alkene group (C=C), with two carbon atoms joined by a double bond, could function analogously to the carbonyl group (C=O) in water-based biochemistry. A proposal has been made that life on Mars may exist and be using a mixture of water and hydrogen peroxide as its solvent. A 61.2% (by mass) mix of water and hydrogen peroxide has a freezing point of −56.5 °C and tends to super-cool rather than crystallize. It is also hygroscopic, an advantage in a water-scarce environment. Supercritical carbon dioxide has been proposed as a candidate for alternative biochemistry due to its ability to selectively dissolve organic compounds and assist the functioning of enzymes and because "super-Earth"- or "super-Venus"-type planets with dense high-pressure atmospheres may be common. Other speculations Non-green photosynthesizers Physicists have noted that, although photosynthesis on Earth generally involves green plants, a variety of other-colored plants could also support photosynthesis, essential for most life on Earth, and that other colors might be preferred in places that receive a different mix of stellar radiation than Earth. These studies indicate that blue plants would be unlikely; however yellow or red plants may be relatively common. Variable environments Many Earth plants and animals undergo major biochemical changes during their life cycles as a response to changing environmental conditions, for example, by having a spore or hibernation state that can be sustained for years or even millennia between more active life stages. Thus, it would be biochemically possible to sustain life in environments that are only periodically consistent with life as we know it. For example, frogs in cold climates can survive for extended periods of time with most of their body water in a frozen state, whereas desert frogs in Australia can become inactive and dehydrate in dry periods, losing up to 75% of their fluids, yet return to life by rapidly rehydrating in wet periods. Either type of frog would appear biochemically inactive (i.e. 
not living) during dormant periods to anyone lacking a sensitive means of detecting low levels of metabolism. Alanine world and hypothetical alternatives The genetic code may have evolved during the transition from the RNA world to a protein world. The Alanine World Hypothesis postulates that the evolution of the genetic code (the so-called GC phase) started with only four basic amino acids: alanine, glycine, proline and ornithine (now arginine). The evolution of the genetic code ended with 20 proteinogenic amino acids. From a chemical point of view, most of them are Alanine-derivatives particularly suitable for the construction of α-helices and β-sheets basic secondary structural elements of modern proteins. Direct evidence of this is an experimental procedure in molecular biology known as alanine scanning. A hypothetical "Proline World" would create a possible alternative life with the genetic code based on the proline chemical scaffold as the protein backbone. Similarly, a "Glycine World" and "Ornithine World" are also conceivable, but nature has chosen none of them. Evolution of life with Proline, Glycine, or Ornithine as the basic structure for protein-like polymers (foldamers) would lead to parallel biological worlds. They would have morphologically radically different body plans and genetics from the living organisms of the known biosphere. Nonplanetary life Dusty plasma-based In 2007, Vadim N. Tsytovich and colleagues proposed that lifelike behaviors could be exhibited by dust particles suspended in a plasma, under conditions that might exist in space. Computer models showed that, when the dust became charged, the particles could self-organize into microscopic helical structures, and the authors offer "a rough sketch of a possible model of...helical grain structure reproduction". Cosmic necklace-based In 2020, Luis A. Anchordoqu and Eugene M. Chudnovsky of the City University of New York hypothesized that cosmic necklace-based life composed of magnetic monopoles connected by cosmic strings could evolve inside stars. This would be achieved by a stretching of cosmic strings due to the star's intense gravity, thus allowing it to take on more complex forms and potentially form structures similar to the RNA and DNA structures found within carbon-based life. As such, it is theoretically possible that such beings could eventually become intelligent and construct a civilization using the power generated by the star's nuclear fusion. Because such use would use up part of the star's energy output, the luminosity would also fall. For this reason, it is thought that such life might exist inside stars observed to be cooling faster or dimmer than current cosmological models predict. Life on a neutron star Frank Drake suggested in 1973 that intelligent life could inhabit neutron stars. Physical models in 1973 implied that Drake's creatures would be microscopic. Scientists who have published on this topic Scientists who have considered possible alternatives to carbon-water biochemistry include: J. B. S. Haldane (1892–1964), a geneticist noted for his work on abiogenesis. V. Axel Firsoff (1910–1981), British astronomer. Isaac Asimov (1920–1992), biochemist and science fiction writer. Fred Hoyle (1915–2001), astronomer and science fiction writer. Norman Horowitz (1915–2005), Caltech geneticist who devised the first experiments carried out to detect life on Mars. George C. Pimentel (1922–1989), American chemist, University of California, Berkeley. 
Peter Sneath (1923–2011), microbiologist, author of the book Planets and Life. Gerald Feinberg (1933–1992), physicist and Robert Shapiro (1935–2011), chemist, co-authors of the book Life Beyond Earth. Carl Sagan (1934–1996), astronomer, science popularizer, and SETI proponent. Jonathan Lunine (born 1959), American planetary scientist and physicist. Robert Freitas (born 1952), specialist in nano-technology and nano-medicine. John Baross (born 1940), oceanographer and astrobiologist, who chaired a committee of scientists under the United States National Research Council that published a report on life's limiting conditions in 2007.
Physical sciences
Astronomy basics
Astronomy
7329
https://en.wikipedia.org/wiki/Cyprinidae
Cyprinidae
Cyprinidae is a family of freshwater fish commonly called the carp or minnow family, including the carps, the true minnows, and their relatives the barbs and barbels, among others. Cyprinidae is the largest and most diverse fish family, and the largest vertebrate animal family overall, with about 3,000 species; only 1,270 of these remain extant, divided into about 200 valid genera. Cyprinids range from about in size to the giant barb (Catlocarpio siamensis). By genus and species count, the family makes up more than two-thirds of the ostariophysian order Cypriniformes. The family name is derived from the Greek kyprînos (κυπρῖνος, 'carp'). Biology and ecology Cyprinids are stomachless, or agastric, fish with toothless jaws. Even so, food can be effectively chewed using the pharyngeal teeth borne on the specialized last gill arch. These pharyngeal teeth allow the fish to make chewing motions against a chewing plate formed by a bony process of the skull. The pharyngeal teeth are unique to each species and are used to identify species. Strong pharyngeal teeth allow fish such as the common carp and ide to eat hard baits such as snails and bivalves. Hearing is a well-developed sense in the cyprinids, since they have the Weberian organ, three specialized vertebral processes that transfer motion of the gas bladder to the inner ear. The vertebral processes of the Weberian organ also permit a cyprinid to detect changes in motion of the gas bladder due to atmospheric conditions or depth changes. The cyprinids are considered physostomes because the pneumatic duct is retained in adult stages, and the fish are able to gulp air to fill the gas bladder or dispose of excess gas to the gut. Cyprinids are native to North America, Africa, and Eurasia. The largest known cyprinid is the giant barb (Catlocarpio siamensis), which may grow up to in length and in weight. Other very large species that can surpass are the golden mahseer (Tor putitora) and mangar (Luciobarbus esocinus). The largest North American species is the Colorado pikeminnow (Ptychocheilus lucius), which can reach up to in length. Conversely, many species are smaller than . The smallest known fish is Paedocypris progenetica, reaching at the longest. All fish in this family are egg-layers and most do not guard their eggs; however, a few species build nests and/or guard the eggs. The bitterlings of subfamily Acheilognathinae are notable for depositing their eggs in bivalve molluscs, where the young develop until able to fend for themselves. Cyprinids contain the first and only known example of androgenesis in a vertebrate, in the Squalius alburnoides allopolyploid complex. Most cyprinids feed mainly on invertebrates and vegetation, probably due to the lack of teeth and stomach; however, some species, like the asp, are predators that specialize in fish. Many species, such as the ide and the common rudd, prey on small fish when individuals become large enough. Even small species, such as the moderlieschen, are opportunistic predators that will eat larvae of the common frog in artificial circumstances. Some cyprinids, such as the grass carp, are specialized herbivores; others, such as the common nase, eat algae and biofilms, while others, such as the black carp, specialize in snails, and some, such as the silver carp, are specialized filter feeders. For this reason, cyprinids are often introduced as a management tool to control various factors in the aquatic environment, such as aquatic vegetation and diseases transmitted by snails.
Unlike most fish species, cyprinids generally increase in abundance in eutrophic lakes. Here, they contribute towards positive feedback as they are efficient at eating the zooplankton that would otherwise graze on the algae, reducing its abundance. Relationship with humans Food Cyprinids are highly important food fish; they are fished and farmed across Eurasia. In land-locked countries in particular, cyprinids are often the major species of fish eaten because they make the largest part of biomass in most water types except for fast-flowing rivers. In Eastern Europe, they are often prepared with traditional methods such as drying and salting. The prevalence of inexpensive frozen fish products made this less important now than it was in earlier times. Nonetheless, in certain places, they remain popular for food, as well as recreational fishing, for ornamental use, and have been deliberately stocked in ponds and lakes for centuries for this reason. Sport Cyprinids are popular for angling especially for match fishing (due to their dominance in biomass and numbers) and fishing for common carp because of its size and strength. As pest control Several cyprinids have been introduced to waters outside their natural ranges to provide food, sport, or biological control for some pest species. The common carp (Cyprinus carpio) and the grass carp (Ctenopharyngodon idella) are the most important of these, for example in Florida. As a pest species Carp in particular can stir up sediment, reducing the clarity of the water and making plant growth difficult. In America and Australia, such as the Asian carp in the Mississippi Basin, they have become invasive species that compete with native fishes or disrupt the environment. Cyprinus carpio is a major pest species in Australia impacting freshwater environments, amenity, and the agricultural economy, devastating biodiversity by decimating native fish populations where they first became established as a major pest in the wild in the 1960s. In the major river system of eastern Australia, the Murray-Darling Basin, they constitute 80–90 per cent of fish biomass. In 2016 the federal government announced A$15.2 million to fund the National Carp Control Plan to investigate using Cyprinid herpesvirus 3 (carp virus) as a biological control agent while minimising impacts on industry and environment should a carp virus release go ahead. Despite initial, favourable assessment, in 2020 this plan was found to be unlikely to work due to the high fecundity of the fish. Aquarium fish Numerous cyprinids have become popular and important within the aquarium and fishpond hobbies, most famously the goldfish, which was bred in China from the Prussian carp (Carassius (auratus) gibelio). First imported into Europe around 1728, it was originally much-fancied by the Chinese nobility as early as 1150AD and, after it arrived there in 1502, also in Japan. In addition to the goldfish, the common carp was bred in Japan into the colorful ornamental variety known as koi — or more accurately , as simply means "common carp" in Japanese — from the 18th century until today. Other popular aquarium cyprinids include danionins, rasborines and true barbs. Larger species are bred by the thousands in outdoor ponds, particularly in Southeast Asia, and trade in these aquarium fishes is of considerable commercial importance. The small rasborines and danionines are perhaps only rivalled by characids (tetras) and poecilid livebearers in their popularity for community aquaria. 
Some of the most popular cyprinids among aquarists, other than goldfish and koi, include the cherry barb, harlequin rasbora, pearl danio, rainbow shark, tiger barb, and the White Cloud Mountain minnow. One particular species of these small and undemanding danionines is the zebrafish (Danio rerio). It has become the standard model species for studying the developmental genetics of vertebrates, in particular fish. Threatened families Habitat destruction and other causes have reduced the wild stocks of several cyprinids to dangerously low levels; some are already entirely extinct. In particular, the cyprinids of the subfamily Leuciscinae from southwestern North America were severely affected by pollution and unsustainable water use in the early to mid-20th century. The majority of globally extinct cypriniform species in fact belong to the leuciscinid cyprinids from the southwestern United States and northern Mexico. Systematics The massive diversity of cyprinids has so far made it difficult to resolve their phylogeny in sufficient detail to make assignment to subfamilies more than tentative in many cases. Some distinct lineages obviously exist – for example, the Cultrinae and Leuciscinae, regardless of their exact delimitation, are rather close relatives and stand apart from the Cyprininae – but the overall systematics and taxonomy of the Cyprinidae remain a subject of considerable debate. A large number of genera are incertae sedis, too equivocal in their traits and/or too little-studied to permit assignment to a particular subfamily with any certainty. Part of the solution seems to be that the delicate rasborines are the core group, consisting of minor lineages that have not shifted far from their evolutionary niche, or have coevolved for millions of years. These are among the most basal lineages of living cyprinids. Other "rasborines" are apparently distributed across the diverse lineages of the family. The validity and circumscription of proposed subfamilies like the Labeoninae or Squaliobarbinae also remain doubtful, although the latter do appear to correspond to a distinct lineage. The sometimes-seen grouping of the large-headed carps (Hypophthalmichthyinae) with Xenocypris, though, seems quite in error. More likely, the latter are part of the Cultrinae. The entirely paraphyletic "Barbinae" and the disputed Labeoninae might be better treated as part of the Cyprininae, forming a close-knit group whose internal relationships are still little known. The small African "barbs" do not belong in Barbus sensu stricto – indeed, they are as distant from the typical barbels and the typical carps (Cyprinus) as these are from Garra (which is placed in the Labeoninae by most who accept the latter as distinct) and thus might form another as yet unnamed subfamily. However, as noted above, how various minor lineages tie into this has not yet been resolved; therefore, such a radical move, though reasonable, is probably premature. The tench (Tinca tinca), a significant food species farmed in western Eurasia in large numbers, is unusual. It is most often grouped with the Leuciscinae, but even when these were rather loosely circumscribed, it always stood apart. A cladistic analysis of DNA sequence data of the S7 ribosomal protein intron 1 supports the view that it is distinct enough to constitute a monotypic subfamily. It also suggests it may be closer to the small East Asian Aphyocypris, Hemigrammocypris, and Yaoshanicus.
They would have diverged roughly at the same time from cyprinids of east-central Asia, perhaps as a result of the Alpide orogeny that vastly changed the topography of that region in the late Paleogene, when their divergence presumably occurred. A DNA-based analysis of these fish places the Rasborinae as the basal lineage with the Cyprininae as a sister clade to the Leuciscinae. The subfamilies Acheilognathinae, Gobioninae, and Leuciscinae are monophyletic. Subfamilies and genera Eschmeyer's Catalog of Fishes sets out the subfamilies and genera within the family Cyprinidae as follows: Subfamily Acrossocheilinae L. Yang et al, 2015 Acrossocheilus Oshima, 1919 Folifer H. W. Wu, 1977 Onychostoma Günther, 1896 Subfamily Barbinae Bleeker, 1859 Aulopyge Heckel, 1841 Barbus Daudin, 1805 Caecocypris Banister & Bunni, 1980 Capoeta Valenciennes, 1842 Cyprinion Heckel, 1843 Kantaka Hora, 1942 Luciobarbus Heckel, 1843 Paracapoeta Turan, Kaya, Aksu, Bektaş, 2022 Scaphiodonichthys Vinciguerra, 1890 Schizocypris Regan, 1914 Semiplotus Bleeker, 1860 Subfamily Cyprininae Rafinesque, 1815 Aaptosyax Rainboth, 1991 Albulichthys Bleeker, 1860 Amblyrhynchichthys Bleeker, 1860 Balantiocheilos Bleeker, 1860 Carassioides Oshima, 1926 Carassius Jarocki, 1822 Cosmochilus Sauvage, 1878 Cyclocheilichthys Bleeker, 1859 Cyclocheilos Bleeker, 1859 Cyprinus Linnaeus, 1758 Discherodontus Rainboth, 1989 Eirmotus Schultz, 1959 Hypsibarbus Rainboth, 1996 Kalimantania Bănărescu, 1980 Laocypris Kottelat, 2000 Luciocyprinus Vaillant, 1904 Mystacoleucus Günther, 1868 Neobarynotus Bănărescu, 1980 Parasikukia Doi, 2000 Paraspinibarbus X.-L. Chu & Kottelat, 1989 Parator H. W. Wu, G. R. Yang, P. Q. Yue & H. J. Huang, 1963 Poropuntius H. M. Smith, 1931 Procypris S.-Y. Lin, 1933 Pseudosinocyclocheilus C.-G. Zhang & Y.-H. Zhao, 2016 Puntioplites H. M. Smith, 1929 Rohteichthys Bleeker, 1860 Sawbwa Annandale, 1918 Scaphognathops H.M. Smith, 1945 Sikukia H. M. Smith, 1931 Sinocyclocheilus P.-W. Fang, 1936 Troglocyclocheilus Kottelat & Bréhier, 1999 Typhlobarbus X.-L. Chu & W.-R. Chen, 1982 Subfamily Labeoninae Bleeker, 1859 Ageneiogarra Garman, 1912 Altigena Burton, 1934 Bangana Hamilton, 1822 Barbichthys Bleeker, 1860 Ceratogarra Kottelat, 2020 Cirrhinus Oken, 1817 Cophecheilus Y. Zhu, E. Zhang, M. Zhang & Y. Q. Han, 2011 Crossocheilus Kuhl & van Hasselt, 1823 Decorus Zheng, Chen & Yang, 2019 Diplocheilichthys Bleeker, 1859 Discocheilus E. Zhang, 1997 Discogobio S. Y. Lin, 1931 Epalzeorhynchos Bleeker, 1855 Fivepearlus C.-Q. Li, H. Yang, W. Li & H. Chen 2017 Garra Hamilton, 1822 Garroides V. H. Nguyễn & T.H. N. Vu, 2014 Guigarra Z.-B. Wang, X.-Y. Chen & L.-P. Zheng 2022 Gymnostomus Heckel, 1843 Henicorhynchus H. M. Smith, 1945 Hongshuia E. Zhang, X. Qiang & J. H. Lan, 2008 Incisilabeo Fowler, 1937 Labeo Cuvier, 1816 Labiobarbus van Hasselt, 1823 Lanlabeo M. Yao, Y. He & Z.-G. Peng, 2018 Linichthys E. Zhang & Fang, 2005 Lobocheilos Bleeker, 1854 Longanalus W. X. Li, 2006 Mekongina Fowler, 1937 Osteochilus Günther, 1868 Paracrossochilus Popta, 1904 Parapsilorhynchus Hora, 1921 Paraqianlabeo H.-T. Zhao, Sullivan, Y.-G. Zhang & Z.-G. Peng 2014 Parasinilabeo H. W. Wu, 1939 Placocheilus H.-W. Wu, 1977 Prolixicheilus L.-P. Zheng, X.-Y. Chen & J.-X. Yang, 2016 Protolabeo L. An, B. S. Liu, Y. H. Zhao & C. G. Zhang, 2010 Pseudocrossocheilus E. Zhang & J.-X. Chen, 1997 Pseudogyrinocheilus P.-W. Fang, 1933 Pseudoplacocheilus X. Li, W. Zhou, C. Sun & X. Yun, 2024 Ptychidio Myers, 1930 Qianlabeo E. Zhang & Yi-Yu Chen, 2004 Rectoris S.-Y. 
Lin, 1935 Schismatorhynchos Bleeker, 1855 Semilabeo Peters, 1881 Sinigarra E. Zhang & W. Zhou, 2012 Sinilabeo Rendahl, 1933 Sinocrossocheilus H.-W. Wu, 1977 Speolabeo Kottelat, 2017 Stenorynchoacrum Y. F. Huang, J. X. Yang & X. Y. Chen, 2014 Supradiscus X. Li, W. Zhou, C. Sun & X. Yun, 2024 Tariqilabeo Mirza & Saboohi, 1990 Thynnichthys Bleeker, 1859 Vinagarra V. H. Nguyễn & T. A. Bùi, 2009 Zuojiangia L.-P. Zheng, Y. He, J. X. Yang & L.B. Wu 2018 Subfamily Probarbinae L. Yang et al, 2015 Catlocarpio Boulenger, 1898 Probarbus Sauvage, 1880 Subfamily Schizopygopsinae Mirza, 1991 Oxygymnocypris W. H. Tsao, 1964 Ptychobarbus Steindachner, 1866 Schizopygopsis Steindachner, 1866 Subfamily Schizothoracinae McClelland, 1842 Aspiorhynchus Kessler, 1879 Diptychus Steindachner, 1866 Percocypris Y. T. Chu, 1935 Schizopyge Heckel, 1847 Schizothorax Heckel, 1838 Subfamily Smiliogastrinae Bleeker, 1863 Amatolacypris Skelton, Swartz & Vreven, 2018 Barbodes Bleeker, 1859 Barboides Brüning, 1929 Bhava Sudasinghe, Rüber & Meegaskumbura, 2023 Caecobarbus Boulenger, 1921 Chagunius H.M. Smith, 1938 Cheilobarbus A. Smith 1841 Clypeobarbus Fowler, 1936 Coptostomabarbus David & Poll 1937 Dawkinsia Pethiyagoda, Meegaskumbura & Maduwage, 2012 Desmopuntius Kottelat, 2013 Eechathalakenda Menon, 1999 Enteromius Cope, 1867 Gymnodiptychus Herzenstein, 1892 Haludaria Pethiyagoda, 2013 Hampala Kuhl & van Hasselt, 1823 Namaquacypris Skelton, Swartz & Vreven, 2018 Oliotius Kottelat, 2013 Oreichthys H. M. Smith, 1933 Osteobrama Heckel, 1843 Pethia Pethiyagoda, Meegaskumbura & Maduwage, 2012 Plesiopuntius Sudasinghe, Rüber & Meegaskumbura, 2023 Prolabeo Norman, 1932 Prolabeops Schultz, 1941 Pseudobarbus A. Smith, 1841 Puntigrus Kottelat, 2013 Puntius Hamilton, 1822 Rohanella Sudasinghe, Rüber & Meegaskumbura, 2023 Rohtee Sykes 1839 Sedercypris Skelton, Swartz & Vreven, 2018 Striuntius Kottelat, 2013 Systomus McClelland, 1838 Waikhomia Katwate, Kumkar, Raghavan & Dahanukar, 2020 Xenobarbus Norman, 1923 Subfamily Spinibarbinae Yang et al, 2015 Spinibarbichthys Oshima, 1926 Spinibarbus Oshima, 1919 Subfamily Torinae Karaman, 1971 Acapoeta Cockerell, 1910 Arabibarbus Borkenhagen, 2014 Atlantor Borkenhagen & Freyhof, 2023 Carasobarbus Karaman, 1971 Hypselobarbus Bleeker, 1860 Labeobarbus Rüppell, 1835 Lepidopygopsis B. S. Raj 1941 Mesopotamichthys Karaman, 1971 Naziritor Mirza & Javed, 1985 Neolissochilus Rainboth, 1985 Osteochilichthys Hora, 1942 Pterocapoeta Günther, 1902 Sanagia Holly, 1926 Tor Gray, 1834 With such a large and diverse family the taxonomy and phylogenies are always being worked on so alternative classifications are being created as new information is discovered, for example: Phylogeny
Biology and health sciences
Cypriniformes
Animals
7330
https://en.wikipedia.org/wiki/Complementary%20DNA
Complementary DNA
In genetics, complementary DNA (cDNA) is DNA that was reverse transcribed (via reverse transcriptase) from an RNA (e.g., messenger RNA or microRNA). cDNA exists in both single-stranded and double-stranded forms and in both natural and engineered forms. In engineered forms, it often is a copy (replicate) of the naturally occurring DNA from any particular organism's natural genome; the organism's own mRNA was naturally transcribed from its DNA, and the cDNA is reverse transcribed from the mRNA, yielding a duplicate of the original DNA. Engineered cDNA is often used to express a specific protein in a cell that does not normally express that protein (i.e., heterologous expression), or to sequence or quantify mRNA molecules using DNA based methods (qPCR, RNA-seq). cDNA that codes for a specific protein can be transferred to a recipient cell for expression as part of recombinant DNA, often bacterial or yeast expression systems. cDNA is also generated to analyze transcriptomic profiles in bulk tissue, single cells, or single nuclei in assays such as microarrays, qPCR, and RNA-seq. In natural forms, cDNA is produced by retroviruses (such as HIV-1, HIV-2, simian immunodeficiency virus, etc.) and then integrated into the host's genome, where it creates a provirus. The term cDNA is also used, typically in a bioinformatics context, to refer to an mRNA transcript's sequence, expressed as DNA bases (deoxy-GCAT) rather than RNA bases (GCAU). Patentability of cDNA was a subject of a 2013 US Supreme Court decision in Association for Molecular Pathology v. Myriad Genetics, Inc. As a compromise, the Court declared, that exons-only cDNA is patent-eligible, whereas isolated sequences of naturally occurring DNA comprising introns are not. Synthesis RNA serves as a template for cDNA synthesis. In cellular life, cDNA is generated by viruses and retrotransposons for integration of RNA into target genomic DNA. In molecular biology, RNA is purified from source material after genomic DNA, proteins and other cellular components are removed. cDNA is then synthesized through in vitro reverse transcription. RNA purification RNA is transcribed from genomic DNA in host cells and is extracted by first lysing cells then purifying RNA utilizing widely used methods such as phenol-chloroform, silica column, and bead-based RNA extraction methods. Extraction methods vary depending on the source material. For example, extracting RNA from plant tissue requires additional reagents, such as polyvinylpyrrolidone (PVP), to remove phenolic compounds, carbohydrates, and other compounds that will otherwise render RNA unusable. To remove DNA and proteins, enzymes such as DNase and Proteinase K are used for degradation. Importantly, RNA integrity is maintained by inactivating RNases with chaotropic agents such as guanidinium isothiocyanate, sodium dodecyl sulphate (SDS), phenol or chloroform. Total RNA is then separated from other cellular components and precipitated with alcohol. Various commercial kits exist for simple and rapid RNA extractions for specific applications. Additional bead-based methods can be used to isolate specific sub-types of RNA (e.g. mRNA and microRNA) based on size or unique RNA regions. Reverse transcription First-strand synthesis Using a reverse transcriptase enzyme and purified RNA templates, one strand of cDNA is produced (first-strand cDNA synthesis). 
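As a rough, self-contained illustration of the relationship just described, the sketch below (Python; the transcript sequence and function name are invented for this example, not taken from any source) derives a first-strand cDNA sequence as the reverse complement of an mRNA, written in DNA bases.

```python
# Minimal sketch: derive a first-strand cDNA sequence from an mRNA sequence.
# The mRNA below is a made-up example, not a real transcript.

COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def first_strand_cdna(mrna: str) -> str:
    """Return the cDNA strand (5'->3') complementary to the given mRNA (5'->3')."""
    # Reverse transcriptase reads the mRNA 3'->5' and writes DNA 5'->3',
    # so the first-strand cDNA is the reverse complement of the mRNA, in DNA bases.
    return "".join(COMPLEMENT[base] for base in reversed(mrna.upper()))

mrna = "AUGGCCAUUGUAAUGGGCCGCUGAAAGGGUGCCCGAUAG"  # hypothetical transcript
print(first_strand_cdna(mrna))
```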
The M-MLV reverse transcriptase from the Moloney murine leukemia virus is commonly used due to its reduced RNase H activity suited for transcription of longer RNAs. The AMV reverse transcriptase from the avian myeloblastosis virus may also be used for RNA templates with strong secondary structures (i.e. high melting temperature). cDNA is commonly generated from mRNA for gene expression analyses such as RT-qPCR and RNA-seq. mRNA is selectively reverse transcribed using oligo-dT primers that are the reverse complement of the poly-adenylated tail on the 3' end of all mRNA. The oligo-dT primer anneals to the poly-adenylated tail of the mRNA to serve as a binding site for the reverse transcriptase to begin reverse transcription. An optimized mixture of oligo-dT and random hexamer primers increases the chance of obtaining full-length cDNA while reducing 5' or 3' bias. Ribosomal RNA may also be depleted to enrich both mRNA and non-poly-adenylated transcripts such as some non-coding RNA. Second-strand synthesis The result of first-strand syntheses, RNA-DNA hybrids, can be processed through multiple second-strand synthesis methods or processed directly in downstream assays. An early method known as hairpin-primed synthesis relied on hairpin formation on the 3' end of the first-strand cDNA to prime second-strand synthesis. However, priming is random and hairpin hydrolysis leads to loss of information. The Gubler and Hoffman Procedure uses E. Coli RNase H to nick mRNA that is replaced with E. Coli DNA Polymerase I and sealed with E. Coli DNA Ligase. An optimization of this procedure relies on low RNase H activity of M-MLV to nick mRNA with remaining RNA later removed by adding RNase H after DNA Polymerase translation of the second-strand cDNA. This prevents lost sequence information at the 5' end of the mRNA. Applications Complementary DNA is often used in gene cloning or as gene probes or in the creation of a cDNA library. When scientists transfer a gene from one cell into another cell in order to express the new genetic material as a protein in the recipient cell, the cDNA will be added to the recipient (rather than the entire gene), because the DNA for an entire gene may include DNA that does not code for the protein or that interrupts the coding sequence of the protein (e.g., introns). Partial sequences of cDNAs are often obtained as expressed sequence tags. With amplification of DNA sequences via polymerase chain reaction (PCR) now commonplace, one will typically conduct reverse transcription as an initial step, followed by PCR to obtain an exact sequence of cDNA for intra-cellular expression. This is achieved by designing sequence-specific DNA primers that hybridize to the 5' and 3' ends of a cDNA region coding for a protein. Once amplified, the sequence can be cut at each end with nucleases and inserted into one of many small circular DNA sequences known as expression vectors. Such vectors allow for self-replication, inside the cells, and potentially integration in the host DNA. They typically also contain a strong promoter to drive transcription of the target cDNA into mRNA, which is then translated into protein. cDNA is also used to study gene expression via methods such as RNA-seq or RT-qPCR. For sequencing, RNA must be fragmented due to sequencing platform size limitations. Additionally, second-strand synthesized cDNA must be ligated with adapters that allow cDNA fragments to be PCR amplified and bind to sequencing flow cells. 
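The primer-design step mentioned above can also be sketched in code. The toy below ignores melting temperature, GC content and secondary structure, all of which matter in real primer design; the cDNA sequence and the 20-base primer length are arbitrary assumptions for illustration.

```python
# Naive sketch of picking PCR primers flanking a cDNA coding region.
# Real primer design also checks melting temperature, GC content and hairpins;
# this toy version just takes the terminal 20 bases of an invented sequence.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    return "".join(COMPLEMENT[b] for b in reversed(seq.upper()))

def flanking_primers(cdna: str, length: int = 20) -> tuple[str, str]:
    """Forward primer matches the 5' end; the reverse primer is the reverse
    complement of the 3' end, so both anneal pointing into the region."""
    forward = cdna[:length]
    reverse = reverse_complement(cdna[-length:])
    return forward, reverse

cdna = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAGTTCGAACTGAGG"  # made-up cDNA
print(flanking_primers(cdna))
```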
Gene-specific analysis methods commonly use microarrays and RT-qPCR to quantify cDNA levels via fluorometric and other methods. On 13 June 2013, the United States Supreme Court ruled in the case of Association for Molecular Pathology v. Myriad Genetics that while naturally occurring genes cannot be patented, cDNA is patent-eligible because it does not occur naturally. Viruses and retrotransposons Some viruses also use cDNA to turn their viral RNA into mRNA (viral RNA → cDNA → mRNA). The mRNA is used to make viral proteins to take over the host cell. An example of this first step from viral RNA to cDNA can be seen in the HIV cycle of infection. Here, the host cell membrane becomes attached to the virus' lipid envelope which allows the viral capsid with two copies of viral genome RNA to enter the host. The cDNA copy is then made through reverse transcription of the viral RNA, a process facilitated by the chaperone CypA and a viral capsid associated reverse transcriptase. cDNA is also generated by retrotransposons in eukaryotic genomes. Retrotransposons are mobile genetic elements that move themselves within, and sometimes between, genomes via RNA intermediates. This mechanism is shared with viruses with the exclusion of the generation of infectious particles.
Biology and health sciences
Molecular biology
Biology
7346
https://en.wikipedia.org/wiki/Centimetre%E2%80%93gram%E2%80%93second%20system%20of%20units
Centimetre–gram–second system of units
The centimetre–gram–second system of units (CGS or cgs) is a variant of the metric system based on the centimetre as the unit of length, the gram as the unit of mass, and the second as the unit of time. All CGS mechanical units are unambiguously derived from these three base units, but there are several different ways in which the CGS system was extended to cover electromagnetism. The CGS system has been largely supplanted by the MKS system based on the metre, kilogram, and second, which was in turn extended and replaced by the International System of Units (SI). In many fields of science and engineering, SI is the only system of units in use, but CGS is still prevalent in certain subfields. In measurements of purely mechanical systems (involving units of length, mass, force, energy, pressure, and so on), the differences between CGS and SI are straightforward: the unit-conversion factors are all powers of 10, as 100 cm = 1 m and 1000 g = 1 kg. For example, the CGS unit of force is the dyne, which is defined as 1 g⋅cm/s2, so the SI unit of force, the newton (1 kg⋅m/s2), is equal to 100,000 dynes. On the other hand, in measurements of electromagnetic phenomena (involving units of charge, electric and magnetic fields, voltage, and so on), converting between CGS and SI is less straightforward. Formulas for physical laws of electromagnetism (such as Maxwell's equations) take a form that depends on which system of units is being used, because the electromagnetic quantities are defined differently in SI and in CGS. Furthermore, within CGS, there are several plausible ways to define electromagnetic quantities, leading to different "sub-systems", including Gaussian units, "ESU", "EMU", and Heaviside–Lorentz units. Among these choices, Gaussian units are the most common today, and "CGS units" is often intended to refer to CGS-Gaussian units. History The CGS system goes back to a proposal in 1832 by the German mathematician Carl Friedrich Gauss to base a system of absolute units on the three fundamental units of length, mass and time. Gauss chose the units of millimetre, milligram and second. In 1873, a committee of the British Association for the Advancement of Science, including physicists James Clerk Maxwell and William Thomson, 1st Baron Kelvin, recommended the general adoption of centimetre, gram and second as fundamental units, and to express all derived electromagnetic units in these fundamental units, using the prefix "C.G.S. unit of ...". The sizes of many CGS units turned out to be inconvenient for practical purposes. For example, many everyday objects are hundreds or thousands of centimetres long, such as humans, rooms and buildings. Thus the CGS system never gained wide use outside the field of science. Starting in the 1880s, and more significantly by the mid-20th century, CGS was gradually superseded internationally for scientific purposes by the MKS (metre–kilogram–second) system, which in turn developed into the modern SI standard. Since the international adoption of the MKS standard in the 1940s and the SI standard in the 1960s, the technical use of CGS units has gradually declined worldwide. CGS units have been deprecated in favor of SI units by NIST, as well as organizations such as the American Physical Society and the International Astronomical Union. SI units are predominantly used in engineering applications and physics education, while Gaussian CGS units are still commonly used in theoretical physics, describing microscopic systems, relativistic electrodynamics, and astrophysics.
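Because the mechanical conversion factors are pure powers of ten, the dyne-newton relation quoted above can be checked in a few lines. The sketch below is only an illustrative calculation; the erg and the barye (the CGS units of energy and pressure) are included for comparison.

```python
# Check the power-of-ten relations between CGS and SI mechanical units.
CM = 1e-2   # metres per centimetre
G  = 1e-3   # kilograms per gram

dyne  = G * CM          # 1 dyn = 1 g*cm/s^2, expressed in newtons
erg   = G * CM**2       # 1 erg = 1 g*cm^2/s^2, expressed in joules
barye = G / CM          # 1 Ba  = 1 g/(cm*s^2), expressed in pascals

print(dyne)   # 1e-05 -> 1 N  = 100,000 dyn
print(erg)    # 1e-07 -> 1 J  = 10^7 erg
print(barye)  # 0.1   -> 1 Ba = 0.1 Pa
```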
The units gram and centimetre remain useful as noncoherent units within the SI system, as with any other prefixed SI units. Definition of CGS units in mechanics In mechanics, the quantities in the CGS and SI systems are defined identically. The two systems differ only in the scale of the three base units (centimetre versus metre and gram versus kilogram, respectively), with the third unit (second) being the same in both systems. There is a direct correspondence between the base units of mechanics in CGS and SI. Since the formulae expressing the laws of mechanics are the same in both systems and since both systems are coherent, the definitions of all coherent derived units in terms of the base units are the same in both systems, and there is an unambiguous relationship between derived units: v = dx/dt (definition of velocity), F = m⋅d2x/dt2 (Newton's second law of motion), E = F⋅dx (energy defined in terms of work), p = F/L2 (pressure defined as force per unit area), η = τ/(dv/dx) (dynamic viscosity defined as shear stress per unit velocity gradient). Thus, for example, the CGS unit of pressure, barye, is related to the CGS base units of length, mass, and time in the same way as the SI unit of pressure, pascal, is related to the SI base units of length, mass, and time: 1 unit of pressure = 1 unit of force / (1 unit of length)2 = 1 unit of mass / (1 unit of length × (1 unit of time)2) 1 Ba = 1 g/(cm⋅s2) 1 Pa = 1 kg/(m⋅s2). Expressing a CGS derived unit in terms of the SI base units, or vice versa, requires combining the scale factors that relate the two systems: 1 Ba = 1 g/(cm⋅s2) = 10−3 kg / (10−2 m⋅s2) = 10−1 kg/(m⋅s2) = 10−1 Pa. Definitions and conversion factors of CGS units in mechanics Derivation of CGS units in electromagnetism CGS approach to electromagnetic units The conversion factors relating electromagnetic units in the CGS and SI systems are made more complex by the differences in the formulas expressing physical laws of electromagnetism as assumed by each system of units, specifically in the nature of the constants that appear in these formulas. This illustrates the fundamental difference in the ways the two systems are built: In SI, the unit of electric current, the ampere (A), was historically defined such that the magnetic force exerted by two infinitely long, thin, parallel wires 1 metre apart and carrying a current of 1 ampere is exactly 2 × 10−7 newtons per metre of length. This definition results in all SI electromagnetic units being numerically consistent (subject to factors of some integer powers of 10) with those of the CGS-EMU system described in further sections. The ampere is a base unit of the SI system, with the same status as the metre, kilogram, and second. Thus the relationship in the definition of the ampere with the metre and newton is disregarded, and the ampere is not treated as dimensionally equivalent to any combination of other base units. As a result, electromagnetic laws in SI require an additional constant of proportionality (see Vacuum permeability) to relate electromagnetic units to kinematic units. (This constant of proportionality is derivable directly from the above definition of the ampere.) All other electric and magnetic units are derived from these four base units using the most basic common definitions: for example, electric charge q is defined as current I multiplied by time t, resulting in the unit of electric charge, the coulomb (C), being defined as 1 C = 1 A⋅s.
The CGS system variant avoids introducing new base quantities and units, and instead defines all electromagnetic quantities by expressing the physical laws that relate electromagnetic phenomena to mechanics with only dimensionless constants, and hence all units for these quantities are directly derived from the centimetre, gram, and second. In each of these systems the quantities called "charge" etc. may be a different quantity; they are distinguished here by a superscript. The corresponding quantities of each system are related through a proportionality constant. Maxwell's equations can be written in each of these systems. Electrostatic units (ESU) In the electrostatic units variant of the CGS system (CGS-ESU), charge is defined as the quantity that obeys a form of Coulomb's law without a multiplying constant (and current is then defined as charge per unit time): F = q1⋅q2/r2. The ESU unit of charge, franklin (Fr), also known as statcoulomb or esu charge, is therefore defined as follows: two equal point charges of 1 franklin each, placed 1 centimetre apart, repel each other with a force of 1 dyne. Therefore, in CGS-ESU, a franklin is equal to a centimetre times square root of dyne: 1 Fr = 1 dyn1/2⋅cm = 1 g1/2⋅cm3/2⋅s−1. The unit of current is defined as 1 Fr per second. In the CGS-ESU system, charge q therefore has the dimension M1/2L3/2T−1. Other units in the CGS-ESU system include the statampere (1 statC/s) and statvolt (1 erg/statC). In CGS-ESU, all electric and magnetic quantities are dimensionally expressible in terms of length, mass, and time, and none has an independent dimension. Such a system of units of electromagnetism, in which the dimensions of all electric and magnetic quantities are expressible in terms of the mechanical dimensions of mass, length, and time, is traditionally called an 'absolute system'. Unit symbols All electromagnetic units in the CGS-ESU system that have not been given names of their own are named as the corresponding SI name with an attached prefix "stat" or with a separate abbreviation "esu", and similarly with the corresponding symbols. Electromagnetic units (EMU) In another variant of the CGS system, electromagnetic units (EMU), current is defined via the force existing between two thin, parallel, infinitely long wires carrying it, and charge is then defined as current multiplied by time. (This approach was eventually used to define the SI unit of ampere as well.) The EMU unit of current, biot (Bi), also known as abampere or emu current, is therefore defined as follows: one biot is that constant current which, if maintained in two infinitely long, thin, parallel wires placed one centimetre apart in vacuum, would produce between them a force of two dynes per centimetre of length. Therefore, in electromagnetic CGS units, a biot is equal to a square root of dyne: 1 Bi = 1 dyn1/2 = 1 g1/2⋅cm1/2⋅s−1. The unit of charge in CGS EMU is the abcoulomb: 1 abC = 1 Bi⋅s. Dimensionally in the CGS-EMU system, charge q is therefore equivalent to M1/2L1/2. Hence, neither charge nor current is an independent physical quantity in the CGS-EMU system. EMU notation All electromagnetic units in the CGS-EMU system that do not have proper names are denoted by a corresponding SI name with an attached prefix "ab" or with a separate abbreviation "emu". Practical CGS units The practical CGS system is a hybrid system that uses the volt and the ampere as the units of voltage and current respectively. Doing this avoids the inconveniently large and small electrical units that arise in the esu and emu systems. This system was at one time widely used by electrical engineers because the volt and ampere had been adopted as international standard units by the International Electrical Congress of 1881. As well as the volt and ampere, the farad (capacitance), ohm (resistance), coulomb (electric charge), and henry (inductance) are consequently also used in the practical system and are the same as the SI units.
The magnetic units are those of the emu system. The electrical units, other than the volt and ampere, are determined by the requirement that any equation involving only electrical and kinematical quantities that is valid in SI should also be valid in the system. For example, since electric field strength is voltage per unit length, its unit is the volt per centimetre, which is one hundred times the SI unit. The system is electrically rationalized and magnetically unrationalized; i.e., and , but the above formula for is invalid. A closely related system is the International System of Electric and Magnetic Units, which has a different unit of mass so that the formula for ′ is invalid. The unit of mass was chosen to remove powers of ten from contexts in which they were considered to be objectionable (e.g., and ). Inevitably, the powers of ten reappeared in other contexts, but the effect was to make the familiar joule and watt the units of work and power respectively. The ampere-turn system is constructed in a similar way by considering magnetomotive force and magnetic field strength to be electrical quantities and rationalizing the system by dividing the units of magnetic pole strength and magnetization by 4. The units of the first two quantities are the ampere and the ampere per centimetre respectively. The unit of magnetic permeability is that of the emu system, and the magnetic constitutive equations are and . Magnetic reluctance is given a hybrid unit to ensure the validity of Ohm's law for magnetic circuits. In all the practical systems ε0 = 8.8542 × 10−14 A⋅s/(V⋅cm), μ0 = 1 V⋅s/(A⋅cm), and c2 = 1/(4π × 10−9 ε0μ0). Other variants There were at various points in time about half a dozen systems of electromagnetic units in use, most based on the CGS system. These include the Gaussian units and the Heaviside–Lorentz units. Electromagnetic units in various CGS systems In this table, c = is the numeric value of the speed of light in vacuum when expressed in units of centimetres per second. The symbol "≘" is used instead of "=" as a reminder that the units are corresponding but not equal. For example, according to the capacitance row of the table, if a capacitor has a capacitance of 1 F in SI, then it has a capacitance of (10−9 c2) cm in ESU; but it is incorrect to replace "1 F" with "(10−9 c2) cm" within an equation or formula. (This warning is a special aspect of electromagnetism units. By contrast it is always correct to replace, e.g., "1 m" with "100 cm" within an equation or formula.) Physical constants in CGS units Advantages and disadvantages Lack of unique unit names leads to potential confusion: "15 emu" may mean either 15 abvolts, or 15 emu units of electric dipole moment, or 15 emu units of magnetic susceptibility, sometimes (but not always) per gram, or per mole. With its system of uniquely named units, the SI removes any confusion in usage: 1 ampere is a fixed value of a specified quantity, and so are 1 henry, 1 ohm, and 1 volt. In the CGS-Gaussian system, electric and magnetic fields have the same units, 40 is replaced by 1, and the only dimensional constant appearing in the Maxwell equations is c, the speed of light. The Heaviside–Lorentz system has these properties as well (with ε0 equaling 1). 
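Two of the numerical correspondences discussed above can be verified directly: the SI value of the ESU charge unit follows from its defining force condition, and the ESU equivalent of one farad follows from the (10−9 c2) cm rule. The sketch below is illustrative only; the constants are standard CODATA values, and the capacitance figure is a correspondence of numbers, not an equality of units.

```python
import math

# 1) SI value of the ESU charge unit (franklin / statcoulomb): two equal charges
#    1 cm apart repel with 1 dyne, so solve F = q^2 / (4*pi*eps0*r^2) for q.
eps0 = 8.8541878128e-12        # F/m, vacuum permittivity
F, r = 1e-5, 1e-2              # 1 dyne in newtons, 1 cm in metres
q_fr = math.sqrt(4 * math.pi * eps0 * F * r**2)
print(q_fr)                    # ~3.34e-10 C per franklin

# 2) ESU "length" corresponding to 1 farad, using the (1e-9 * c^2) cm rule,
#    with c taken as a pure number in cm/s.
c_cgs = 2.99792458e10
print(1e-9 * c_cgs**2)         # ~8.99e11, i.e. 1 F corresponds to ~9e11 cm
```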
In SI, and other rationalized systems (for example, Heaviside–Lorentz), the unit of current was chosen such that electromagnetic equations concerning charged spheres contain 4π, those concerning coils of current and straight wires contain 2π, and those dealing with charged surfaces lack π entirely, which was the most convenient choice for applications in electrical engineering and relates directly to the geometric symmetry of the system being described by the equation. Specialized unit systems are used to simplify formulas further than either SI or CGS do, by eliminating constants through a convention of normalizing quantities with respect to some system of natural units. For example, in particle physics a system is in use where every quantity is expressed by only one unit of energy, the electronvolt, with lengths, times, and so on all converted into units of energy by inserting factors of the speed of light c and the reduced Planck constant ħ. This unit system is convenient for calculations in particle physics, but is impractical in other contexts.
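The natural-unit convention described in the last paragraph can be illustrated with a short calculation; the length and time chosen below are arbitrary examples, and the constants are standard CODATA values.

```python
# Convert a length and a time into energy units (eV) via hbar and c, as in
# particle-physics natural units.
hbar = 1.054571817e-34   # J*s
c    = 2.99792458e8      # m/s
eV   = 1.602176634e-19   # J

length = 1e-15           # 1 fm, a typical nuclear length scale
time   = 1e-24           # 1e-24 s, an illustrative short time

print(hbar * c / eV / length)   # ~2.0e8 eV: 1 fm corresponds to about 197 MeV
print(hbar / eV / time)         # ~6.6e8 eV: 1e-24 s corresponds to about 660 MeV
```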
Physical sciences
Measurement systems
Basics and measurement
7376
https://en.wikipedia.org/wiki/Cosmic%20microwave%20background
Cosmic microwave background
The cosmic microwave background (CMB, CMBR), or relic radiation, is microwave radiation that fills all space in the observable universe. With a standard optical telescope, the background space between stars and galaxies is almost completely dark. However, a sufficiently sensitive radio telescope detects a faint background glow that is almost uniform and is not associated with any star, galaxy, or other object. This glow is strongest in the microwave region of the electromagnetic spectrum. The accidental discovery of the CMB in 1965 by American radio astronomers Arno Penzias and Robert Wilson was the culmination of work initiated in the 1940s. The CMB is landmark evidence of the Big Bang theory for the origin of the universe. In the Big Bang cosmological models, during the earliest periods, the universe was filled with an opaque fog of dense, hot plasma of sub-atomic particles. As the universe expanded, this plasma cooled to the point where protons and electrons combined to form neutral atoms of mostly hydrogen. Unlike the plasma, these atoms could not scatter thermal radiation by Thomson scattering, and so the universe became transparent. Known as the recombination epoch, this decoupling event released photons to travel freely through space. However, the photons have grown less energetic due to the cosmological redshift associated with the expansion of the universe. The surface of last scattering refers to a shell at the right distance in space so photons are now received that were originally emitted at the time of decoupling. The CMB is not completely smooth and uniform, showing a faint anisotropy that can be mapped by sensitive detectors. Ground and space-based experiments such as COBE, WMAP and Planck have been used to measure these temperature inhomogeneities. The anisotropy structure is determined by various interactions of matter and photons up to the point of decoupling, which results in a characteristic lumpy pattern that varies with angular scale. The distribution of the anisotropy across the sky has frequency components that can be represented by a power spectrum displaying a sequence of peaks and valleys. The peak values of this spectrum hold important information about the physical properties of the early universe: the first peak determines the overall curvature of the universe, while the second and third peak detail the density of normal matter and so-called dark matter, respectively. Extracting fine details from the CMB data can be challenging, since the emission has undergone modification by foreground features such as galaxy clusters. Features The cosmic microwave background radiation is an emission of uniform black body thermal energy coming from all directions. Intensity of the CMB is expressed in kelvin (K), the SI unit of temperature. The CMB has a thermal black body spectrum at a temperature of . Variations in intensity are expressed as variations in temperature. The blackbody temperature uniquely characterizes the intensity of the radiation at all wavelengths; a measured brightness temperature at any wavelength can be converted to a blackbody temperature. The radiation is remarkably uniform across the sky, very unlike the almost point-like structure of stars or clumps of stars in galaxies. The radiation is isotropic to roughly one part in 25,000: the root mean square variations are just over 100 μK, after subtracting a dipole anisotropy from the Doppler shift of the background radiation. 
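Several of the figures quoted in this section follow from a short blackbody calculation. The sketch below assumes the commonly quoted temperature of 2.725 K and a solar peculiar velocity of roughly 370 km/s (the precise value appears in the next paragraph); constants are standard CODATA values.

```python
import math

k_B  = 1.380649e-23      # J/K
h    = 6.62607015e-34    # J*s
hbar = h / (2 * math.pi)
c    = 2.99792458e8      # m/s
T    = 2.725             # K, assumed CMB temperature

# Peak of the blackbody spectrum per unit frequency: nu_peak ~ 2.821 * k_B * T / h
print(2.821 * k_B * T / h / 1e9)          # ~160 GHz, squarely in the microwave band

# Photon number density: n = (2*zeta(3)/pi^2) * (k_B*T / (hbar*c))^3
zeta3 = 1.2020569
print((2 * zeta3 / math.pi**2) * (k_B * T / (hbar * c))**3 / 1e6)   # ~411 per cm^3

# Dipole amplitude from the Sun's motion, Delta_T ~ T * v / c
v = 370e3                                  # m/s, approximate solar peculiar velocity
print(T * v / c * 1e3)                     # ~3.4 mK, the dominant anisotropy
```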
The latter is caused by the peculiar velocity of the Sun relative to the comoving cosmic rest frame as it moves at 369.82 ± 0.11 km/s towards the constellation Crater near its boundary with the constellation Leo. The CMB dipole and aberration at higher multipoles have been measured, consistent with galactic motion. Despite the very small degree of anisotropy in the CMB, many aspects can be measured with high precision and such measurements are critical for cosmological theories. In addition to temperature anisotropy, the CMB should have an angular variation in polarization. The polarization at each direction in the sky has an orientation described in terms of E-mode and B-mode polarization. The E-mode signal is a factor of 10 less strong than the temperature anisotropy; it supplements the temperature data as they are correlated. The B-mode signal is even weaker but may contain additional cosmological data. The anisotropy is related to the physical origin of the polarization. Excitation of an electron by linearly polarized light generates polarized light at 90 degrees to the incident direction. If the incoming radiation is isotropic, different incoming directions create polarizations that cancel out. If the incoming radiation has quadrupole anisotropy, residual polarization will be seen. Other than the temperature and polarization anisotropy, the CMB frequency spectrum is expected to feature tiny departures from the black-body law known as spectral distortions. These are also at the focus of an active research effort with the hope of a first measurement within the forthcoming decades, as they contain a wealth of information about the primordial universe and the formation of structures at late time. The CMB contains the vast majority of photons in the universe by a factor of 400 to 1; the number density of photons in the CMB is one billion times (109) the number density of matter in the universe. Without the expansion of the universe to cause the cooling of the CMB, the night sky would shine as brightly as the Sun. The energy density of the CMB is about 0.26 eV/cm3 (4.2 × 10−14 J/m3), corresponding to about 411 photons/cm3. History Early speculations In 1931, Georges Lemaître speculated that remnants of the early universe may be observable as radiation, but his candidate was cosmic rays. Richard C. Tolman showed in 1934 that expansion of the universe would cool blackbody radiation while maintaining a thermal spectrum. The cosmic microwave background was first predicted in 1948 by Ralph Alpher and Robert Herman, in a correction they prepared for a paper by Alpher's PhD advisor George Gamow. Alpher and Herman were able to estimate the temperature of the cosmic microwave background to be 5 K. Discovery The first published recognition of the CMB radiation as a detectable phenomenon appeared in a brief paper by Soviet astrophysicists A. G. Doroshkevich and Igor Novikov, in the spring of 1964. In 1964, David Todd Wilkinson and Peter Roll, Robert H. Dicke's colleagues at Princeton University, began constructing a Dicke radiometer to measure the cosmic microwave background. In 1964, Arno Penzias and Robert Woodrow Wilson at the Crawford Hill location of Bell Telephone Laboratories in nearby Holmdel Township, New Jersey, had built a Dicke radiometer that they intended to use for radio astronomy and satellite communication experiments.
The antenna was constructed in 1959 to support Project Echo—the National Aeronautics and Space Administration's passive communications satellites, which used large earth orbiting aluminized plastic balloons as reflectors to bounce radio signals from one point on the Earth to another. On 20 May 1964 they made their first measurement clearly showing the presence of the microwave background, with their instrument having an excess 4.2K antenna temperature which they could not account for. After receiving a telephone call from Crawford Hill, Dicke said "Boys, we've been scooped." A meeting between the Princeton and Crawford Hill groups determined that the antenna temperature was indeed due to the microwave background. Penzias and Wilson received the 1978 Nobel Prize in Physics for their discovery. Cosmic origin The interpretation of the cosmic microwave background was a controversial issue in the late 1960s. Alternative explanations included energy from within the solar system, from galaxies, from intergalactic plasma and from multiple extragalactic radio sources. Two requirements would show that the microwave radiation was truly "cosmic". First, the intensity vs frequency or spectrum needed to be shown to match a thermal or blackbody source. This was accomplished by 1968 in a series of measurements of the radiation temperature at higher and lower wavelengths. Second, the radiation needed be shown to be isotropic, the same from all directions. This was also accomplished by 1970, demonstrating that this radiation was truly cosmic in origin. Progress on theory In the 1970s numerous studies showed that tiny deviations from isotropy in the CMB could result from events in the early universe. Harrison, Peebles and Yu, and Zel'dovich realized that the early universe would require quantum inhomogeneities that would result in temperature anisotropy at the level of 10−4 or 10−5. Rashid Sunyaev, using the alternative name relic radiation, calculated the observable imprint that these inhomogeneities would have on the cosmic microwave background. COBE After a lull in the 1970s caused in part by the many experimental difficulties in measuring CMB at high precision, increasingly stringent limits on the anisotropy of the cosmic microwave background were set by ground-based experiments during the 1980s. RELIKT-1, a Soviet cosmic microwave background anisotropy experiment on board the Prognoz 9 satellite (launched 1 July 1983), gave the first upper limits on the large-scale anisotropy. The other key event in the 1980s was the proposal by Alan Guth for cosmic inflation. This theory of rapid spatial expansion gave an explanation for large-scale isotropy by allowing causal connection just before the epoch of last scattering. With this and similar theories, detailed prediction encouraged larger and more ambitious experiments. The NASA Cosmic Background Explorer (COBE) satellite orbited Earth in 1989–1996 detected and quantified the large scale anisotropies at the limit of its detection capabilities. The NASA COBE mission clearly confirmed the primary anisotropy with the Differential Microwave Radiometer instrument, publishing their findings in 1992. The team received the Nobel Prize in physics for 2006 for this discovery. Precision cosmology Inspired by the COBE results, a series of ground and balloon-based experiments measured cosmic microwave background anisotropies on smaller angular scales over the two decades. 
The sensitivity of the new experiments improved dramatically, with a reduction in internal noise by three orders of magnitude. The primary goal of these experiments was to measure the scale of the first acoustic peak, which COBE did not have sufficient resolution to resolve. This peak corresponds to large scale density variations in the early universe that are created by gravitational instabilities, resulting in acoustical oscillations in the plasma. The first peak in the anisotropy was tentatively detected by the MAT/TOCO experiment and the result was confirmed by the BOOMERanG and MAXIMA experiments. These measurements demonstrated that the geometry of the universe is approximately flat, rather than curved. They ruled out cosmic strings as a major component of cosmic structure formation and suggested cosmic inflation was the right theory of structure formation. Observations after COBE Inspired by the initial COBE results of an extremely isotropic and homogeneous background, a series of ground- and balloon-based experiments quantified CMB anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the angular scale of the first acoustic peak, for which COBE did not have sufficient resolution. These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the right theory. During the 1990s, the first peak was measured with increasing sensitivity and by 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one degree. Together with other cosmological data, these results implied that the geometry of the universe is flat. A number of ground-based interferometers provided measurements of the fluctuations with higher accuracy over the next three years, including the Very Small Array, Degree Angular Scale Interferometer (DASI), and the Cosmic Background Imager (CBI). DASI made the first detection of the polarization of the CMB and the CBI provided the first E-mode polarization spectrum with compelling evidence that it is out of phase with the T-mode spectrum. Wilkinson Microwave Anisotropy Probe In June 2001, NASA launched a second CMB space mission, WMAP, to make much more precise measurements of the large scale anisotropies over the full sky. WMAP used symmetric, rapid-multi-modulated scanning, rapid switching radiometers at five frequencies to minimize non-sky signal noise. The data from the mission was released in five installments, the last being the nine year summary. The results are broadly consistent Lambda CDM models based on 6 free parameters and fitting in to Big Bang cosmology with cosmic inflation. Degree Angular Scale Interferometer Atacama Cosmology Telescope Planck Surveyor A third space mission, the ESA (European Space Agency) Planck Surveyor, was launched in May 2009 and performed an even more detailed investigation until it was shut down in October 2013. Planck employed both HEMT radiometers and bolometer technology and measured the CMB at a smaller scale than WMAP. Its detectors were trialled in the Antarctic Viper telescope as ACBAR (Arcminute Cosmology Bolometer Array Receiver) experiment—which has produced the most precise measurements at small angular scales to date—and in the Archeops balloon telescope. On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map (565x318 jpeg, 3600x1800 jpeg) of the cosmic microwave background. 
The map suggests the universe is slightly older than researchers expected. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about 370,000 years old. The imprint reflects ripples that arose as early, in the existence of the universe, as the first nonillionth (10−30) of a second. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter. Based on the 2013 data, the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. On 5 February 2015, new data was released by the Planck mission, according to which the age of the universe is about 13.8 billion years and the Hubble constant was measured to be approximately 67.7 (km/s)/Mpc. South Pole Telescope Theoretical models The cosmic microwave background radiation and the cosmological redshift-distance relation are together regarded as the best available evidence for the Big Bang event. Measurements of the CMB have made the inflationary Big Bang model the Standard Cosmological Model. The discovery of the CMB in the mid-1960s curtailed interest in alternatives such as the steady state theory. In the Big Bang model for the formation of the universe, inflationary cosmology predicts that after about 10−37 seconds the nascent universe underwent exponential growth that smoothed out nearly all irregularities. The remaining irregularities were caused by quantum fluctuations in the inflaton field that caused the inflation event. Long before the formation of stars and planets, the early universe was more compact, much hotter and, starting 10−6 seconds after the Big Bang, filled with a uniform glow from its white-hot fog of interacting plasma of photons, electrons, and baryons. As the universe expanded, adiabatic cooling caused the energy density of the plasma to decrease until it became favorable for electrons to combine with protons, forming hydrogen atoms. This recombination event happened when the temperature was around 3000 K or when the universe was approximately 379,000 years old. As photons did not interact with these electrically neutral atoms, the former began to travel freely through space, resulting in the decoupling of matter and radiation. The color temperature of the ensemble of decoupled photons has continued to diminish ever since; now down to about 2.73 K, it will continue to drop as the universe expands. The intensity of the radiation corresponds to black-body radiation at 2.726 K because red-shifted black-body radiation is just like black-body radiation at a lower temperature. According to the Big Bang model, the radiation from the sky we measure today comes from a spherical surface called the surface of last scattering. This represents the set of locations in space at which the decoupling event is estimated to have occurred and at a point in time such that the photons from that distance have just reached observers. Most of the radiation energy in the universe is in the cosmic microwave background, making up a fraction of roughly 6 × 10−5 of the total density of the universe. Two of the greatest successes of the Big Bang theory are its prediction of the almost perfect black body spectrum and its detailed prediction of the anisotropies in the cosmic microwave background. The CMB spectrum has become the most precisely measured black body spectrum in nature.
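The cooling described above can be made concrete with the linear temperature-redshift scaling Tr(z) = T0 × (1 + z) given later in this article. The sketch below assumes T0 = 2.725 K and a last-scattering redshift near 1090, both of which appear elsewhere in the text.

```python
T0 = 2.725              # K, present-day CMB temperature
z_dec = 1089            # approximate redshift of last scattering

T_dec = T0 * (1 + z_dec)
print(T_dec)            # ~2970 K, close to the ~3000 K recombination temperature quoted above

# Typical thermal photon energy at decoupling, compared with hydrogen's 13.6 eV ionization energy
k_B_eV = 8.617333e-5    # eV/K, Boltzmann constant
print(k_B_eV * T_dec)   # ~0.26 eV, much less than 13.6 eV
```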
Predictions based on the Big Bang model In the late 1940s Alpher and Herman reasoned that if there was a Big Bang, the expansion of the universe would have stretched the high-energy radiation of the very early universe into the microwave region of the electromagnetic spectrum, and down to a temperature of about 5 K. They were slightly off with their estimate, but they had the right idea. They predicted the CMB. It took another 15 years for Penzias and Wilson to discover that the microwave background was actually there. According to standard cosmology, the CMB gives a snapshot of the hot early universe at the point in time when the temperature dropped enough to allow electrons and protons to form hydrogen atoms. This event made the universe nearly transparent to radiation because light was no longer being scattered off free electrons. When this occurred some 380,000 years after the Big Bang, the temperature of the universe was about 3,000 K. This corresponds to an ambient energy of about , which is much less than the ionization energy of hydrogen. This epoch is generally known as the "time of last scattering" or the period of recombination or decoupling. Since decoupling, the color temperature of the background radiation has dropped by an average factor of 1,089 due to the expansion of the universe. As the universe expands, the CMB photons are redshifted, causing them to decrease in energy. The color temperature of this radiation stays inversely proportional to a parameter that describes the relative expansion of the universe over time, known as the scale length. The color temperature Tr of the CMB as a function of redshift, z, can be shown to be proportional to the color temperature of the CMB as observed in the present day (2.725 K or 0.2348 meV): Tr = 2.725 K × (1 + z) The high degree of uniformity throughout the observable universe and its faint but measured anisotropy lend strong support for the Big Bang model in general and the ΛCDM ("Lambda Cold Dark Matter") model in particular. Moreover, the fluctuations are coherent on angular scales that are larger than the apparent cosmological horizon at recombination. Either such coherence is acausally fine-tuned, or cosmic inflation occurred. Primary anisotropy The anisotropy, or directional dependency, of the cosmic microwave background is divided into two types: primary anisotropy, due to effects that occur at the surface of last scattering and before; and secondary anisotropy, due to effects such as interactions of the background radiation with intervening hot gas or gravitational potentials, which occur between the last scattering surface and the observer. The structure of the cosmic microwave background anisotropies is principally determined by two effects: acoustic oscillations and diffusion damping (also called collisionless damping or Silk damping). The acoustic oscillations arise because of a conflict in the photon–baryon plasma in the early universe. The pressure of the photons tends to erase anisotropies, whereas the gravitational attraction of the baryons, moving at speeds much slower than light, makes them tend to collapse to form overdensities. These two effects compete to create acoustic oscillations, which give the microwave background its characteristic peak structure. The peaks correspond, roughly, to resonances in which the photons decouple when a particular mode is at its peak amplitude. The peaks contain interesting physical signatures. 
The angular scale of the first peak determines the curvature of the universe (but not the topology of the universe). The next peak (the ratio of the odd peaks to the even peaks) determines the reduced baryon density. The third peak can be used to get information about the dark-matter density. The locations of the peaks give important information about the nature of the primordial density perturbations. There are two fundamental types of density perturbations called adiabatic and isocurvature. A general density perturbation is a mixture of both, and different theories that purport to explain the primordial density perturbation spectrum predict different mixtures. Adiabatic density perturbations: In an adiabatic density perturbation, the fractional additional number density of each type of particle (baryons, photons, etc.) is the same. That is, if at one place there is a 1% higher number density of baryons than average, then at that place there is a 1% higher number density of photons (and a 1% higher number density in neutrinos) than average. Cosmic inflation predicts that the primordial perturbations are adiabatic. Isocurvature density perturbations: In an isocurvature density perturbation, the sum (over different types of particle) of the fractional additional densities is zero. That is, a perturbation where at some spot there is 1% more energy in baryons than average, 1% more energy in photons than average, and 2% less energy in neutrinos than average, would be a pure isocurvature perturbation. Hypothetical cosmic strings would produce mostly isocurvature primordial perturbations. The CMB spectrum can distinguish between these two because these two types of perturbations produce different peak locations. Isocurvature density perturbations produce a series of peaks whose angular scales (ℓ values of the peaks) are roughly in the ratio 1 : 3 : 5 : ..., while adiabatic density perturbations produce peaks whose locations are in the ratio 1 : 2 : 3 : ... Observations are consistent with the primordial density perturbations being entirely adiabatic, providing key support for inflation, and ruling out many models of structure formation involving, for example, cosmic strings. Collisionless damping is caused by two effects, when the treatment of the primordial plasma as a fluid begins to break down: the increasing mean free path of the photons as the primordial plasma becomes increasingly rarefied in an expanding universe, and the finite depth of the last scattering surface (LSS), which causes the mean free path to increase rapidly during decoupling, even while some Compton scattering is still occurring. These effects contribute about equally to the suppression of anisotropies at small scales and give rise to the characteristic exponential damping tail seen in the very small angular scale anisotropies. The depth of the LSS refers to the fact that the decoupling of the photons and baryons does not happen instantaneously, but instead requires an appreciable fraction of the age of the universe up to that era. One method of quantifying how long this process took uses the photon visibility function (PVF). This function is defined so that, denoting the PVF by P(t), the probability that a CMB photon last scattered between time t and t + dt is given by P(t)dt. The maximum of the PVF (the time when it is most likely that a given CMB photon last scattered) is known quite precisely. The first-year WMAP results put the time at which P(t) has a maximum as 372,000 years. This is often taken as the "time" at which the CMB formed.
However, to figure out how it took the photons and baryons to decouple, we need a measure of the width of the PVF. The WMAP team finds that the PVF is greater than half of its maximal value (the "full width at half maximum", or FWHM) over an interval of 115,000 years. By this measure, decoupling took place over roughly 115,000 years, and thus when it was complete, the universe was roughly 487,000 years old. Late time anisotropy Since the CMB came into existence, it has apparently been modified by several subsequent physical processes, which are collectively referred to as late-time anisotropy, or secondary anisotropy. When the CMB photons became free to travel unimpeded, ordinary matter in the universe was mostly in the form of neutral hydrogen and helium atoms. However, observations of galaxies today seem to indicate that most of the volume of the intergalactic medium (IGM) consists of ionized material (since there are few absorption lines due to hydrogen atoms). This implies a period of reionization during which some of the material of the universe was broken into hydrogen ions. The CMB photons are scattered by free charges such as electrons that are not bound in atoms. In an ionized universe, such charged particles have been liberated from neutral atoms by ionizing (ultraviolet) radiation. Today these free charges are at sufficiently low density in most of the volume of the universe that they do not measurably affect the CMB. However, if the IGM was ionized at very early times when the universe was still denser, then there are two main effects on the CMB: Small scale anisotropies are erased. (Just as when looking at an object through fog, details of the object appear fuzzy.) The physics of how photons are scattered by free electrons (Thomson scattering) induces polarization anisotropies on large angular scales. This broad angle polarization is correlated with the broad angle temperature perturbation. Both of these effects have been observed by the WMAP spacecraft, providing evidence that the universe was ionized at very early times, at a redshift around 10. The detailed provenance of this early ionizing radiation is still a matter of scientific debate. It may have included starlight from the very first population of stars (population III stars), supernovae when these first stars reached the end of their lives, or the ionizing radiation produced by the accretion disks of massive black holes. The time following the emission of the cosmic microwave background—and before the observation of the first stars—is semi-humorously referred to by cosmologists as the Dark Age, and is a period which is under intense study by astronomers (see 21 centimeter radiation). Two other effects which occurred between reionization and our observations of the cosmic microwave background, and which appear to cause anisotropies, are the Sunyaev–Zeldovich effect, where a cloud of high-energy electrons scatters the radiation, transferring some of its energy to the CMB photons, and the Sachs–Wolfe effect, which causes photons from the Cosmic Microwave Background to be gravitationally redshifted or blueshifted due to changing gravitational fields. Alternative theories The standard cosmology that includes the Big Bang "enjoys considerable popularity among the practicing cosmologists" However, there are challenges to the standard big bang framework for explaining CMB data. In particular standard cosmology requires fine-tuning of some free parameters, with different values supported by different experimental data. 
As an example of the fine-tuning issue, standard cosmology cannot predict the present temperature of the relic radiation, T0 ≈ 2.725 K. This value of T0 is one of the best results of experimental cosmology and the steady state model can predict it. However, alternative models have their own set of problems and they have only made post-facto explanations of existing observations. Nevertheless, these alternatives have played an important historic role in providing ideas for and challenges to the standard explanation. Polarization The cosmic microwave background is polarized at the level of a few microkelvin. There are two types of polarization, called E-mode (or gradient-mode) and B-mode (or curl mode). This is in analogy to electrostatics, in which the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. E-modes The E-modes arise from Thomson scattering in a heterogeneous plasma. E-modes were first seen in 2002 by the Degree Angular Scale Interferometer (DASI). B-modes B-modes are expected to be an order of magnitude weaker than the E-modes. The former are not produced by standard scalar type perturbations, but are generated by gravitational waves during cosmic inflation shortly after the big bang. However, gravitational lensing of the stronger E-modes can also produce B-mode polarization. Detecting the original B-mode signal requires analysis of the contamination caused by lensing of the relatively strong E-mode signal. Primordial gravitational waves Models of "slow-roll" cosmic inflation in the early universe predict primordial gravitational waves that would impact the polarisation of the cosmic microwave background, creating a specific pattern of B-mode polarization. Detection of this pattern would support the theory of inflation, and its strength can confirm or exclude different models of inflation. Claims that this characteristic pattern of B-mode polarization had been measured by the BICEP2 instrument were later attributed to cosmic dust due to new results of the Planck experiment. Gravitational lensing The second type of B-modes was discovered in 2013 using the South Pole Telescope with help from the Herschel Space Observatory. In October 2014, a measurement of the B-mode polarization at 150 GHz was published by the POLARBEAR experiment. Compared to BICEP2, POLARBEAR focuses on a smaller patch of the sky and is less susceptible to dust effects. The team reported that POLARBEAR's measured B-mode polarization was of cosmological origin (and not just due to dust) at a 97.2% confidence level. Multipole analysis The CMB angular anisotropies are usually presented in terms of power per multipole. The map of temperature across the sky, T(θ,φ), is written as a sum of spherical harmonics Yℓm(θ,φ) with coefficients aℓm, where each aℓm measures the strength of the angular oscillation in Yℓm(θ,φ); ℓ is the multipole number while m is the azimuthal number. The azimuthal variation is not significant and is removed by applying the angular correlation function, giving the power spectrum term Cℓ = ⟨|aℓm|2⟩. Increasing values of ℓ correspond to higher multipole moments of the CMB, meaning more rapid variation with angle. CMBR monopole term (ℓ = 0) The monopole term, ℓ = 0, is the constant isotropic mean temperature of the CMB, T0 = 2.7255 ± 0.0006 K with one standard deviation confidence. This term must be measured with absolute temperature devices, such as the FIRAS instrument on the COBE satellite. CMBR dipole anisotropy (ℓ = 1) The CMB dipole represents the largest anisotropy, which is in the first spherical harmonic (ℓ = 1), a cosine function.
The amplitude of the CMB dipole is around 3.36 mK. The CMB dipole moment is interpreted as the peculiar motion of the Earth relative to the CMB. Its amplitude varies with time due to the Earth's orbit about the barycenter of the solar system. This enables us to add a time-dependent term to the dipole expression. The modulation of this term is 1 year, which fits the observation done by COBE FIRAS. The dipole moment does not encode any primordial information. From the CMB data, it is seen that the Sun appears to be moving at about 370 km/s relative to the reference frame of the CMB (also called the CMB rest frame, or the frame of reference in which there is no motion through the CMB). The Local Group, the galaxy group that includes our own Milky Way galaxy, appears to be moving at about 620 km/s in the direction of galactic longitude ℓ ≈ 272°, b ≈ 30°. The dipole is now used to calibrate mapping studies. Multipole (ℓ ≥ 2) The temperature variation in the CMB temperature maps at higher multipoles, or ℓ ≥ 2, is considered to be the result of perturbations of the density in the early Universe, before the recombination epoch at a redshift of around 1100. Before recombination, the Universe consisted of a hot, dense plasma of electrons and baryons. In such a hot dense environment, electrons and protons could not form any neutral atoms. The baryons in such an early Universe remained highly ionized and so were tightly coupled with photons through the effect of Thomson scattering. These phenomena caused the pressure and gravitational effects to act against each other, and triggered fluctuations in the photon-baryon plasma. Quickly after the recombination epoch, the rapid expansion of the universe caused the plasma to cool down and these fluctuations are "frozen into" the CMB maps we observe today. Data analysis challenges Raw CMBR data, even from space vehicles such as WMAP or Planck, contain foreground effects that completely obscure the fine-scale structure of the cosmic microwave background. The fine-scale structure is superimposed on the raw CMBR data but is too small to be seen at the scale of the raw data. The most prominent of the foreground effects is the dipole anisotropy caused by the Sun's motion relative to the CMBR background. The dipole anisotropy and others due to Earth's annual motion relative to the Sun and numerous microwave sources in the galactic plane and elsewhere must be subtracted out to reveal the extremely tiny variations characterizing the fine-scale structure of the CMBR background. The detailed analysis of CMBR data to produce maps, an angular power spectrum, and ultimately cosmological parameters is a complicated, computationally difficult problem. In practice it is hard to take the effects of noise and foreground sources into account. In particular, these foregrounds are dominated by galactic emissions such as Bremsstrahlung, synchrotron, and dust that emit in the microwave band; in practice, the galaxy has to be removed, resulting in a CMB map that is not a full-sky map. In addition, point sources like galaxies and clusters represent another source of foreground which must be removed so as not to distort the short scale structure of the CMB power spectrum. Constraints on many cosmological parameters can be obtained from their effects on the power spectrum, and results are often calculated using Markov chain Monte Carlo sampling techniques.
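The multipole decomposition and power spectrum described above can be illustrated with a minimal numerical sketch. The coefficients below are random numbers standing in for a real analysis (tools such as healpy derive them from an actual sky map); only the averaging step that turns aℓm into Cℓ is shown.

```python
import numpy as np

# Build an angular power spectrum C_ell from spherical-harmonic coefficients a_lm
# by averaging |a_lm|^2 over m.  The a_lm here are random stand-ins, not real data.
rng = np.random.default_rng(0)
ell_max = 10

c_ell = []
for ell in range(ell_max + 1):
    # 2*ell + 1 azimuthal modes m = -ell..ell for each multipole ell
    a_lm = rng.normal(size=2 * ell + 1) + 1j * rng.normal(size=2 * ell + 1)
    c_ell.append(np.mean(np.abs(a_lm) ** 2))

print(c_ell[:4])   # C_0 is the monopole, C_1 the dipole, C_2 the quadrupole, ...
```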
Anomalies With the increasingly precise data provided by WMAP, there have been a number of claims that the CMB exhibits anomalies, such as very large scale anisotropies, anomalous alignments, and non-Gaussian distributions. The most longstanding of these is the low-ℓ multipole controversy. Even in the COBE map, it was observed that the quadrupole (ℓ = 2 spherical harmonic) has a low amplitude compared to the predictions of the Big Bang. In particular, the quadrupole and octupole (ℓ = 3) modes appear to have an unexplained alignment with each other and with both the ecliptic plane and equinoxes. A number of groups have suggested that this could be the signature of new physics at the greatest observable scales; other groups suspect systematic errors in the data. Ultimately, due to the foregrounds and the cosmic variance problem, the greatest modes will never be as well measured as the small angular scale modes. The analyses were performed on two maps that have had the foregrounds removed as far as possible: the "internal linear combination" map of the WMAP collaboration and a similar map prepared by Max Tegmark and others. Later analyses have pointed out that these are the modes most susceptible to foreground contamination from synchrotron, dust, and Bremsstrahlung emission, and from experimental uncertainty in the monopole and dipole. A full Bayesian analysis of the WMAP power spectrum demonstrates that the quadrupole prediction of Lambda-CDM cosmology is consistent with the data at the 10% level and that the observed octupole is not remarkable. Carefully accounting for the procedure used to remove the foregrounds from the full sky map further reduces the significance of the alignment by ~5%. Recent observations with the Planck telescope, which is very much more sensitive than WMAP and has a higher angular resolution, record the same anomaly, and so instrumental error (but not foreground contamination) appears to be ruled out. Coincidence is a possible explanation: the WMAP chief scientist Charles L. Bennett suggested that coincidence and human psychology were involved, saying "I do think there is a bit of a psychological effect; people want to find unusual things." Measurements of the density of quasars based on Wide-field Infrared Survey Explorer data find a dipole significantly different from the one extracted from the CMB anisotropy. This difference is in conflict with the cosmological principle. Future evolution Assuming the universe keeps expanding and it does not suffer a Big Crunch, a Big Rip, or another similar fate, the cosmic microwave background will continue redshifting until it is no longer detectable, and will be superseded first by the background produced by starlight, and perhaps later by the background radiation fields of processes that may take place in the far future of the universe, such as proton decay, evaporation of black holes, and positronium decay. Timeline of prediction, discovery and interpretation Thermal (non-microwave background) temperature predictions 1896 – Charles Édouard Guillaume estimates the "radiation of the stars" to be 5–6 K. 1926 – Sir Arthur Eddington estimates the non-thermal radiation of starlight in the galaxy "... by the formula the effective temperature corresponding to this density is 3.18° absolute ... black body". 1930s – Cosmologist Erich Regener calculates that the non-thermal spectrum of cosmic rays in the galaxy has an effective temperature of 2.8 K. 1931 – Term microwave first used in print: "When trials with wavelengths as low as 18 cm.
were made known, there was undisguised surprise that the problem of the micro-wave had been solved so soon." (Telegraph & Telephone Journal XVII, 179/1) 1934 – Richard Tolman shows that black-body radiation in an expanding universe cools but remains thermal. 1946 – Robert Dicke predicts "... radiation from cosmic matter" at < 20 K, but does not refer to background radiation. 1946 – George Gamow calculates a temperature of 50 K (assuming a 3-billion-year-old universe), commenting it "... is in reasonable agreement with the actual temperature of interstellar space", but does not mention background radiation. 1953 – Erwin Finlay-Freundlich, in support of his tired light theory, derives a blackbody temperature for intergalactic space of 2.3 K, and in the following year values of 1.9 K and 6.0 K. Microwave background radiation predictions and measurements 1941 – Andrew McKellar detects a "rotational" temperature of 2.3 K for the interstellar medium by comparing the population of CN doublet lines measured by W. S. Adams in a B star. 1948 – Ralph Alpher and Robert Herman estimate "the temperature in the universe" at 5 K. Although they do not specifically mention microwave background radiation, it may be inferred. 1953 – George Gamow estimates 7 K based on a model that does not rely on a free parameter. 1955 – Émile Le Roux of the Nançay Radio Observatory, in a sky survey at λ = 33 cm, initially reports a near-isotropic background radiation of 3 kelvins, plus or minus 2; he does not recognize the cosmological significance and later revises the error bars to 20 K. 1957 – Tigran Shmaonov reports that "the absolute effective temperature of the radioemission background ... is 4±3 K", with the radiation intensity independent of either time or direction of observation. Although Shmaonov did not recognize it at the time, it is now clear that he had observed the cosmic microwave background at a wavelength of 3.2 cm. 1964 – A. G. Doroshkevich and Igor Dmitrievich Novikov publish a brief paper suggesting microwave searches for the black-body radiation predicted by Gamow, Alpher, and Herman, in which they describe the CMB radiation phenomenon as detectable. 1964–65 – Arno Penzias and Robert Woodrow Wilson measure the temperature to be approximately 3 K. Robert Dicke, James Peebles, P. G. Roll, and D. T. Wilkinson interpret this radiation as a signature of the Big Bang. 1966 – Rainer K. Sachs and Arthur M. Wolfe theoretically predict microwave background fluctuation amplitudes created by gravitational potential variations between observers and the last scattering surface (see Sachs–Wolfe effect). 1968 – Martin Rees and Dennis Sciama theoretically predict microwave background fluctuation amplitudes created by photons traversing time-dependent wells of potential. 1969 – R. A. Sunyaev and Yakov Zel'dovich study the inverse Compton scattering of microwave background photons by hot electrons (see Sunyaev–Zel'dovich effect). 1983 – Researchers from the Cambridge Radio Astronomy Group and the Owens Valley Radio Observatory first detect the Sunyaev–Zel'dovich effect from clusters of galaxies. 1983 – The RELIKT-1 Soviet CMB anisotropy experiment is launched. 1990 – FIRAS on the Cosmic Background Explorer (COBE) satellite measures the black-body form of the CMB spectrum with exquisite precision, showing that the microwave background has a nearly perfect black-body spectrum with T = 2.73 K, and thereby strongly constrains the density of the intergalactic medium.
January 1992 – Scientists who analysed data from RELIKT-1 report the discovery of anisotropy in the cosmic microwave background at the Moscow astrophysical seminar. 1992 – Scientists who analysed data from COBE DMR report the discovery of anisotropy in the cosmic microwave background. 1995 – The Cosmic Anisotropy Telescope performs the first high-resolution observations of the cosmic microwave background. 1999 – First measurements of acoustic oscillations in the CMB anisotropy angular power spectrum from the MAT/TOCO, BOOMERanG, and Maxima experiments. The BOOMERanG experiment makes higher-quality maps at intermediate resolution and confirms that the universe is "flat". 2002 – Polarization discovered by DASI. 2003 – E-mode polarization spectrum obtained by the CBI. The CBI and the Very Small Array produce yet higher-quality maps at high resolution (covering small areas of the sky). 2003 – The Wilkinson Microwave Anisotropy Probe spacecraft produces an even higher-quality map at low and intermediate resolution of the whole sky (WMAP does not provide high-resolution data, but improves on the intermediate-resolution maps from BOOMERanG). 2004 – E-mode polarization spectrum obtained by the CBI. 2004 – The Arcminute Cosmology Bolometer Array Receiver produces a higher-quality map of the high-resolution structure not mapped by WMAP. 2005 – The Arcminute Microkelvin Imager and the Sunyaev–Zel'dovich Array begin the first surveys for very high redshift clusters of galaxies using the Sunyaev–Zel'dovich effect. 2005 – Ralph A. Alpher is awarded the National Medal of Science for his groundbreaking work in nucleosynthesis and the prediction that the expansion of the universe leaves behind background radiation, thus providing a model for the Big Bang theory. 2006 – The long-awaited three-year WMAP results are released, confirming previous analysis, correcting several points, and including polarization data. 2006 – Two of COBE's principal investigators, George Smoot and John Mather, receive the Nobel Prize in Physics for their work on precision measurement of the CMBR. 2006–2011 – Improved measurements from WMAP, new supernova surveys ESSENCE and SNLS, and baryon acoustic oscillations from SDSS and WiggleZ continue to be consistent with the standard Lambda-CDM model. 2010 – The first all-sky map from the Planck telescope is released. 2013 – An improved all-sky map from the Planck telescope is released, improving the measurements of WMAP and extending them to much smaller scales. 2014 – On March 17, 2014, astrophysicists of the BICEP2 collaboration announce the detection of inflationary gravitational waves in the B-mode power spectrum, which, if confirmed, would provide clear experimental evidence for the theory of inflation. However, on 19 June 2014, lowered confidence in confirming the cosmic inflation findings is reported. 2015 – On January 30, 2015, the same team of astronomers from BICEP2 withdraws the claim made the previous year. Based on the combined data of BICEP2 and Planck, the European Space Agency announces that the signal could be entirely attributed to dust in the Milky Way. 2018 – The final data and maps from the Planck telescope are released, with improved measurements of the polarization on large scales. 2019 – Analyses of the final 2018 Planck data continue to be released. In popular culture In the Stargate Universe TV series (2009–2011), an ancient spaceship, Destiny, was built to study patterns in the CMBR, which turn out to be a sentient message left over from the beginning of time.
In Wheelers, a novel (2000) by Ian Stewart & Jack Cohen, CMBR is explained as the encrypted transmissions of an ancient civilization. This allows the Jovian "blimps" to have a society older than the currently observed age of the universe. In The Three-Body Problem, a 2008 novel by Liu Cixin, a probe from an alien civilization compromises instruments monitoring the CMBR in order to deceive a character into believing the civilization has the power to manipulate the CMBR itself. The 2017 issue of the Swiss 20-franc bill lists several astronomical objects with their distances – the CMB is mentioned with 430 · 10^15 light-seconds. In the 2021 Marvel series WandaVision, a mysterious television broadcast is discovered within the Cosmic Microwave Background.
Physical sciences
Physical cosmology
null
7397
https://en.wikipedia.org/wiki/Color%20blindness
Color blindness
Color blindness or color vision deficiency (CVD) is the decreased ability to see color or differences in color. The severity of color blindness ranges from mostly unnoticeable to full absence of color perception. Color blindness is usually an inherited problem or variation in the functionality of one or more of the three classes of cone cells in the retina, which mediate color vision. The most common form is caused by a genetic condition called congenital red–green color blindness (including protan and deutan types), which affects up to 1 in 12 males (8%) and 1 in 200 females (0.5%). The condition is more prevalent in males, because the opsin genes responsible are located on the X chromosome. Rarer genetic conditions causing color blindness include congenital blue–yellow color blindness (tritan type), blue cone monochromacy, and achromatopsia. Color blindness can also result from physical or chemical damage to the eye, the optic nerve, parts of the brain, or from medication toxicity. Color vision also naturally degrades in old age. Diagnosis of color blindness is usually done with a color vision test, such as the Ishihara test. There is no cure for most causes of color blindness; however there is ongoing research into gene therapy for some severe conditions causing color blindness. Minor forms of color blindness do not significantly affect daily life and the color blind automatically develop adaptations and coping mechanisms to compensate for the deficiency. However, diagnosis may allow an individual, or their parents/teachers, to actively accommodate the condition. Color blind glasses (e.g. EnChroma) may help the red–green color blind at some color tasks, but they do not grant the wearer "normal color vision" or the ability to see "new" colors. Some mobile apps can use a device's camera to identify colors. Depending on the jurisdiction, the color blind are ineligible for certain careers, such as aircraft pilots, train drivers, police officers, firefighters, and members of the armed forces. The effect of color blindness on artistic ability is controversial, but a number of famous artists are believed to have been color blind. Effects A color blind person will have decreased (or no) color discrimination along the red–green axis, blue–yellow axis, or both. However, the vast majority of the color blind are only affected on their red–green axis. The first indication of color blindness generally consists of a person using the wrong color for an object, such as when painting, or calling a color by the wrong name. The colors that are confused are very consistent among people with the same type of color blindness. Confusion colors Confusion colors are pairs or groups of colors that will often be mistaken by the color blind. Confusion colors for red–green color blindness include: cyan and grey rose-pink and grey blue and purple yellow and neon green red, green, orange, brown Confusion colors for tritan include: yellow and grey blue and green dark blue/violet and black violet and yellow-green red and rose-pink These colors of confusion are defined quantitatively by straight confusion lines plotted in CIEXYZ, usually plotted on the corresponding chromaticity diagram. The lines all intersect at a copunctal point, which varies with the type of color blindness. Chromaticities along a confusion line will appear metameric to dichromats of that type. Anomalous trichromats of that type will see the chromaticities as metameric if they are close enough, depending on the strength of their CVD. 
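A small sketch of this confusion-line geometry: two chromaticities lie on the same confusion line for a given dichromat when they are collinear with that observer's copunctal point. The copunctal coordinates below are approximate values commonly quoted in the colorimetry literature, not figures taken from this article.

# Approximate copunctal points in CIE 1931 (x, y) chromaticity, as commonly quoted.
COPUNCTAL = {
    "protan": (0.747, 0.253),
    "deutan": (1.400, -0.400),
    "tritan": (0.175, 0.000),
}

def on_same_confusion_line(c1, c2, cvd_type, tol=1e-3):
    """True if chromaticities c1 and c2 are (near) collinear with the copunctal point,
    i.e. lie on the same confusion line for the given type of dichromacy."""
    px, py = COPUNCTAL[cvd_type]
    (x1, y1), (x2, y2) = c1, c2
    # Cross product of the vectors from the copunctal point to each chromaticity.
    cross = (x1 - px) * (y2 - py) - (x2 - px) * (y1 - py)
    return abs(cross) < tol

# Example: two arbitrary chromaticities tested against the protan copunctal point.
print(on_same_confusion_line((0.45, 0.41), (0.30, 0.49), "protan"))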
For two colors on a confusion line to be metameric, the chromaticities first have to be made isoluminant, meaning equal in lightness. Also, colors that may be isoluminant to the standard observer may not be isoluminant to a person with dichromacy. Color tasks Cole describes four color tasks, all of which are impeded to some degree by color blindness: Comparative – When multiple colors must be compared, such as with mixing paint Connotative – When colors are given an implicit meaning, such as red = stop Denotative – When identifying colors, for example by name, such as "where is the yellow ball?" Aesthetic – When colors look nice – or convey an emotional response – but do not carry explicit meaning The following sections describe specific color tasks with which the color blind typically have difficulty. Food Color blindness causes difficulty with the connotative color tasks associated with selecting or preparing food. Selecting food for ripeness can be difficult; the green–yellow transition of bananas is particularly hard to identify. It can also be difficult to detect bruises, mold, or rot on some foods, to determine when meat is done by color, to distinguish some varietals, such as a Braeburn vs. a Granny Smith apple, or to distinguish colors associated with artificial flavors (e.g. jelly beans, sports drinks). Skin color Changes in skin color due to bruising, sunburn, rashes or even blushing are easily missed by the red–green color blind. Traffic lights The colors of traffic lights can be difficult for the red–green color blind. This difficulty includes distinguishing red/amber lights from sodium street lamps, distinguishing green lights (closer to cyan) from normal white lights, and distinguishing red from amber lights, especially when there are no positional clues available. The main coping mechanism to overcome these challenges is to memorize the position of lights. The order of the common triplet traffic light is standardized as red–amber–green from top to bottom or left to right. Cases that deviate from this standard are rare. One such case is a traffic light in Tipperary Hill in Syracuse, New York, which is upside-down (green–amber–red top to bottom) due to the sentiments of its Irish American community. However, the light has been criticized due to the potential hazard it poses for color blind drivers. There are several other features of traffic lights that help accommodate the color blind. British Rail signals use more easily identifiable colors: the red is blood red, the amber is yellow and the green is a bluish color. Most British road traffic lights are mounted vertically on a black rectangle with a white border (forming a "sighting board"), so that drivers can more easily look for the position of the light. In the eastern provinces of Canada, traffic lights are sometimes differentiated by shape in addition to color: square for red, diamond for yellow, and circle for green. Signal lights Navigation lights in marine and aviation settings employ red and green lights to signal the relative position of other ships or aircraft. Railway signal lights also rely heavily on red–green–yellow colors. In both cases, these color combinations can be difficult for the red–green color blind. Lantern tests are a common means of simulating these light sources to determine not necessarily whether someone is color blind, but whether they can functionally distinguish these specific signal colors.
Those who cannot pass this test are generally completely restricted from working on aircraft, ships or rail, for example. Fashion Color analysis is the analysis of color in its use in fashion, to determine personal color combinations that are most aesthetically pleasing. Colors to combine can include clothing, accessories, makeup, hair color, skin color, eye color, etc. Color analysis involves many aesthetic and comparative color task that can be difficult for the color blind. Art Inability to distinguish color does not necessarily preclude the ability to become a celebrated artist. The 20th century expressionist painter Clifton Pugh, three-time winner of Australia's Archibald Prize, on biographical, gene inheritance and other grounds has been identified as a person with protanopia. 19th century French artist Charles Méryon became successful by concentrating on etching rather than painting after he was diagnosed as having a red–green deficiency. Jin Kim's red–green color blindness did not stop him from becoming first an animator and later a character designer with Walt Disney Animation Studios. Advantages Deuteranomals are better at distinguishing shades of khaki, which may be advantageous when looking for predators, food, or camouflaged objects hidden among foliage. Dichromats tend to learn to use texture and shape clues and so may be able to penetrate camouflage that has been designed to deceive individuals with normal color vision. Some tentative evidence finds that the color blind are better at penetrating certain color camouflages. Such findings may give an evolutionary reason for the high rate of red–green color blindness. There is also a study suggesting that people with some types of color blindness can distinguish colors that people with normal color vision are not able to distinguish. In World War II, color blind observers were used to penetrate camouflage. In the presence of chromatic noise, the color blind are more capable of seeing a luminous signal, as long as the chromatic noise appears metameric to them. This is the effect behind most "reverse" Pseudoisochromatic plates (e.g. "hidden digit" Ishihara plates) that are discernible to the color blind but unreadable to people with typical color vision. Digital design Color codes are useful tools for designers to convey information. The interpretation of this information requires users to perform a variety of color tasks, usually comparative but also sometimes connotative or denotative. However, these tasks are often problematic for the color blind when design of the color code has not followed best practices for accessibility. For example, one of the most ubiquitous connotative color codes is the "red means bad and green means good" or similar systems, based on the classic signal light colors. However, this color coding will almost always be undifferentiable to deutans or protans, and can instead be supplemented with a parallel connotative system (symbols, smileys, etc.). Good practices to ensure design is accessible to the color blind include: When possible (e.g. in simple video games or apps), allowing the user to choose their own colors is the most inclusive design practice. Using other signals that are parallel to the color coding, such as patterns, shapes, size or order. This not only helps the color blind, but also aids understanding by normally sighted people by providing them with multiple reinforcing cues. 
Using brightness contrast (different shades) in addition to color contrast (different hues). To achieve good contrast, conventional wisdom suggests converting a (digital) design to grayscale to ensure there is sufficient brightness contrast between colors. However, this does not account for the different perceptions of brightness by different varieties of color blindness, especially protan CVD, tritan CVD and monochromacy. Viewing the design through a CVD simulator to ensure the information carried by color is still sufficiently conveyed. At a minimum, the design should be tested for deutan CVD, the most common kind of color blindness. Maximizing the area of colors (e.g. increasing the size, thickness or boldness of the colored element), which makes the color easier to identify. Color contrast improves as the angle the color subtends on the retina increases. This applies to all types of color vision. Maximizing brightness (value) and saturation (chroma) of the colors to maximize color contrast. Converting connotative tasks to comparative tasks by including a legend, even when the meaning is considered obvious (e.g. red means danger). Avoiding denotative color tasks (color naming) when possible. Some denotative tasks can be converted to comparative tasks by depicting the actual color whenever the color name is mentioned; for example, by printing the color name in the color itself or placing a swatch of the color next to the name. For denotative tasks (color naming), using the most common shades of colors. For example, green and yellow are colors of confusion in red–green CVD, but it is not common to mix forest green with bright yellow. Mistakes by the color blind increase drastically when uncommon shades are used, e.g. neon green with dark yellow. For denotative tasks, using colors that are classically associated with a color name, for example, using "firetruck" red instead of burgundy to represent the word "red". Color selection in design A common task for designers is to select a subset of colors (qualitative colormap) that are as mutually differentiable as possible (salient). For example, player pieces in a board game should be as different as possible. Classic advice suggests using Brewer palettes, but several of these are not actually accessible to the color blind. An issue with color selection is that the colors with the greatest contrast to the red–green color blind tend to be colors of confusion to the blue–yellow color blind and vice versa. In 2018, UX designer Allie Ofisher published three color palettes with six colors each, distinguishable for all variants of color blindness. Sequential colormaps A common task for data visualization is to represent a color scale, or sequential colormap, often in the form of a heat map or choropleth. Several scales are designed with special consideration for the color blind and are widespread in academia, including Cividis, Viridis and Parula. These comprise a light-to-dark scale superimposed on a yellow-to-blue scale, making them monotonic and perceptually uniform to all forms of color vision.
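One way to operationalize the brightness-contrast advice above is the WCAG relative-luminance contrast ratio. The sketch below computes it for two hex colors; the specific example colors are arbitrary illustrations, not values prescribed by any guideline.

def srgb_to_linear(channel):
    # sRGB gamma expansion for a channel value in [0, 1].
    return channel / 12.92 if channel <= 0.04045 else ((channel + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color):
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
    r, g, b = (srgb_to_linear(c) for c in (r, g, b))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    # WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05), ranging from 1 to 21.
    la, lb = relative_luminance(color_a), relative_luminance(color_b)
    lighter, darker = max(la, lb), min(la, lb)
    return (lighter + 0.05) / (darker + 0.05)

# Hue contrast alone (pure red vs. green) gives poor luminance contrast;
# adding a lightness difference (gold vs. dark blue) raises the ratio considerably.
print(round(contrast_ratio("#ff0000", "#00aa00"), 2))
print(round(contrast_ratio("#ffd700", "#00008b"), 2))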
Classification Much terminology has existed and does exist for the classification of color blindness, but the typical classification follows the von Kries classifications, which use severity and affected cone for naming. Based on severity Based on clinical appearance, color blindness may be described as total or partial. Total color blindness (monochromacy) is much less common than partial color blindness. Partial color blindness includes dichromacy and anomalous trichromacy, but is often clinically defined as mild, moderate or strong. Monochromacy Monochromacy is often called total color blindness since there is no ability to see color. Although the term may refer to acquired disorders such as cerebral achromatopsia, it typically refers to congenital color vision disorders, namely rod monochromacy and blue cone monochromacy. In cerebral achromatopsia, a person cannot perceive colors even though the eyes are capable of distinguishing them. Some sources do not consider these to be true color blindness, because the failure is of perception, not of vision. They are forms of visual agnosia. Monochromacy is the condition of possessing only a single channel for conveying information about color. Monochromats are unable to distinguish any colors and perceive only variations in brightness. Congenital monochromacy occurs in two primary forms: Rod monochromacy, frequently called complete achromatopsia, where the retina contains no cone cells, so that in addition to the absence of color discrimination, vision in lights of normal intensity is difficult. Cone monochromacy is the condition of having only a single class of cone. A cone monochromat can have good pattern vision at normal daylight levels, but will not be able to distinguish hues. Cone monochromacy is divided into classes defined by the single remaining cone class. However, red and green cone monochromats have not been definitively described in the literature. Blue cone monochromacy is caused by lack of functionality of L (red) and M (green) cones, and is therefore mediated by the same genes as red–green color blindness (on the X chromosome). Peak spectral sensitivities are in the blue region of the visible spectrum (near 440 nm). People with this condition generally show nystagmus ("jiggling eyes"), photophobia (light sensitivity), reduced visual acuity, and myopia (nearsightedness). Visual acuity usually falls to the 20/50 to 20/400 range. Dichromacy Dichromats can match any color they see with some mixture of just two primary colors (in contrast to those with normal sight (trichromats) who can distinguish three primary colors). Dichromats usually know they have a color vision problem, and it can affect their daily lives. Dichromacy in humans includes protanopia, deuteranopia, and tritanopia. Out of the male population, 2% have severe difficulties distinguishing between red, orange, yellow, and green (orange and yellow are different combinations of red and green light). Colors in this range, which appear very different to a normal viewer, appear to a dichromat to be the same or a similar color. The terms protanopia, deuteranopia, and tritanopia come from Greek, and respectively mean "inability to see (anopia) with the first (prot-), second (deuter-), or third (trit-) [cone]". Anomalous trichromacy Anomalous trichromacy is the mildest type of color deficiency, but the severity ranges from almost dichromacy (strong) to almost normal trichromacy (mild). In fact, many mild anomalous trichromats have very little difficulty carrying out tasks that require normal color vision and some may not even be aware that they have a color vision deficiency. The types of anomalous trichromacy include protanomaly, deuteranomaly and tritanomaly. It is approximately three times more common than dichromacy. Anomalous trichromats exhibit trichromacy, but the color matches they make differ from those of normal trichromats.
In order to match a given spectral yellow light, protanomalous observers need more red light in a red/green mixture than a normal observer, and deuteranomalous observers need more green. This difference can be measured by an instrument called an Anomaloscope, where red and green lights are mixed by a subject to match a yellow light. Based on affected cone There are two major types of color blindness: difficulty distinguishing between red and green, and difficulty distinguishing between blue and yellow. These definitions are based on the phenotype of the partial color blindness. Clinically, it is more common to use a genotypical definition, which describes which cone/opsin is affected. Red–green color blindness Red–green color blindness includes protan and deutan CVD. Protan CVD is related to the L-cone and includes protanomaly (anomalous trichromacy) and protanopia (dichromacy). Deutan CVD is related to the M-cone and includes deuteranomaly (anomalous trichromacy) and deuteranopia (dichromacy). The phenotype (visual experience) of deutans and protans is quite similar. Common colors of confusion include red/brown/green/yellow as well as blue/purple. Both forms are almost always symptomatic of congenital red–green color blindness, so affects males disproportionately more than females. This form of color blindness is sometimes referred to as daltonism after John Dalton, who had red–green dichromacy. In some languages, daltonism is still used to describe red–green color blindness. Protan (2% of males): Lacking, or possessing anomalous L-opsins for long-wavelength sensitive cone cells. Protans have a neutral point at a cyan-like wavelength around 492 nm (see spectral color for comparison)—that is, they cannot discriminate light of this wavelength from white. For a protanope, the brightness of red is much reduced compared to normal. This dimming can be so pronounced that reds may be confused with black or dark gray, and red traffic lights may appear to be extinguished. They may learn to distinguish reds from yellows primarily on the basis of their apparent brightness or lightness, not on any perceptible hue difference. Violet, lavender, and purple are indistinguishable from various shades of blue. A very few people have been found who have one normal eye and one protanopic eye. These unilateral dichromats report that with only their protanopic eye open, they see wavelengths shorter than neutral point as blue and those longer than it as yellow. Deutan (6% of males): Lacking, or possessing anomalous M-opsins for medium-wavelength sensitive cone cells. Their neutral point is at a slightly longer wavelength, 498 nm, a more greenish hue of cyan. Deutans have the same hue discrimination problems as protans, but without the dimming of long wavelengths. Deuteranopic unilateral dichromats report that with only their deuteranopic eye open, they see wavelengths shorter than neutral point as blue and longer than it as yellow. Blue–yellow color blindness Blue–yellow color blindness includes tritan CVD. Tritan CVD is related to the S-cone and includes tritanomaly (anomalous trichromacy) and tritanopia (dichromacy). Blue–yellow color blindness is much less common than red–green color blindness, and more often has acquired causes than genetic. Tritans have difficulty discerning between bluish and greenish hues. Tritans have a neutral point at 571 nm (yellowish). Tritan (< 0.01% of individuals): Lacking, or possessing anomalous S-opsins or short-wavelength sensitive cone cells. 
Tritans see short-wavelength colors (blue, indigo and spectral violet) as greenish and drastically dimmed, some of these colors even as black. Yellow and orange are indistinguishable from white and pink respectively, and purple colors are perceived as various shades of red. Unlike protans and deutans, the mutation for this color blindness is carried on chromosome 7. Therefore, it is not sex-linked (equally prevalent in both males and females). The OMIM gene code for this mutation is 304000 "Colorblindness, Partial Tritanomaly". Tetartan is a hypothetical "fourth type" of color blindness, and a type of blue–yellow color blindness. Given the molecular basis of human color vision, it is unlikely this type could exist. Summary of cone complements The below table shows the cone complements for different types of human color vision, including those considered color blindness, normal color vision and 'superior' color vision. The cone complement contains the types of cones (or their opsins) expressed by an individual. Causes Color blindness is any deviation of color vision from normal trichromatic color vision (often as defined by the standard observer) that produces a reduced gamut. Mechanisms for color blindness are related to the functionality of cone cells, and often to the expression of photopsins, the photopigments that 'catch' photons and thereby convert light into chemical signals. Color vision deficiencies can be classified as inherited or acquired. Inherited: inherited or congenital/genetic color vision deficiencies are most commonly caused by mutations of the genes encoding opsin proteins. However, several other genes can also lead to less common and/or more severe forms of color blindness. Acquired: color blindness that is not present at birth, may be caused by chronic illness, accidents, medication, chemical exposure or simply normal aging processes. Genetics Color blindness is typically an inherited genetic disorder. The most common forms of color blindness are associated with the Photopsin genes, but the mapping of the human genome has shown there are many causative mutations that do not directly affect the opsins. Mutations capable of causing color blindness originate from at least 19 different chromosomes and 56 different genes (as shown online at the Online Mendelian Inheritance in Man [OMIM]). Genetics of red–green color blindness By far the most common form of color blindness is congenital red–green color blindness (Daltonism), which includes protanopia/protanomaly and deuteranopia/deuteranomaly. These conditions are mediated by the OPN1LW and OPN1MW genes, respectively, both on the X chromosome. An 'affected' gene is either missing (as in Protanopia and Deuteranopia - Dichromacy) or is a chimeric gene (as in Protanomaly and Deuteranomaly). Since the OPN1LW and OPN1MW genes are on the X chromosome, they are sex-linked, and therefore affect males and females disproportionately. Because the color blind 'affected' alleles are recessive, color blindness specifically follows X-linked recessive inheritance. Males have only one X chromosome (XY), and females have two (XX); Because the male only has one of each gene, if it is affected, the male will be color blind. Because a female has two alleles of each gene (one on each chromosome), if only one gene is affected, the dominant normal alleles will "override" the affected, recessive allele and the female will have normal color vision. However, if the female has two mutated alleles, she will still be color blind. 
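A minimal back-of-the-envelope check of this X-linked pattern, assuming Hardy–Weinberg proportions (the random-mating assumption is introduced here for illustration; the roughly 8% allele frequency comes from the prevalence figures quoted in this article):

# X-linked recessive inheritance under a simple Hardy-Weinberg (random mating) assumption.
# q is the frequency of the affected allele among X chromosomes.
q = 0.08

# Males carry a single X, so male prevalence equals the allele frequency.
male_prevalence = q

# Females need two affected copies (one on each X chromosome).
female_prevalence = q ** 2

# Carrier females have exactly one affected copy and normal color vision.
female_carriers = 2 * q * (1 - q)

print(f"males affected:   {male_prevalence:.1%}")    # ~8%
print(f"females affected: {female_prevalence:.2%}")  # ~0.6%, close to the observed ~0.5%
print(f"female carriers:  {female_carriers:.1%}")    # ~15%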
This is why there is a disproportionate prevalence of color blindness, with ~8% of males exhibiting color blindness and ~0.5% of females. Genetics of blue–yellow color blindness Congenital blue–yellow color blindness is a much rarer form of color blindness including tritanopia/tritanomaly. These conditions are mediated by the OPN1SW gene on Chromosome 7 which encodes the S-opsin protein and follows autosomal dominant inheritance. The cause of blue–yellow color blindness is not analogous to the cause of red–green color blindness, i.e. the peak sensitivity of the S-opsin does not shift to longer wavelengths. Rather, there are 6 known point mutations of OPN1SW that degrade the performance of the S-cones. The OPN1SW gene is almost invariant in the human population. Congenital tritan defects are often progressive, with nearly normal trichromatic vision in childhood (e.g. mild tritanomaly) progressing to dichromacy (tritanopia) as the S-cones slowly die. Tritanomaly and tritanopia are therefore different penetrance of the same disease, and some sources have argued that tritanomaly therefore be referred to as incomplete tritanopia. Other genetic causes Several inherited diseases are known to cause color blindness, including achromatopsia, cone dystrophy, Leber's congenital amaurosis and retinitis pigmentosa. These can be congenital or commence in childhood or adulthood. They can be static/stationary or progressive. Progressive diseases often involve deterioration of the retina and other parts of the eye, so often progress from color blindness to more severe visual impairments, up to and including total blindness. Non-genetic causes Physical trauma can cause color blindness, either neurologically – brain trauma which produces swelling of the brain in the occipital lobe – or retinally, either acute (e.g. from laser exposure) or chronic (e.g. from ultraviolet light exposure). Color blindness may also present itself as a symptom of degenerative diseases of the eye, such as cataract and age-related macular degeneration, and as part of the retinal damage caused by diabetes. Vitamin A deficiency may also cause color blindness. Color blindness may be a side effect of prescription drug use. For example, red–green color blindness can be caused by ethambutol, a drug used in the treatment of tuberculosis. Blue–yellow color blindness can be caused by sildenafil, an active component of Viagra. Hydroxychloroquine can also lead to hydroxychloroquine retinopathy, which includes various color defects. Exposure to chemicals such as styrene or organic solvents can also lead to color vision defects. Simple colored filters can also create mild color vision deficiencies. John Dalton's original hypothesis for his deuteranopia was actually that the vitreous humor of his eye was discolored: An autopsy of his eye after his death in 1844 showed this to be definitively untrue, though other filters are possible. Actual physiological examples usually affect the blue–yellow opponent channel and are named Cyanopsia and Xanthopsia, and are most typically an effect of yellowing or removal of the lens. The opponent channels can also be affected by the prevalence of certain cones in the retinal mosaic. The cones are not equally prevalent and not evenly distributed in the retina. When the number of one of these cone types is significantly reduced, this can also lead to or contribute to a color vision deficiency. This is one of the causes of tritanomaly. 
Some people are also unable to distinguish between blue and green, which appears to result from a combination of culture and exposure to UV light. Diagnosis Color vision test The main method for diagnosing a color vision deficiency is testing color vision directly. The Ishihara color test is the test most often used to detect red–green deficiencies and most often recognized by the public. Some tests are clinical in nature, designed to be fast, simple, and effective at identifying broad categories of color blindness. Others focus on precision and are generally available only in academic settings. Pseudoisochromatic plates, a classification which includes the Ishihara color test and HRR test, embed a figure in the plate as a number of spots surrounded by spots of a slightly different color. These colors must appear identical (metameric) to the color blind but distinguishable to color normals. Pseudoisochromatic plates are used as screening tools because they are cheap, fast, and simple, but they do not provide precise diagnosis of CVD. Lanterns, such as the Farnsworth Lantern Test, project small colored lights to a subject, who is required to identify the color of the lights. The colors are those of typical signal lights, i.e. red, green, and yellow, which also happen to be colors of confusion of red–green CVD. Lanterns do not diagnose color blindness, but they are occupational screening tests to ensure an applicant has sufficient color discrimination to be able to perform a job. Arrangement tests can be used as screening or diagnostic tools. The Farnsworth–Munsell 100 hue test is very sensitive, but the Farnsworth D-15 is a simplified version used specifically for screening for CVD. In either case, the subject is asked to arrange a set of colored caps or chips to form a gradual transition of color between two anchor caps. Anomaloscopes are typically designed to detect red–green deficiencies and are based on the Rayleigh match, which compares a mixture of red and green light in variable proportions to a fixed spectral yellow of variable luminosity. The subject must change the two variables until the colors appear to match. They are expensive and require expertise to administer, so they are generally only used in academic settings. Genetic testing While genetic testing cannot directly evaluate a subject's color vision (phenotype), most congenital color vision deficiencies are well correlated with genotype. Therefore, the genotype can be directly evaluated and used to predict the phenotype. This is especially useful for progressive forms that do not have a strongly color-deficient phenotype at a young age. However, it can also be used to sequence the L- and M-opsins on the X chromosome, since the most common alleles of these two genes are known and have even been related to exact spectral sensitivities and peak wavelengths. A subject's color vision can therefore be classified through genetic testing, but this is just a prediction of the phenotype, since color vision can be affected by countless non-genetic factors such as the cone mosaic. Management Despite much recent improvement in gene therapy for color blindness, there is currently no FDA-approved treatment for any form of CVD, and otherwise no cure for CVD currently exists. Management of the condition by using lenses to alleviate symptoms or smartphone apps to aid with daily tasks is possible.
Lenses There are three kinds of lenses that an individual can wear that can increase their accuracy in some color related tasks (although none of these will "fix" color blindness or grant the wearer normal color vision): A red-tint contact lens worn over the non-dominant eye will leverage binocular disparity to improve discrimination of some colors. However, it can make other colors more difficult to distinguish. A 1981 review of various studies to evaluate the effect of the X-chrom (one brand) contact lens concluded that, while the lens may allow the wearer to achieve a better score on certain color vision tests, it did not correct color vision in the natural environment. A case history using the X-Chrom lens for a rod monochromat is reported and an X-Chrom manual is online. Tinted glasses (e.g. Pilestone/Colorlite glasses) apply a tint (e.g. magenta) to incoming light that can distort colors in a way that makes some color tasks easier to complete. These glasses can circumvent many color vision tests, though this is typically not allowed. Glasses with a notch filter (e.g. EnChroma glasses) filter a narrow band of light that excites both the L and M cones (yellow–green wavelengths). When combined with an additional stopband in the short wavelength (blue) region, these lenses may constitute a neutral-density filter (have no color tint). They improve on the other lens types by causing less distortion of colors and will essentially increase the saturation of some colors. They will only work on trichromats (anomalous or normal), and unlike the other types, do not have a significant effect on Dichromats. The glasses do not significantly increase one's ability on color blind tests. Aids Many mobile and computer applications have been developed to aid color blind individuals in completing color tasks: Some applications (e.g. color pickers) can identify the name (or coordinates within a color space) of a color on screen or the color of an object by using the device's camera. Some applications will make images easier to interpret by the color blind by enhancing color contrast in natural images and/or information graphics. These methods are generally called daltonization algorithms. Some applications can simulate color blindness by applying a filter to an image or screen that reduces the gamut of an image to that of a specific type of color blindness. While they do not directly help color blind people, they allow those with normal color vision to understand how the color blind see the world. Their use can help improve inclusive design by allowing designers to simulate their own images to ensure they are accessible to the color blind. In 2003, a cybernetic device called eyeborg was developed to allow the wearer to hear sounds representing different colors. Achromatopsic artist Neil Harbisson was the first to use such a device in early 2004; the eyeborg allowed him to start painting in color by memorizing the sound corresponding to each color. In 2012, at a TED Conference, Harbisson explained how he could now perceive colors outside the ability of human vision. Epidemiology Color blindness affects a large number of individuals, with protans and deutans being the most common types. In individuals with Northern European ancestry, as many as 8 percent of men and 0.4 percent of women experience congenital color deficiency. 
Even Dalton's first paper had already arrived at this 8% figure. History During the 17th and 18th centuries, several philosophers hypothesized that not all individuals perceived colors in the same way. Gordon Lynn Walls claims that the first well-circulated case study of color blindness was published in a 1777 letter from Joseph Huddart to Joseph Priestley, which described "Harris the Shoemaker" and several of his brothers with what would later be described as protanopia. There appear to be no earlier surviving historical mentions of color blindness, despite its prevalence. The phenomenon only came to be scientifically studied in 1794, when English chemist John Dalton gave the first account of color blindness in a paper to the Manchester Literary and Philosophical Society, which was published in 1798 as Extraordinary Facts relating to the Vision of Colours: With Observations. In 1995, some 150 years after his death, genetic analysis of Dalton's preserved eyeball confirmed that he had deuteranopia. Influenced by Dalton, German writer J. W. von Goethe studied color vision abnormalities in 1798 by asking two young subjects to match pairs of colors. In 1837, August Seebeck first discriminated between protans and deutans (then as class I and II). He was also the first to develop an objective test method, in which subjects sorted colored sheets of paper, and was the first to describe a female colorblind subject. In 1875, the Lagerlunda train crash in Sweden brought color blindness to the forefront. Following the crash, Professor Alarik Frithiof Holmgren, a physiologist, investigated and concluded that the color blindness of the engineer (who had died) had caused the crash. Professor Holmgren then created the first test for color vision using multicolored skeins of wool to detect color blindness and thereby exclude the color blind from jobs in the transportation industry requiring color vision to interpret safety signals. However, it has been claimed that there is no firm evidence that color deficiency caused the collision, and that it may not have been the sole cause. In 1920, Frederick William Edridge-Green devised an alternative theory of color vision and color blindness based on Newton's classification of seven fundamental colors (ROYGBIV). Edridge-Green classified color vision based on how many distinct colors a subject could see in the spectrum. Normal subjects were termed hexachromic, as they could not discern indigo. Subjects with superior color vision, who could discern indigo, were heptachromic. The color blind were therefore dichromic (equivalent to dichromacy) or tri-, tetra- or pentachromic (anomalous trichromacy). Rights In the United States, under federal anti-discrimination laws such as the Americans with Disabilities Act, color vision deficiencies have not been found to constitute a disability that triggers protection from workplace discrimination. A Brazilian court ruled that the color blind are protected by the Inter-American Convention on the Elimination of All Forms of Discrimination against Persons with Disabilities. At trial, it was decided that the carriers of color blindness have a right of access to wider knowledge, or the full enjoyment of their human condition. Occupations Color blindness may make it difficult or impossible for a person to engage in certain activities.
Persons with color blindness may be legally or practically barred from occupations in which color perception is an essential part of the job (e.g., mixing paint colors), or in which color perception is important for safety (e.g., operating vehicles in response to color-coded signals). This occupational safety principle originates from the aftermath of the 1875 Lagerlunda train crash, which Alarik Frithiof Holmgren blamed on the color blindness of the engineer and created the first occupational screening test (Holmgren's wool test) against the color blind. Color vision is important for occupations using telephone or computer networking cabling, as the individual wires inside the cables are color-coded using green, orange, brown, blue and white colors. Electronic wiring, transformers, resistors, and capacitors are color-coded as well, using black, brown, red, orange, yellow, green, blue, violet, gray, white, silver, and gold. Participation, officiating and viewing sporting events can be impacted by color blindness. Professional football players Thomas Delaney and Fabio Carvalho have discussed the difficulties when color clashes occur, and research undertaken by FIFA has shown that enjoyment and player progression can be hampered by issues distinguishing the difference between the pitch and training objects or field markings. Snooker World Champions Mark Williams and Peter Ebdon sometimes need to ask the referee for help distinguishing between the red and brown balls due to their color blindness. Both have played foul shots on notable occasions by the wrong ball. Driving Red–green color blindness can make it difficult to drive, primarily due to the inability to differentiate red–amber–green traffic lights. Protans are further disadvantaged due to the darkened perception of reds, which can make it more difficult to quickly recognize brake lights. In response, some countries have refused to grant driver's licenses to individuals with color blindness: In April 2003, Romania removed color blindness from its list of disqualifying conditions for learner driver's licenses. It is now qualified as a condition that could potentially compromise driver safety, therefore a driver may have to be evaluated by an authorized ophthalmologist to determine if they can drive safely. As of May 2008, there is an ongoing campaign to remove the legal restrictions that prohibit color blind citizens from getting driver's licenses. In June 2020, India relaxed its ban on driver's licenses for the color blind to now only apply to those with strong CVD. While previously restricted, those who test as mild or moderate can now pass the medical requirements. Australia instituted a tiered ban on the color blind from obtaining commercial driver's licenses in 1994. This included a ban for all protans, and a stipulation that deutans must pass the Farnsworth Lantern. The stipulation on deutans was revoked in 1997 citing a lack of available test facilities, and the ban on protans was revoked in 2003. All color blind individuals are banned from obtaining a driver's license in China and since 2016 in Russia (2012 for dichromats). Piloting aircraft Although many aspects of aviation depend on color coding, only a few of them are critical enough to be interfered with by some milder types of color blindness. Some examples include color-gun signaling of aircraft that have lost radio communication, color-coded glide-path indications on runways, and the like. 
Some jurisdictions restrict the issuance of pilot credentials to persons with color blindness for this reason. Restrictions may be partial, allowing color-blind persons to obtain certification but with restrictions, or total, in which case color-blind persons are not permitted to obtain piloting credentials at all. In the United States, the Federal Aviation Administration requires that pilots be tested for normal color vision as part of their medical clearance in order to obtain the required medical certificate, a prerequisite to obtaining a pilot's certification. If testing reveals color blindness, the applicant may be issued a license with restrictions, such as no night flying and no flying by color signals—such a restriction effectively prevents a pilot from holding certain flying occupations, such as that of an airline pilot, although commercial pilot certification is still possible, and there are a few flying occupations that do not require night flight and thus are still available to those with restrictions due to color blindness (e.g., agricultural aviation). The government allows several types of tests, including medical standard tests (e.g., the Ishihara, Dvorine, and others) and specialized tests oriented specifically to the needs of aviation. If an applicant fails the standard tests, they will receive a restriction on their medical certificate that states: "Not valid for night flying or by color signal control". They may apply to the FAA to take a specialized test, administered by the FAA. Typically, this test is the "color vision light gun test". For this test an FAA inspector will meet the pilot at an airport with an operating control tower. The color signal light gun will be shone at the pilot from the tower, and they must identify the color. If they pass they may be issued a waiver, which states that the color vision test is no longer required during medical examinations. They will then receive a new medical certificate with the restriction removed. This was once a Statement of Demonstrated Ability (SODA), but the SODA was dropped, and converted to a simple waiver (letter) early in the 2000s. Research published in 2009 carried out by the City University of London's Applied Vision Research Centre, sponsored by the UK's Civil Aviation Authority and the U.S. Federal Aviation Administration, has established a more accurate assessment of color deficiencies in pilot applicants' red/green and yellow–blue color range which could lead to a 35% reduction in the number of prospective pilots who fail to meet the minimum medical threshold.
Biology and health sciences
Disability
null
7398
https://en.wikipedia.org/wiki/Computer%20security
Computer security
Computer security (also cybersecurity, digital security, or information technology (IT) security) is the protection of computer software, systems and networks from threats that can lead to unauthorized information disclosure, theft or damage to hardware, software, or data, as well as from the disruption or misdirection of the services they provide. The significance of the field stems from the expanded reliance on computer systems, the Internet, and wireless network standards. Its importance is further amplified by the growth of smart devices, including smartphones, televisions, and the various devices that constitute the Internet of things (IoT). Cybersecurity has emerged as one of the most significant new challenges facing the contemporary world, due to both the complexity of information systems and the societies they support. Security is particularly crucial for systems that govern large-scale systems with far-reaching physical effects, such as power distribution, elections, and finance. Although many aspects of computer security involve digital security, such as electronic passwords and encryption, physical security measures such as metal locks are still used to prevent unauthorized tampering. IT security is not a perfect subset of information security, therefore does not completely align into the security convergence schema. Vulnerabilities and attacks A vulnerability refers to a flaw in the structure, execution, functioning, or internal oversight of a computer or system that compromises its security. Most of the vulnerabilities that have been discovered are documented in the Common Vulnerabilities and Exposures (CVE) database. An exploitable vulnerability is one for which at least one working attack or exploit exists. Actors maliciously seeking vulnerabilities are known as threats. Vulnerabilities can be researched, reverse-engineered, hunted, or exploited using automated tools or customized scripts. Various people or parties are vulnerable to cyber attacks; however, different groups are likely to experience different types of attacks more than others. In April 2023, the United Kingdom Department for Science, Innovation & Technology released a report on cyber attacks over the previous 12 months. They surveyed 2,263 UK businesses, 1,174 UK registered charities, and 554 education institutions. The research found that "32% of businesses and 24% of charities overall recall any breaches or attacks from the last 12 months." These figures were much higher for "medium businesses (59%), large businesses (69%), and high-income charities with £500,000 or more in annual income (56%)." Yet, although medium or large businesses are more often the victims, since larger companies have generally improved their security over the last decade, small and midsize businesses (SMBs) have also become increasingly vulnerable as they often "do not have advanced tools to defend the business." SMBs are most likely to be affected by malware, ransomware, phishing, man-in-the-middle attacks, and Denial-of Service (DoS) Attacks. Normal internet users are most likely to be affected by untargeted cyberattacks. These are where attackers indiscriminately target as many devices, services, or users as possible. They do this using techniques that take advantage of the openness of the Internet. These strategies mostly include phishing, ransomware, water holing and scanning. 
To secure a computer system, it is important to understand the attacks that can be made against it, and these threats can typically be classified into one of the following categories: Backdoor A backdoor in a computer system, a cryptosystem, or an algorithm is any secret method of bypassing normal authentication or security controls. These weaknesses may exist for many reasons, including original design or poor configuration. Due to the nature of backdoors, they are of greater concern to companies and databases as opposed to individuals. Backdoors may be added by an authorized party to allow some legitimate access or by an attacker for malicious reasons. Criminals often use malware to install backdoors, giving them remote administrative access to a system. Once they have access, cybercriminals can "modify files, steal personal information, install unwanted software, and even take control of the entire computer." Backdoors can be very hard to detect and are usually discovered by someone who has access to the application source code or intimate knowledge of the operating system of the computer. Denial-of-service attack Denial-of-service attacks (DoS) are designed to make a machine or network resource unavailable to its intended users. Attackers can deny service to individual victims, such as by deliberately entering a wrong password enough consecutive times to cause the victim's account to be locked, or they may overload the capabilities of a machine or network and block all users at once. While a network attack from a single IP address can be blocked by adding a new firewall rule, many forms of distributed denial-of-service (DDoS) attacks are possible, where the attack comes from a large number of points. In this case, defending against these attacks is much more difficult. Such attacks can originate from the zombie computers of a botnet or from a range of other possible techniques, including distributed reflective denial-of-service (DRDoS), where innocent systems are fooled into sending traffic to the victim. With such attacks, the amplification factor makes the attack easier for the attacker because they have to use little bandwidth themselves. To understand why attackers may carry out these attacks, see the 'attacker motivation' section. Physical access attacks A direct-access attack is when an unauthorized user (an attacker) gains physical access to a computer, most likely to directly copy data from it or steal information. Attackers may also compromise security by making operating system modifications, installing software worms, keyloggers, covert listening devices or using wireless microphones. Even when the system is protected by standard security measures, these may be bypassed by booting another operating system or tool from a CD-ROM or other bootable media. Disk encryption and the Trusted Platform Module standard are designed to prevent these attacks. Direct service attackers are related in concept to direct memory attacks which allow an attacker to gain direct access to a computer's memory. The attacks "take advantage of a feature of modern computers that allows certain devices, such as external hard drives, graphics cards, or network cards, to access the computer's memory directly." Eavesdropping Eavesdropping is the act of surreptitiously listening to a private computer conversation (communication), usually between hosts on a network. 
It typically occurs when a user connects to a network where traffic is not secured or encrypted and sends sensitive business data to a colleague, which, when listened to by an attacker, could be exploited. Data transmitted across an open network allows an attacker to exploit a vulnerability and intercept it via various methods. Unlike malware, direct-access attacks, or other forms of cyber attacks, eavesdropping attacks are unlikely to negatively affect the performance of networks or devices, making them difficult to notice. In fact, "the attacker does not need to have any ongoing connection to the software at all. The attacker can insert the software onto a compromised device, perhaps by direct insertion or perhaps by a virus or other malware, and then come back some time later to retrieve any data that is found or trigger the software to send the data at some determined time." Using a virtual private network (VPN), which encrypts data between two points, is one of the most common forms of protection against eavesdropping. Using the strongest form of encryption available for wireless networks is best practice, as is using HTTPS instead of unencrypted HTTP. Programs such as Carnivore and NarusInSight have been used by the Federal Bureau of Investigation (FBI) and NSA to eavesdrop on the systems of internet service providers. Even machines that operate as a closed system (i.e., with no contact with the outside world) can be eavesdropped upon by monitoring the faint electromagnetic transmissions generated by the hardware. TEMPEST is a specification by the NSA referring to these attacks. Malware Malicious software (malware) is any software code or computer program "intentionally written to harm a computer system or its users." Once present on a computer, it can leak sensitive details such as personal information, business information and passwords, can give control of the system to the attacker, and can corrupt or delete data permanently. Another type of malware is ransomware, in which "malware installs itself onto a victim's machine, encrypts their files, and then turns around and demands a ransom (usually in Bitcoin) to return that data to the user." Types of malware include some of the following: Viruses are a specific type of malware, and are normally malicious code that hijacks software with the intention to "do damage and spread copies of itself." Copies are made with the aim of spreading to other programs on a computer. Worms are similar to viruses; however, viruses can only function when a user runs (opens) a compromised program, whereas worms are self-replicating malware that spread between programs, apps and devices without the need for human interaction. Trojan horses are programs that pretend to be helpful or hide themselves within desired or legitimate software to "trick users into installing them." Once installed, a RAT (remote access trojan) can create a secret backdoor on the affected device to cause damage. Spyware is a type of malware that secretly gathers information from an infected computer and transmits the sensitive information back to the attacker. One of the most common forms of spyware is the keylogger, which records all of a user's keyboard inputs/keystrokes, to "allow hackers to harvest usernames, passwords, bank account and credit card numbers."
Scareware, as the name suggests, is a form of malware which uses social engineering (manipulation) to scare, shock, trigger anxiety, or suggest the perception of a threat in order to manipulate users into buying or installing unwanted software. These attacks often begin with a "sudden pop-up with an urgent message, usually warning the user that they've broken the law or their device has a virus." Man-in-the-middle attacks Man-in-the-middle attacks (MITM) involve a malicious attacker trying to intercept, surveil or modify communications between two parties by spoofing one or both parties' identities and injecting themselves in-between. Types of MITM attacks include: IP address spoofing is where the attacker hijacks routing protocols to reroute the target's traffic to a vulnerable network node for traffic interception or injection. Message spoofing (via email, SMS or OTT messaging) is where the attacker spoofs the identity or carrier service while the target is using messaging protocols like email, SMS or OTT (IP-based) messaging apps. The attacker can then monitor conversations, launch social attacks or trigger zero-day vulnerabilities to allow for further attacks. Wi-Fi SSID spoofing is where the attacker simulates a Wi-Fi base station SSID to capture and modify internet traffic and transactions. The attacker can also use local network addressing and reduced network defenses to penetrate the target's firewall by breaching known vulnerabilities. This is sometimes known as a Pineapple attack, after a popular device used to carry it out.
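Because the man-in-the-middle techniques above depend on impersonating one of the endpoints, a common countermeasure is to authenticate the remote side cryptographically, usually by validating its TLS certificate. The sketch below, written against Python's standard ssl module, shows the general shape of such a check; example.com is only a placeholder host, and most applications get the same protection through an HTTPS client library rather than raw sockets.

```python
# Illustrative sketch: requiring a validated TLS connection, so that a spoofed
# or intercepting endpoint fails certificate verification.
import socket
import ssl

HOST = "example.com"  # placeholder host, used only for illustration
PORT = 443

# create_default_context() turns on certificate validation and hostname
# checking against the platform's trusted certificate authorities.
context = ssl.create_default_context()

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        # If an intermediary were impersonating the server (for example via a
        # rogue Wi-Fi access point), the handshake above would raise
        # ssl.SSLCertVerificationError instead of reaching this point.
        print("Negotiated", tls_sock.version())
        print("Server certificate subject:", tls_sock.getpeercert().get("subject"))
```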
Technology
Basics_3
null
7424
https://en.wikipedia.org/wiki/Crochet
Crochet
Crochet (; ) is a process of creating textiles by using a crochet hook to interlock loops of yarn, thread, or strands of other materials. The name is derived from the French term crochet, which means 'hook'. Hooks can be made from different materials (aluminum, steel, metal, wood, bamboo, bone, etc.), sizes, and types (in-line, tapered, ergonomic, etc.). The key difference between crochet and knitting, beyond the implements used for their production, is that each stitch in crochet is completed before you begin the next one, while knitting keeps many stitches open at a time. Some variant forms of crochet, such as Tunisian crochet and Broomstick lace, do keep multiple crochet stitches open at a time. Etymology The word crochet is derived from the French word , a diminutive of croche, in turn from the Germanic croc, both meaning "hook". It was used in 17th-century French lace-making, where the term Crochetage designated a stitch used to join separate pieces of lace. The word crochet subsequently came to describe both the specific type of textile, and the hooked needle used to produce it. In 1567, the tailor of Mary, Queen of Scots, Jehan de Compiegne, supplied her with silk thread for sewing and crochet, "soye à coudre et crochetz". Origins Knitted textiles survive from as early as the 11th century CE, but the first substantive evidence of crocheted fabric emerges in Europe during the 19th century. Earlier work identified as crochet was commonly made by nålebinding, a different looped yarn technique. The first known published instructions for crochet explicitly using that term to describe the craft in its present sense appeared in the Dutch magazine Penélopé in 1823. This includes a colour plate showing five styles of purse, of which three were intended to be crocheted with silk thread. The first is "simple open crochet" (crochet simple ajour), a mesh of chain-stitch arches. The second (illustrated here) starts in a semi-open form (demi jour), where chain-stitch arches alternate with equally long segments of slip-stitch crochet, and closes with a star made with "double-crochet stitches" (dubbelde hekelsteek: double-crochet in British terminology; single-crochet in US). The third purse is made entirely in double-crochet. The instructions prescribe the use of a tambour needle (as illustrated below) and introduce a number of decorative techniques. The earliest dated reference in English to garments made of cloth produced by looping yarn with a hook—shepherd's knitting—is in The Memoirs of a Highland Lady by Elizabeth Grant (1797–1830). The journal entry, itself, is dated 1812 but was not recorded in its subsequently published form until some time between 1845 and 1867, and the actual date of publication was first in 1898. Nonetheless, the 1833 volume of Penélopé describes and illustrates a shepherd's hook, and recommends its use for crochet with coarser yarn. In 1844, one of the numerous books discussing crochet that began to appear in the 1840s states: Two years later, the same author writes: An instruction book from 1846 describes Shepherd or single crochet as what in current international terminology is either called single crochet or slip-stitch crochet, with U.S. terminology always using the latter (reserving single crochet for use as noted above). It similarly equates "Double" and "French crochet". Notwithstanding the categorical assertion of a purely British origin, there is solid evidence of a connection between French tambour embroidery, french passementerie and crochet. 
A form of hook known as crochet was used to create 'chains in the air' as part of passementerie back in the 17th century. This is confirmed by a patent issued to the passementiers by Louis XIV in 1653, and there are earlier decorative examples of this technique. The patent lists various items, including "thread for embroidery, enhanced and embellished as done with a needle, on thimbles, on the fingers, on a crochet, and on a bobbin." Similarly, chain stitch appears in Queen Elizabeth I's wardrobe accounts, starting in 1558, with further references to garments bordered with 'cheyne lace' in other inventories. One example from 1588 describes "a long cloak of murry velvet, with a border of small cheyne lace of Venice silver." While the exact design of the 1653 crochet is unclear, a 1723 French dictionary by Jacques Savary des Brûlons describes a crochet as a small iron instrument, three or four inches long, with a pointed, curved end and a wooden handle, used by passementiers for tasks like creating hat seams and attaching flowers to mesh. It is most likely that the hook used in crochet came from the ones used by the French passementerie industry. French tambour embroidery and the crochet needle used for it were illustrated in detail in 1763 in Diderot's Encyclopedia. The tip of the needle shown there is indistinguishable from that of a present-day inline crochet hook and the chain stitch separated from a cloth support is a fundamental element of the latter technique. The 1823 Penélopé instructions unequivocally state that the tambour tool was used for crochet and the first of the 1840s instruction books uses the terms tambour and crochet as synonyms. This equivalence is retained in the 4th edition of that work, 1847. The strong taper of the shepherd's hook eases the production of slip-stitch crochet but is less amenable to stitches that require multiple loops on the hook at the same time. Early yarn hooks were also continuously tapered but gradually enough to accommodate multiple loops. The design with a cylindrical shaft that is commonplace today was largely reserved for tambour-style steel needles. Both types gradually merged into the modern form that appeared toward the end of the 19th century, including both tapered and cylindrical segments, and the continuously tapered bone hook remained in industrial production until World War II. The early instruction books make frequent reference to the alternative use of 'ivory, bone, or wooden hooks' and 'steel needles in a handle', as appropriate to the stitch being made. Taken with the synonymous labeling of shepherd's- and single crochet, and the similar equivalence of French- and double crochet, there is a strong suggestion that crochet is rooted both in tambour embroidery and shepherd's knitting, leading to thread and yarn crochet respectively; a distinction that is still made. The locus of the fusion of all these elements—the "invention" noted above—has yet to be determined, as does the origin of shepherd's knitting. Shepherd's hooks are still being made for local slip-stitch crochet traditions. The form in the accompanying photograph is typical for contemporary production. A longer continuously tapering design intermediate between it and the 19th-century tapered hook was also in earlier production, commonly being made from the handles of forks and spoons.
Irish crochet In the 19th century, as Ireland was facing the Great Irish Famine (1845–1849), crochet lace work was introduced as a form of famine relief (the production of crocheted lace being an alternative way of making money for impoverished Irish workers). Men, women, and children joined a co-operative in order to crochet and produce products to help with famine relief. Schools to teach crocheting were started. Teachers were trained and sent across Ireland to teach this craft. When the Irish immigrated to the Americas, they were able to take crocheting with them. Mademoiselle Riego de la Branchardiere is generally credited with the invention of Irish Crochet, publishing the first book of patterns in 1846. Irish lace became popular in Europe and America, and was made in quantity until the First World War. Modern practice and culture Fashions in crochet changed with the end of the Victorian era in the 1890s. Crocheted laces in the new Edwardian era, peaking between 1910 and 1920, became even more elaborate in texture and complicated stitching. The strong Victorian colors disappeared, though, and new publications called for white or pale threads, except for fancy purses, which were often crocheted of brightly colored silk and elaborately beaded. After World War I, far fewer crochet patterns were published, and most of them were simplified versions of the early 20th-century patterns. After World War II, from the late 1940s until the early 1960s, there was a resurgence in interest in home crafts, particularly in the United States, with many new and imaginative crochet designs published for colorful doilies, potholders, and other home items, along with updates of earlier publications. These patterns called for thicker threads and yarns than in earlier patterns and included variegated colors. The craft remained primarily a homemaker's art until the late 1960s and early 1970s, when the new generation picked up on crochet and popularized granny squares, a motif worked in the round and incorporating bright colors. Although crochet underwent a subsequent decline in popularity, the early 21st century has seen a revival of interest in handcrafts and DIY, as well as improvement of the quality and varieties of yarn. As well as books and classes, there are YouTube tutorials and TikTok videos to help people who may need a clearer explanation to learn how to crochet. Filet crochet, Tunisian crochet, tapestry crochet, broomstick lace, hairpin lace, cro-hooking, and Irish crochet are all variants of the basic crochet method. Crochet has experienced a revival on the catwalk as well. Christopher Kane's Fall 2011 Ready-to-Wear collection makes intensive use of the granny square, one of the most basic of crochet motifs. Websites such as Etsy and Ravelry have made it easier for individual hobbyists to sell and distribute their patterns or projects across the internet. Creating crocheted items has become a way to make sustainable fashion. Fast fashion brands like Shein have created products that resemble crocheted items. Materials Basic materials required for crochet are a hook, scissors (to cut yarn), and some type of material to be crocheted, most commonly yarn or thread. Alternatively, some choose to crochet with their hands, especially for large yarns. Yarn, one of the most commonly used materials for crocheting, has varying weights which need to be taken into consideration when following patterns.
The weight of the yarn can affect not only the look of the product but also the feeling. Acrylic can also be used when crocheting, as it is synthetic and an alternative for wool. Additional tools are convenient for making related accessories. Examples of such tools include cardboard cutouts, which can be used to make tassels, fringe, and many other items; a pom-pom circle, used to make pom-poms; a tape measure and a gauge measure, both used for measuring crocheted work and counting stitches; a row counter; and occasionally plastic rings, which are used for special projects. In recent years, yarn selections have moved beyond synthetic and plant and animal-based fibers to include bamboo, qiviut, hemp, and banana stalks, to name a few. Many advanced crocheters have also incorporated recycled materials into their work in an effort to "go green" and experiment with new textures by using items such as plastic bags, old t-shirts or sheets, VCR or Cassette tape, and ribbon. Crochet hook The crochet hook comes in many sizes and materials. Because sizing is categorized by the diameter of the hook's shaft, a crafter aims to create stitches of a certain size in order to reach a particular gauge specified in a given pattern. If gauge is not reached with one hook, another is used until the stitches made are the needed size. Crafters may have a preference for one type of hook material over another due to aesthetic appeal, yarn glide, or hand disorders such as arthritis, where bamboo or wood hooks are favored over metal for the perceived warmth and flexibility during use. Hook grips and ergonomic hook handles are also available to assist crafters. Aluminum, bamboo, and plastic crochet hooks are available from 2.25 to 30 millimeters in size, or from B-1 to T/X in American sizing. Artisan-made hooks are often made of hand-turned woods, sometimes decorated with semi-precious stones or beads. Steel crochet hooks are sized in a reverse manner – the higher the number, the smaller the hook. They range in size from 0.9 to 2.7 millimeters, or from 14 to 00 in American sizing. These hooks are used for fine crochet work such as doilies and lace. Crochet hooks used for Tunisian crochet are elongated and have a stopper at the end of the handle, while double-ended crochet hooks have a hook on both ends of the handle. Tunisian crochet hooks are shaped without a fat thumb grip and thus can hold many loops on the hook at a time without stretching some to different heights than others (Solovan). There is also a double hooked tool called a Cro-hook. While this is not in itself a hook, it is a device used in conjunction with a crochet hook to produce stitches. Yarn Yarn for crochet is usually sold as balls, or skeins (hanks), although it may also be wound on spools or cones. Skeins and balls are generally sold with a yarn band, a label that describes the yarn's weight, length, dye lot, fiber content, washing instructions, suggested needle size, likely gauge, etc. It is a common practice to save the yarn band for future reference, especially if additional skeins must be purchased. Crocheters generally ensure that the yarn for a project comes from a single dye lot. The dye lot specifies a group of skeins that were dyed together and thus have precisely the same color; skeins from different dye lots, even if very similar in color, are usually slightly different and may produce a visible stripe when added onto existing work. 
If insufficient yarn of a single dye lot is bought to complete a project, additional skeins of the same dye lot can sometimes be obtained from other yarn stores or online. The thickness or weight of the yarn is a significant factor in determining how many stitches and rows are required to cover a given area for a given stitch pattern. This is also termed the gauge. Thicker yarns generally require large-diameter crochet hooks, whereas thinner yarns may be crocheted with thick or thin hooks. Hence, thicker yarns generally require fewer stitches, and therefore less time, to work up a given project. The recommended gauge for a given ball of yarn can be found on the label that surrounds the skein when buying in stores. Patterns and motifs are coarser with thicker yarns and produce bold visual effects, whereas thinner yarns are best for refined or delicate pattern-work. Yarns are standardly grouped by thickness into six categories: superfine, fine, light, medium, bulky and superbulky. Quantitatively, thickness is measured by the number of wraps per inch (WPI). The related weight per unit length is usually measured in tex or denier. Before use, hanks are wound into balls in which the yarn emerges from the center, making crocheting easier by preventing the yarn from becoming easily tangled. The winding process may be performed by hand or done with a ball winder and swift. A yarn's usefulness is judged by several factors, such as its loft (its ability to trap air), its resilience (elasticity under tension), its washability and colorfastness, its hand (its feel, particularly softness vs. scratchiness), its durability against abrasion, its resistance to pilling, its hairiness (fuzziness), its tendency to twist or untwist, its overall weight and drape, its blocking and felting qualities, its comfort (breathability, moisture absorption, wicking properties) and its appearance, which includes its color, sheen, smoothness and ornamental features. Other factors include allergenicity, speed of drying, resistance to chemicals, moths, and mildew, melting point and flammability, retention of static electricity, and the propensity to accept dyes. Desirable properties may vary for different projects, so there is no one "best" yarn. Although crochet may be done with ribbons, metal wire or more exotic filaments, most yarns are made by spinning fibers. In spinning, the fibers are twisted so that the yarn resists breaking under tension; the twisting may be done in either direction, resulting in a Z-twist or S-twist yarn. If the fibers are first aligned by combing them and the spinner uses a worsted type drafting method such as the short forward draw, the yarn is smoother and called a worsted; by contrast, if the fibers are carded but not combed and the spinner uses a woolen drafting method such as the long backward draw, the yarn is fuzzier and called woolen-spun. The fibers making up a yarn may be continuous filament fibers such as silk and many synthetics, or they may be staples (fibers of an average length, typically a few inches); naturally filament fibers are sometimes cut up into staples before spinning. The strength of the spun yarn against breaking is determined by the amount of twist, the length of the fibers and the thickness of the yarn. In general, yarns become stronger with more twist (also called worst), longer fibers and thicker yarns (more fibers); for example, thinner yarns require more twist than do thicker yarns to resist breaking under tension. 
The thickness of the yarn may vary along its length; a slub is a much thicker section in which a mass of fibers is incorporated into the yarn. The spun fibers are generally divided into animal fibers, plant and synthetic fibers. These fiber types are chemically different, corresponding to proteins, carbohydrates and synthetic polymers, respectively. Animal fibers include silk, but generally are long hairs of animals such as sheep (wool), goat (angora, or cashmere goat), rabbit (angora), llama, alpaca, dog, cat, camel, yak, and muskox (qiviut). Plants used for fibers include cotton, flax (for linen), bamboo, ramie, hemp, jute, nettle, raffia, yucca, coconut husk, banana trees, soy and corn. Rayon and acetate fibers are also produced from cellulose mainly derived from trees. Common synthetic fibers include acrylics, polyesters such as dacron and ingeo, nylon and other polyamides, and olefins such as polypropylene. Of these types, wool is generally favored for crochet, chiefly owing to its superior elasticity, warmth and (sometimes) felting; however, wool is generally less convenient to clean and some people are allergic to it. It is also common to blend different fibers in the yarn, e.g., 85% alpaca and 15% silk. Even within a type of fiber, there can be great variety in the length and thickness of the fibers; for example, Merino wool and Egyptian cotton are favored because they produce exceptionally long, thin (fine) fibers for their type. A single spun yarn may be crochet as is, or braided or plied with another. In plying, two or more yarns are spun together, almost always in the opposite sense from which they were spun individually; for example, two Z-twist yarns are usually plied with an S-twist. The opposing twist relieves some of the yarns' tendency to curl up and produces a thicker, balanced yarn. Plied yarns may themselves be plied together, producing cabled yarns or multi-stranded yarns. Sometimes, the yarns being plied are fed at different rates, so that one yarn loops around the other, as in bouclé. The single yarns may be dyed separately before plying, or afterwards to give the yarn a uniform look. The dyeing of yarns is a complex art. Yarns need not be dyed; or they may be dyed one color, or a great variety of colors. Dyeing may be done industrially, by hand or even hand-painted onto the yarn. A great variety of synthetic dyes have been developed since the synthesis of indigo dye in the mid-19th century; however, natural dyes are also possible, although they are generally less brilliant. The color-scheme of a yarn is sometimes called its colorway. Variegated yarns can produce interesting visual effects, such as diagonal stripes. Process Crocheted fabric is begun by placing a slip-knot loop on the hook (though other methods, such as a magic ring or simple folding over of the yarn may be used), pulling another loop through the first loop, and repeating this process to create a chain of a suitable length. The chain is either turned and worked in rows, or joined to the beginning of the row with a slip stitch and worked in rounds. Rounds can also be created by working many stitches into a single loop. Stitches are made by pulling one or more loops through each loop of the chain. At any one time at the end of a stitch, there is only one loop left on the hook. Tunisian crochet, however, draws all of the loops for an entire row onto a long hook before working them off one at a time. 
Like knitting, crochet can be worked either flat (back and forth in rows) or in the round (in spirals, such as when making tubular pieces). Types of stitches There are six main types of basic stitches (the following description uses international crochet terminology with US variants noted in brackets). Chain stitch (ch) – the most basic of all stitches and used to begin most projects. Yarn round hook (yrh) and draw through. Slip stitch (sl st or ss) – used to join chain stitch to form a ring. Insert hook in work, yrh, draw through. Double crochet (dc) (US = single crochet) – Insert hook, draw loop through, (2 loops on hook, hence double), yrh, draw through. Half treble (htr) (US = half double) – yrh, insert hook, draw loop through, (3 loops on hook, hence treble), yrh, draw through all loops. Treble (tr) (US = double) – yrh, insert hook, draw loop through (3 loops on hook, hence treble), yrh, draw through 2 loops, yrh, draw through 2 loops. Double treble (US = treble or triple) – as treble but 2 yrh at start (hence double treble). Also triple treble (ttr), as treble but with 3 yrh at start, and so on. While the horizontal distance covered by these basic stitches is the same, they differ in height and can be replaced with a length of ch when required, e.g. 1 tr = 3 ch. The more advanced stitches are often combinations of these basic stitches, or are made by inserting the hook into the work in unusual locations. More advanced stitches include the shell stitch, V stitch, spike stitch, Afghan stitch, butterfly stitch, popcorn stitch, cluster stitch, and crocodile stitch. International crochet terms and notations There are two main notations of basic stitches, one used across Europe, Australia, India and other crocheting nations, the other in the US and Canada. (In America, international terminology is often erroneously called British or UK terminology.) Crochet is traditionally worked from a written pattern using standard abbreviations or from a diagram, thus enabling non-English speakers to use English-based patterns. To help counter confusion when reading patterns, a diagramming system using a standard international notation has come into use (illustration, left). In the United States, crochet terminology and sizing guidelines, as well as standards for yarn and hook labeling, are primarily regulated by the Craft Yarn Council. Another terminological difference is known as tension (international) and gauge (US). Individual crocheters work yarn with a loose or a tight hold and, if unmeasured, these differences can lead to significant size changes in finished garments that have the same number of stitches. In order to control for this inconsistency, printed crochet instructions include a standard for the number of stitches across a standard swatch of fabric. An individual crocheter begins work by producing a test swatch and compensating for any discrepancy by changing to a smaller or larger hook. Differences from and similarities to knitting One of the more obvious differences is that crochet uses one hook while much knitting uses two needles. In most crochet, the artisan usually has only one live stitch on the hook (with the exception being Tunisian crochet), while a knitter keeps an entire row of stitches active simultaneously. Dropped stitches, which can unravel a knitted fabric, rarely interfere with crochet work, due to a second structural difference between knitting and crochet.
In knitting, each stitch is supported by the corresponding stitch in the row above and it supports the corresponding stitch in the row below, whereas crochet stitches are only supported by and support the stitches on either side of it. If a stitch in a finished crocheted item breaks, the stitches above and below remain intact, and because of the complex looping of each stitch, the stitches on either side are unlikely to come loose unless heavily stressed. Round or cylindrical patterns are simple to produce with a regular crochet hook, but cylindrical knitting requires either a set of circular needles or three to five special double-ended needles. Many crocheted items are composed of individual motifs which are then joined, either by sewing or crocheting, whereas knitting is usually composed of one fabric, such as entrelac. Freeform crochet is a technique that can create interesting shapes in three dimensions because new stitches can be made independently of previous stitches almost anywhere in the crocheted piece. It is generally accomplished by building shapes or structural elements onto existing crocheted fabric at any place the crafter desires. Knitting can be accomplished by machine, while many crochet stitches can only be crafted by hand. The height of knitted and crocheted stitches is also different: a single crochet stitch is twice the height of a knit stitch in the same yarn size and comparable diameter tools, and a double crochet stitch is about four times the height of a knit stitch. While most crochet is made with a hook, there is also a method of crocheting with a knitting loom. This is called loomchet. Slip stitch crochet is very similar to knitting. Each stitch in slip stitch crochet is formed the same way as a knit or purl stitch which is then bound off. A person working in slip stitch crochet can follow a knitted pattern with knits, purls, and cables, and get a similar result. It is a common perception that crochet produces a thicker fabric than knitting, tends to have less "give" than knitted fabric, and uses approximately a third more yarn for a comparable project than knitted items. Although this is true when comparing a single crochet swatch with a stockinette swatch, both made with the same size yarn and needle/hook, it is not necessarily true for crochet in general. Most crochet uses far less than 1/3 more yarn than knitting for comparable pieces, and a crocheter can get similar feel and drape to knitting by using a larger hook or thinner yarn. Tunisian crochet and slip stitch crochet can in some cases use less yarn than knitting for comparable pieces. According to sources claiming to have tested the 1/3 more yarn assertion, a single crochet stitch (sc) uses approximately the same amount of yarn as knit garter stitch, but more yarn than stockinette stitch. Any stitch using yarnovers uses less yarn than single crochet to produce the same amount of fabric. Cluster stitches, which are in fact multiple stitches worked together, will use the most length. Standard crochet stitches like sc and dc also produce a thicker fabric, more like knit garter stitch. This is part of why they use more yarn. Slip stitch can produce a fabric much like stockinette that is thinner and therefore uses less yarn. Any yarn can be either knitted or crocheted, provided needles or hooks of the correct size are used, but the cord's properties should be taken into account. 
For example, lofty, thick woolen yarns tend to function better when knitted, which does not crush their airy structure, while thin and tightly spun yarn helps to achieve the firm texture required for Amigurumi crochet. Charity and activism It has been very common for people and groups to crochet clothing and other garments and then donate them to soldiers during war. People have also crocheted clothing and then donated it to hospitals, for sick patients and also for newborn babies. Sometimes groups will crochet for a specific charity purpose, such as crocheting for homeless shelters, nursing homes, etc. It is becoming increasingly popular to crochet hats (commonly referred to as "chemo caps") and donate them to cancer treatment centers, for those undergoing chemotherapy and therefore losing hair. During October pink hats and scarves are made and proceeds are donated to breast cancer funds. Organizations dedicated to using crochet as a way to help others include Knots of Love, Crochet for Cancer, and Soldiers' Angels. These organizations offer warm useful items for people in need. In 2020, people around the world banded together to help save the wildlife affected by the Australian bushfires by crocheting kangaroo pouches, koala mittens and wildlife nests. This was an international effort to help during the particularly bad bushfire season which devastated local ecological systems. A group started in 2005 to create crochet versions of coral reefs grew by 2022 to over 20,000 contributors in what became the Crochet Coral Reef Project. To promote awareness of the effects of global warming, their creations have been displayed in galleries and museums by an estimated 2 million people. Many creations apply hyperbolic (curved) geometric shapes—distinguished from Euclidean (flat) geometry—to emulate natural structures. Extending hyperbolic crochet for activism and education with color, a group of South African crafters created The Abundance Crochet Coral Reef, an eco-art installation in Cape Town's Two Oceans Aquarium, to juxtapose hyperbolic shapes crocheted in variations of white on one side of a display with fiber coral shapes crocheted in various colors to illustrate coral bleaching due to oceanic warming and climate change. Feminist scholar-activists have argued for crochet as an embodied method of inquiry aimed at uncovering entangled, relational, and situated ways being and knowing inclusive of the more-than-human co-creation of worlds. In Staying with the Trouble, Donna Haraway argues for the methodological use of crochet to model ecological and mathematical phenomena as "a kind of lure to an affective cognitive ecology stitched in fiber arts" that works "not by mimicry, but by open-ended, exploratory process." Yarn bombing In recent years, a practice called yarn bombing, or the use of knitted or crocheted cloth to modify and beautify one's (usually outdoor) surroundings, emerged in the US and spread worldwide. Yarn bombers sometimes target existing pieces of graffiti for beautification. In 2010, an entity dubbed "the Midnight Knitter" hit West Cape May. Residents awoke to find knit cozies hugging tree branches and sign poles. In September 2015, Grace Brett was named "The World's Oldest Yarn Bomber". She is part of a group of yarn graffiti-artists called the Souter Stormers, who beautify their local town in Scotland. 
Mathematics and hyperbolic crochet Crochet has been used to illustrate shapes in hyperbolic space that are difficult to reproduce using other media or are difficult to understand when viewed two-dimensionally. Mathematician Daina Taimiņa first used crochet in 1997 to create strong, durable models of hyperbolic space after finding paper models were delicate and hard to create. These models enable one to turn, fold, and otherwise manipulate space to more fully grasp ideas such as how a line can appear curved in hyperbolic space yet actually be straight. Her work received an exhibition by the Institute For Figuring. Examples in nature of organisms that show hyperbolic structures include lettuces, sea slugs, flatworms and coral. Margaret Wertheim and Christine Wertheim of the Institute For Figuring created a traveling art installation of a coral reef using Taimina's method. Local artists are encouraged to create their own "satellite reefs" to be included alongside the original display. As hyperbolic and mathematics-based crochet has become more popular, there have been several events highlighting work from various fiber artists. Two shows were Sant Ocean Hall at the Smithsonian in Washington, D.C., and Sticks, Hooks, and the Mobius: Knit and Crochet Go Cerebral at Lafayette College in Pennsylvania. Architecture In Style in the technical arts, Gottfried Semper looks at the textile with great promise and historical precedent. In Section 53, he writes of the "loop stitch, or Noeud Coulant: a knot that, if untied, causes the whole system to unravel." In the same section, Semper confesses his ignorance of the subject of crochet but believes strongly that it is a technique of great value as a textile technique and possibly something more. There are a small number of architects currently interested in the subject of crochet as it relates to architecture. The following publications, explorations and thesis projects can be used as a resource to see how crochet is being used within the capacity of architecture. Emergent Explorations: Analog and Digital Scripting – Alexander Worden Research and Design: The Architecture of Variation – Lars Spuybroek YurtAlert – Kate Pokorny Styles in crochet Mosaic crochet Granny square Freeform crochet Motifs Crocheted lace Tunisian crochet Tapestry crochet Amigurumi Filet crochet Corner to Corner (C2C) Crochet Irish crochet lace Bead crochet Doily
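The hyperbolic models mentioned above are produced with a simple counting rule: work a fixed number of ordinary stitches, then work an increase, so that every row is a constant factor longer than the one before. The Python sketch below only tallies those stitch counts to show the resulting exponential growth; the starting width and increase interval are arbitrary illustrative values rather than figures from any published pattern.

```python
# Schematic sketch of the stitch-count growth behind crocheted hyperbolic
# planes: after every `increase_interval` stitches an extra stitch is worked,
# so each row is roughly (1 + 1/increase_interval) times as long as the last.

def row_lengths(first_row, increase_interval, rows):
    """Return the approximate number of stitches in each successive row."""
    lengths = [first_row]
    for _ in range(rows - 1):
        current = lengths[-1]
        # one increase for every `increase_interval` stitches worked
        lengths.append(current + current // increase_interval)
    return lengths

# e.g. start with 20 stitches and increase after every 5 stitches worked;
# the roughly geometric growth of these counts is what forces the fabric to
# ruffle into a negatively curved surface rather than lie flat.
print(row_lengths(first_row=20, increase_interval=5, rows=8))
```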
Technology
Techniques_2
null
7439
https://en.wikipedia.org/wiki/Constructible%20number
Constructible number
In geometry and algebra, a real number is constructible if and only if, given a line segment of unit length, a line segment of length can be constructed with compass and straightedge in a finite number of steps. Equivalently, is constructible if and only if there is a closed-form expression for using only integers and the operations for addition, subtraction, multiplication, division, and square roots. The geometric definition of constructible numbers motivates a corresponding definition of constructible points, which can again be described either geometrically or algebraically. A point is constructible if it can be produced as one of the points of a compass and straightedge construction (an endpoint of a line segment or crossing point of two lines or circles), starting from a given unit length segment. Alternatively and equivalently, taking the two endpoints of the given segment to be the points (0, 0) and (1, 0) of a Cartesian coordinate system, a point is constructible if and only if its Cartesian coordinates are both constructible numbers. Constructible numbers and points have also been called ruler and compass numbers and ruler and compass points, to distinguish them from numbers and points that may be constructed using other processes. The set of constructible numbers forms a field: applying any of the four basic arithmetic operations to members of this set produces another constructible number. This field is a field extension of the rational numbers and in turn is contained in the field of algebraic numbers. It is the Euclidean closure of the rational numbers, the smallest field extension of the rationals that includes the square roots of all of its positive numbers. The proof of the equivalence between the algebraic and geometric definitions of constructible numbers has the effect of transforming geometric questions about compass and straightedge constructions into algebra, including several famous problems from ancient Greek mathematics. The algebraic formulation of these questions led to proofs that their solutions are not constructible, after the geometric formulation of the same problems previously defied centuries of attack. Geometric definitions Geometrically constructible points Let and be two given distinct points in the Euclidean plane, and define to be the set of points that can be constructed with compass and straightedge starting with and . Then the points of are called constructible points. and are, by definition, elements of . To more precisely describe the remaining elements of , make the following two definitions: a line segment whose endpoints are in is called a constructed segment, and a circle whose center is in and which passes through a point of (alternatively, whose radius is the distance between some pair of distinct points of ) is called a constructed circle. Then, the points of , besides and are: the intersection of two non-parallel constructed segments, or lines through constructed segments, the intersection points of a constructed circle and a constructed segment, or line through a constructed segment, or the intersection points of two distinct constructed circles. As an example, the midpoint of constructed segment is a constructible point. One construction for it is to construct two circles with as radius, and the line through the two crossing points of these two circles. Then the midpoint of segment is the point where this segment is crossed by the constructed line. 
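For concreteness, the midpoint construction just described can be carried through in coordinates, taking the two given segment endpoints to be (0, 0) and (1, 0) as in the Cartesian convention above, and labelling them A and B only for the purposes of this worked example. Both auxiliary circles have radius equal to the length of the segment, namely 1:

```latex
% Worked example (coordinates chosen for concreteness): the midpoint
% construction described above, with the given points A = (0,0) and B = (1,0).
\begin{align*}
  \text{circle centred at } A &: \quad x^2 + y^2 = 1,\\
  \text{circle centred at } B &: \quad (x - 1)^2 + y^2 = 1,\\
  \text{their crossing points} &: \quad \left(\tfrac{1}{2},\ \tfrac{\sqrt{3}}{2}\right)
      \text{ and } \left(\tfrac{1}{2},\ -\tfrac{\sqrt{3}}{2}\right),\\
  \text{line through the crossing points} &: \quad x = \tfrac{1}{2},\\
  \text{intersection with the segment } AB &: \quad \left(\tfrac{1}{2},\ 0\right).
\end{align*}
```

In particular, the crossing point on the segment has coordinate 1/2, so 1/2 is a constructible number.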
Geometrically constructible numbers The starting information for the geometric formulation can be used to define a Cartesian coordinate system in which the point is associated to the origin having coordinates and in which the point is associated with the coordinates . The points of may now be used to link the geometry and algebra by defining a constructible number to be a coordinate of a constructible point. Equivalent definitions are that a constructible number is the -coordinate of a constructible point or the length of a constructible line segment. In one direction of this equivalence, if a constructible point has coordinates , then the point can be constructed as its perpendicular projection onto the -axis, and the segment from the origin to this point has length . In the reverse direction, if is the length of a constructible line segment, then intersecting the -axis with a circle centered at with radius gives the point . It follows from this equivalence that every point whose Cartesian coordinates are geometrically constructible numbers is itself a geometrically constructible point. For, when and are geometrically constructible numbers, point can be constructed as the intersection of lines through and , perpendicular to the coordinate axes. Algebraic definitions Algebraically constructible numbers The algebraically constructible real numbers are the subset of the real numbers that can be described by formulas that combine integers using the operations of addition, subtraction, multiplication, multiplicative inverse, and square roots of positive numbers. Even more simply, at the expense of making these formulas longer, the integers in these formulas can be restricted to be only 0 and 1. For instance, the square root of 2 is constructible, because it can be described by the formulas or . Analogously, the algebraically constructible complex numbers are the subset of complex numbers that have formulas of the same type, using a more general version of the square root that is not restricted to positive numbers but can instead take arbitrary complex numbers as its argument, and produces the principal square root of its argument. Alternatively, the same system of complex numbers may be defined as the complex numbers whose real and imaginary parts are both constructible real numbers. For instance, the complex number has the formulas or , and its real and imaginary parts are the constructible numbers 0 and 1 respectively. These two definitions of the constructible complex numbers are equivalent. In one direction, if is a complex number whose real part and imaginary part are both constructible real numbers, then replacing and by their formulas within the larger formula produces a formula for as a complex number. In the other direction, any formula for an algebraically constructible complex number can be transformed into formulas for its real and imaginary parts, by recursively expanding each operation in the formula into operations on the real and imaginary parts of its arguments, using the expansions , where and . Algebraically constructible points The algebraically constructible points may be defined as the points whose two real Cartesian coordinates are both algebraically constructible real numbers. Alternatively, they may be defined as the points in the complex plane given by algebraically constructible complex numbers. 
By the equivalence between the two definitions for algebraically constructible complex numbers, these two definitions of algebraically constructible points are also equivalent. Equivalence of algebraic and geometric definitions If a and b are the non-zero lengths of geometrically constructed segments then elementary compass and straightedge constructions can be used to obtain constructed segments of lengths a + b, a − b, ab, and a/b. The latter two can be done with a construction based on the intercept theorem. A slightly less elementary construction using these tools is based on the geometric mean theorem and will construct a segment of length √a from a constructed segment of length a. It follows that every algebraically constructible number is geometrically constructible, by using these techniques to translate a formula for the number into a construction for the number. In the other direction, a set of geometric objects may be specified by algebraically constructible real numbers: coordinates for points, slope and y-intercept for lines, and center and radius for circles. It is possible (but tedious) to develop formulas in terms of these values, using only arithmetic and square roots, for each additional object that might be added in a single step of a compass-and-straightedge construction. It follows from these formulas that every geometrically constructible number is algebraically constructible. Algebraic properties The definition of algebraically constructible numbers includes the sum, difference, product, and multiplicative inverse of any of these numbers, the same operations that define a field in abstract algebra. Thus, the constructible numbers (defined in any of the above ways) form a field. More specifically, the constructible real numbers form a Euclidean field, an ordered field containing a square root of each of its positive elements. Examining the properties of this field and its subfields leads to necessary conditions for a number to be constructible, which can be used to show that specific numbers arising in classical geometric construction problems are not constructible. It is convenient to consider, in place of the whole field of constructible numbers, the subfield Q(γ) generated by any given constructible number γ, and to use the algebraic construction of γ to decompose this field. If γ is a constructible real number, then the values occurring within a formula constructing it can be used to produce a finite sequence of real numbers γ1, ..., γn = γ such that, for each i, Q(γ1, ..., γi) is an extension of Q(γ1, ..., γi−1) of degree 2. Using slightly different terminology, a real number γ is constructible if and only if it lies in a field at the top of a finite tower of real quadratic extensions starting with the rational field Q: a tower Q = K0 ⊆ K1 ⊆ ... ⊆ Kn where γ is in Kn and, for all 0 < i ≤ n, [Ki : Ki−1] = 2. It follows from this decomposition that the degree of the field extension [Q(γ) : Q] is 2^r, where r counts the number of quadratic extension steps. Analogously to the real case, a complex number is constructible if and only if it lies in a field at the top of a finite tower of complex quadratic extensions. More precisely, γ is constructible if and only if there exists a tower of fields Q = F0 ⊆ F1 ⊆ ... ⊆ Fn where γ is in Fn, and, for all 0 < i ≤ n, [Fi : Fi−1] = 2. The difference between this characterization and that of the real constructible numbers is only that the fields in this tower are not restricted to being real. Consequently, if a complex number γ is constructible, then the above characterization implies that [Q(γ) : Q] is a power of two.
However, this condition is not sufficient - there exist field extensions whose degree is a power of two, but which cannot be factored into a sequence of quadratic extensions. To obtain a sufficient condition for constructibility, one must instead consider the splitting field obtained by adjoining all roots of the minimal polynomial of . If the degree of extension is a power of two, then its Galois group is a 2-group, and thus admits a descending sequence of subgroups with for By the fundamental theorem of Galois theory, there is a corresponding tower of quadratic extensions whose topmost field contains and from this it follows that is constructible. The fields that can be generated from towers of quadratic extensions of are called of . The fields of real and complex constructible numbers are the unions of all real or complex iterated quadratic extensions of . Trigonometric numbers Trigonometric numbers are the cosines or sines of angles that are rational multiples of . These numbers are always algebraic, but they may not be constructible. The cosine or sine of the angle is constructible only for certain special numbers : The powers of two The Fermat primes, prime numbers that are one plus a power of two The products of powers of two and any number of distinct Fermat primes. Thus, for example, is constructible because 15 is the product of the Fermat primes 3 and 5; but is not constructible (not being the product of Fermat primes) and neither is (being a non-Fermat prime). Impossible constructions The ancient Greeks thought that certain problems of straightedge and compass construction they could not solve were simply obstinate, not unsolvable. However, the non-constructibility of certain numbers proves that these constructions are logically impossible to perform. (The problems themselves, however, are solvable using methods that go beyond the constraint of working only with straightedge and compass, and the Greeks knew how to solve them in this way. One such example is Archimedes' Neusis construction solution of the problem of Angle trisection.) In particular, the algebraic formulation of constructible numbers leads to a proof of the impossibility of the following construction problems: Doubling the cube The problem of doubling the unit square is solved by the construction of another square on the diagonal of the first one, with side length and area . Analogously, the problem of doubling the cube asks for the construction of the length of the side of a cube with volume . It is not constructible, because the minimal polynomial of this length, , has degree 3 over . As a cubic polynomial whose only real root is irrational, this polynomial must be irreducible, because if it had a quadratic real root then the quadratic conjugate would provide a second real root. Angle trisection In this problem, from a given angle , one should construct an angle . Algebraically, angles can be represented by their trigonometric functions, such as their sines or cosines, which give the Cartesian coordinates of the endpoint of a line segment forming the given angle with the initial segment. Thus, an angle is constructible when is a constructible number, and the problem of trisecting the angle can be formulated as one of constructing . For example, the angle of an equilateral triangle can be constructed by compass and straightedge, with . However, its trisection cannot be constructed, because has minimal polynomial of degree 3 over . 
Because this specific instance of the trisection problem cannot be solved by compass and straightedge, the general problem also cannot be solved. Squaring the circle A square with area , the same area as a unit circle, would have side length , a transcendental number. Therefore, this square and its side length are not constructible, because it is not algebraic over . Regular polygons If a regular -gon is constructed with its center at the origin, the angles between the segments from the center to consecutive vertices are . The polygon can be constructed only when the cosine of this angle is a trigonometric number. Thus, for instance, a 15-gon is constructible, but the regular heptagon is not constructible, because 7 is prime but not a Fermat prime. For a more direct proof of its non-constructibility, represent the vertices of a regular heptagon as the complex roots of the polynomial . Removing the factor , dividing by , and substituting gives the simpler polynomial , an irreducible cubic with three real roots, each two times the real part of a complex-number vertex. Its roots are not constructible, so the heptagon is also not constructible. Alhazen's problem If two points and a circular mirror are given, where on the circle does one of the given points see the reflected image of the other? Geometrically, the lines from each given point to the point of reflection meet the circle at equal angles and in equal-length chords. However, it is impossible to construct a point of reflection using a compass and straightedge. In particular, for a unit circle with the two points and inside it, the solution has coordinates forming roots of an irreducible degree-four polynomial . Although its degree is a power of two, the splitting field of this polynomial has degree divisible by three, so it does not come from an iterated quadratic extension and Alhazen's problem has no compass and straightedge solution. History The birth of the concept of constructible numbers is inextricably linked with the history of the three impossible compass and straightedge constructions: doubling the cube, trisecting an angle, and squaring the circle. The restriction of using only compass and straightedge in geometric constructions is often credited to Plato due to a passage in Plutarch. According to Plutarch, Plato gave the duplication of the cube (Delian) problem to Eudoxus and Archytas and Menaechmus, who solved the problem using mechanical means, earning a rebuke from Plato for not solving the problem using pure geometry. However, this attribution is challenged, due, in part, to the existence of another version of the story (attributed to Eratosthenes by Eutocius of Ascalon) that says that all three found solutions but they were too abstract to be of practical value. Proclus, citing Eudemus of Rhodes, credited Oenopides ( 450 BCE) with two ruler and compass constructions, leading some authors to hypothesize that Oenopides originated the restriction. The restriction to compass and straightedge is essential to the impossibility of the classic construction problems. Angle trisection, for instance, can be done in many ways, several known to the ancient Greeks. The Quadratrix of Hippias of Elis, the conics of Menaechmus, or the marked straightedge (neusis) construction of Archimedes have all been used, as has a more modern approach via paper folding. Although not one of the classic three construction problems, the problem of constructing regular polygons with straightedge and compass is often treated alongside them. 
The Greeks knew how to construct regular polygons whose number of sides is a power of two, or 3, or 5, or the product of any two or three of these numbers, but other regular polygons eluded them. In 1796 Carl Friedrich Gauss, then an eighteen-year-old student, announced in a newspaper that he had constructed a regular 17-gon with straightedge and compass. Gauss's treatment was algebraic rather than geometric; in fact, he did not actually construct the polygon, but rather showed that the cosine of a central angle was a constructible number. The argument was generalized in his 1801 book Disquisitiones Arithmeticae, giving the sufficient condition for the construction of a regular polygon with a given number of sides. Gauss claimed, but did not prove, that the condition was also necessary, and several authors, notably Felix Klein, attributed this part of the proof to him as well. Alhazen's problem is also not one of the classic three problems, but despite being named after Ibn al-Haytham (Alhazen), a medieval Islamic mathematician, it already appears in Ptolemy's work on optics from the second century. In 1837 Pierre Wantzel proved algebraically that the problems of doubling the cube and trisecting the angle are impossible to solve using only compass and straightedge. In the same paper he also solved the problem of determining which regular polygons are constructible: a regular polygon is constructible if and only if the number of its sides is the product of a power of two and any number of distinct Fermat primes (i.e., the sufficient conditions given by Gauss are also necessary). An attempted proof of the impossibility of squaring the circle was given by James Gregory in Vera Circuli et Hyperbolae Quadratura (The True Squaring of the Circle and of the Hyperbola) in 1667. Although his proof was faulty, it was the first paper to attempt to solve the problem using algebraic properties of π. It was not until 1882 that Ferdinand von Lindemann rigorously proved its impossibility, by extending the work of Charles Hermite and proving that π is a transcendental number. Alhazen's problem was not proved impossible to solve by compass and straightedge until the work of Jack Elkin. The study of constructible numbers, per se, was initiated by René Descartes in La Géométrie, an appendix to his book Discourse on the Method published in 1637. Descartes associated numbers to geometrical line segments in order to display the power of his philosophical method by solving an ancient straightedge and compass construction problem put forth by Pappus.
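As an illustration of the Gauss–Wantzel condition discussed above, the following minimal Python sketch (the function names are illustrative, not taken from any standard library) tests whether a regular polygon with a given number of sides is constructible by checking that the odd part of that number is a product of distinct Fermat primes:

```python
# Minimal sketch of the Gauss–Wantzel test: a regular n-gon (n >= 3) is
# constructible with straightedge and compass exactly when the odd part of n
# is a product of distinct Fermat primes. Function names are illustrative.

def odd_prime_factors(n):
    """Odd prime factors of n, with multiplicity, by trial division (fine for small n)."""
    factors = []
    while n % 2 == 0:
        n //= 2
    p = 3
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 2
    if n > 1:
        factors.append(n)
    return factors

def is_fermat_prime(p):
    """For a prime p, p is a Fermat prime exactly when p - 1 is a power of two."""
    return p > 2 and (p - 1) & (p - 2) == 0

def is_constructible_polygon(n):
    if n < 3:
        return False
    odd = odd_prime_factors(n)
    # all odd prime factors must be distinct Fermat primes
    return len(odd) == len(set(odd)) and all(is_fermat_prime(p) for p in odd)

if __name__ == "__main__":
    for n in (7, 9, 15, 17, 257):
        print(n, is_constructible_polygon(n))
    # expected output: 7 False, 9 False, 15 True, 17 True, 257 True
```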
Mathematics
Basics
null
7445
https://en.wikipedia.org/wiki/Classification%20of%20finite%20simple%20groups
Classification of finite simple groups
In mathematics, the classification of finite simple groups (popularly called the enormous theorem) is a result of group theory stating that every finite simple group is either cyclic, or alternating, or belongs to a broad infinite class called the groups of Lie type, or else it is one of twenty-six exceptions, called sporadic (the Tits group is sometimes regarded as a sporadic group because it is not strictly a group of Lie type, in which case there would be 27 sporadic groups). The proof consists of tens of thousands of pages in several hundred journal articles written by about 100 authors, published mostly between 1955 and 2004. Simple groups can be seen as the basic building blocks of all finite groups, reminiscent of the way the prime numbers are the basic building blocks of the natural numbers. The Jordan–Hölder theorem is a more precise way of stating this fact about finite groups. However, a significant difference from integer factorization is that such "building blocks" do not necessarily determine a unique group, since there might be many non-isomorphic groups with the same composition series or, put in another way, the extension problem does not have a unique solution. Daniel Gorenstein (1923–1992), Richard Lyons, and Ronald Solomon are gradually publishing a simplified and revised version of the proof. Statement of the classification theorem The classification theorem has applications in many branches of mathematics, as questions about the structure of finite groups (and their action on other mathematical objects) can sometimes be reduced to questions about finite simple groups. Thanks to the classification theorem, such questions can sometimes be answered by checking each family of simple groups and each sporadic group. Daniel Gorenstein announced in 1983 that the finite simple groups had all been classified, but this was premature as he had been misinformed about the proof of the classification of quasithin groups. The completed proof of the classification was announced by Aschbacher in 2004, after Aschbacher and Smith published a 1221-page proof for the missing quasithin case. Overview of the proof of the classification theorem Gorenstein wrote two volumes outlining the low rank and odd characteristic part of the proof, and Aschbacher, Lyons, Smith, and Solomon wrote a third volume covering the remaining characteristic 2 case. The proof can be broken up into several major pieces as follows: Groups of small 2-rank The simple groups of low 2-rank are mostly groups of Lie type of small rank over fields of odd characteristic, together with five alternating and seven characteristic 2 type and nine sporadic groups. The simple groups of small 2-rank include: Groups of 2-rank 0, in other words groups of odd order, which are all solvable by the Feit–Thompson theorem. Groups of 2-rank 1. The Sylow 2-subgroups are either cyclic, which is easy to handle using the transfer map, or generalized quaternion, which are handled with the Brauer–Suzuki theorem: in particular there are no simple groups of 2-rank 1 except for the cyclic group of order two. Groups of 2-rank 2. Alperin showed that the Sylow subgroup must be dihedral, quasidihedral, wreathed, or a Sylow 2-subgroup of U3(4).
The first case was done by the Gorenstein–Walter theorem which showed that the only simple groups are isomorphic to L2(q) for q odd or A7, the second and third cases were done by the Alperin–Brauer–Gorenstein theorem which implies that the only simple groups are isomorphic to L3(q) or U3(q) for q odd or M11, and the last case was done by Lyons who showed that U3(4) is the only simple possibility. Groups of sectional 2-rank at most 4, classified by the Gorenstein–Harada theorem. The classification of groups of small 2-rank, especially ranks at most 2, makes heavy use of ordinary and modular character theory, which is almost never directly used elsewhere in the classification. All groups not of small 2 rank can be split into two major classes: groups of component type and groups of characteristic 2 type. This is because if a group has sectional 2-rank at least 5 then MacWilliams showed that its Sylow 2-subgroups are connected, and the balance theorem implies that any simple group with connected Sylow 2-subgroups is either of component type or characteristic 2 type. (For groups of low 2-rank the proof of this breaks down, because theorems such as the signalizer functor theorem only work for groups with elementary abelian subgroups of rank at least 3.) Groups of component type A group is said to be of component type if for some centralizer C of an involution, C/O(C) has a component (where O(C) is the core of C, the maximal normal subgroup of odd order). These are more or less the groups of Lie type of odd characteristic of large rank, and alternating groups, together with some sporadic groups. A major step in this case is to eliminate the obstruction of the core of an involution. This is accomplished by the B-theorem, which states that every component of C/O(C) is the image of a component of C. The idea is that these groups have a centralizer of an involution with a component that is a smaller quasisimple group, which can be assumed to be already known by induction. So to classify these groups one takes every central extension of every known finite simple group, and finds all simple groups with a centralizer of involution with this as a component. This gives a rather large number of different cases to check: there are not only 26 sporadic groups and 16 families of groups of Lie type and the alternating groups, but also many of the groups of small rank or over small fields behave differently from the general case and have to be treated separately, and the groups of Lie type of even and odd characteristic are also quite different. Groups of characteristic 2 type A group is of characteristic 2 type if the generalized Fitting subgroup F*(Y) of every 2-local subgroup Y is a 2-group. As the name suggests these are roughly the groups of Lie type over fields of characteristic 2, plus a handful of others that are alternating or sporadic or of odd characteristic. Their classification is divided into the small and large rank cases, where the rank is the largest rank of an odd abelian subgroup normalizing a nontrivial 2-subgroup, which is often (but not always) the same as the rank of a Cartan subalgebra when the group is a group of Lie type in characteristic 2. The rank 1 groups are the thin groups, classified by Aschbacher, and the rank 2 ones are the notorious quasithin groups, classified by Aschbacher and Smith. These correspond roughly to groups of Lie type of ranks 1 or 2 over fields of characteristic 2. 
Groups of rank at least 3 are further subdivided into 3 classes by the trichotomy theorem, proved by Aschbacher for rank 3 and by Gorenstein and Lyons for rank at least 4. The three classes are groups of GF(2) type (classified mainly by Timmesfeld), groups of "standard type" for some odd prime (classified by the Gilman–Griess theorem and work by several others), and groups of uniqueness type, where a result of Aschbacher implies that there are no simple groups. The general higher rank case consists mostly of the groups of Lie type over fields of characteristic 2 of rank at least 3 or 4. Existence and uniqueness of the simple groups The main part of the classification produces a characterization of each simple group. It is then necessary to check that there exists a simple group for each characterization and that it is unique. This gives a large number of separate problems; for example, the original proofs of existence and uniqueness of the monster group totaled about 200 pages, and the identification of the Ree groups by Thompson and Bombieri was one of the hardest parts of the classification. Many of the existence proofs and some of the uniqueness proofs for the sporadic groups originally used computer calculations, most of which have since been replaced by shorter hand proofs. History of the proof Gorenstein's program In 1972 Gorenstein announced a program for completing the classification of finite simple groups, consisting of the following 16 steps: Groups of low 2-rank. This was essentially done by Gorenstein and Harada, who classified the groups with sectional 2-rank at most 4. Most of the cases of 2-rank at most 2 had been done by the time Gorenstein announced his program. The semisimplicity of 2-layers. The problem is to prove that the 2-layer of the centralizer of an involution in a simple group is semisimple. Standard form in odd characteristic. If a group has an involution with a 2-component that is a group of Lie type of odd characteristic, the goal is to show that it has a centralizer of involution in "standard form", meaning that a centralizer of involution has a component that is of Lie type in odd characteristic and also has a centralizer of 2-rank 1. Classification of groups of odd type. The problem is to show that if a group has a centralizer of involution in "standard form" then it is a group of Lie type of odd characteristic. This was solved by Aschbacher's classical involution theorem. Quasi-standard form. Central involutions. Classification of alternating groups. Some sporadic groups. Thin groups. The simple thin finite groups, those with 2-local p-rank at most 1 for odd primes p, were classified by Aschbacher in 1978. Groups with a strongly p-embedded subgroup for p odd. The signalizer functor method for odd primes. The main problem is to prove a signalizer functor theorem for nonsolvable signalizer functors. This was solved by McBride in 1982. Groups of characteristic p type. This is the problem of groups with a strongly p-embedded 2-local subgroup with p odd, which was handled by Aschbacher. Quasithin groups. A quasithin group is one whose 2-local subgroups have p-rank at most 2 for all odd primes p, and the problem is to classify the simple ones of characteristic 2 type. This was completed by Aschbacher and Smith in 2004. Groups of low 2-local 3-rank. This was essentially solved by Aschbacher's trichotomy theorem for groups with e(G)=3. The main change is that 2-local 3-rank is replaced by 2-local p-rank for odd primes. Centralizers of 3-elements in standard form.
This was essentially done by the trichotomy theorem. Classification of simple groups of characteristic 2 type. This was handled by the Gilman–Griess theorem, with 3-elements replaced by p-elements for odd primes. Timeline of the proof Many of the items in the timeline are taken from published accounts of the classification. The date given is usually the publication date of the complete proof of a result, which is sometimes several years later than the proof or first announcement of the result, so some of the items appear in the "wrong" order. Second-generation classification The proof of the theorem, as it stood around 1985 or so, can be called first generation. Because of the extreme length of the first generation proof, much effort has been devoted to finding a simpler proof, called a second-generation classification proof. This effort, called "revisionism", was originally led by Daniel Gorenstein. As of 2023, ten volumes of the second generation proof have been published (Gorenstein, Lyons & Solomon 1994, 1996, 1998, 1999, 2002, 2005, 2018a, 2018b; with Capdeboscq, 2021, 2023). In 2012 Solomon estimated that the project would need another 5 volumes, but said that progress on them was slow. It is estimated that the new proof will eventually fill approximately 5,000 pages. (This length stems in part from the second generation proof being written in a more relaxed style.) However, with the publication of volume 9 of the GLS series, and including the Aschbacher–Smith contribution, this estimate was already reached, with several more volumes still in preparation (the rest of what was originally intended for volume 9, plus projected volumes 10 and 11). Aschbacher and Smith wrote their two volumes devoted to the quasithin case in such a way that those volumes can be part of the second generation proof. Gorenstein and his collaborators have given several reasons why a simpler proof is possible. The most important thing is that the correct, final statement of the theorem is now known. Simpler techniques can be applied that are known to be adequate for the types of groups we know to be finite simple. In contrast, those who worked on the first generation proof did not know how many sporadic groups there were, and in fact some of the sporadic groups (e.g., the Janko groups) were discovered while proving other cases of the classification theorem. As a result, many of the pieces of the theorem were proved using techniques that were overly general. Because the conclusion was unknown, the first generation proof consists of many stand-alone theorems, dealing with important special cases. Much of the work of proving these theorems was devoted to the analysis of numerous special cases. Given a larger, orchestrated proof, dealing with many of these special cases can be postponed until the most powerful assumptions can be applied. The price paid under this revised strategy is that these first generation theorems no longer have comparatively short proofs, but instead rely on the complete classification. Many first generation theorems overlap, and so divide the possible cases in inefficient ways. As a result, families and subfamilies of finite simple groups were identified multiple times. The revised proof eliminates these redundancies by relying on a different subdivision of cases. Finite group theorists have more experience at this sort of exercise, and have new techniques at their disposal. The work on the classification problem by Ulrich Meierfrankenfeld, Bernd Stellmacher, Gernot Stroth, and a few others has been called a third generation program.
One goal of this is to treat all groups in characteristic 2 uniformly using the amalgam method. Length of proof Gorenstein has discussed some of the reasons why there might not be a short proof of the classification similar to the classification of compact Lie groups. The most obvious reason is that the list of simple groups is quite complicated: with 26 sporadic groups there are likely to be many special cases that have to be considered in any proof. So far no one has found a clean uniform description of the finite simple groups similar to the parameterization of the compact Lie groups by Dynkin diagrams. Atiyah and others have suggested that the classification ought to be simplified by constructing some geometric object that the groups act on and then classifying these geometric structures. The problem is that no one has been able to suggest an easy way to find such a geometric structure associated with a simple group. In some sense, the classification does work by finding geometric structures such as BN-pairs, but this only comes at the end of a very long and difficult analysis of the structure of a finite simple group. Another suggestion for simplifying the proof is to make greater use of representation theory. The problem here is that representation theory seems to require very tight control over the subgroups of a group in order to work well. For groups of small rank, one has such control and representation theory works very well, but for groups of larger rank no-one has succeeded in using it to simplify the classification. In the early days of the classification, there was a considerable effort made to use representation theory, but this never achieved much success in the higher rank case. Consequences of the classification This section lists some results that have been proved using the classification of finite simple groups. The Schreier conjecture The Signalizer functor theorem The B conjecture The Schur–Zassenhaus theorem for all groups (though this only uses the Feit–Thompson theorem). A transitive permutation group on a finite set with more than 1 element has a fixed-point-free element of prime power order. The classification of 2-transitive permutation groups. The classification of rank 3 permutation groups. The Sims conjecture Frobenius's conjecture on the number of solutions of xⁿ = 1.
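To make the last item concrete: Frobenius's theorem states that if n divides the order of a finite group, the number of solutions of xⁿ = 1 in the group is a multiple of n, and the conjecture, proved using the classification, adds that when that number is exactly n the solutions form a subgroup. The following short Python sketch (standard library only; the helper names are illustrative) checks the divisibility statement for the symmetric group S4:

```python
# Sketch: count solutions of x**n == identity in the symmetric group S4,
# with elements represented as permutation tuples, and verify that the count
# is divisible by n for every divisor n of |S4| = 24 (Frobenius's theorem).
from itertools import permutations

def compose(p, q):
    # permutation composition: (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def nth_power(p, n):
    result = tuple(range(len(p)))  # identity permutation
    for _ in range(n):
        result = compose(result, p)
    return result

S4 = list(permutations(range(4)))   # all 24 permutations of {0, 1, 2, 3}
identity = tuple(range(4))

for n in (1, 2, 3, 4, 6, 8, 12, 24):
    solutions = sum(1 for p in S4 if nth_power(p, n) == identity)
    print(f"n = {n:2d}: {solutions:2d} solutions, divisible by n: {solutions % n == 0}")
```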
Mathematics
Algebra
null
7455
https://en.wikipedia.org/wiki/Chaparral
Chaparral
Chaparral is a shrubland plant community found primarily in California, in southern Oregon and in the northern portion of the Baja California Peninsula in Mexico. It is shaped by a Mediterranean climate (mild wet winters and hot dry summers) and infrequent, high-intensity crown fires. Many chaparral shrubs have hard sclerophyllous evergreen leaves, as contrasted with the associated soft-leaved, drought-deciduous, scrub community of coastal sage scrub, found often on drier, south-facing slopes. Three other closely related chaparral shrubland systems occur in southern Arizona, western Texas, and along the eastern side of central Mexico's mountain chains, all having summer rains in contrast to the Mediterranean climate of other chaparral formations. Chaparral comprises 9% of California's wildland vegetation and contains 20% of its plant species. Etymology The name comes from the Spanish word chaparral, which translates to "place of the scrub oak". Introduction In its natural state, chaparral is characterized by infrequent fires, with natural fire return intervals ranging between 30 years and over 150 years. Mature chaparral (at least 60 years since time of last fire) is characterized by nearly impenetrable, dense thickets (except the more open desert chaparral). These plants are flammable during the late summer and autumn months when conditions are characteristically hot and dry. They grow as woody shrubs with thick, leathery, and often small leaves, contain green leaves all year (are evergreen), and are typically drought resistant (with some exceptions). After the first rains following a fire, the landscape is dominated by small flowering herbaceous plants, known as fire followers, which die back with the summer dry period. Similar plant communities are found in the four other Mediterranean climate regions around the world, including the Mediterranean Basin (where it is known as maquis), central Chile (where it is called matorral), the South African Cape Region (known there as fynbos), and in Western and Southern Australia (as kwongan). According to the California Academy of Sciences, Mediterranean shrubland contains more than 20 percent of the world's plant diversity. The word chaparral is a loanword from Spanish chaparral, meaning place of the scrub oak, which itself comes from a Basque word, txapar, that has the same meaning. Conservation International and other conservation organizations consider chaparral to be a biodiversity hotspot – a biological community with a large number of different species – that is under threat by human activity. California chaparral California chaparral and woodlands ecoregion The California chaparral and woodlands ecoregion, of the Mediterranean forests, woodlands, and scrub biome, has three sub-ecoregions with ecosystem–plant community subdivisions: California coastal sage and chaparral: In coastal Southern California and northwestern coastal Baja California, as well as all of the Channel Islands off California and Guadalupe Island (Mexico). California montane chaparral and woodlands: In southern and central coast adjacent and inland California regions, covering some of the mountains of the California Coast Ranges, the Transverse Ranges, and the western slopes of the northern Peninsular Ranges. California interior chaparral and woodlands: In central interior California surrounding the Central Valley, covering the foothills and lower slopes of the northeastern Transverse Ranges and the western Sierra Nevada range.
Chaparral and woodlands biota For the numerous individual plant and animal species found within the California chaparral and woodlands ecoregion, see: Flora of the California chaparral and woodlands Fauna of the California chaparral and woodlands. Some of the indicator plants of the California chaparral and woodlands ecoregion include: Quercus species – oaks: Quercus agrifolia – coast live oak Quercus berberidifolia – scrub oak Quercus chrysolepis – canyon live oak Quercus douglasii – blue oak Quercus wislizeni – interior live oak Artemisia species – sagebrush: Artemisia californica – California sagebrush, coastal sage brush Arctostaphylos species – manzanitas: Arctostaphylos glauca – bigberry manzanita Arctostaphylos manzanita – common manzanita Ceanothus species – California lilacs: Ceanothus cuneatus – buckbrush Ceanothus megacarpus – bigpod ceanothus Rhus species – sumacs: Rhus integrifolia – lemonade berry Rhus ovata – sugar bush Eriogonum species – buckwheats: Eriogonum fasciculatum – California buckwheat Salvia species – sages: Salvia mellifera – Californian black sage Chaparral soils and nutrient composition Chaparral characteristically is found in areas with steep topography and shallow stony soils, while adjacent areas with clay soils, even where steep, tend to be colonized by annual plants and grasses. Some chaparral species are adapted to nutrient-poor soils developed over serpentine and other ultramafic rock, with a high ratio of magnesium and iron to calcium and potassium, that are also generally low in essential nutrients such as nitrogen. California cismontane and transmontane chaparral subdivisions Another phytogeography system uses two California chaparral and woodlands subdivisions: the cismontane chaparral and the transmontane (desert) chaparral. California cismontane chaparral Cismontane chaparral ("this side of the mountain") refers to the chaparral ecosystem in the Mediterranean forests, woodlands, and scrub biome in California, growing on the western (and coastal) sides of large mountain range systems, such as the western slopes of the Sierra Nevada in the San Joaquin Valley foothills, western slopes of the Peninsular Ranges and California Coast Ranges, and south-southwest slopes of the Transverse Ranges in the Central Coast and Southern California regions. Cismontane chaparral plant species In Central and Southern California chaparral forms a dominant habitat. Members of the chaparral biota native to California, all of which tend to regrow quickly after fires, include: Adenostoma fasciculatum, chamise Adenostoma sparsifolium, redshanks Arctostaphylos spp., manzanita Ceanothus spp., ceanothus Cercocarpus spp., mountain mahogany Cneoridium dumosum, bush rue Eriogonum fasciculatum, California buckwheat Garrya spp., silk-tassel bush Hesperoyucca whipplei, yucca Heteromeles arbutifolia, toyon Acmispon glaber, deerweed Malosma laurina, laurel sumac Marah macrocarpus, wild cucumber Mimulus aurantiacus, bush monkeyflower Pickeringia montana, chaparral pea Prunus ilicifolia, islay or hollyleaf cherry Quercus berberidifolia, scrub oak Q. dumosa, scrub oak Q. wislizenii var. frutescens Rhamnus californica, California coffeeberry Rhus integrifolia, lemonade berry Rhus ovata, sugar bush Salvia apiana, Californian white sage Salvia mellifera, Californian black sage Xylococcus bicolor, mission manzanita Cismontane chaparral bird species The complex ecology of chaparral habitats supports a very large number of animal species. 
The following is a short list of birds which are an integral part of the cismontane chaparral ecosystems. Characteristic chaparral bird species include: Wrentit (Chamaea fasciata) California thrasher (Toxostoma redivivum) California towhee (Melozone crissalis) Spotted towhee (Pipilo maculatus) California scrub jay (Aphelocoma californica) Other common chaparral bird species include: Anna's hummingbird (Calypte anna) Bewick's wren (Thryomanes bewickii) Bushtit (Psaltriparus minimus) Costa's hummingbird (Calypte costae) Greater roadrunner (Geococcyx californianus) California transmontane (desert) chaparral Transmontane chaparral or desert chaparral—transmontane ("the other side of the mountain") chaparral—refers to the desert shrubland habitat and chaparral plant community growing in the rainshadow of these ranges. Transmontane chaparral features xeric desert climate, not Mediterranean climate habitats, and is also referred to as desert chaparral. Desert chaparral is a regional ecosystem subset of the deserts and xeric shrublands biome, with some plant species from the California chaparral and woodlands ecoregion. Unlike cismontane chaparral, which forms dense, impenetrable stands of plants, desert chaparral is often open, with only about 50 percent of the ground covered. Individual shrubs can reach up to in height. Transmontane chaparral or desert chaparral is found on the eastern slopes of major mountain range systems on the western sides of the deserts of California. The mountain systems include the southeastern Transverse Ranges (the San Bernardino and San Gabriel Mountains) in the Mojave Desert north and northeast of the Los Angeles basin and Inland Empire; and the northern Peninsular Ranges (San Jacinto, Santa Rosa, and Laguna Mountains), which separate the Colorado Desert (western Sonoran Desert) from lower coastal Southern California. It is distinguished from the cismontane chaparral found on the coastal side of the mountains, which experiences higher winter rainfall. Naturally, desert chaparral experiences less winter rainfall than cismontane chaparral. Plants in this community are characterized by small, hard (sclerophyllic) evergreen (non-deciduous) leaves. Desert chaparral grows above California's desert cactus scrub plant community and below the pinyon-juniper woodland. It is further distinguished from the deciduous sub-alpine scrub above the pinyon-juniper woodlands on the same side of the Peninsular ranges. Due to the lower annual rainfall (resulting in slower plant growth rates) when compared to cismontane chaparral, desert chaparral is more vulnerable to biodiversity loss and the invasion of non-native weeds and grasses if disturbed by human activity and frequent fire. Transmontane chaparral distribution Transmontane (desert) chaparral typically grows on the lower ( elevation) northern slopes of the southern Transverse Ranges (running east to west in San Bernardino and Los Angeles counties) and on the lower () eastern slopes of the Peninsular Ranges (running south to north from lower Baja California to Riverside and Orange counties and the Transverse Ranges). It can also be found in higher-elevation sky islands in the interior of the deserts, such as in the upper New York Mountains within the Mojave National Preserve in the Mojave Desert. 
The California transmontane (desert) chaparral is found in the rain shadow deserts of the following: Sierra Nevada creating the Great Basin Desert and northern Mojave Desert Transverse Ranges creating the western through eastern Mojave Desert Peninsular Ranges creating the Colorado Desert and Yuha Desert. Transmontane chaparral plants Adenostoma fasciculatum, chamise (a low shrub common to most chaparral with clusters of tiny needle like leaves or fascicles; similar in appearance to coastal Eriogonum fasciculatum) Agave deserti, desert agave Arctostaphylos glauca, bigberry manzanita (smooth red bark with large edible berries; glauca means blue-green, the color of its leaves) Ceanothus greggii, desert ceanothus, California lilac (a nitrogen fixer, has hair on both sides of leaves for heat dissipation) Cercocarpus ledifolius, curl leaf mountain mahogany, a nitrogen fixer important food source for desert bighorn sheep Dendromecon rigida, bush poppy (a fire follower with four petaled yellow flowers) Ephedra spp., Mormon teas Fremontodendron californicum, California flannel bush (lobed leaves with fine coating of hair, covered with yellow blossoms in spring) Opuntia acanthocarpa, buckhorn cholla (branches resemble antlers of a deer) Opuntia echinocarpa, silver or golden cholla (depending on color of the spines) Opuntia phaeacantha, desert prickly pear (fruit is important food source for animals) Purshia tridentata, buckbrush, antelope bitterbrush (Rosaceae family) Prunus fremontii, desert apricot Prunus fasciculata, desert almond (commonly infested with tent caterpillars of Malacosoma spp.) Prunus ilicifolia, holly-leaf cherry Quercus cornelius-mulleri, desert scrub oak or Muller's oak Rhus ovata, sugar bush Simmondsia chinensis, jojoba Yucca schidigera, Mojave yucca Hesperoyucca whipplei (syn. Yucca whipplei), foothill yucca – our lord's candle. Transmontane chaparral animals There is overlap of animals with those of the adjacent desert and pinyon-juniper communities. Canis latrans, coyote Lynx rufus, bobcat Neotoma sp., desert pack rat Odocoileus hemionus, mule deer Peromyscus truei, pinyon mouse Puma concolor, mountain lion Stagmomantis californica, California mantis Fire Chaparral is a coastal biome with hot, dry summers and mild, rainy winters. The chaparral area receives about of precipitation a year. This makes the chaparral most vulnerable to fire in the late summer and fall. The chaparral ecosystem as a whole is adapted to be able to recover from naturally infrequent, high-intensity fire (fires occurring between 30 and 150 years or more apart); indeed, chaparral regions are known culturally and historically for their impressive fires. (This does create a conflict with human development adjacent to and expanding into chaparral systems.) Additionally, Native Americans burned chaparral near villages on the coastal plain to promote plant species for textiles and food. Before a major fire, typical chaparral plant communities are dominated by manzanita, chamise Adenostoma fasciculatum and Ceanothus species, toyon (which can sometimes be interspersed with scrub oaks), and other drought-resistant shrubs with hard (sclerophyllous) leaves; these plants resprout (see resprouter) from underground burls after a fire. Plants that are long-lived in the seed bank or serotinous with induced germination after fire include chamise, Ceanothus, and fiddleneck. 
Some chaparral plant communities may grow so dense and tall that it becomes difficult for large animals and humans to penetrate, but may be teeming with smaller fauna in the understory. The seeds of many chaparral plant species are stimulated to germinate by some fire cue (heat or the chemicals from smoke or charred wood). During the time shortly after a fire, chaparral communities may contain soft-leaved herbaceous, fire following annual wildflowers and short-lived perennials that dominate the community for the first few years – until the burl resprouts and seedlings of chaparral shrub species create a mature, dense overstory. Seeds of annuals and shrubs lie dormant until the next fire creates the conditions needed for germination. Several shrub species such as Ceanothus fix nitrogen, increasing the availability of nitrogen compounds in the soil. Because of the hot, dry conditions that exist in the California summer and fall, chaparral is one of the most fire-prone plant communities in North America. Some fires are caused by lightning, but these are usually during periods of high humidity and low winds and are easily controlled. Nearly all of the very large wildfires are caused by human activity during periods of hot, dry easterly Santa Ana winds. These human-caused fires are commonly ignited by power line failures, vehicle fires and collisions, sparks from machinery, arson, or campfires. Threatened by high fire frequency Though adapted to infrequent fires, chaparral plant communities can be eliminated by frequent fires. A high frequency of fire (less than 10-15 years apart) will result in the loss of obligate seeding shrub species such as Manzanita spp. This high frequency disallows seeder plants to reach their reproductive size before the next fire and the community shifts to a sprouter-dominance. If high frequency fires continue over time, obligate resprouting shrub species can also be eliminated by exhausting their energy reserves below-ground. Today, frequent accidental ignitions can convert chaparral from a native shrubland to non-native annual grassland and drastically reduce species diversity, especially under drought brought about by climate change. Wildfire debate There are two older hypotheses relating to California chaparral fire regimes that caused considerable debate in the past within the fields of wildfire ecology and land management. Research over the past two decades have rejected these hypotheses: That older stands of chaparral become "senescent" or "decadent", thus implying that fire is necessary for the plants to remain healthy, That wildfire suppression policies have allowed dead chaparral to accumulate unnaturally, creating ample fuel for large fires. The perspective that older chaparral is unhealthy or unproductive may have originated during the 1940s when studies were conducted measuring the amount of forage available to deer populations in chaparral stands. However, according to recent studies, California chaparral is extraordinarily resilient to very long periods without fire and continues to maintain productive growth throughout pre-fire conditions. Seeds of many chaparral plants actually require 30 years or more worth of accumulated leaf litter before they will successfully germinate (e.g., scrub oak, Quercus berberidifolia; toyon, Heteromeles arbutifolia; and holly-leafed cherry, Prunus ilicifolia). 
When intervals between fires drop below 10 to 15 years, many chaparral species are eliminated and the system is typically replaced by non-native, invasive, weedy grassland. The idea that older chaparral is responsible for causing large fires was originally proposed in the 1980s by comparing wildfires in Baja California and southern California. It was suggested that fire suppression activities in southern California allowed more fuel to accumulate, which in turn led to larger fires. This is similar to the observation that fire suppression and other human-caused disturbances in dry, ponderosa pine forests in the Southwest of the United States has unnaturally increased forest density. Historically, mixed-severity fires likely burned through these forests every decade or so, burning understory plants, small trees, and downed logs at low-severity, and patches of trees at high-severity. However, chaparral has a high-intensity crown-fire regime, meaning that fires consume nearly all the above ground growth whenever they burn, with a historical frequency of 30 to 150 years or more. A detailed analysis of historical fire data concluded that fire suppression activities have been ineffective at excluding fire from southern California chaparral, unlike in ponderosa pine forests. In addition, the number of fires is increasing in step with population growth and exacerbated by climate change. Chaparral stand age does not have a significant correlation to its tendency to burn. Large, infrequent, high-intensity wildfires are part of the natural fire regime for California chaparral. Extreme weather conditions (low humidity, high temperature, high winds), drought, and low fuel moisture are the primary factors in determining how large a chaparral fire becomes.
Physical sciences
Biomes: General
Earth science
7461
https://en.wikipedia.org/wiki/Clipper
Clipper
A clipper was a type of mid-19th-century merchant sailing vessel, designed for speed. The term was also retrospectively applied to the Baltimore clipper, which originated in the late 18th century. Clippers were generally narrow for their length, small by later 19th-century standards, could carry limited bulk freight, and had a large total sail area. "Clipper" does not refer to a specific sailplan; clippers may be schooners, brigs, brigantines, etc., as well as full-rigged ships. Clippers were mostly constructed in British and American shipyards, although France, Brazil, the Netherlands, and other nations also produced some. Clippers sailed all over the world, primarily on the trade routes between the United Kingdom and China, in transatlantic trade, and on the New York-to-San Francisco route around Cape Horn during the California gold rush. Dutch clippers were built beginning in the 1850s for the tea trade and passenger service to Java. The boom years of the clipper era began in 1843 in response to a growing demand for faster delivery of tea from China and continued with the demand for swift passage to gold fields in California and Australia beginning in 1848 and 1851, respectively. The era ended with the opening of the Suez Canal in 1869. Origin and usage of "clipper" The etymological origin of the word clipper is uncertain, but is believed to be derived from the English language verb "to clip", which at the time meant "to run or fly swiftly". The first application of the term "clipper", in a nautical sense, is likewise uncertain. The type known as the Baltimore clipper originated at the end of the 18th century on the eastern seaboard of the USA. At first, these fast sailing vessels were referred to as "Virginia-built" or "pilot-boat model", with the name "Baltimore-built" appearing during the War of 1812. In the final days of the slave trade (circa 1835–1850), just as the type was dying out, the term "Baltimore clipper" became common. The common retrospective application of the word "clipper" to this type of vessel has caused confusion. The Oxford English Dictionary's earliest quote (referring to the Baltimore clipper) is from 1824. The dictionary cites Royal Navy officer and novelist Frederick Marryat as using the term in 1830. British newspaper usage of the term can be found as early as 1832 and in shipping advertisements from 1835. Evidence in a US court case of 1834 discusses a clipper being faster than a brig. Definitions A clipper is a sailing vessel designed for speed, a priority that takes precedence over cargo-carrying capacity or building or operating costs. It is not restricted to any one rig (while many were fully rigged ships, others were barques, brigs, or schooners), nor was the term restricted to any one hull type. Howard Chapelle lists three basic hull types for clippers. The first was characterised by the sharp deadrise and ends found in the Baltimore clipper. The second was a hull with a full midsection and modest deadrise, but sharp ends; this was a development of the hull form of transatlantic packets. The third was more experimental, with deadrise and sharpness being balanced against the need to carry a profitable quantity of cargo. A clipper carried a large sail area and a fast hull; by the standards of any other type of sailing ship, a clipper was greatly over-canvassed. The last defining feature of a clipper, in the view of maritime historian David MacGregor, was a captain who had the courage, skill, and determination to get the fastest speed possible out of her.
In assessing the hull of a clipper, different maritime historians use different criteria to measure "sharpness", "fine lines" or "fineness", a concept which is explained by comparing a rectangular cuboid with the underwater shape of a vessel's hull. The more material one has to carve off the cuboid to achieve the hull shape, the sharper the hull. Ideally, a maritime historian would be able to look at either the block coefficient of fineness or the prismatic coefficient of various clippers, but measured drawings or accurate half models may not exist to calculate either of these figures. An alternative measure of sharpness for hulls of a broadly similar shape is the coefficient of underdeck tonnage, as used by David MacGregor in comparing tea clippers. This could be calculated from the measurements taken to determine the registered tonnage, so can be applied to more vessels. An extreme clipper has a hull of great fineness, as judged either by the prismatic coefficient, the coefficient of underdeck tonnage, or some other technical assessment of hull shape. This term has been misapplied in the past, without reference to hull shape. As commercial vessels, these are totally reliant on speed to generate a profit for their owners, as their sharpness limits their cargo-carrying capacity. A medium clipper has a cargo-carrying hull that has some sharpness. In the right conditions and with a capable captain, some of these achieved notable quick passages. They were also able to pay their way when the high freight rates often paid to a fast sailing ship were not available (in a fluctuating market). The term "clipper" applied to vessels between these two categories. They often made passages as fast as extreme clippers, but had less difficulty in making a living when freight rates were lower. History The first ships to which the term "clipper" seems to have been applied were the Baltimore clippers, developed in the Chesapeake Bay before the American Revolution, which reached their zenith between 1795 and 1815. They were small, rarely exceeding 200 tons OM. Their hulls were sharp ended and displayed much deadrise. They were rigged as schooners, brigs, or brigantines. In the War of 1812, some were lightly armed, sailing under letters of marque and reprisal, when the type, exemplified by Chasseur, launched at Fells Point, Baltimore, in 1814, became known for her incredible speed; the deep draft enabled the Baltimore clipper to sail close to the wind. Clippers, running the British blockade of Baltimore, came to be recognized for speed rather than cargo space. The type existed as early as 1780. A 1789 drawing of a vessel purchased by the Royal Navy in 1780 in the West Indies represents the earliest draught of what became known as the Baltimore clipper. Vessels of the Baltimore clipper type continued to be built for the slave trade, being useful for escaping enforcement of the British and American legislation prohibiting the trans-Atlantic slave trade. Some of these Baltimore clippers were captured when working as slavers, condemned by the appropriate court, and sold to owners who then used them as opium clippers, moving from one illegal international trade to another. Ann McKim, built in Baltimore in 1833 by the Kennard & Williamson shipyard, is considered by some to be the original clipper ship. (Maritime historians Howard I. Chapelle and David MacGregor decry the concept of the "first" clipper, preferring a more evolutionary, multiple-step development of the type.)
She measured 494 tons OM, and was built on the enlarged lines of a Baltimore clipper, with sharply raked stem, counter stern, and square rig. Although Ann McKim was the first large clipper ship ever constructed, she cannot be said to have founded the clipper ship era, or even to have directly influenced shipbuilders, since no other ship was built like her, but she may have suggested the clipper design in vessels of ship rig. She did, however, influence the building of Rainbow in 1845, the first extreme clipper ship. In Aberdeen, Scotland, shipbuilders Alexander Hall and Sons developed the "Aberdeen" clipper bow in the late 1830s; the first was Scottish Maid launched in 1839. Scottish Maid, 150 tons OM, was the first British clipper ship. "Scottish Maid was intended for the Aberdeen-London trade, where speed was crucial to compete with steamships. The Hall brothers tested various hulls in a water tank and found the clipper design most effective. The design was influenced by tonnage regulations. Tonnage measured a ship's cargo capacity and was used to calculate tax and harbour dues. The new 1836 regulations measured depth and breadth with length measured at half midship depth. Extra length above this level was tax-free and became a feature of clippers. Scottish Maid proved swift and reliable and the design was widely copied." The earliest British clipper ships were built for trade within the British Isles (Scottish Maid was built for the Aberdeen to London trade). Then followed the vast clipper trade of tea, opium, spices, and other goods from the Far East to Europe, and the ships became known as "tea clippers". From 1839, larger American clipper ships started to be built, beginning with Akbar, 650 tons OM, in 1839, and including the 1844-built Houqua, 581 tons OM. These larger vessels were built predominantly for use in the China tea trade and known as "tea clippers". Then in 1845 Rainbow, 757 tons OM, the first extreme clipper, was launched in New York. These American clippers were larger vessels designed to sacrifice cargo capacity for speed. They had a bow lengthened above the water, a drawing out and sharpening of the forward body, and the greatest breadth further aft. Extreme clippers were built in the period 1845 to 1855. In 1851, shipbuilders in Medford, Massachusetts, built what is sometimes called one of the first medium clippers, the Antelope, often called the Antelope of Boston to distinguish her from other ships of the same name. A contemporary ship-design journalist noted that "the design of her model was to combine large stowage capacity with good sailing qualities." Antelope was relatively flat-floored and had only an 8-inch deadrise at half-floor. The medium clipper, though still very fast, could carry more cargo. After 1854, extreme clippers were replaced in American shipbuilding yards by medium clippers. The Flying Cloud was a clipper ship built in 1851 that established the fastest passage between New York and San Francisco within weeks of her launching, then broke her own record three years later; that time of 89 days 8 hours stood until 1989. (The other contender for this "blue ribbon" title was the medium clipper Andrew Jackson; an unresolvable argument exists over timing these voyages "from pilot to pilot".) Flying Cloud was the most famous of the clippers built by Donald McKay.
She was known for her extremely close race with the Hornet in 1853; for having a woman navigator, Eleanor Creesy, wife of Josiah Perkins Creesy, who skippered the Flying Cloud on two record-setting voyages from New York to San Francisco; and for sailing in the Australia and timber trades. Clipper ships largely ceased being built in American shipyards in 1859 when, unlike the earlier boom years, only four clipper ships were built; a few were built in the 1860s. British clipper ships continued to be built after 1859. From 1859, a new design was developed for British clipper ships that was nothing like the American clippers; these ships continued to be called extreme clippers. The new design had a sleek, graceful appearance, less sheer, less freeboard, lower bulwarks, and smaller breadth. They were built for the China tea trade, starting with Falcon in 1859, and continuing until 1870. The earlier ships were made from wood, though some were made from iron, just as some British clippers had been made from iron prior to 1859. In 1863, the first tea clippers of composite construction were brought out, combining the best of both worlds. Composite clippers had the strength of an iron hull framework but with wooden planking that, with properly insulated fastenings, could use copper sheathing without the problem of galvanic corrosion. Copper sheathing prevented fouling and teredo worm, but could not be used on iron hulls. The iron framework of composite clippers was less bulky and lighter, so allowing more cargo in a hull of the same external shape. After 1869, with the opening of the Suez Canal that greatly advantaged steam vessels (see Decline below), the tea trade collapsed for clippers. From the late 1860s until the early 1870s, the clipper trade increasingly focused on the Britain to Australia and New Zealand route, carrying goods and immigrants, services that had begun earlier with the Australian Gold Rush of the 1850s. British-built clipper ships and many American-built, British-owned ships were used. Even in the 1880s, sailing ships were still the main carriers of cargo between Britain, and Australia and New Zealand. This trade eventually became unprofitable, and the ageing clipper fleet became unseaworthy. Opium clippers Before the early 18th century, the East India Company paid for its tea mainly in silver. When the Chinese emperor chose to embargo European-manufactured commodities and demand payment for all Chinese goods in silver, the price rose, restricting trade. The East India Company began to produce opium in India, something desired by the Chinese as much as tea was by the British. This had to be smuggled into China on smaller, fast-sailing ships, called "opium clippers". Some of these were built specifically for the purposemostly in India and Britain, such as the 1842-built Ariel, 100 tons OM. Some fruit schooners were bought for this trade, as were some Baltimore clippers. China clippers and the apogee of sail Among the most notable clippers were the China clippers, also called tea clippers, designed to ply the trade routes between Europe and the East Indies. The last example of these still in reasonable condition is Cutty Sark, preserved in dry dock at Greenwich, United Kingdom. Damaged by fire on 21 May 2007 while undergoing conservation, the ship was permanently elevated 3.0 m above the dry dock floor in 2010 as part of a plan for long-term preservation. Clippers were built for seasonal trades such as tea, where an early cargo was more valuable, or for passenger routes. 
One passenger ship survives, the City of Adelaide designed by William Pile of Sunderland. The fast ships were ideally suited to low-volume, high-profit goods, such as tea, opium, spices, people, and mail. The return could be spectacular. The Challenger returned from Shanghai with "the most valuable cargo of tea and silk ever to be laden in one bottom". Competition among the clippers was public and fierce, with their times recorded in the newspapers. The last China clippers could reach high peak speeds, but their average speeds over a whole voyage were substantially less. The joint winner of the Great Tea Race of 1866 logged about 15,800 nautical miles on a 99-day trip. This gives an average speed slightly over 6.6 knots. The key to a fast passage for a tea clipper was getting across the China Sea against the monsoon winds that prevailed when the first tea crop of the season was ready. These difficult sailing conditions (light and/or contrary winds) dictated the design of tea clippers. The US clippers were designed for the strong winds encountered on their route around Cape Horn. Donald McKay's Sovereign of the Seas reported the highest speed ever achieved by a sailing ship of the era, 22 knots, made while running her easting down to Australia in 1854. (John Griffiths' first clipper, the Rainbow, had a top speed of 14 knots.) Eleven other instances are reported of ships logging comparably high speeds. Ten of these were recorded by American clippers. Besides the breath-taking day's run of the Champion of the Seas, 13 other cases are known of a ship's covering an exceptional distance in 24 hours. With few exceptions, though, all the port-to-port sailing records are held by the American clippers. The 24-hour record of the Champion of the Seas, set in 1854, was not broken until 1984 (by a multihull), or 2001 (by another monohull). Decline The American clippers sailing from the East Coast to the California goldfields were working in a booming market. Freight rates were high everywhere in the first years of the 1850s. This started to fade in late 1853. The ports of California and Australia reported that they were overstocked with goods that had been shipped earlier in the year. This gave an accelerating fall in freight rates that was halted, however, by the start of the Crimean War in March 1854, as many ships were now being chartered by the French and British governments. The end of the Crimean War in April 1856 released all this capacity back on the world shipping markets, the result being a severe slump. The next year had the Panic of 1857, with effects on both sides of the Atlantic. The United States was just starting to recover from this in 1861 when the American Civil War started, causing significant disruption to trade in both Union and Confederate states. As the economic situation deteriorated in 1853, American shipowners either did not order new vessels, or specified an ordinary clipper or a medium clipper instead of an extreme clipper. No extreme clipper was launched in an American shipyard after the end of 1854 and only a few medium clippers after 1860. By contrast, British trade recovered well at the end of the 1850s. Tea clippers had continued to be launched during the depressed years, apparently little affected by the economic downturn. The long-distance route to China was not realistically challenged by steamships in the early part of the 1860s. No true steamer (as opposed to an auxiliary steamship) had the fuel efficiency to carry sufficient cargo to make a profitable voyage. The auxiliary steamships struggled to make any profit.
The situation changed in 1866 when the Alfred Holt-designed and owned SS Agamemnon made her first voyage to China. Holt had persuaded the Board of Trade to allow higher steam pressures in British merchant vessels. Running at 60 psi instead of the previously permitted 25 psi, and using an efficient compound engine, Agamemnon had the fuel efficiency to steam at 10 knots to China and back, with coaling stops at Mauritius on the outward and return legs, crucially carrying sufficient cargo to make a profit. In 1869, the Suez Canal opened, giving steamships a route considerably shorter than that taken by sailing ships round the Cape of Good Hope. Despite initial conservatism by tea merchants, by 1871, tea clippers found strong competition from steamers in the tea ports of China. A typical passage time back to London for a steamer was 58 days, while the fastest clippers could occasionally make the trip in less than 100 days; the average was 123 days in the 1867–68 tea season. The freight rate for a steamer in 1871 was roughly double that paid to a sailing vessel. Some clipper owners were severely caught out by this; several extreme clippers had been launched in 1869, including Cutty Sark, Norman Court and Caliph. Surviving ships Of the many clipper ships built during the mid-19th century, only two are known to survive. The only intact survivor is Cutty Sark, which was preserved as a museum ship in 1954 at Greenwich for public display. The other known survivor is City of Adelaide; unlike Cutty Sark, she was reduced to a hulk over the years. She eventually sank at her moorings in 1991, but was raised the following year, and remained on dry land for years. Adelaide (or S.V. Carrick) is the older of the two survivors, and was transported to Australia for conservation. In popular culture The clipper legacy appears in collectible cards and in the name of a basketball team. Sailing cards Departures of clipper ships, mostly from New York and Boston to San Francisco, were advertised by clipper-ship sailing cards. These cards, slightly larger than today's postcards, were produced by letterpress and wood engraving on coated card stock. Most clipper cards were printed in the 1850s and 1860s, and represented the first pronounced use of color in American advertising art. Perhaps 3,500 cards survive. With their rarity and importance as artifacts of nautical, Western, and printing history, clipper cards are valued by both private collectors and institutions. Basketball team The Los Angeles Clippers of the National Basketball Association take their name from the type of ship. After the Buffalo Braves moved to San Diego, California in 1978, a contest was held to choose a new name. The winning name highlighted the city's connection with the clippers that frequented San Diego Bay. The team retained the name in its 1984 move to Los Angeles. Airliners The airline Pan Am named its aircraft beginning with the word 'Clipper' and used Clipper as its callsign. This was intended to evoke an image of speed and glamour.
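As a back-of-the-envelope check of the average-speed figure quoted earlier for the Great Tea Race of 1866, the following minimal Python sketch recomputes it from the distance and passage time given in the text:

```python
# Sketch: average speed for the joint winner of the Great Tea Race of 1866,
# using the figures quoted in the article (about 15,800 nautical miles
# logged over a 99-day passage).
distance_nmi = 15_800          # logged distance, nautical miles
days = 99                      # passage time
hours = days * 24              # 2,376 hours

average_knots = distance_nmi / hours   # 1 knot = 1 nautical mile per hour
print(f"average speed = {average_knots:.2f} knots")   # about 6.65 knots
```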
Technology
Naval transport
null
7463
https://en.wikipedia.org/wiki/Cold%20fusion
Cold fusion
Cold fusion is a hypothesized type of nuclear reaction that would occur at, or near, room temperature. It would contrast starkly with the "hot" fusion that is known to take place naturally within stars and artificially in hydrogen bombs and prototype fusion reactors under immense pressure and at temperatures of millions of degrees, and be distinguished from muon-catalyzed fusion. There is currently no accepted theoretical model that would allow cold fusion to occur. In 1989, two electrochemists at the University of Utah, Martin Fleischmann and Stanley Pons, reported that their apparatus had produced anomalous heat ("excess heat") of a magnitude they asserted would defy explanation except in terms of nuclear processes. They further reported measuring small amounts of nuclear reaction byproducts, including neutrons and tritium. The small tabletop experiment involved electrolysis of heavy water on the surface of a palladium (Pd) electrode. The reported results received wide media attention and raised hopes of a cheap and abundant source of energy. Many scientists tried to replicate the experiment with the few details available. Expectations diminished as a result of numerous failed replications, the retraction of several previously reported positive replications, the identification of methodological flaws and experimental errors in the original study, and, ultimately, the confirmation that Fleischmann and Pons had not observed the expected nuclear reaction byproducts. By late 1989, most scientists considered cold fusion claims dead, and cold fusion subsequently gained a reputation as pathological science. In 1989 the United States Department of Energy (DOE) concluded that the reported results of excess heat did not present convincing evidence of a useful source of energy and decided against allocating funding specifically for cold fusion. A second DOE review in 2004, which looked at new research, reached similar conclusions and did not result in DOE funding of cold fusion. Presently, since articles about cold fusion are rarely published in peer-reviewed mainstream scientific journals, they do not attract the level of scrutiny expected for mainstream scientific publications. Nevertheless, some interest in cold fusion has continued through the decades—for example, a Google-funded failed replication attempt was published in a 2019 issue of Nature. A small community of researchers continues to investigate it, often under the alternative designations low-energy nuclear reactions (LENR) or condensed matter nuclear science (CMNS). History Nuclear fusion is normally understood to occur at temperatures in the tens of millions of degrees. This is called "thermonuclear fusion". Since the 1920s, there has been speculation that nuclear fusion might be possible at much lower temperatures by catalytically fusing hydrogen absorbed in a metal catalyst. In 1989, a claim by Stanley Pons and Martin Fleischmann (then one of the world's leading electrochemists) that such cold fusion had been observed caused a brief media sensation before the majority of scientists criticized their claim as incorrect after many found they could not replicate the excess heat. Since the initial announcement, cold fusion research has continued by a small community of researchers who believe that such reactions happen and hope to gain wider recognition for their experimental evidence. Early research The ability of palladium to absorb hydrogen was recognized as early as the nineteenth century by Thomas Graham. 
In the late 1920s, two Austrian-born scientists, Friedrich Paneth and Kurt Peters, originally reported the transformation of hydrogen into helium by nuclear catalysis when hydrogen was absorbed by finely divided palladium at room temperature. However, the authors later retracted that report, saying that the helium they measured was due to background from the air. In 1927, Swedish scientist John Tandberg reported that he had fused hydrogen into helium in an electrolytic cell with palladium electrodes. On the basis of his work, he applied for a Swedish patent for "a method to produce helium and useful reaction energy". Due to Paneth and Peters's retraction and his inability to explain the physical process, his patent application was denied. After deuterium was discovered in 1932, Tandberg continued his experiments with heavy water. The final experiments made by Tandberg with heavy water were similar to the original experiment by Fleischmann and Pons. Fleischmann and Pons were not aware of Tandberg's work. The term "cold fusion" was used as early as 1956 in an article in The New York Times about Luis Alvarez's work on muon-catalyzed fusion. Paul Palmer and then Steven Jones of Brigham Young University used the term "cold fusion" in 1986 in an investigation of "geo-fusion", the possible existence of fusion involving hydrogen isotopes in a planetary core. In his original paper on this subject with Clinton Van Siclen, submitted in 1985, Jones had coined the term "piezonuclear fusion". Fleischmann–Pons experiment The most famous cold fusion claims were made by Stanley Pons and Martin Fleischmann in 1989. After a brief period of interest by the wider scientific community, their reports were called into question by nuclear physicists. Pons and Fleischmann never retracted their claims, but moved their research program from the US to France after the controversy erupted. Events preceding announcement Martin Fleischmann of the University of Southampton and Stanley Pons of the University of Utah hypothesized that the high compression ratio and mobility of deuterium that could be achieved within palladium metal using electrolysis might result in nuclear fusion. To investigate, they conducted electrolysis experiments using a palladium cathode and heavy water within a calorimeter, an insulated vessel designed to measure process heat. Current was applied continuously for many weeks, with the heavy water being renewed at intervals. Some deuterium was thought to be accumulating within the cathode, but most was allowed to bubble out of the cell, joining oxygen produced at the anode. For most of the time, the power input to the cell was equal to the calculated power leaving the cell within measurement accuracy, and the cell temperature was stable at around 30 °C. But then, at some point (in some of the experiments), the temperature rose suddenly to about 50 °C without changes in the input power. These high temperature phases would last for two days or more and would repeat several times in any given experiment once they had occurred. The calculated power leaving the cell was significantly higher than the input power during these high temperature phases. Eventually the high temperature phases would no longer occur within a particular cell. In 1988, Fleischmann and Pons applied to the United States Department of Energy for funding towards a larger series of experiments. Up to this point they had been funding their experiments using a small device built with $100,000 out-of-pocket. 
The grant proposal was turned over for peer review, and one of the reviewers was Steven Jones of Brigham Young University. Jones had worked for some time on muon-catalyzed fusion, a known method of inducing nuclear fusion without high temperatures, and had written an article on the topic entitled "Cold nuclear fusion" that had been published in Scientific American in July 1987. Fleischmann and Pons and co-workers met with Jones and co-workers on occasion in Utah to share research and techniques. During this time, Fleischmann and Pons described their experiments as generating considerable "excess energy", in the sense that it could not be explained by chemical reactions alone. They felt that such a discovery could bear significant commercial value and would be entitled to patent protection. Jones, however, was measuring neutron flux, which was not of commercial interest. To avoid future problems, the teams appeared to agree to publish their results simultaneously, though their accounts of their 6 March meeting differ. Announcement In mid-March 1989, both research teams were ready to publish their findings, and Fleischmann and Jones had agreed to meet at an airport on 24 March to send their papers to Nature via FedEx. Fleischmann and Pons, however, pressured by the University of Utah, which wanted to establish priority on the discovery, broke their apparent agreement, disclosing their work at a press conference on 23 March (they claimed in the press release that it would be published in Nature but instead submitted their paper to the Journal of Electroanalytical Chemistry). Jones, upset, faxed in his paper to Nature after the press conference. Fleischmann and Pons' announcement drew wide media attention, as well as attention from the scientific community. The 1986 discovery of high-temperature superconductivity had made scientists more open to revelations of unexpected but potentially momentous scientific results that could be replicated reliably even if they could not be explained by established theories. Many scientists were also reminded of the Mössbauer effect, a process involving nuclear transitions in a solid. Its discovery 30 years earlier had also been unexpected, though it was quickly replicated and explained within the existing physics framework. The announcement of a new purported clean source of energy came at a crucial time: adults still remembered the 1973 oil crisis and the problems caused by oil dependence, anthropogenic global warming was starting to become notorious, the anti-nuclear movement was labeling nuclear power plants as dangerous and getting them closed, people had in mind the consequences of strip mining, acid rain, the greenhouse effect and the Exxon Valdez oil spill, which happened the day after the announcement. In the press conference, Chase N. Peterson, Fleischmann and Pons, backed by the solidity of their scientific credentials, repeatedly assured the journalists that cold fusion would solve environmental problems, and would provide a limitless inexhaustible source of clean energy, using only seawater as fuel. They said the results had been confirmed dozens of times and they had no doubts about them. 
In the accompanying press release Fleischmann was quoted saying: "What we have done is to open the door of a new research area, our indications are that the discovery will be relatively easy to make into a usable technology for generating heat and power, but continued work is needed, first, to further understand the science and secondly, to determine its value to energy economics." Response and fallout Although the experimental protocol had not been published, physicists in several countries attempted, and failed, to replicate the excess heat phenomenon. The first paper submitted to Nature reproducing excess heat, although it passed peer review, was rejected because most similar experiments were negative and there were no theories that could explain a positive result; this paper was later accepted for publication by the journal Fusion Technology. Nathan Lewis, professor of chemistry at the California Institute of Technology, led one of the most ambitious validation efforts, trying many variations on the experiment without success, while CERN physicist Douglas R. O. Morrison said that "essentially all" attempts in Western Europe had failed. Even those reporting success had difficulty reproducing Fleischmann and Pons' results. On 10 April 1989, a group at Texas A&M University published results of excess heat and later that day a group at the Georgia Institute of Technology announced neutron production—the strongest replication announced up to that point due to the detection of neutrons and the reputation of the lab. On 12 April Pons was acclaimed at an ACS meeting. But Georgia Tech retracted their announcement on 13 April, explaining that their neutron detectors gave false positives when exposed to heat. Another attempt at independent replication, headed by Robert Huggins at Stanford University, which also reported early success with a light water control, became the only scientific support for cold fusion in 26 April US Congress hearings. But when he finally presented his results he reported an excess heat of only one degree Celsius, a result that could be explained by chemical differences between heavy and light water in the presence of lithium. He had not tried to measure any radiation and his research was derided by scientists who saw it later. For the next six weeks, competing claims, counterclaims, and suggested explanations kept what was referred to as "cold fusion" or "fusion confusion" in the news. In April 1989, Fleischmann and Pons published a "preliminary note" in the Journal of Electroanalytical Chemistry. This paper notably showed a gamma peak without its corresponding Compton edge, which indicated they had made a mistake in claiming evidence of fusion byproducts. Fleischmann and Pons replied to this critique, but the only thing left clear was that no gamma ray had been registered and that Fleischmann refused to recognize any mistakes in the data. A much longer paper published a year later went into details of calorimetry but did not include any nuclear measurements. Nevertheless, Fleischmann and Pons and a number of other researchers who found positive results remained convinced of their findings. The University of Utah asked Congress to provide $25 million to pursue the research, and Pons was scheduled to meet with representatives of President Bush in early May. On 30 April 1989, cold fusion was declared dead by The New York Times. The Times called it a circus the same day, and the Boston Herald attacked cold fusion the following day. 
On 1 May 1989, the American Physical Society held a session on cold fusion in Baltimore, including many reports of experiments that failed to produce evidence of cold fusion. At the end of the session, eight of the nine leading speakers stated that they considered the initial Fleischmann and Pons claim dead, with the ninth, Johann Rafelski, abstaining. Steven E. Koonin of Caltech called the Utah report a result of "the incompetence and delusion of Pons and Fleischmann," which was met with a standing ovation. Douglas R. O. Morrison, a physicist representing CERN, was the first to call the episode an example of pathological science. On 4 May, due to all this new criticism, the meetings with various representatives from Washington were cancelled. From 8 May, only the A&M tritium results kept cold fusion afloat. In July and November 1989, Nature published papers critical of cold fusion claims. Negative results were also published in several other scientific journals including Science, Physical Review Letters, and Physical Review C (nuclear physics). In August 1989, in spite of this trend, the state of Utah invested $4.5 million to create the National Cold Fusion Institute. The United States Department of Energy organized a special panel to review cold fusion theory and research. The panel issued its report in November 1989, concluding that results as of that date did not present convincing evidence that useful sources of energy would result from the phenomena attributed to cold fusion. The panel noted the large number of failures to replicate excess heat and the greater inconsistency of reports of nuclear reaction byproducts expected by established conjecture. Nuclear fusion of the type postulated would be inconsistent with current understanding and, if verified, would require established conjecture, perhaps even theory itself, to be extended in an unexpected way. The panel was against special funding for cold fusion research, but supported modest funding of "focused experiments within the general funding system". Cold fusion supporters continued to argue that the evidence for excess heat was strong, and in September 1990 the National Cold Fusion Institute listed 92 groups of researchers from 10 countries that had reported corroborating evidence of excess heat, but they refused to provide any evidence of their own arguing that it could endanger their patents. However, no further DOE nor NSF funding resulted from the panel's recommendation. By this point, however, academic consensus had moved decidedly toward labeling cold fusion as a kind of "pathological science". In March 1990, Michael H. Salamon, a physicist from the University of Utah, and nine co-authors reported negative results. University faculty were then "stunned" when a lawyer representing Pons and Fleischmann demanded the Salamon paper be retracted under threat of a lawsuit. The lawyer later apologized; Fleischmann defended the threat as a legitimate reaction to alleged bias displayed by cold-fusion critics. In early May 1990, one of the two A&M researchers, Kevin Wolf, acknowledged the possibility of spiking, but said that the most likely explanation was tritium contamination in the palladium electrodes or simply contamination due to sloppy work. In June 1990 an article in Science by science writer Gary Taubes destroyed the public credibility of the A&M tritium results when it accused its group leader John Bockris and one of his graduate students of spiking the cells with tritium. 
In October 1990 Wolf finally said that the results were explained by tritium contamination in the rods. An A&M cold fusion review panel found that the tritium evidence was not convincing and that, while they couldn't rule out spiking, contamination and measurements problems were more likely explanations, and Bockris never got support from his faculty to resume his research. On 30 June 1991, the National Cold Fusion Institute closed after it ran out of funds; it found no excess heat, and its reports of tritium production were met with indifference. On 1 January 1991, Pons left the University of Utah and went to Europe. In 1992, Pons and Fleischmann resumed research with Toyota Motor Corporation's IMRA lab in France. Fleischmann left for England in 1995, and the contract with Pons was not renewed in 1998 after spending $40 million with no tangible results. The IMRA laboratory stopped cold fusion research in 1998 after spending £12 million. Pons has made no public declarations since, and only Fleischmann continued giving talks and publishing papers. Mostly in the 1990s, several books were published that were critical of cold fusion research methods and the conduct of cold fusion researchers. Over the years, several books have appeared that defended them. Around 1998, the University of Utah had already dropped its research after spending over $1 million, and in the summer of 1997, Japan cut off research and closed its own lab after spending $20 million. Later research A 1991 review by a cold fusion proponent had calculated "about 600 scientists" were still conducting research. After 1991, cold fusion research only continued in relative obscurity, conducted by groups that had increasing difficulty securing public funding and keeping programs open. These small but committed groups of cold fusion researchers have continued to conduct experiments using Fleischmann and Pons electrolysis setups in spite of the rejection by the mainstream community. The Boston Globe estimated in 2004 that there were only 100 to 200 researchers working in the field, most suffering damage to their reputation and career. Since the main controversy over Pons and Fleischmann had ended, cold fusion research has been funded by private and small governmental scientific investment funds in the United States, Italy, Japan, and India. For example, it was reported in Nature, in May, 2019, that Google had spent approximately $10 million on cold fusion research. A group of scientists at well-known research labs (e.g., MIT, Lawrence Berkeley National Lab, and others) worked for several years to establish experimental protocols and measurement techniques in an effort to re-evaluate cold fusion to a high standard of scientific rigor. Their reported conclusion: no cold fusion. In 2021, following Nature's 2019 publication of anomalous findings that might only be explained by some localized fusion, scientists at the Naval Surface Warfare Center, Indian Head Division announced that they had assembled a group of scientists from the Navy, Army and National Institute of Standards and Technology to undertake a new, coordinated study. With few exceptions, researchers have had difficulty publishing in mainstream journals. 
The remaining researchers often term their field Low Energy Nuclear Reactions (LENR), Chemically Assisted Nuclear Reactions (CANR), Lattice Assisted Nuclear Reactions (LANR), Condensed Matter Nuclear Science (CMNS) or Lattice Enabled Nuclear Reactions; one of the reasons being to avoid the negative connotations associated with "cold fusion". The new names avoid making bold implications, like implying that fusion is actually occurring. The researchers who continue their investigations acknowledge that the flaws in the original announcement are the main cause of the subject's marginalization, and they complain of a chronic lack of funding and no possibilities of getting their work published in the highest impact journals. University researchers are often unwilling to investigate cold fusion because they would be ridiculed by their colleagues and their professional careers would be at risk. In 1994, David Goodstein, a professor of physics at Caltech, advocated increased attention from mainstream researchers and described cold fusion as: United States United States Navy researchers at the Space and Naval Warfare Systems Center (SPAWAR) in San Diego have been studying cold fusion since 1989. In 2002 they released a two-volume report, "Thermal and nuclear aspects of the Pd/D2O system", with a plea for funding. This and other published papers prompted a 2004 Department of Energy (DOE) review. 2004 DOE panel In August 2003, the U.S. Secretary of Energy, Spencer Abraham, ordered the DOE to organize a second review of the field. This was thanks to an April 2003 letter sent by MIT's Peter L. Hagelstein, and the publication of many new papers, including the Italian ENEA and other researchers in the 2003 International Cold Fusion Conference, and a two-volume book by U.S. SPAWAR in 2002. Cold fusion researchers were asked to present a review document of all the evidence since the 1989 review. The report was released in 2004. The reviewers were "split approximately evenly" on whether the experiments had produced energy in the form of heat, but "most reviewers, even those who accepted the evidence for excess power production, 'stated that the effects are not repeatable, the magnitude of the effect has not increased in over a decade of work, and that many of the reported experiments were not well documented'". In summary, reviewers found that cold fusion evidence was still not convincing 15 years later, and they did not recommend a federal research program. They only recommended that agencies consider funding individual well-thought studies in specific areas where research "could be helpful in resolving some of the controversies in the field". They summarized its conclusions thus: Cold fusion researchers placed a "rosier spin" on the report, noting that they were finally being treated like normal scientists, and that the report had increased interest in the field and caused "a huge upswing in interest in funding cold fusion research". However, in a 2009 BBC article on an American Chemical Society's meeting on cold fusion, particle physicist Frank Close was quoted stating that the problems that plagued the original cold fusion announcement were still happening: results from studies are still not being independently verified and inexplicable phenomena encountered are being labelled as "cold fusion" even if they are not, in order to attract the attention of journalists. 
In February 2012, millionaire Sidney Kimmel, convinced that cold fusion was worth investing in by a 19 April 2009 interview with physicist Robert Duncan on the US news show 60 Minutes, made a grant of $5.5 million to the University of Missouri to establish the Sidney Kimmel Institute for Nuclear Renaissance (SKINR). The grant was intended to support research into the interactions of hydrogen with palladium, nickel or platinum under extreme conditions. In March 2013 Graham K. Hubler, a nuclear physicist who worked for the Naval Research Laboratory for 40 years, was named director. One of the SKINR projects is to replicate a 1991 experiment in which a professor associated with the project, Mark Prelas, says bursts of millions of neutrons a second were recorded, which was stopped because "his research account had been frozen". He claims that the new experiment has already seen "neutron emissions at similar levels to the 1991 observation". In May 2016, the United States House Committee on Armed Services, in its report on the 2017 National Defense Authorization Act, directed the Secretary of Defense to "provide a briefing on the military utility of recent U.S. industrial base LENR advancements to the House Committee on Armed Services by September 22, 2016". Italy Since the Fleischmann and Pons announcement, the Italian national agency for new technologies, energy and sustainable economic development (ENEA) has funded Franco Scaramuzzi's research into whether excess heat can be measured from metals loaded with deuterium gas. Such research is distributed across ENEA departments, CNR laboratories, INFN, universities and industrial laboratories in Italy, where the group continues to try to achieve reliable reproducibility (i.e. getting the phenomenon to happen in every cell, and inside a certain frame of time). In 2006–2007, the ENEA started a research program which claimed to have found excess power of up to 500 percent, and in 2009, ENEA hosted the 15th cold fusion conference. Japan Between 1992 and 1997, Japan's Ministry of International Trade and Industry sponsored a "New Hydrogen Energy (NHE)" program of US$20 million to research cold fusion. Announcing the end of the program in 1997, the director and one-time proponent of cold fusion research Hideo Ikegami stated "We couldn't achieve what was first claimed in terms of cold fusion. (...) We can't find any reason to propose more money for the coming year or for the future." In 1999 the Japan C-F Research Society was established to promote the independent research into cold fusion that continued in Japan. The society holds annual meetings. Perhaps the most famous Japanese cold fusion researcher was Yoshiaki Arata, from Osaka University, who claimed in a demonstration to produce excess heat when deuterium gas was introduced into a cell containing a mixture of palladium and zirconium oxide, a claim supported by fellow Japanese researcher Akira Kitamura of Kobe University and Michael McKubre at SRI. India In the 1990s, India stopped its research in cold fusion at the Bhabha Atomic Research Centre because of the lack of consensus among mainstream scientists and the US denunciation of the research. Yet, in 2008, the National Institute of Advanced Studies recommended that the Indian government revive this research. Projects were commenced at Chennai's Indian Institute of Technology, the Bhabha Atomic Research Centre and the Indira Gandhi Centre for Atomic Research. 
However, there is still skepticism among scientists and, for all practical purposes, research has stalled since the 1990s. A special section in the Indian multidisciplinary journal Current Science published 33 cold fusion papers in 2015 by major cold fusion researchers including several Indian researchers. Reported results A cold fusion experiment usually includes: a metal, such as palladium or nickel, in bulk, thin films or powder; and deuterium, hydrogen, or both, in the form of water, gas or plasma. Electrolysis cells can be either open cell or closed cell. In open cell systems, the electrolysis products, which are gaseous, are allowed to leave the cell. In closed cell experiments, the products are captured, for example by catalytically recombining the products in a separate part of the experimental system. These experiments generally strive for a steady state condition, with the electrolyte being replaced periodically. There are also "heat-after-death" experiments, where the evolution of heat is monitored after the electric current is turned off. The most basic setup of a cold fusion cell consists of two electrodes submerged in a solution containing palladium and heavy water. The electrodes are then connected to a power source to transmit electricity from one electrode to the other through the solution. Even when anomalous heat is reported, it can take weeks for it to begin to appear—this is known as the "loading time," the time required to saturate the palladium electrode with hydrogen (see "Loading ratio" section). The Fleischmann and Pons early findings regarding helium, neutron radiation and tritium were never replicated satisfactorily, and their levels were too low for the claimed heat production and inconsistent with each other. Neutron radiation has been reported in cold fusion experiments at very low levels using different kinds of detectors, but levels were too low, close to background, and found too infrequently to provide useful information about possible nuclear processes. Excess heat and energy production An excess heat observation is based on an energy balance. Various sources of energy input and output are continuously measured. Under normal conditions, the energy input can be matched to the energy output to within experimental error. In experiments such as those run by Fleischmann and Pons, an electrolysis cell operating steadily at one temperature transitions to operating at a higher temperature with no increase in applied current. If the higher temperatures were real, and not an experimental artifact, the energy balance would show an unaccounted term. In the Fleischmann and Pons experiments, the rate of inferred excess heat generation was in the range of 10–20% of total input, though this could not be reliably replicated by most researchers. Researcher Nathan Lewis discovered that the excess heat in Fleischmann and Pons's original paper was not measured, but estimated from measurements that didn't have any excess heat. Unable to produce excess heat or neutrons, and with positive experiments being plagued by errors and giving disparate results, most researchers declared that heat production was not a real effect and ceased working on the experiments. In 1993, after their original report, Fleischmann reported "heat-after-death" experiments—where excess heat was measured after the electric current supplied to the electrolytic cell was turned off. This type of report has also become part of subsequent cold fusion claims. 
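The energy-balance bookkeeping described above can be illustrated with a minimal sketch. This is not Fleischmann and Pons' actual calorimetric model; it only shows, for an open cell, how an "excess" term is inferred once the electrical input and the enthalpy carried off by the evolved gases are accounted for. The function name, the example numbers, and the ~1.54 V thermoneutral voltage assumed for heavy water electrolysis are illustrative assumptions, and a real analysis must also model calibration, heat losses, and possible recombination.

```python
def apparent_excess_power(cell_voltage_v, current_a, measured_heat_w,
                          thermoneutral_v=1.54):
    """Rough open-cell energy balance (illustrative only).

    electrical_in : power delivered to the cell, V * I
    gas_term      : power carried away by the evolved D2/O2 gas,
                    I * E_thermoneutral (assumes 100% electrolysis
                    efficiency and no recombination inside the cell)
    expected_heat : what a cell with no extra heat source should dissipate
    """
    electrical_in = cell_voltage_v * current_a
    gas_term = current_a * thermoneutral_v
    expected_heat = electrical_in - gas_term
    return measured_heat_w - expected_heat

# Illustrative numbers only: 5.0 V at 0.5 A with 2.1 W of measured heat
# gives expected_heat = 2.5 - 0.77 = 1.73 W, i.e. ~0.37 W of apparent excess.
print(apparent_excess_power(5.0, 0.5, 2.1))
```

If some of the hydrogen and oxygen recombine inside the cell, the gas term shrinks and part of the apparent "excess" disappears, which is the calorimetric objection discussed in the criticism sections below.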
Helium, heavy elements, and neutrons Known instances of nuclear reactions, aside from producing energy, also produce nucleons and particles on readily observable ballistic trajectories. In support of their claim that nuclear reactions took place in their electrolytic cells, Fleischmann and Pons reported a neutron flux of 4,000 neutrons per second, as well as detection of tritium. The classical branching ratio for previously known fusion reactions that produce tritium would predict, with 1 watt of power, the production of 10¹² neutrons per second, levels that would have been fatal to the researchers. In 2009, Mosier-Boss et al. reported what they called the first scientific report of highly energetic neutrons, using CR-39 plastic radiation detectors, but the claims cannot be validated without a quantitative analysis of neutrons. Several medium and heavy elements like calcium, titanium, chromium, manganese, iron, cobalt, copper and zinc have been reported as detected by several researchers, like Tadahiko Mizuno or George Miley. The report presented to the United States Department of Energy (DOE) in 2004 indicated that deuterium-loaded foils could be used to detect fusion reaction products and, although the reviewers found the evidence presented to them inconclusive, they indicated that those experiments did not use state-of-the-art techniques. In response to doubts about the lack of nuclear products, cold fusion researchers have tried to capture and measure nuclear products correlated with excess heat. Considerable attention has been given to measuring ⁴He production. However, the reported levels are very near to background, so contamination by trace amounts of helium normally present in the air cannot be ruled out. In the report presented to the DOE in 2004, the reviewers' opinion was divided on the evidence for ⁴He, with the most negative reviews concluding that although the amounts detected were above background levels, they were very close to them and therefore could be caused by contamination from air. One of the main criticisms of cold fusion was that deuteron-deuteron fusion into helium was expected to result in the production of gamma rays—which were not observed in the original experiment, and were not observed in subsequent cold fusion experiments either. Cold fusion researchers have since claimed to find X-rays, helium, neutrons and nuclear transmutations. Some researchers also claim to have found them using only light water and nickel cathodes. The 2004 DOE panel expressed concerns about the poor quality of the theoretical framework cold fusion proponents presented to account for the lack of gamma rays. Proposed mechanisms Researchers in the field do not agree on a theory for cold fusion. One proposal considers that hydrogen and its isotopes can be absorbed in certain solids, including palladium hydride, at high densities. This creates a high partial pressure, reducing the average separation of hydrogen isotopes. However, the reduction in separation is not enough, by a factor of ten, to create the fusion rates claimed in the original experiment. It was also proposed that a higher density of hydrogen inside the palladium and a lower potential barrier could raise the possibility of fusion at lower temperatures than expected from a simple application of Coulomb's law. 
Electron screening of the positive hydrogen nuclei by the negative electrons in the palladium lattice was suggested to the 2004 DOE commission, but the panel found the theoretical explanations not convincing and inconsistent with current physics theories. Criticism Criticism of cold fusion claims generally takes one of two forms: either pointing out the theoretical implausibility that fusion reactions have occurred in electrolysis setups or criticizing the excess heat measurements as being spurious, erroneous, or due to poor methodology or controls. There are several reasons why known fusion reactions are an unlikely explanation for the excess heat and associated cold fusion claims. Repulsion forces Because nuclei are all positively charged, they strongly repel one another. Normally, in the absence of a catalyst such as a muon, very high kinetic energies are required to overcome this electrostatic repulsion. Extrapolating from known fusion rates, the rate for uncatalyzed fusion at room-temperature energy would be 50 orders of magnitude lower than needed to account for the reported excess heat. In muon-catalyzed fusion there are more fusions because the presence of the muon causes deuterium nuclei to be 207 times closer than in ordinary deuterium gas. But deuterium nuclei inside a palladium lattice are further apart than in deuterium gas, and there should be fewer fusion reactions, not more. Paneth and Peters in the 1920s already knew that palladium can absorb up to 900 times its own volume of hydrogen gas, storing it at several thousand times atmospheric pressure. This led them to believe that they could increase the nuclear fusion rate by simply loading palladium rods with hydrogen gas. Tandberg then tried the same experiment but used electrolysis to make palladium absorb more deuterium and force the deuterium further together inside the rods, thus anticipating the main elements of Fleischmann and Pons' experiment. They all hoped that pairs of hydrogen nuclei would fuse together to form helium, which at the time was needed in Germany to fill zeppelins, but no evidence of helium or of increased fusion rate was ever found. This was also the belief of geologist Palmer, who convinced Steven Jones that the helium-3 occurring naturally in Earth perhaps came from fusion involving hydrogen isotopes inside catalysts like nickel and palladium. This led their team in 1986 to independently make the same experimental setup as Fleischmann and Pons (a palladium cathode submerged in heavy water, absorbing deuterium via electrolysis). Fleischmann and Pons had much the same belief, but they calculated the pressure to be 10²⁷ atmospheres, when cold fusion experiments achieve a loading ratio of only one to one, which corresponds to only between 10,000 and 20,000 atmospheres. John R. Huizenga says they had misinterpreted the Nernst equation, leading them to believe that there was enough pressure to bring deuterons so close to each other that there would be spontaneous fusions. Lack of expected reaction products Conventional deuteron fusion is a two-step process, in which an unstable high-energy intermediary is formed: ²H + ²H → ⁴He* + 24 MeV Experiments have shown only three decay pathways for this excited-state nucleus, with the branching ratio showing the probability that any given intermediate follows a particular pathway. 
The products formed via these decay pathways are: ⁴He* → n + ³He + 3.3 MeV (ratio = 50%); ⁴He* → p + ³H + 4.0 MeV (ratio = 50%); ⁴He* → ⁴He + γ + 24 MeV (ratio = 10⁻⁶). Only about one in a million of the intermediaries take the third pathway, making its products very rare compared to the other paths. This result is consistent with the predictions of the Bohr model. If 1 watt (6.242 × 10¹⁸ eV/s) were produced from ~2.2575 × 10¹² deuteron fusions per second, with the known branching ratios, the resulting neutrons and tritium (³H) would be easily measured. Some researchers reported detecting ⁴He but without the expected neutron or tritium production; such a result would require branching ratios strongly favouring the third pathway, with the actual rates of the first two pathways lower by at least five orders of magnitude than observations from other experiments, directly contradicting both theoretically predicted and observed branching probabilities. Those reports of ⁴He production did not include detection of gamma rays, which would require the third pathway to have been changed somehow so that gamma rays are no longer emitted. The known rate of the decay process together with the inter-atomic spacing in a metallic crystal makes heat transfer of the 24 MeV excess energy into the host metal lattice prior to the intermediary's decay inexplicable by conventional understandings of momentum and energy transfer, and even then there would be measurable levels of radiation. Also, experiments indicate that the ratios of deuterium fusion remain constant at different energies. In general, pressure and chemical environment cause only small changes to fusion ratios. An early explanation invoked the Oppenheimer–Phillips process at low energies, but its magnitude was too small to explain the altered ratios. Setup of experiments Cold fusion setups utilize an input power source (to ostensibly provide activation energy), a platinum group electrode, a deuterium or hydrogen source, a calorimeter, and, at times, detectors to look for byproducts such as helium or neutrons. Critics have variously taken issue with each of these aspects and have asserted that there has not yet been a consistent reproduction of claimed cold fusion results in either energy output or byproducts. Some cold fusion researchers who claim that they can consistently measure an excess heat effect have argued that the apparent lack of reproducibility might be attributable to a lack of quality control in the electrode metal or the amount of hydrogen or deuterium loaded in the system. Critics have further taken issue with what they describe as mistakes or errors of interpretation that cold fusion researchers have made in calorimetry analyses and energy budgets. Reproducibility In 1989, after Fleischmann and Pons had made their claims, many research groups tried to reproduce the Fleischmann-Pons experiment, without success. A few other research groups, however, reported successful reproductions of cold fusion during this time. In July 1989, an Indian group from the Bhabha Atomic Research Centre (P. K. Iyengar and M. Srinivasan) and in October 1989, John Bockris' group from Texas A&M University reported on the creation of tritium. In December 1990, professor Richard Oriani of the University of Minnesota reported excess heat. Groups that did report successes found that some of their cells were producing the effect, while other cells that were built exactly the same and used the same materials were not producing the effect. 
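Returning briefly to the branching-ratio argument above, the order of magnitude of the "missing" neutrons can be recomputed from the figures quoted there (1 watt expressed in eV/s, the two dominant pathways at roughly 50% each, and their Q-values). This is only a hedged back-of-envelope sketch; the numbers come from the passage above and the result is an order-of-magnitude estimate, not a precise nuclear-physics calculation.

```python
EV_PER_WATT_SECOND = 6.242e18        # 1 W expressed in eV/s, as quoted above

# Q-values and branching fractions for the two dominant decay pathways
Q_NEUTRON_BRANCH_EV = 3.3e6          # n + 3He branch
Q_PROTON_BRANCH_EV = 4.0e6           # p + 3H (tritium) branch
BRANCH_FRACTION = 0.5                # ~50% each; the gamma branch is negligible

mean_energy_per_fusion = (BRANCH_FRACTION * Q_NEUTRON_BRANCH_EV
                          + BRANCH_FRACTION * Q_PROTON_BRANCH_EV)  # ~3.65 MeV

fusions_per_second = EV_PER_WATT_SECOND / mean_energy_per_fusion   # ~1.7e12
neutrons_per_second = BRANCH_FRACTION * fusions_per_second         # ~8.6e11

print(f"{fusions_per_second:.2e} fusions/s, {neutrons_per_second:.2e} neutrons/s")
```

The result, a little under 10¹² neutrons per second per watt, matches the order-of-magnitude figure quoted in the "Helium, heavy elements, and neutrons" subsection, and underlines why the absence of such a flux was taken as strong evidence against a conventional fusion explanation.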
Researchers that continued to work on the topic have claimed that over the years many successful replications have been made, but still have problems getting reliable replications. Reproducibility is one of the main principles of the scientific method, and its lack led most physicists to believe that the few positive reports could be attributed to experimental error. The DOE 2004 report said among its conclusions and recommendations: Loading ratio Cold fusion researchers (McKubre since 1994, ENEA in 2011) have speculated that a cell that is loaded with a deuterium/palladium ratio lower than 100% (or 1:1) will not produce excess heat. Since most of the negative replications from 1989 to 1990 did not report their ratios, this has been proposed as an explanation for failed reproducibility. This loading ratio is hard to obtain, and some batches of palladium never reach it because the pressure causes cracks in the palladium, allowing the deuterium to escape. Fleischmann and Pons never disclosed the deuterium/palladium ratio achieved in their cells; there are no longer any batches of the palladium used by Fleischmann and Pons (because the supplier now uses a different manufacturing process), and researchers still have problems finding batches of palladium that achieve heat production reliably. Misinterpretation of data Some research groups initially reported that they had replicated the Fleischmann and Pons results but later retracted their reports and offered an alternative explanation for their original positive results. A group at Georgia Tech found problems with their neutron detector, and Texas A&M discovered bad wiring in their thermometers. These retractions, combined with negative results from some famous laboratories, led most scientists to conclude, as early as 1989, that no positive result should be attributed to cold fusion. Calorimetry errors The calculation of excess heat in electrochemical cells involves certain assumptions. Errors in these assumptions have been offered as non-nuclear explanations for excess heat. One assumption made by Fleischmann and Pons is that the efficiency of electrolysis is nearly 100%, meaning nearly all the electricity applied to the cell resulted in electrolysis of water, with negligible resistive heating and substantially all the electrolysis product leaving the cell unchanged. This assumption gives the amount of energy expended converting liquid D2O into gaseous D2 and O2. The efficiency of electrolysis is less than one if hydrogen and oxygen recombine to a significant extent within the calorimeter. Several researchers have described potential mechanisms by which this process could occur and thereby account for excess heat in electrolysis experiments. Another assumption is that heat loss from the calorimeter maintains the same relationship with measured temperature as found when calibrating the calorimeter. This assumption ceases to be accurate if the temperature distribution within the cell becomes significantly altered from the condition under which calibration measurements were made. This can happen, for example, if fluid circulation within the cell becomes significantly altered. Recombination of hydrogen and oxygen within the calorimeter would also alter the heat distribution and invalidate the calibration. Publications The ISI identified cold fusion as the scientific topic with the largest number of published papers in 1989, of all scientific disciplines. 
The Nobel Laureate Julian Schwinger declared himself a supporter of cold fusion in the fall of 1989, after much of the response to the initial reports had turned negative. He tried to publish his theoretical paper "Cold Fusion: A Hypothesis" in Physical Review Letters, but the peer reviewers rejected it so harshly that he felt deeply insulted, and he resigned from the American Physical Society (publisher of PRL) in protest. The number of papers sharply declined after 1990 because of two simultaneous phenomena: first, scientists abandoned the field; second, journal editors declined to review new papers. Consequently, cold fusion fell off the ISI charts. Researchers who got negative results turned their backs on the field; those who continued to publish were simply ignored. A 1993 paper in Physics Letters A was the last paper published by Fleischmann, and "one of the last reports [by Fleischmann] to be formally challenged on technical grounds by a cold fusion skeptic." The Journal of Fusion Technology (FT) established a permanent feature in 1990 for cold fusion papers, publishing over a dozen papers per year and giving a mainstream outlet for cold fusion researchers. When editor-in-chief George H. Miley retired in 2001, the journal stopped accepting new cold fusion papers. This has been cited as an example of the importance of sympathetic influential individuals to the publication of cold fusion papers in certain journals. The decline of publications in cold fusion has been described as a "failed information epidemic". The sudden surge of supporters until roughly 50% of scientists support the theory, followed by a decline until there is only a very small number of supporters, has been described as a characteristic of pathological science. The lack of a shared set of unifying concepts and techniques has prevented the creation of a dense network of collaboration in the field; researchers perform efforts in their own and in disparate directions, making the transition to "normal" science more difficult. Cold fusion reports continued to be published in a few journals like Journal of Electroanalytical Chemistry and Il Nuovo Cimento. Some papers also appeared in Journal of Physical Chemistry, Physics Letters A, International Journal of Hydrogen Energy, and a number of Japanese and Russian journals of physics, chemistry, and engineering. Since 2005, Naturwissenschaften has published cold fusion papers; in 2009, the journal named a cold fusion researcher to its editorial board. In 2015 the Indian multidisciplinary journal Current Science published a special section devoted entirely to cold fusion related papers. In the 1990s, the groups that continued to research cold fusion and their supporters established (non-peer-reviewed) periodicals such as Fusion Facts, Cold Fusion Magazine, Infinite Energy Magazine and New Energy Times to cover developments in cold fusion and other fringe claims in energy production that were ignored in other venues. The internet has also become a major means of communication and self-publication for CF researchers. Conferences Cold fusion researchers were for many years unable to get papers accepted at scientific meetings, prompting the creation of their own conferences. The International Conference on Cold Fusion (ICCF) was first held in 1990 and has met every 12 to 18 months since. 
Attendees at some of the early conferences were described as offering no criticism to papers and presentations for fear of giving ammunition to external critics, thus allowing the proliferation of crackpots and hampering the conduct of serious science. Critics and skeptics stopped attending these conferences, with the notable exception of Douglas Morrison, who died in 2001. With the founding in 2004 of the International Society for Condensed Matter Nuclear Science (ISCMNS), the conference was renamed the International Conference on Condensed Matter Nuclear Science—for reasons that are detailed in the subsequent research section above—but reverted to the old name in 2008. Cold fusion research is often referenced by proponents as "low-energy nuclear reactions", or LENR, but according to sociologist Bart Simon the "cold fusion" label continues to serve a social function in creating a collective identity for the field. Since 2006, the American Physical Society (APS) has included cold fusion sessions at their semiannual meetings, clarifying that this does not imply a softening of skepticism. Since 2007, the American Chemical Society (ACS) meetings also include "invited symposium(s)" on cold fusion. An ACS program chair, Gopal Coimbatore, said that without a proper forum the matter would never be discussed and, "with the world facing an energy crisis, it is worth exploring all possibilities." On 22–25 March 2009, the American Chemical Society meeting included a four-day symposium in conjunction with the 20th anniversary of the announcement of cold fusion. Researchers working at the U.S. Navy's Space and Naval Warfare Systems Center (SPAWAR) reported detection of energetic neutrons using a heavy water electrolysis setup and a CR-39 detector, a result previously published in Naturwissenschaften. The authors claim that these neutrons are indicative of nuclear reactions. Without quantitative analysis of the number, energy, and timing of the neutrons and exclusion of other potential sources, this interpretation is unlikely to find acceptance by the wider scientific community. Patents Although details have not surfaced, it appears that the University of Utah forced the 23 March 1989 Fleischmann and Pons announcement to establish priority over the discovery and its patents before the joint publication with Jones. The Massachusetts Institute of Technology (MIT) announced on 12 April 1989 that it had applied for its own patents based on theoretical work of one of its researchers, Peter L. Hagelstein, who had been sending papers to journals from 5 to 12 April. An MIT graduate student applied for a patent but was reportedly rejected by the USPTO in part by the citation of the "negative" MIT Plasma Fusion Center's cold fusion experiment of 1989. On 2 December 1993 the University of Utah licensed all its cold fusion patents to ENECO, a new company created to profit from cold fusion discoveries, and in March 1998 it said that it would no longer defend its patents. The U.S. Patent and Trademark Office (USPTO) now rejects patents claiming cold fusion. Esther Kepplinger, the deputy commissioner of patents in 2004, said that this was done using the same argument as with perpetual motion machines: that they do not work. Patent applications are required to show that the invention is "useful", and this utility is dependent on the invention's ability to function. 
In general USPTO rejections on the sole grounds of the invention's being "inoperative" are rare, since such rejections need to demonstrate "proof of total incapacity", and cases where those rejections are upheld in a Federal Court are even rarer: nevertheless, in 2000, a rejection of a cold fusion patent was appealed in a Federal Court and it was upheld, in part on the grounds that the inventor was unable to establish the utility of the invention. A U.S. patent might still be granted when given a different name to disassociate it from cold fusion, though this strategy has had little success in the US: the same claims that need to be patented can identify it with cold fusion, and most of these patents cannot avoid mentioning Fleischmann and Pons' research due to legal constraints, thus alerting the patent reviewer that it is a cold-fusion-related patent. David Voss said in 1999 that some patents that closely resemble cold fusion processes, and that use materials used in cold fusion, have been granted by the USPTO. The inventor of three such patents had his applications initially rejected when they were reviewed by experts in nuclear science; but then he rewrote the patents to focus more on the electrochemical parts so they would be reviewed instead by experts in electrochemistry, who approved them. When asked about the resemblance to cold fusion, the patent holder said that it used nuclear processes involving "new nuclear physics" unrelated to cold fusion. Melvin Miles was granted in 2004 a patent for a cold fusion device, and in 2007 he described his efforts to remove all instances of "cold fusion" from the patent description to avoid having it rejected outright. At least one patent related to cold fusion has been granted by the European Patent Office. A patent only legally prevents others from using or benefiting from one's invention. However, the general public perceives a patent as a stamp of approval, and a holder of three cold fusion patents said the patents were very valuable and had helped in getting investments. Cultural references A 1990 Michael Winner film Bullseye!, starring Michael Caine and Roger Moore, referenced the Fleischmann and Pons experiment. The film – a comedy – concerned conmen trying to steal scientists' purported findings. However, the film had a poor reception, described as "appallingly unfunny". In Undead Science, sociologist Bart Simon gives some examples of cold fusion in popular culture, saying that some scientists use cold fusion as a synonym for outrageous claims made with no supporting proof, and courses of ethics in science give it as an example of pathological science. It has appeared as a joke in Murphy Brown and The Simpsons. It was adopted as a software product name Adobe ColdFusion and a brand of protein bars (Cold Fusion Foods). It has also appeared in advertising as a synonym for impossible science, for example a 1995 advertisement for Pepsi Max. The plot of The Saint, a 1997 action-adventure film, parallels the story of Fleischmann and Pons, although with a different ending. In Undead Science, Simon posits that film might have affected the public perception of cold fusion, pushing it further into the science fiction realm. Similarly, the tenth episode of 2000 science fiction TV drama Life Force ("Paradise Island") is also based around cold fusion, specifically the efforts of eccentric scientist Hepzibah McKinley (Amanda Walker), who is convinced she has perfected it based on her father's incomplete research into the subject. 
The episode explores its potential benefits and viability within the ongoing post-apocalyptic global warming scenario of the series. In the 2023 video game Atomic Heart, cold fusion is responsible for nearly all of the technological advances.
Physical sciences
Nuclear physics
Physics
7466
https://en.wikipedia.org/wiki/Coal%20tar
Coal tar
Coal tar is a thick dark liquid which is a by-product of the production of coke and coal gas from coal. It is a type of creosote. It has both medical and industrial uses. Medicinally it is a topical medication applied to skin to treat psoriasis and seborrheic dermatitis (dandruff). It may be used in combination with ultraviolet light therapy. Industrially it is a railroad tie preservative and used in the surfacing of roads. Coal tar was listed as a known human carcinogen in the first Report on Carcinogens from the U.S. Federal Government, issued in 1980. Coal tar was discovered circa 1665 and used for medical purposes as early as the 1800s. Circa 1850, the discovery that it could be used as the main raw material for the synthesis of dyes engendered an entire industry. It is on the World Health Organization's List of Essential Medicines. Coal tar is available as a generic medication and over the counter. Side effects include skin irritation, sun sensitivity, allergic reactions, and skin discoloration. It is unclear if use during pregnancy is safe for the baby and use during breastfeeding is not typically recommended. The exact mechanism of action is unknown. It is a complex mixture of phenols, polycyclic aromatic hydrocarbons (PAHs), and heterocyclic compounds. It demonstrates antifungal, anti-inflammatory, anti-itch, and antiparasitic properties. Composition Coal tar is produced through thermal destruction (pyrolysis) of coal. Its composition varies with the process and type of coal used – lignite, bituminous or anthracite. Coal tar is a mixture of approximately 10,000 chemicals, of which only about 50% have been identified. Most of the chemical compounds are polycyclic aromatic hydrocarbons (4-rings: chrysene, fluoranthene, pyrene, triphenylene, naphthacene, benzanthracene; 5-rings: picene, benzo[a]pyrene, benzo[e]pyrene, benzofluoranthenes, perylene; 6-rings: dibenzopyrenes, dibenzofluoranthenes, benzoperylenes; 7-rings: coronene), their methylated and polymethylated derivatives, mono- and polyhydroxylated derivatives, and heterocyclic compounds. Others: benzene, toluene, xylenes, cumenes, coumarone, indene, benzofuran, naphthalene and methyl-naphthalenes, acenaphthene, fluorene, phenol, cresols, pyridine, picolines, phenanthrene, carbazole, quinolines, fluoranthene. Many of these constituents are known carcinogens. Derivatives Various phenolic coal tar derivatives have analgesic (pain-killer) properties. These included acetanilide, phenacetin, and paracetamol (also known as acetaminophen). Paracetamol may be the only coal-tar derived analgesic still in use today. Industrial phenol is now usually synthesized from crude oil rather than coal tar. Coal tar derivatives are contra-indicated for people with the inherited red cell blood disorder glucose-6-phosphate dehydrogenase deficiency (G6PD deficiency), as they can cause oxidative stress leading to red blood cell breakdown. Mechanism of action The exact mechanism of action is unknown. Coal tar is a complex mixture of phenols, polycyclic aromatic hydrocarbons (PAHs), and heterocyclic compounds. It is a keratolytic agent, which reduces the growth rate of skin cells and softens the skin's keratin. Uses Medicinal Coal tar is on the World Health Organization's List of Essential Medicines, the most effective and safe medicines needed in a health system. Coal tar is generally available as a generic medication and over the counter. Coal tar is used in medicated shampoo, soap and ointment. 
It demonstrates antifungal, anti-inflammatory, anti-itch, and antiparasitic properties. It may be applied topically as a treatment for dandruff and psoriasis, and to kill and repel head lice. It may be used in combination with ultraviolet light therapy. Coal tar may be used in two forms: crude coal tar () or a coal tar solution () also known as liquor carbonis detergens (LCD). Named brands include Denorex, Balnetar, Psoriasin, Tegrin, T/Gel, and Neutar. When used in the extemporaneous preparation of topical medications, it is supplied in the form of coal tar topical solution USP, which consists of a 20% w/v solution of coal tar in alcohol, with an additional 5% w/v of polysorbate 80 USP; this must then be diluted in an ointment base, such as petrolatum. Construction Coal tar was a component of the first sealed roads. In its original development by Edgar Purnell Hooley, tarmac was tar covered with granite chips. Later the filler used was industrial slag. Today, petroleum derived binders and sealers are more commonly used. These sealers are used to extend the life and reduce maintenance cost associated with asphalt pavements, primarily in asphalt road paving, car parks and walkways. Coal tar is incorporated into some parking-lot sealcoat products used to protect the structural integrity of the underlying pavement. Sealcoat products that are coal-tar based typically contain 20 to 35 percent coal-tar pitch. Research shows it is used throughout the United States of America, however several areas have banned its use in sealcoat products, including the District of Columbia; the city of Austin, Texas; Dane County, Wisconsin; the state of Washington; and several municipalities in Minnesota and others. Industry In modern times, coal tar is mostly traded as fuel and an application for tar, such as roofing. The total value of the trade in coal tar is around US$20 billion each year. As a fuel. In the manufacture of paints, synthetic dyes (notably tartrazine/Yellow #5), and photographic materials. For heating or to fire boilers. Like most heavy oils, it must be heated before it will flow easily. As a source of carbon black. As a binder in manufacturing graphite; a considerable portion of the materials in "green blocks" is coke oven volatiles (COV). During the baking process of the green blocks as a part of commercial graphite production, most of the coal tar binders are vaporised and are generally burned in an incinerator to prevent release into the atmosphere, as COV and coal tar can be injurious to health. As a main component of the electrode paste used in electric arc furnaces. Coal tar pitch act as the binder for solid filler that can be either coke or calcined anthracite, forming electrode paste, also widely known as Söderberg electrode paste. As a feed stock for higher-value fractions, such as naphtha, creosote and pitch. In the coal gas era, companies distilled coal tar to separate these out, leading to the discovery of many industrial chemicals. Some British companies included: Bonnington Chemical Works British Tar Products Lancashire Tar Distillers Midland Tar Distillers Newton, Chambers & Company (owners of Izal brand disinfectant) Sadlers Chemicals Safety Side effects of coal tar products include skin irritation, sun sensitivity, allergic reactions, and skin discoloration. It is unclear if use during pregnancy is safe for the baby and use during breastfeeding is not typically recommended. 
According to the National Psoriasis Foundation, coal tar is a valuable, safe and inexpensive treatment option for millions of people with psoriasis and other scalp or skin conditions. According to the FDA, coal tar concentrations between 0.5% and 5% are considered safe and effective for psoriasis. Cancer Long-term, consistent exposure to coal tar likely increases the risk of non-melanoma skin cancers. Evidence is inconclusive whether medical coal tar, which does not remain on the skin for the long periods seen in occupational exposure, causes cancer, because there is insufficient data to make a judgment. While coal tar consistently causes cancer in cohorts of workers with chronic occupational exposure, animal models, and mechanistic studies, the data on short-term use as medicine in humans has so far failed to show any consistently significant increase in rates of cancer. Coal tar contains many polycyclic aromatic hydrocarbons, and it is believed that their metabolites bind to DNA, damaging it. The PAHs found in coal tar and air pollution induce immunosenescence and cytotoxicity in epidermal cells. It's possible that the skin can repair itself from this damage after short-term exposure to PAHs but not after long-term exposure. Long-term skin exposure to these compounds can produce "tar warts", which can progress to squamous cell carcinoma. Coal tar was one of the first chemical substances proven to cause cancer from occupational exposure, during research in 1775 on the cause of chimney sweeps' carcinoma. Modern studies have shown that working with coal tar pitch, such as during the paving of roads or when working on roofs, increases the risk of cancer. The International Agency for Research on Cancer lists coal tars as Group 1 carcinogens, meaning they directly cause cancer. The U.S. Department of Health and Human Services lists coal tars as known human carcinogens. In response to public health concerns regarding the carcinogenicity of PAHs some municipalities, such as the city of Milwaukee, have banned the use of common coal tar-based road and driveway sealants citing concerns of elevated PAH content in groundwater. Other Coal tar causes increased sensitivity to sunlight, so skin treated with topical coal tar preparations should be protected from sunlight. The residue from the distillation of high-temperature coal tar, primarily a complex mixture of three or more membered condensed ring aromatic hydrocarbons, was listed on 13 January 2010 as a substance of very high concern by the European Chemicals Agency. Regulation Exposure to coal tar pitch volatiles can occur in the workplace by breathing, skin contact, or eye contact. The Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit) to 0.2 mg/m3 benzene-soluble fraction over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.1 mg/m3 cyclohexane-extractable fraction over an 8-hour workday. At levels of 80 mg/m3, coal tar pitch volatiles are immediately dangerous to life and health. When used as a medication in the United States, coal tar preparations are considered over-the-counter drug pharmaceuticals and are subject to regulation by the Food and Drug Administration (FDA).
Physical sciences
Hydrocarbons
Chemistry
7480
https://en.wikipedia.org/wiki/Cross%20section%20%28physics%29
Cross section (physics)
In physics, the cross section is a measure of the probability that a specific process will take place in a collision of two particles. For example, the Rutherford cross-section is a measure of probability that an alpha particle will be deflected by a given angle during an interaction with an atomic nucleus. Cross section is typically denoted (sigma) and is expressed in units of area, more specifically in barns. In a way, it can be thought of as the size of the object that the excitation must hit in order for the process to occur, but more exactly, it is a parameter of a stochastic process. When two discrete particles interact in classical physics, their mutual cross section is the area transverse to their relative motion within which they must meet in order to scatter from each other. If the particles are hard inelastic spheres that interact only upon contact, their scattering cross section is related to their geometric size. If the particles interact through some action-at-a-distance force, such as electromagnetism or gravity, their scattering cross section is generally larger than their geometric size. When a cross section is specified as the differential limit of a function of some final-state variable, such as particle angle or energy, it is called a differential cross section (see detailed discussion below). When a cross section is integrated over all scattering angles (and possibly other variables), it is called a total cross section or integrated total cross section. For example, in Rayleigh scattering, the intensity scattered at the forward and backward angles is greater than the intensity scattered sideways, so the forward differential scattering cross section is greater than the perpendicular differential cross section, and by adding all of the infinitesimal cross sections over the whole range of angles with integral calculus, we can find the total cross section. Scattering cross sections may be defined in nuclear, atomic, and particle physics for collisions of accelerated beams of one type of particle with targets (either stationary or moving) of a second type of particle. The probability for any given reaction to occur is in proportion to its cross section. Thus, specifying the cross section for a given reaction is a proxy for stating the probability that a given scattering process will occur. The measured reaction rate of a given process depends strongly on experimental variables such as the density of the target material, the intensity of the beam, the detection efficiency of the apparatus, or the angle setting of the detection apparatus. However, these quantities can be factored away, allowing measurement of the underlying two-particle collisional cross section. Differential and total scattering cross sections are among the most important measurable quantities in nuclear, atomic, and particle physics. With light scattering off of a particle, the cross section specifies the amount of optical power scattered from light of a given irradiance (power per area). Although the cross section has the same units as area, the cross section may not necessarily correspond to the actual physical size of the target given by other forms of measurement. It is not uncommon for the actual cross-sectional area of a scattering object to be much larger or smaller than the cross section relative to some physical process. For example, plasmonic nanoparticles can have light scattering cross sections for particular frequencies that are much larger than their actual cross-sectional areas. 
Collision among gas particles In a gas of finite-sized particles there are collisions among particles that depend on their cross-sectional size. The average distance that a particle travels between collisions depends on the density of gas particles. These quantities are related by where is the cross section of a two-particle collision (SI unit: m2), is the mean free path between collisions (SI unit: m), is the number density of the target particles (SI unit: m−3). If the particles in the gas can be treated as hard spheres of radius that interact by direct contact, as illustrated in Figure 1, then the effective cross section for the collision of a pair is If the particles in the gas interact by a force with a larger range than their physical size, then the cross section is a larger effective area that may depend on a variety of variables such as the energy of the particles. Cross sections can be computed for atomic collisions but also are used in the subatomic realm. For example, in nuclear physics a "gas" of low-energy neutrons collides with nuclei in a reactor or other nuclear device, with a cross section that is energy-dependent and hence also with well-defined mean free path between collisions. Attenuation of a beam of particles If a beam of particles enters a thin layer of material of thickness , the flux of the beam will decrease by according to where is the total cross section of all events, including scattering, absorption, or transformation to another species. The volumetric number density of scattering centers is designated by . Solving this equation exhibits the exponential attenuation of the beam intensity: where is the initial flux, and is the total thickness of the material. For light, this is called the Beer–Lambert law. Differential cross section Consider a classical measurement where a single particle is scattered off a single stationary target particle. Conventionally, a spherical coordinate system is used, with the target placed at the origin and the axis of this coordinate system aligned with the incident beam. The angle is the scattering angle, measured between the incident beam and the scattered beam, and the is the azimuthal angle. The impact parameter is the perpendicular offset of the trajectory of the incoming particle, and the outgoing particle emerges at an angle . For a given interaction (coulombic, magnetic, gravitational, contact, etc.), the impact parameter and the scattering angle have a definite one-to-one functional dependence on each other. Generally the impact parameter can neither be controlled nor measured from event to event and is assumed to take all possible values when averaging over many scattering events. The differential size of the cross section is the area element in the plane of the impact parameter, i.e. . The differential angular range of the scattered particle at angle is the solid angle element . The differential cross section is the quotient of these quantities, . It is a function of the scattering angle (and therefore also the impact parameter), plus other observables such as the momentum of the incoming particle. The differential cross section is always taken to be positive, even though larger impact parameters generally produce less deflection. In cylindrically symmetric situations (about the beam axis), the azimuthal angle is not changed by the scattering process, and the differential cross section can be written as . 
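The relations referred to in this passage can be written out explicitly. The following is a sketch in conventional notation (the symbols ℓ, n, σ, Φ, b and θ are the standard choices and stand for the quantities described above):

```latex
% Mean free path and hard-sphere cross section:
\[
  \ell = \frac{1}{n\,\sigma},
  \qquad
  \sigma_{\text{hard sphere}} = \pi\,(r_1 + r_2)^2 .
\]
% Attenuation of a beam crossing a thin layer, and its exponential solution:
\[
  \mathrm{d}\Phi = -\,\Phi\, n\, \sigma\, \mathrm{d}z
  \quad\Longrightarrow\quad
  \Phi(z) = \Phi_0\, e^{-n \sigma z} .
\]
% Differential cross section in terms of the impact parameter b and scattering angle \theta:
\[
  \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}(\theta)
  = \frac{b}{\sin\theta}\,\left|\frac{\mathrm{d}b}{\mathrm{d}\theta}\right| .
\]
```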
In situations where the scattering process is not azimuthally symmetric, such as when the beam or target particles possess magnetic moments oriented perpendicular to the beam axis, the differential cross section must also be expressed as a function of the azimuthal angle. For scattering of particles of incident flux off a stationary target consisting of many particles, the differential cross section at an angle is related to the flux of scattered particle detection in particles per unit time by Here is the finite angular size of the detector (SI unit: sr), is the number density of the target particles (SI unit: m−3), and is the thickness of the stationary target (SI unit: m). This formula assumes that the target is thin enough that each beam particle will interact with at most one target particle. The total cross section may be recovered by integrating the differential cross section over the full solid angle ( steradians): It is common to omit the "differential" qualifier when the type of cross section can be inferred from context. In this case, may be referred to as the integral cross section or total cross section. The latter term may be confusing in contexts where multiple events are involved, since "total" can also refer to the sum of cross sections over all events. The differential cross section is extremely useful quantity in many fields of physics, as measuring it can reveal a great amount of information about the internal structure of the target particles. For example, the differential cross section of Rutherford scattering provided strong evidence for the existence of the atomic nucleus. Instead of the solid angle, the momentum transfer may be used as the independent variable of differential cross sections. Differential cross sections in inelastic scattering contain resonance peaks that indicate the creation of metastable states and contain information about their energy and lifetime. Quantum scattering In the time-independent formalism of quantum scattering, the initial wave function (before scattering) is taken to be a plane wave with definite momentum : where and are the relative coordinates between the projectile and the target. The arrow indicates that this only describes the asymptotic behavior of the wave function when the projectile and target are too far apart for the interaction to have any effect. After scattering takes place it is expected that the wave function takes on the following asymptotic form: where is some function of the angular coordinates known as the scattering amplitude. This general form is valid for any short-ranged, energy-conserving interaction. It is not true for long-ranged interactions, so there are additional complications when dealing with electromagnetic interactions. The full wave function of the system behaves asymptotically as the sum The differential cross section is related to the scattering amplitude: This has the simple interpretation as the probability density for finding the scattered projectile at a given angle. A cross section is therefore a measure of the effective surface area seen by the impinging particles, and as such is expressed in units of area. The cross section of two particles (i.e. observed when the two particles are colliding with each other) is a measure of the interaction event between the two particles. 
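In standard notation, the relations described above read as follows (a sketch; F denotes flux, n the target number density, t the target thickness, ΔΩ the detector solid angle, and f the scattering amplitude):

```latex
% Rate of particles scattered into a detector at angle (\theta,\varphi):
\[
  F_{\text{out}}(\theta,\varphi)
  = F_{\text{inc}}\; n\, t\;
    \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}(\theta,\varphi)\; \Delta\Omega .
\]
% Total cross section as the integral over the full solid angle:
\[
  \sigma
  = \oint_{4\pi} \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\,\mathrm{d}\Omega
  = \int_{0}^{2\pi}\!\!\int_{0}^{\pi}
      \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\,
      \sin\theta \,\mathrm{d}\theta\, \mathrm{d}\varphi .
\]
% Asymptotic wave function in quantum scattering and its link to the cross section:
\[
  \psi(\mathbf r)\;\xrightarrow{\;r\to\infty\;}\;
  e^{i\mathbf k\cdot\mathbf r} + f(\theta,\varphi)\,\frac{e^{ikr}}{r},
  \qquad
  \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega} = \bigl|f(\theta,\varphi)\bigr|^{2} .
\]
```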
The cross section is proportional to the probability that an interaction will occur; for example in a simple scattering experiment the number of particles scattered per unit of time (current of scattered particles ) depends only on the number of incident particles per unit of time (current of incident particles ), the characteristics of target (for example the number of particles per unit of surface ), and the type of interaction. For we have Relation to the S-matrix If the reduced masses and momenta of the colliding system are , and , before and after the collision respectively, the differential cross section is given by where the on-shell matrix is defined by in terms of the S-matrix. Here is the Dirac delta function. The computation of the S-matrix is the main goal of the scattering theory. Units Although the SI unit of total cross sections is m2, a smaller unit is usually used in practice. In nuclear and particle physics, the conventional unit is the barn b, where 1 b = 10−28 m2 = 100 fm2. Smaller prefixed units such as mb and μb are also widely used. Correspondingly, the differential cross section can be measured in units such as mb/sr. When the scattered radiation is visible light, it is conventional to measure the path length in centimetres. To avoid the need for conversion factors, the scattering cross section is expressed in cm2, and the number concentration in cm−3. The measurement of the scattering of visible light is known as nephelometry, and is effective for particles of 2–50 μm in diameter: as such, it is widely used in meteorology and in the measurement of atmospheric pollution. The scattering of X-rays can also be described in terms of scattering cross sections, in which case the square ångström is a convenient unit: 1 Å2 = 10−20 m2 = = 108 b. The sum of the scattering, photoelectric, and pair-production cross-sections (in barns) is charted as the "atomic attenuation coefficient" (narrow-beam), in barns. Scattering of light For light, as in other settings, the scattering cross section for particles is generally different from the geometrical cross section of the particle, and it depends upon the wavelength of light and the permittivity, shape, and size of the particle. The total amount of scattering in a sparse medium is proportional to the product of the scattering cross section and the number of particles present. In the interaction of light with particles, many processes occur, each with their own cross sections, including absorption, scattering, and photoluminescence. The sum of the absorption and scattering cross sections is sometimes referred to as the attenuation or extinction cross section. The total extinction cross section is related to the attenuation of the light intensity through the Beer–Lambert law, which says that attenuation is proportional to particle concentration: where is the attenuation at a given wavelength , is the particle concentration as a number density, and is the path length. The absorbance of the radiation is the logarithm (decadic or, more usually, natural) of the reciprocal of the transmittance : Combining the scattering and absorption cross sections in this manner is often necessitated by the inability to distinguish them experimentally, and much research effort has been put into developing models that allow them to be distinguished, the Kubelka-Munk theory being one of the most important in this area. 
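A short numerical sketch ties the barn unit to the attenuation relation above. All input figures below are assumptions chosen for illustration, not data from the article:

```python
# Illustrative sketch: attenuation of a particle beam by a slab, with the
# total cross section quoted in barns. Numerical values are assumed.
import math

BARN = 1e-28           # m^2, by definition
N_A = 6.02214076e23    # Avogadro constant, 1/mol

sigma_total = 5.0 * BARN   # assumed total cross section per target nucleus
density = 2700.0           # kg/m^3, assumed target density
molar_mass = 0.027         # kg/mol, assumed molar mass
thickness = 0.01           # m, slab thickness

n = density / molar_mass * N_A                         # target number density, 1/m^3
transmission = math.exp(-n * sigma_total * thickness)  # Phi/Phi_0 = exp(-n*sigma*z)

print(f"number density       n = {n:.3e} m^-3")
print(f"mean free path       l = {1.0 / (n * sigma_total):.3e} m")
print(f"transmitted fraction   = {transmission:.3f}")
```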
Cross section and Mie theory Cross sections commonly calculated using Mie theory include efficiency coefficients for extinction , scattering , and Absorption cross sections. These are normalized by the geometrical cross sections of the particle as The cross section is defined by where is the energy flow through the surrounding surface, and is the intensity of the incident wave. For a plane wave the intensity is going to be , where is the impedance of the host medium. The main approach is based on the following. Firstly, we construct an imaginary sphere of radius (surface ) around the particle (the scatterer). The net rate of electromagnetic energy crosses the surface is where is the time averaged Poynting vector. If energy is absorbed within the sphere, otherwise energy is being created within the sphere. We will not consider this case here. If the host medium is non-absorbing, the energy must be absorbed by the particle. We decompose the total field into incident and scattered parts , and the same for the magnetic field . Thus, we can decompose into the three terms , where where , , and . All the field can be decomposed into the series of vector spherical harmonics (VSH). After that, all the integrals can be taken. In the case of a uniform sphere of radius , permittivity , and permeability , the problem has a precise solution. The scattering and extinction coefficients are Where . These are connected as Dipole approximation for the scattering cross section Let us assume that a particle supports only electric and magnetic dipole modes with polarizabilities and (here we use the notation of magnetic polarizability in the manner of Bekshaev et al. rather than the notation of Nieto-Vesperinas et al.) expressed through the Mie coefficients as Then the cross sections are given by and, finally, the electric and magnetic absorption cross sections are and For the case of a no-inside-gain particle, i.e. no energy is emitted by the particle internally (), we have a particular case of the Optical theorem Equality occurs for non-absorbing particles, i.e. for . Scattering of light on extended bodies In the context of scattering light on extended bodies, the scattering cross section, , describes the likelihood of light being scattered by a macroscopic particle. In general, the scattering cross section is different from the geometrical cross section of a particle, as it depends upon the wavelength of light and the permittivity in addition to the shape and size of the particle. The total amount of scattering in a sparse medium is determined by the product of the scattering cross section and the number of particles present. In terms of area, the total cross section () is the sum of the cross sections due to absorption, scattering, and luminescence: The total cross section is related to the absorbance of the light intensity through the Beer–Lambert law, which says that absorbance is proportional to concentration: , where is the absorbance at a given wavelength , is the concentration as a number density, and is the path length. The extinction or absorbance of the radiation is the logarithm (decadic or, more usually, natural) of the reciprocal of the transmittance : Relation to physical size There is no simple relationship between the scattering cross section and the physical size of the particles, as the scattering cross section depends on the wavelength of radiation used. 
This can be seen when looking at a halo surrounding the Moon on a decently foggy evening: Red light photons experience a larger cross sectional area of water droplets than photons of higher energy. The halo around the Moon thus has a perimeter of red light due to lower energy photons being scattering further from the center of the Moon. Photons from the rest of the visible spectrum are left within the center of the halo and perceived as white light. Meteorological range The scattering cross section is related to the meteorological range : The quantity is sometimes denoted , the scattering coefficient per unit length. Examples Elastic collision of two hard spheres The following equations apply to two hard spheres that undergo a perfectly elastic collision. Let and denote the radii of the scattering center and scattered sphere, respectively. The differential cross section is and the total cross section is In other words, the total scattering cross section is equal to the area of the circle (with radius ) within which the center of mass of the incoming sphere has to arrive for it to be deflected. Rutherford scattering In Rutherford scattering, an incident particle with charge and energy scatters off a fixed particle with charge . The differential cross section is where is the vacuum permittivity. The total cross section is infinite unless a cutoff for small scattering angles is applied. This is due to the long range of the Coulomb potential. Scattering from a 2D circular mirror The following example deals with a beam of light scattering off a circle with radius and a perfectly reflecting boundary. The beam consists of a uniform density of parallel rays, and the beam-circle interaction is modeled within the framework of geometric optics. Because the problem is genuinely two-dimensional, the cross section has unit of length (e.g., metre). Let be the angle between the light ray and the radius joining the reflection point of the ray with the center point of the mirror. Then the increase of the length element perpendicular to the beam is The reflection angle of this ray with respect to the incoming ray is , and the scattering angle is The differential relationship between incident and reflected intensity is The differential cross section is therefore () Its maximum at corresponds to backward scattering, and its minimum at corresponds to scattering from the edge of the circle directly forward. This expression confirms the intuitive expectations that the mirror circle acts like a diverging lens. The total cross section is equal to the diameter of the circle: Scattering from a 3D spherical mirror The result from the previous example can be used to solve the analogous problem in three dimensions, i.e., scattering from a perfectly reflecting sphere of radius . The plane perpendicular to the incoming light beam can be parameterized by cylindrical coordinates and . In any plane of the incoming and the reflected ray we can write (from the previous example): while the impact area element is In spherical coordinates, Together with the trigonometric identity we obtain The total cross section is
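The closed-form results quoted in these examples take the following standard forms (a sketch; the 2D mirror radius is written r and the 3D sphere radius a, since the symbols did not survive in the text above):

```latex
% Hard spheres (isotropic in the centre-of-mass frame):
\[
  \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega} = \frac{(r_1 + r_2)^2}{4},
  \qquad
  \sigma = \pi\,(r_1 + r_2)^2 .
\]
% Rutherford scattering of a charge q_1 with kinetic energy E off a fixed charge q_2:
\[
  \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}
  = \left(\frac{q_1 q_2}{16\pi\varepsilon_0 E}\right)^{\!2}
    \frac{1}{\sin^{4}(\theta/2)} .
\]
% Perfectly reflecting circular mirror (2D) and spherical mirror (3D):
\[
  \frac{\mathrm{d}\sigma}{\mathrm{d}\theta} = \frac{r}{2}\,\sin\frac{\theta}{2},
  \quad \sigma_{2\mathrm{D}} = 2r ;
  \qquad
  \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega} = \frac{a^{2}}{4},
  \quad \sigma_{3\mathrm{D}} = \pi a^{2} .
\]
```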
Physical sciences
Molecular physics
Physics
7489
https://en.wikipedia.org/wiki/Collation
Collation
Collation is the assembly of written information into a standard order. Many systems of collation are based on numerical order or alphabetical order, or extensions and combinations thereof. Collation is a fundamental element of most office filing systems, library catalogs, and reference books. Collation differs from classification in that the classes themselves are not necessarily ordered. However, even if the order of the classes is irrelevant, the identifiers of the classes may be members of an ordered set, allowing a sorting algorithm to arrange the items by class. Formally speaking, a collation method typically defines a total order on a set of possible identifiers, called sort keys, which consequently produces a total preorder on the set of items of information (items with the same identifier are not placed in any defined order). A collation algorithm such as the Unicode collation algorithm defines an order through the process of comparing two given character strings and deciding which should come before the other. When an order has been defined in this way, a sorting algorithm can be used to put a list of any number of items into that order. The main advantage of collation is that it makes it fast and easy for a user to find an element in the list, or to confirm that it is absent from the list. In automatic systems this can be done using a binary search algorithm or interpolation search; manual searching may be performed using a roughly similar procedure, though this will often be done unconsciously. Other advantages are that one can easily find the first or last elements on the list (most likely to be useful in the case of numerically sorted data), or elements in a given range (useful again in the case of numerical data, and also with alphabetically ordered data when one may be sure of only the first few letters of the sought item or items). Ordering Numerical and chronological Strings representing numbers may be sorted based on the values of the numbers that they represent. For example, "−4", "2.5", "10", "89", "30,000". Pure application of this method may provide only a partial ordering on the strings, since different strings can represent the same number (as with "2" and "2.0" or, when scientific notation is used, "2e3" and "2000"). A similar approach may be taken with strings representing dates or other items that can be ordered chronologically or in some other natural fashion. Alphabetical Alphabetical order is the basis for many systems of collation where items of information are identified by strings consisting principally of letters from an alphabet. The ordering of the strings relies on the existence of a standard ordering for the letters of the alphabet in question. (The system is not limited to alphabets in the strict technical sense; languages that use a syllabary or abugida, for example Cherokee, can use the same ordering principle provided there is a set ordering for the symbols used.) To decide which of two strings comes first in alphabetical order, initially their first letters are compared. The string whose first letter appears earlier in the alphabet comes first in alphabetical order. If the first letters are the same, then the second letters are compared, and so on, until the order is decided. (If one string runs out of letters to compare, then it is deemed to come first; for example, "cart" comes before "carthorse".) 
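A minimal illustration of the letter-by-letter rule (Python's default string comparison behaves this way for plain lowercase ASCII; the word list is invented for the example):

```python
# Letter-by-letter comparison: a string that runs out of letters first
# ("cart") sorts before its longer continuation ("carthorse").
words = ["carthorse", "cart", "carp", "carbon"]
print(sorted(words))   # ['carbon', 'carp', 'cart', 'carthorse']
```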
The result of arranging a set of strings in alphabetical order is that words with the same first letter are grouped together, and within such a group words with the same first two letters are grouped together, and so on. Capital letters are typically treated as equivalent to their corresponding lowercase letters. (For alternative treatments in computerized systems, see Automated collation, below.) Certain limitations, complications, and special conventions may apply when alphabetical order is used: When strings contain spaces or other word dividers, the decision must be taken whether to ignore these dividers or to treat them as symbols preceding all other letters of the alphabet. For example, if the first approach is taken then "car park" will come after "carbon" and "carp" (as it would if it were written "carpark"), whereas in the second approach "car park" will come before those two words. The first rule is used in many (but not all) dictionaries, the second in telephone directories (so that Wilson, Jim K appears with other people named Wilson, Jim and not after Wilson, Jimbo). Abbreviations may be treated as if they were spelt out in full. For example, names containing "St." (short for the English word Saint) are often ordered as if they were written out as "Saint". There is also a traditional convention in English that surnames beginning Mc and M are listed as if those prefixes were written Mac. Strings that represent personal names will often be listed by alphabetical order of surname, even if the given name comes first. For example, Juan Hernandes and Brian O'Leary should be sorted as "Hernandes, Juan" and "O'Leary, Brian" even if they are not written this way. Very common initial words, such as The in English, are often ignored for sorting purposes. So The Shining would be sorted as just "Shining" or "Shining, The". When some of the strings contain numerals (or other non-letter characters), various approaches are possible. Sometimes such characters are treated as if they came before or after all the letters of the alphabet. Another method is for numbers to be sorted alphabetically as they would be spelled: for example 1776 would be sorted as if spelled out "seventeen seventy-six", and as if spelled "vingt-quatre..." (French for "twenty-four"). When numerals or other symbols are used as special graphical forms of letters, as in 1337 for leet or Se7en for the movie title Seven, they may be sorted as if they were those letters. Languages have different conventions for treating modified letters and certain letter combinations. For example, in Spanish the letter ñ is treated as a basic letter following n, and the digraphs ch and ll were formerly (until 1994) treated as basic letters following c and l, although they are now alphabetized as two-letter combinations. A list of such conventions for various languages can be found at . In several languages the rules have changed over time, and so older dictionaries may use a different order than modern ones. Furthermore, collation may depend on use. For example, German dictionaries and telephone directories use different approaches. Root sorting Some Arabic dictionaries, such as Hans Wehr's bilingual A Dictionary of Modern Written Arabic, group and sort Arabic words by semitic root. For example, the words kitāba ( 'writing'), kitāb ( 'book'), kātib ( 'writer'), maktaba ( 'library'), maktab ( 'office'), maktūb ( 'fate,' or 'written'), are agglomerated under the triliteral root k-t-b (), which denotes 'writing'. Radical-and-stroke sorting
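The following is a toy sketch of how a few of the Western-alphabet conventions described above (case equivalence, ignoring word dividers, skipping a leading "The") can be expressed as a sort key. It is only a rough approximation of a real collation method such as the Unicode collation algorithm, and it does not address root or radical-and-stroke sorting; the helper name and word list are invented for the example:

```python
# Toy collation key: fold case, optionally ignore spaces/hyphens, and drop a
# leading "The " so that titles sort under their first significant word.
def collation_key(s: str, ignore_dividers: bool = True) -> str:
    s = s.casefold()
    if s.startswith("the "):
        s = s[4:]
    if ignore_dividers:
        s = s.replace(" ", "").replace("-", "")
    return s

titles = ["The Shining", "car park", "carbon", "carp"]
print(sorted(titles, key=collation_key))
# ['carbon', 'carp', 'car park', 'The Shining']
```

With dividers ignored, "car park" sorts as if written "carpark" and therefore falls after "carbon" and "carp", matching the first of the two divider conventions discussed above.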
Technology
Software development: General
null
7512
https://en.wikipedia.org/wiki/Concentration
Concentration
In chemistry, concentration is the abundance of a constituent divided by the total volume of a mixture. Several types of mathematical description can be distinguished: mass concentration, molar concentration, number concentration, and volume concentration. The concentration can refer to any kind of chemical mixture, but most frequently refers to solutes and solvents in solutions. The molar (amount) concentration has variants, such as normal concentration and osmotic concentration. Dilution is reduction of concentration, e.g. by adding solvent to a solution. The verb to concentrate means to increase concentration, the opposite of dilute. Etymology Concentration-, concentratio, action or an act of coming together at a single place, bringing to a common center, was used in post-classical Latin in 1550 or earlier, similar terms attested in Italian (1589), Spanish (1589), English (1606), French (1632). Qualitative description Often in informal, non-technical language, concentration is described in a qualitative way, through the use of adjectives such as "dilute" for solutions of relatively low concentration and "concentrated" for solutions of relatively high concentration. To concentrate a solution, one must add more solute (for example, alcohol), or reduce the amount of solvent (for example, water). By contrast, to dilute a solution, one must add more solvent, or reduce the amount of solute. Unless two substances are miscible, there exists a concentration at which no further solute will dissolve in a solution. At this point, the solution is said to be saturated. If additional solute is added to a saturated solution, it will not dissolve, except in certain circumstances, when supersaturation may occur. Instead, phase separation will occur, leading to coexisting phases, either completely separated or mixed as a suspension. The point of saturation depends on many variables, such as ambient temperature and the precise chemical nature of the solvent and solute. Concentrations are often called levels, reflecting the mental schema of levels on the vertical axis of a graph, which can be high or low (for example, "high serum levels of bilirubin" are concentrations of bilirubin in the blood serum that are greater than normal). Quantitative notation There are four quantities that describe concentration: Mass concentration The mass concentration is defined as the mass of a constituent divided by the volume of the mixture : The SI unit is kg/m3 (equal to g/L). Molar concentration The molar concentration is defined as the amount of a constituent (in moles) divided by the volume of the mixture : The SI unit is mol/m3. However, more commonly the unit mol/L (= mol/dm3) is used. Number concentration The number concentration is defined as the number of entities of a constituent in a mixture divided by the volume of the mixture : The SI unit is 1/m3. Volume concentration The volume concentration (not to be confused with volume fraction) is defined as the volume of a constituent divided by the volume of the mixture : Being dimensionless, it is expressed as a number, e.g., 0.18 or 18%. There seems to be no standard notation in the English literature. The letter used here is normative in German literature (see Volumenkonzentration). Related quantities Several other quantities can be used to describe the composition of a mixture. These should not be called concentrations. Normality Normality is defined as the molar concentration divided by an equivalence factor . 
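Written out explicitly, the four concentration measures and the normality defined above take the following standard forms (a sketch; the subscript i labels the constituent and V the mixture volume):

```latex
\[
  \rho_i = \frac{m_i}{V}, \qquad            % mass concentration
  c_i    = \frac{n_i}{V}, \qquad            % molar concentration
  C_i    = \frac{N_i}{V}, \qquad            % number concentration
  \sigma_i = \frac{V_i}{V},                 % volume concentration
\]
\[
  c_{\mathrm{eq}} = \frac{c_i}{f_{\mathrm{eq}}}   % normality (equivalent concentration)
\]
```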
Since the definition of the equivalence factor depends on context (which reaction is being studied), the International Union of Pure and Applied Chemistry and National Institute of Standards and Technology discourage the use of normality. Molality The molality of a solution is defined as the amount of a constituent $n_i$ (in moles) divided by the mass of the solvent $m_{\mathrm{solvent}}$ (not the mass of the solution): $b_i = n_i / m_{\mathrm{solvent}}$. The SI unit for molality is mol/kg. Mole fraction The mole fraction $x_i$ is defined as the amount of a constituent $n_i$ (in moles) divided by the total amount of all constituents in a mixture $n_{\mathrm{tot}}$: $x_i = n_i / n_{\mathrm{tot}}$. The SI unit is mol/mol. However, the deprecated parts-per notation is often used to describe small mole fractions. Mole ratio The mole ratio $r_i$ is defined as the amount of a constituent $n_i$ divided by the total amount of all other constituents in a mixture: $r_i = n_i / (n_{\mathrm{tot}} - n_i)$. If $n_i$ is much smaller than $n_{\mathrm{tot}}$, the mole ratio is almost identical to the mole fraction. The SI unit is mol/mol. However, the deprecated parts-per notation is often used to describe small mole ratios. Mass fraction The mass fraction $w_i$ is the fraction of one substance with mass $m_i$ to the mass of the total mixture $m_{\mathrm{tot}}$, defined as: $w_i = m_i / m_{\mathrm{tot}}$. The SI unit is kg/kg. However, the deprecated parts-per notation is often used to describe small mass fractions. Mass ratio The mass ratio $\zeta_i$ is defined as the mass of a constituent $m_i$ divided by the total mass of all other constituents in a mixture: $\zeta_i = m_i / (m_{\mathrm{tot}} - m_i)$. If $m_i$ is much smaller than $m_{\mathrm{tot}}$, the mass ratio is almost identical to the mass fraction. The SI unit is kg/kg. However, the deprecated parts-per notation is often used to describe small mass ratios. Dependence on volume and temperature Concentration depends on the variation of the volume of the solution with temperature, due mainly to thermal expansion. Table of concentrations and related quantities
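A short numerical sketch relating several of these quantities for an invented example (5.00 g of NaCl dissolved in 100.0 g of water, taken to give about 102 mL of solution; all figures are illustrative assumptions):

```python
# Worked example: one invented solution expressed in several concentration measures.
M_NACL = 58.44e-3     # kg/mol, molar mass of NaCl
M_H2O  = 18.015e-3    # kg/mol, molar mass of water

m_solute   = 5.00e-3   # kg of NaCl (assumed)
m_solvent  = 100.0e-3  # kg of water (assumed)
V_solution = 102.0e-6  # m^3, i.e. ~102 mL (assumed measured volume)

n_solute  = m_solute / M_NACL    # amount of NaCl, mol
n_solvent = m_solvent / M_H2O    # amount of water, mol

molar_concentration = n_solute / V_solution             # mol/m^3
molality            = n_solute / m_solvent              # mol/kg
mole_fraction       = n_solute / (n_solute + n_solvent)
mass_fraction       = m_solute / (m_solute + m_solvent)

print(f"molar concentration: {molar_concentration / 1000:.3f} mol/L")
print(f"molality:            {molality:.3f} mol/kg")
print(f"mole fraction:       {mole_fraction:.4f}")
print(f"mass fraction:       {mass_fraction:.4f}")
```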
Physical sciences
Mixture
Chemistry
7519
https://en.wikipedia.org/wiki/Convolution
Convolution
In mathematics (in particular, functional analysis), convolution is a mathematical operation on two functions ( and ) that produces a third function (). The term convolution refers to both the resulting function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The integral is evaluated for all values of shift, producing the convolution function. The choice of which function is reflected and shifted before the integral does not change the integral result (see commutativity). Graphically, it expresses how the 'shape' of one function is modified by the other. Some features of convolution are similar to cross-correlation: for real-valued functions, of a continuous or discrete variable, convolution differs from cross-correlation () only in that either or is reflected about the y-axis in convolution; thus it is a cross-correlation of and , or and . For complex-valued functions, the cross-correlation operator is the adjoint of the convolution operator. Convolution has applications that include probability, statistics, acoustics, spectroscopy, signal processing and image processing, geophysics, engineering, physics, computer vision and differential equations. The convolution can be defined for functions on Euclidean space and other groups (as algebraic structures). For example, periodic functions, such as the discrete-time Fourier transform, can be defined on a circle and convolved by periodic convolution. (See row 18 at .) A discrete convolution can be defined for functions on the set of integers. Generalizations of convolution have applications in the field of numerical analysis and numerical linear algebra, and in the design and implementation of finite impulse response filters in signal processing. Computing the inverse of the convolution operation is known as deconvolution. Definition The convolution of and is written , denoting the operator with the symbol . It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. As such, it is a particular kind of integral transform: An equivalent definition is (see commutativity): While the symbol is used above, it need not represent the time domain. At each , the convolution formula can be described as the area under the function weighted by the function shifted by the amount . As changes, the weighting function emphasizes different parts of the input function ; If is a positive value, then is equal to that slides or is shifted along the -axis toward the right (toward ) by the amount of , while if is a negative value, then is equal to that slides or is shifted toward the left (toward ) by the amount of . For functions , supported on only (i.e., zero for negative arguments), the integration limits can be truncated, resulting in: For the multi-dimensional formulation of convolution, see domain of definition (below). Notation A common engineering notational convention is: which has to be interpreted carefully to avoid confusion. For instance, is equivalent to , but is in fact equivalent to . Relations with other transforms Given two functions and with bilateral Laplace transforms (two-sided Laplace transform) and respectively, the convolution operation can be defined as the inverse Laplace transform of the product of and . More precisely, Let , then Note that is the bilateral Laplace transform of . 
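The defining integrals referred to above, in standard notation (a sketch):

```latex
% Convolution of f and g, with the equivalent form obtained by commutativity:
\[
  (f * g)(t)
  \;=\; \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, \mathrm{d}\tau
  \;=\; \int_{-\infty}^{\infty} f(t - \tau)\, g(\tau)\, \mathrm{d}\tau .
\]
% For functions supported on [0, \infty) the limits truncate:
\[
  (f * g)(t) = \int_{0}^{t} f(\tau)\, g(t - \tau)\, \mathrm{d}\tau ,
  \qquad t \ge 0 .
\]
% Under the bilateral Laplace transform the convolution becomes a pointwise product:
\[
  \mathcal{L}\{f * g\}(s) = F(s)\, G(s) .
\]
```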
A similar derivation can be done using the unilateral Laplace transform (one-sided Laplace transform). The convolution operation also describes the output (in terms of the input) of an important class of operations known as linear time-invariant (LTI). See LTI system theory for a derivation of convolution as the result of LTI constraints. In terms of the Fourier transforms of the input and output of an LTI operation, no new frequency components are created. The existing ones are only modified (amplitude and/or phase). In other words, the output transform is the pointwise product of the input transform with a third transform (known as a transfer function). See Convolution theorem for a derivation of that property of convolution. Conversely, convolution can be derived as the inverse Fourier transform of the pointwise product of two Fourier transforms. Visual explanation Historical developments One of the earliest uses of the convolution integral appeared in D'Alembert's derivation of Taylor's theorem in Recherches sur différents points importants du système du monde, published in 1754. Also, an expression of the type: is used by Sylvestre François Lacroix on page 505 of his book entitled Treatise on differences and series, which is the last of 3 volumes of the encyclopedic series: , Chez Courcier, Paris, 1797–1800. Soon thereafter, convolution operations appear in the works of Pierre Simon Laplace, Jean-Baptiste Joseph Fourier, Siméon Denis Poisson, and others. The term itself did not come into wide use until the 1950s or 1960s. Prior to that it was sometimes known as Faltung (which means folding in German), composition product, superposition integral, and Carson's integral. Yet it appears as early as 1903, though the definition is rather unfamiliar in older uses. The operation: is a particular case of composition products considered by the Italian mathematician Vito Volterra in 1913. Circular convolution When a function is periodic, with period , then for functions, , such that exists, the convolution is also periodic and identical to: where is an arbitrary choice. The summation is called a periodic summation of the function . When is a periodic summation of another function, , then is known as a circular or cyclic convolution of and . And if the periodic summation above is replaced by , the operation is called a periodic convolution of and . Discrete convolution For complex-valued functions and defined on the set of integers, the discrete convolution of and is given by: or equivalently (see commutativity) by: The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. When the sequences are the coefficients of two polynomials, then the coefficients of the ordinary product of the two polynomials are the convolution of the original two sequences. This is known as the Cauchy product of the coefficients of the sequences. 
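A small numerical sketch of the discrete case (example data are arbitrary): the discrete convolution of two coefficient sequences is the Cauchy product, i.e. the coefficients of the product polynomial, and the circular convolution can be obtained either by wrapping the linear result or by multiplying discrete Fourier transforms, the route taken by the fast algorithms discussed below.

```python
# Discrete convolution as the Cauchy product of polynomial coefficients,
# and circular convolution computed two equivalent ways.
import numpy as np

a = np.array([1, 2, 3])        # coefficients of 1 + 2x + 3x^2
b = np.array([4, 5, 6])        # coefficients of 4 + 5x + 6x^2

# Linear (discrete) convolution = coefficients of the product polynomial.
print(np.convolve(a, b))       # [ 4 13 28 27 18]

# Circular convolution of two length-N sequences: wrap the linear result,
# or equivalently multiply the DFTs and invert.
N = 3
lin = np.convolve(a, b)
circ_wrap = lin[:N].copy()
circ_wrap[: len(lin) - N] += lin[N:]
circ_fft = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
print(circ_wrap)               # [31 31 28]
print(np.round(circ_fft))      # [31. 31. 28.]
```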
Thus when has finite support in the set (representing, for instance, a finite impulse response), a finite summation may be used: Circular discrete convolution When a function is periodic, with period then for functions, such that exists, the convolution is also periodic and identical to: The summation on is called a periodic summation of the function If is a periodic summation of another function, then is known as a circular convolution of and When the non-zero durations of both and are limited to the interval   reduces to these common forms: The notation for cyclic convolution denotes convolution over the cyclic group of integers modulo . Circular convolution arises most often in the context of fast convolution with a fast Fourier transform (FFT) algorithm. Fast convolution algorithms In many situations, discrete convolutions can be converted to circular convolutions so that fast transforms with a convolution property can be used to implement the computation. For example, convolution of digit sequences is the kernel operation in multiplication of multi-digit numbers, which can therefore be efficiently implemented with transform techniques (; ). requires arithmetic operations per output value and operations for outputs. That can be significantly reduced with any of several fast algorithms. Digital signal processing and other applications typically use fast convolution algorithms to reduce the cost of the convolution to O( log ) complexity. The most common fast convolution algorithms use fast Fourier transform (FFT) algorithms via the circular convolution theorem. Specifically, the circular convolution of two finite-length sequences is found by taking an FFT of each sequence, multiplying pointwise, and then performing an inverse FFT. Convolutions of the type defined above are then efficiently implemented using that technique in conjunction with zero-extension and/or discarding portions of the output. Other fast convolution algorithms, such as the Schönhage–Strassen algorithm or the Mersenne transform, use fast Fourier transforms in other rings. The Winograd method is used as an alternative to the FFT. It significantly speeds up 1D, 2D, and 3D convolution. If one sequence is much longer than the other, zero-extension of the shorter sequence and fast circular convolution is not the most computationally efficient method available. Instead, decomposing the longer sequence into blocks and convolving each block allows for faster algorithms such as the overlap–save method and overlap–add method. A hybrid convolution method that combines block and FIR algorithms allows for a zero input-output latency that is useful for real-time convolution computations. Domain of definition The convolution of two complex-valued functions on is itself a complex-valued function on , defined by: and is well-defined only if and decay sufficiently rapidly at infinity in order for the integral to exist. Conditions for the existence of the convolution may be tricky, since a blow-up in at infinity can be easily offset by sufficiently rapid decay in . The question of existence thus may involve different conditions on and : Compactly supported functions If and are compactly supported continuous functions, then their convolution exists, and is also compactly supported and continuous . More generally, if either function (say ) is compactly supported and the other is locally integrable, then the convolution is well-defined and continuous. 
Convolution of and is also well defined when both functions are locally square integrable on and supported on an interval of the form (or both supported on ). Integrable functions The convolution of and exists if and are both Lebesgue integrable functions in (), and in this case is also integrable . This is a consequence of Tonelli's theorem. This is also true for functions in , under the discrete convolution, or more generally for the convolution on any group. Likewise, if ()  and  ()  where ,  then  (),  and In the particular case , this shows that is a Banach algebra under the convolution (and equality of the two sides holds if and are non-negative almost everywhere). More generally, Young's inequality implies that the convolution is a continuous bilinear map between suitable spaces. Specifically, if satisfy: then so that the convolution is a continuous bilinear mapping from to . The Young inequality for convolution is also true in other contexts (circle group, convolution on ). The preceding inequality is not sharp on the real line: when , there exists a constant such that: The optimal value of was discovered in 1975 and independently in 1976, see Brascamp–Lieb inequality. A stronger estimate is true provided : where is the weak norm. Convolution also defines a bilinear continuous map for , owing to the weak Young inequality: Functions of rapid decay In addition to compactly supported functions and integrable functions, functions that have sufficiently rapid decay at infinity can also be convolved. An important feature of the convolution is that if f and g both decay rapidly, then f∗g also decays rapidly. In particular, if f and g are rapidly decreasing functions, then so is the convolution f∗g. Combined with the fact that convolution commutes with differentiation (see #Properties), it follows that the class of Schwartz functions is closed under convolution . Distributions If f is a smooth function that is compactly supported and g is a distribution, then f∗g is a smooth function defined by More generally, it is possible to extend the definition of the convolution in a unique way with the same as f above, so that the associative law remains valid in the case where f is a distribution, and g a compactly supported distribution . Measures The convolution of any two Borel measures μ and ν of bounded variation is the measure defined by In particular, where is a measurable set and is the indicator function of . This agrees with the convolution defined above when μ and ν are regarded as distributions, as well as the convolution of L1 functions when μ and ν are absolutely continuous with respect to the Lebesgue measure. The convolution of measures also satisfies the following version of Young's inequality where the norm is the total variation of a measure. Because the space of measures of bounded variation is a Banach space, convolution of measures can be treated with standard methods of functional analysis that may not apply for the convolution of distributions. Properties Algebraic properties The convolution defines a product on the linear space of integrable functions. This product satisfies the following algebraic properties, which formally mean that the space of integrable functions with the product given by convolution is a commutative associative algebra without identity . Other linear spaces of functions, such as the space of continuous functions of compact support, are closed under the convolution, and so also form commutative associative algebras. 
Commutativity Proof: By definition: Changing the variable of integration to the result follows. Associativity Proof: This follows from using Fubini's theorem (i.e., double integrals can be evaluated as iterated integrals in either order). Distributivity Proof: This follows from linearity of the integral. Associativity with scalar multiplication for any real (or complex) number . Multiplicative identity No algebra of functions possesses an identity for the convolution. The lack of identity is typically not a major inconvenience, since most collections of functions on which the convolution is performed can be convolved with a delta distribution (a unitary impulse, centered at zero) or, at the very least (as is the case of L1) admit approximations to the identity. The linear space of compactly supported distributions does, however, admit an identity under the convolution. Specifically, where δ is the delta distribution. Inverse element Some distributions S have an inverse element S−1 for the convolution which then must satisfy from which an explicit formula for S−1 may be obtained.The set of invertible distributions forms an abelian group under the convolution. Complex conjugation Time reversal If    then   Proof (using convolution theorem): Relationship with differentiation Proof: Relationship with integration If and then Integration If f and g are integrable functions, then the integral of their convolution on the whole space is simply obtained as the product of their integrals: This follows from Fubini's theorem. The same result holds if f and g are only assumed to be nonnegative measurable functions, by Tonelli's theorem. Differentiation In the one-variable case, where is the derivative. More generally, in the case of functions of several variables, an analogous formula holds with the partial derivative: A particular consequence of this is that the convolution can be viewed as a "smoothing" operation: the convolution of f and g is differentiable as many times as f and g are in total. These identities hold for example under the condition that f and g are absolutely integrable and at least one of them has an absolutely integrable (L1) weak derivative, as a consequence of Young's convolution inequality. For instance, when f is continuously differentiable with compact support, and g is an arbitrary locally integrable function, These identities also hold much more broadly in the sense of tempered distributions if one of f or g is a rapidly decreasing tempered distribution, a compactly supported tempered distribution or a Schwartz function and the other is a tempered distribution. On the other hand, two positive integrable and infinitely differentiable functions may have a nowhere continuous convolution. In the discrete case, the difference operator D f(n) = f(n + 1) − f(n) satisfies an analogous relationship: Convolution theorem The convolution theorem states that where denotes the Fourier transform of . Convolution in other types of transformations Versions of this theorem also hold for the Laplace transform, two-sided Laplace transform, Z-transform and Mellin transform. Convolution on matrices If is the Fourier transform matrix, then , where is face-splitting product, denotes Kronecker product, denotes Hadamard product (this result is an evolving of count sketch properties). This can be generalized for appropriate matrices : from the properties of the face-splitting product. 
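Collected in symbols, the properties listed above read as follows (a sketch in standard notation, with δ the Dirac delta and F the Fourier transform):

```latex
% Commutativity, associativity, distributivity, scalar multiplication:
\[
  f * g = g * f, \qquad
  f * (g * h) = (f * g) * h, \qquad
  f * (g + h) = f * g + f * h, \qquad
  a\,(f * g) = (a f) * g .
\]
% Identity, complex conjugation, differentiation:
\[
  f * \delta = f, \qquad
  \overline{f * g} = \overline{f} * \overline{g}, \qquad
  \frac{\mathrm{d}}{\mathrm{d}x}(f * g)
  = \frac{\mathrm{d}f}{\mathrm{d}x} * g
  = f * \frac{\mathrm{d}g}{\mathrm{d}x} .
\]
% Integration and the convolution theorem:
\[
  \int_{\mathbb{R}^d} (f * g)(x)\,\mathrm{d}x
  = \Bigl(\int_{\mathbb{R}^d} f(x)\,\mathrm{d}x\Bigr)
    \Bigl(\int_{\mathbb{R}^d} g(x)\,\mathrm{d}x\Bigr),
  \qquad
  \mathcal{F}\{f * g\} = \mathcal{F}\{f\}\cdot\mathcal{F}\{g\} .
\]
```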
Translational equivariance The convolution commutes with translations, meaning that where τxf is the translation of the function f by x defined by If f is a Schwartz function, then τxf is the convolution with a translated Dirac delta function τxf = f ∗ τx δ. So translation invariance of the convolution of Schwartz functions is a consequence of the associativity of convolution. Furthermore, under certain conditions, convolution is the most general translation invariant operation. Informally speaking, the following holds Suppose that S is a bounded linear operator acting on functions which commutes with translations: S(τxf) = τx(Sf) for all x. Then S is given as convolution with a function (or distribution) gS; that is Sf = gS ∗ f. Thus some translation invariant operations can be represented as convolution. Convolutions play an important role in the study of time-invariant systems, and especially LTI system theory. The representing function gS is the impulse response of the transformation S. A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires assuming in addition that S must be a continuous linear operator with respect to the appropriate topology. It is known, for instance, that every continuous translation invariant continuous linear operator on L1 is the convolution with a finite Borel measure. More generally, every continuous translation invariant continuous linear operator on Lp for 1 ≤ p < ∞ is the convolution with a tempered distribution whose Fourier transform is bounded. To wit, they are all given by bounded Fourier multipliers. Convolutions on groups If G is a suitable group endowed with a measure λ, and if f and g are real or complex valued integrable functions on G, then we can define their convolution by It is not commutative in general. In typical cases of interest G is a locally compact Hausdorff topological group and λ is a (left-) Haar measure. In that case, unless G is unimodular, the convolution defined in this way is not the same as . The preference of one over the other is made so that convolution with a fixed function g commutes with left translation in the group: Furthermore, the convention is also required for consistency with the definition of the convolution of measures given below. However, with a right instead of a left Haar measure, the latter integral is preferred over the former. On locally compact abelian groups, a version of the convolution theorem holds: the Fourier transform of a convolution is the pointwise product of the Fourier transforms. The circle group T with the Lebesgue measure is an immediate example. For a fixed g in L1(T), we have the following familiar operator acting on the Hilbert space L2(T): The operator T is compact. A direct calculation shows that its adjoint T* is convolution with By the commutativity property cited above, T is normal: T* T = TT* . Also, T commutes with the translation operators. Consider the family S of operators consisting of all such convolutions and the translation operators. Then S is a commuting family of normal operators. According to spectral theory, there exists an orthonormal basis {hk} that simultaneously diagonalizes S. This characterizes convolutions on the circle. Specifically, we have which are precisely the characters of T. Each convolution is a compact multiplication operator in this basis. This can be viewed as a version of the convolution theorem discussed above. 
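In symbols, the translation property and the group convolution described above take the following standard forms (a sketch; λ denotes a left Haar measure on the group G):

```latex
% Translation equivariance on R^d:
\[
  \tau_x (f * g) = (\tau_x f) * g = f * (\tau_x g),
  \qquad (\tau_x f)(y) = f(y - x).
\]
% Convolution on a locally compact group G with left Haar measure \lambda:
\[
  (f * g)(x) = \int_G f(y)\, g\!\left(y^{-1} x\right)\, \mathrm{d}\lambda(y).
\]
```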
A discrete example is a finite cyclic group of order n. Convolution operators are here represented by circulant matrices, and can be diagonalized by the discrete Fourier transform. A similar result holds for compact groups (not necessarily abelian): the matrix coefficients of finite-dimensional unitary representations form an orthonormal basis in L2 by the Peter–Weyl theorem, and an analog of the convolution theorem continues to hold, along with many other aspects of harmonic analysis that depend on the Fourier transform. Convolution of measures Let G be a (multiplicatively written) topological group. If μ and ν are Radon measures on G, then their convolution μ∗ν is defined as the pushforward measure of the group action and can be written as : for each measurable subset E of G. The convolution is also a Radon measure, whose total variation satisfies In the case when G is locally compact with (left-)Haar measure λ, and μ and ν are absolutely continuous with respect to a λ, so that each has a density function, then the convolution μ∗ν is also absolutely continuous, and its density function is just the convolution of the two separate density functions. In fact, if either measure is absolutely continuous with respect to the Haar measure, then so is their convolution. If μ and ν are probability measures on the topological group then the convolution μ∗ν is the probability distribution of the sum X + Y of two independent random variables X and Y whose respective distributions are μ and ν. Infimal convolution In convex analysis, the infimal convolution of proper (not identically ) convex functions on is defined by: It can be shown that the infimal convolution of convex functions is convex. Furthermore, it satisfies an identity analogous to that of the Fourier transform of a traditional convolution, with the role of the Fourier transform is played instead by the Legendre transform: We have: Bialgebras Let (X, Δ, ∇, ε, η) be a bialgebra with comultiplication Δ, multiplication ∇, unit η, and counit ε. The convolution is a product defined on the endomorphism algebra End(X) as follows. Let φ, ψ ∈ End(X), that is, φ, ψ: X → X are functions that respect all algebraic structure of X, then the convolution φ∗ψ is defined as the composition The convolution appears notably in the definition of Hopf algebras . A bialgebra is a Hopf algebra if and only if it has an antipode: an endomorphism S such that Applications Convolution and related operations are found in many applications in science, engineering and mathematics. Convolutional neural networks apply multiple cascaded convolution kernels with applications in machine vision and artificial intelligence. Though these are actually cross-correlations rather than convolutions in most cases. In non-neural-network-based image processing In digital image processing convolutional filtering plays an important role in many important algorithms in edge detection and related processes (see Kernel (image processing)) In optics, an out-of-focus photograph is a convolution of the sharp image with a lens function. The photographic term for this is bokeh. In image processing applications such as adding blurring. In digital data processing In analytical chemistry, Savitzky–Golay smoothing filters are used for the analysis of spectroscopic data. They can improve signal-to-noise ratio with minimal distortion of the spectra In statistics, a weighted moving average is a convolution. 
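The statement that the convolution of two probability measures is the distribution of the sum of independent random variables can be illustrated with two fair dice; the dice example is an illustration, not part of the original text.

    import numpy as np

    die = np.full(6, 1 / 6)            # PMF of a fair die on outcomes 1..6

    # Distribution of the sum of two independent dice = convolution of the PMFs;
    # the support of the sum is 2..12.
    sum_pmf = np.convolve(die, die)

    # Check against direct enumeration of the 36 equally likely outcome pairs.
    direct = np.array([sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == s)
                       for s in range(2, 13)]) / 36
    assert np.allclose(sum_pmf, direct)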
In acoustics, reverberation is the convolution of the original sound with echoes from objects surrounding the sound source. In digital signal processing, convolution is used to map the impulse response of a real room on a digital audio signal. In electronic music convolution is the imposition of a spectral or rhythmic structure on a sound. Often this envelope or structure is taken from another sound. The convolution of two signals is the filtering of one through the other. In electrical engineering, the convolution of one function (the input signal) with a second function (the impulse response) gives the output of a linear time-invariant system (LTI). At any given moment, the output is an accumulated effect of all the prior values of the input function, with the most recent values typically having the most influence (expressed as a multiplicative factor). The impulse response function provides that factor as a function of the elapsed time since each input value occurred. In physics, wherever there is a linear system with a "superposition principle", a convolution operation makes an appearance. For instance, in spectroscopy line broadening due to the Doppler effect on its own gives a Gaussian spectral line shape and collision broadening alone gives a Lorentzian line shape. When both effects are operative, the line shape is a convolution of Gaussian and Lorentzian, a Voigt function. In time-resolved fluorescence spectroscopy, the excitation signal can be treated as a chain of delta pulses, and the measured fluorescence is a sum of exponential decays from each delta pulse. In computational fluid dynamics, the large eddy simulation (LES) turbulence model uses the convolution operation to lower the range of length scales necessary in computation thereby reducing computational cost. In probability theory, the probability distribution of the sum of two independent random variables is the convolution of their individual distributions. In kernel density estimation, a distribution is estimated from sample points by convolution with a kernel, such as an isotropic Gaussian. In radiotherapy treatment planning systems, most part of all modern codes of calculation applies a convolution-superposition algorithm. In structural reliability, the reliability index can be defined based on the convolution theorem. The definition of reliability index for limit state functions with nonnormal distributions can be established corresponding to the joint distribution function. In fact, the joint distribution function can be obtained using the convolution theory. In Smoothed-particle hydrodynamics, simulations of fluid dynamics are calculated using particles, each with surrounding kernels. For any given particle , some physical quantity is calculated as a convolution of with a weighting function, where denotes the neighbors of particle : those that are located within its kernel. The convolution is approximated as a summation over each neighbor. In Fractional calculus convolution is instrumental in various definitions of fractional integral and fractional derivative.
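As a concrete instance of the statistics application mentioned above, a weighted moving average is simply a convolution of the data with a normalized weight window. The signal and weights below are illustrative values.

    import numpy as np

    signal = np.array([1.0, 2.0, 6.0, 3.0, 2.0, 8.0, 4.0, 3.0])
    weights = np.array([0.25, 0.5, 0.25])   # window weights, summing to 1

    # 'valid' keeps only positions where the window fully overlaps the signal.
    smoothed = np.convolve(signal, weights, mode='valid')
    print(smoothed)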
https://en.wikipedia.org/wiki/Calorimetry
Calorimetry
In chemistry and thermodynamics, calorimetry () is the science or act of measuring changes in state variables of a body for the purpose of deriving the heat transfer associated with changes of its state due, for example, to chemical reactions, physical changes, or phase transitions under specified constraints. Calorimetry is performed with a calorimeter. Scottish physician and scientist Joseph Black, who was the first to recognize the distinction between heat and temperature, is said to be the founder of the science of calorimetry. Indirect calorimetry calculates heat that living organisms produce by measuring either their production of carbon dioxide and nitrogen waste (frequently ammonia in aquatic organisms, or urea in terrestrial ones), or from their consumption of oxygen. Lavoisier noted in 1780 that heat production can be predicted from oxygen consumption this way, using multiple regression. The dynamic energy budget theory explains why this procedure is correct. Heat generated by living organisms may also be measured by direct calorimetry, in which the entire organism is placed inside the calorimeter for the measurement. A widely used modern instrument is the differential scanning calorimeter, a device which allows thermal data to be obtained on small amounts of material. It involves heating the sample at a controlled rate and recording the heat flow either into or from the specimen. Classical calorimetric calculation of heat Cases with differentiable equation of state for a one-component body Basic classical calculation with respect to volume Calorimetry requires that a reference material that changes temperature have known definite thermal constitutive properties. The classical rule, recognized by Clausius and Kelvin, is that the pressure exerted by the calorimetric material is fully and rapidly determined solely by its temperature and volume; this rule is for changes that do not involve phase change, such as melting of ice. There are many materials that do not comply with this rule, and for them, the present formula of classical calorimetry does not provide an adequate account. Here the classical rule is assumed to hold for the calorimetric material being used, and the propositions are mathematically written: The thermal response of the calorimetric material is fully described by its pressure as the value of its constitutive function of just the volume and the temperature . All increments are here required to be very small. This calculation refers to a domain of volume and temperature of the body in which no phase change occurs, and there is only one phase present. An important assumption here is continuity of property relations. A different analysis is needed for phase change When a small increment of heat is gained by a calorimetric body, with small increments, of its volume, and of its temperature, the increment of heat, , gained by the body of calorimetric material, is given by where denotes the latent heat with respect to volume, of the calorimetric material at constant controlled temperature . The surroundings' pressure on the material is instrumentally adjusted to impose a chosen volume change, with initial volume . To determine this latent heat, the volume change is effectively the independently instrumentally varied quantity. This latent heat is not one of the widely used ones, but is of theoretical or conceptual interest. 
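In one common textbook notation (conventions for the symbols vary between texts), the rule described in this passage can be written

    \delta Q \;=\; \lambda_{V}(V, T)\, dV \;+\; C_{V}(V, T)\, dT ,

where \lambda_{V} is the latent heat with respect to volume and C_{V} is the heat capacity at constant volume, both evaluated at the state (V, T).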
denotes the heat capacity, of the calorimetric material at fixed constant volume , while the pressure of the material is allowed to vary freely, with initial temperature . The temperature is forced to change by exposure to a suitable heat bath. It is customary to write simply as , or even more briefly as . This latent heat is one of the two widely used ones. The latent heat with respect to volume is the heat required for unit increment in volume at constant temperature. It can be said to be 'measured along an isotherm', and the pressure the material exerts is allowed to vary freely, according to its constitutive law . For a given material, it can have a positive or negative sign or exceptionally it can be zero, and this can depend on the temperature, as it does for water about 4 C. The concept of latent heat with respect to volume was perhaps first recognized by Joseph Black in 1762. The term 'latent heat of expansion' is also used. The latent heat with respect to volume can also be called the 'latent energy with respect to volume'. For all of these usages of 'latent heat', a more systematic terminology uses 'latent heat capacity'. The heat capacity at constant volume is the heat required for unit increment in temperature at constant volume. It can be said to be 'measured along an isochor', and again, the pressure the material exerts is allowed to vary freely. It always has a positive sign. This means that for an increase in the temperature of a body without change of its volume, heat must be supplied to it. This is consistent with common experience. Quantities like are sometimes called 'curve differentials', because they are measured along curves in the surface. Classical theory for constant-volume (isochoric) calorimetry Constant-volume calorimetry is calorimetry performed at a constant volume. This involves the use of a constant-volume calorimeter. Heat is still measured by the above-stated principle of calorimetry. This means that in a suitably constructed calorimeter, called a bomb calorimeter, the increment of volume can be made to vanish, . For constant-volume calorimetry: where denotes the increment in temperature and denotes the heat capacity at constant volume. Classical heat calculation with respect to pressure From the above rule of calculation of heat with respect to volume, there follows one with respect to pressure. In a process of small increments, of its pressure, and of its temperature, the increment of heat, , gained by the body of calorimetric material, is given by where denotes the latent heat with respect to pressure, of the calorimetric material at constant temperature, while the volume and pressure of the body are allowed to vary freely, at pressure and temperature ; denotes the heat capacity, of the calorimetric material at constant pressure, while the temperature and volume of the body are allowed to vary freely, at pressure and temperature . It is customary to write simply as , or even more briefly as . The new quantities here are related to the previous ones: where denotes the partial derivative of with respect to evaluated for and denotes the partial derivative of with respect to evaluated for . The latent heats and are always of opposite sign. It is common to refer to the ratio of specific heats as often just written as . Calorimetry through phase change, equation of state shows one jump discontinuity An early calorimeter was that used by Laplace and Lavoisier, as shown in the figure above. It worked at constant temperature, and at atmospheric pressure. 
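Written in the same notation (again one common convention), the rule with respect to pressure and its link to the volume form are

    \delta Q \;=\; \lambda_{p}(p, T)\, dp \;+\; C_{p}(p, T)\, dT ,

    \lambda_{p} \;=\; \lambda_{V}\left(\frac{\partial V}{\partial p}\right)_{T},
    \qquad
    C_{p} \;=\; C_{V} + \lambda_{V}\left(\frac{\partial V}{\partial T}\right)_{p},

with the ratio of specific heats \gamma = C_{p}/C_{V}; for constant-volume (bomb) calorimetry, dV = 0 and the volume form reduces to \delta Q = C_{V}\, dT.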
The latent heat involved was then not a latent heat with respect to volume or with respect to pressure, as in the above account for calorimetry without phase change. The latent heat involved in this calorimeter was with respect to phase change, naturally occurring at constant temperature. This kind of calorimeter worked by measurement of mass of water produced by the melting of ice, which is a phase change. Cumulation of heating For a time-dependent process of heating of the calorimetric material, defined by a continuous joint progression of and , starting at time and ending at time , there can be calculated an accumulated quantity of heat delivered, . This calculation is done by mathematical integration along the progression with respect to time. This is because increments of heat are 'additive'; but this does not mean that heat is a conservative quantity. The idea that heat was a conservative quantity was invented by Lavoisier, and is called the 'caloric theory'; by the middle of the nineteenth century it was recognized as mistaken. Written with the symbol , the quantity is not at all restricted to be an increment with very small values; this is in contrast with . One can write . This expression uses quantities such as which are defined in the section below headed 'Mathematical aspects of the above rules'. Mathematical aspects of the above rules The use of 'very small' quantities such as is related to the physical requirement for the quantity to be 'rapidly determined' by and ; such 'rapid determination' refers to a physical process. These 'very small' quantities are used in the Leibniz approach to the infinitesimal calculus. The Newton approach uses instead 'fluxions' such as , which makes it more obvious that must be 'rapidly determined'. In terms of fluxions, the above first rule of calculation can be written where denotes the time denotes the time rate of heating of the calorimetric material at time denotes the time rate of change of volume of the calorimetric material at time denotes the time rate of change of temperature of the calorimetric material. The increment and the fluxion are obtained for a particular time that determines the values of the quantities on the righthand sides of the above rules. But this is not a reason to expect that there should exist a mathematical function . For this reason, the increment is said to be an 'imperfect differential' or an 'inexact differential'. Some books indicate this by writing instead of . Also, the notation đQ is used in some books. Carelessness about this can lead to error. The quantity is properly said to be a functional of the continuous joint progression of and , but, in the mathematical definition of a function, is not a function of . Although the fluxion is defined here as a function of time , the symbols and respectively standing alone are not defined here. Physical scope of the above rules of calorimetry The above rules refer only to suitable calorimetric materials. The terms 'rapidly' and 'very small' call for empirical physical checking of the domain of validity of the above rules. The above rules for the calculation of heat belong to pure calorimetry. They make no reference to thermodynamics, and were mostly understood before the advent of thermodynamics. They are the basis of the 'thermo' contribution to thermodynamics. The 'dynamics' contribution is based on the idea of work, which is not used in the above rules of calculation. 
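In the fluxion form described here, and under the same assumptions as above, the first rule of calculation can be written (one common notation)

    \dot{Q}(t) \;=\; \lambda_{V}\bigl(V(t), T(t)\bigr)\,\dot{V}(t) \;+\; C_{V}\bigl(V(t), T(t)\bigr)\,\dot{T}(t),

so that the heat accumulated along a progression from time t_1 to time t_2 is the integral Q(t_1, t_2) = \int_{t_1}^{t_2} \dot{Q}(t)\, dt.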
Experimentally conveniently measured coefficients Empirically, it is convenient to measure properties of calorimetric materials under experimentally controlled conditions. Pressure increase at constant volume For measurements at experimentally controlled volume, one can use the assumption, stated above, that the pressure of the body of calorimetric material is can be expressed as a function of its volume and temperature. For measurement at constant experimentally controlled volume, the isochoric coefficient of pressure rise with temperature, is defined by Expansion at constant pressure For measurements at experimentally controlled pressure, it is assumed that the volume of the body of calorimetric material can be expressed as a function of its temperature and pressure . This assumption is related to, but is not the same as, the above used assumption that the pressure of the body of calorimetric material is known as a function of its volume and temperature; anomalous behaviour of materials can affect this relation. The quantity that is conveniently measured at constant experimentally controlled pressure, the isobar volume expansion coefficient, is defined by Compressibility at constant temperature For measurements at experimentally controlled temperature, it is again assumed that the volume of the body of calorimetric material can be expressed as a function of its temperature and pressure , with the same provisos as mentioned just above. The quantity that is conveniently measured at constant experimentally controlled temperature, the isothermal compressibility, is defined by Relation between classical calorimetric quantities Assuming that the rule is known, one can derive the function of that is used above in the classical heat calculation with respect to pressure. This function can be found experimentally from the coefficients and through the mathematically deducible relation . Connection between calorimetry and thermodynamics Thermodynamics developed gradually over the first half of the nineteenth century, building on the above theory of calorimetry which had been worked out before it, and on other discoveries. According to Gislason and Craig (2005): "Most thermodynamic data come from calorimetry..." According to Kondepudi (2008): "Calorimetry is widely used in present day laboratories." In terms of thermodynamics, the internal energy of the calorimetric material can be considered as the value of a function of , with partial derivatives and . Then it can be shown that one can write a thermodynamic version of the above calorimetric rules: with and . Again, further in terms of thermodynamics, the internal energy of the calorimetric material can sometimes, depending on the calorimetric material, be considered as the value of a function of , with partial derivatives and , and with being expressible as the value of a function of , with partial derivatives and . Then, according to Adkins (1975), it can be shown that one can write a further thermodynamic version of the above calorimetric rules: with and . Beyond the calorimetric fact noted above that the latent heats and are always of opposite sign, it may be shown, using the thermodynamic concept of work, that also Special interest of thermodynamics in calorimetry: the isothermal segments of a Carnot cycle Calorimetry has a special benefit for thermodynamics. It tells about the heat absorbed or emitted in the isothermal segment of a Carnot cycle. 
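In one widely used convention (the normalization of the isochoric pressure coefficient varies between texts, some divide it by p), the experimentally convenient coefficients and the relation mentioned at the end of this passage are

    \alpha \;=\; \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_{p},
    \qquad
    \kappa_{T} \;=\; -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_{T},
    \qquad
    \left(\frac{\partial p}{\partial T}\right)_{V} \;=\; \frac{\alpha}{\kappa_{T}} ,

where \alpha is the isobaric volume expansion coefficient and \kappa_{T} the isothermal compressibility.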
A Carnot cycle is a special kind of cyclic process affecting a body composed of material suitable for use in a heat engine. Such a material is of the kind considered in calorimetry, as noted above, that exerts a pressure that is very rapidly determined just by temperature and volume. Such a body is said to change reversibly. A Carnot cycle consists of four successive stages or segments: (1) a change in volume from a volume to a volume at constant temperature so as to incur a flow of heat into the body (known as an isothermal change) (2) a change in volume from to a volume at a variable temperature just such as to incur no flow of heat (known as an adiabatic change) (3) another isothermal change in volume from to a volume at constant temperature such as to incur a flow or heat out of the body and just such as to precisely prepare for the following change (4) another adiabatic change of volume from back to just such as to return the body to its starting temperature . In isothermal segment (1), the heat that flows into the body is given by     and in isothermal segment (3) the heat that flows out of the body is given by . Because the segments (2) and (4) are adiabats, no heat flows into or out of the body during them, and consequently the net heat supplied to the body during the cycle is given by . This quantity is used by thermodynamics and is related in a special way to the net work done by the body during the Carnot cycle. The net change of the body's internal energy during the Carnot cycle, , is equal to zero, because the material of the working body has the special properties noted above. Special interest of calorimetry in thermodynamics: relations between classical calorimetric quantities Relation of latent heat with respect to volume, and the equation of state The quantity , the latent heat with respect to volume, belongs to classical calorimetry. It accounts for the occurrence of energy transfer by work in a process in which heat is also transferred; the quantity, however, was considered before the relation between heat and work transfers was clarified by the invention of thermodynamics. In the light of thermodynamics, the classical calorimetric quantity is revealed as being tightly linked to the calorimetric material's equation of state . Provided that the temperature is measured in the thermodynamic absolute scale, the relation is expressed in the formula . Difference of specific heats Advanced thermodynamics provides the relation . From this, further mathematical and thermodynamic reasoning leads to another relation between classical calorimetric quantities. The difference of specific heats is given by . Practical constant-volume calorimetry (bomb calorimetry) for thermodynamic studies Constant-volume calorimetry is calorimetry performed at a constant volume. This involves the use of a constant-volume calorimeter. No work is performed in constant-volume calorimetry, so the heat measured equals the change in internal energy of the system. The heat capacity at constant volume is assumed to be independent of temperature. Heat is measured by the principle of calorimetry. where ΔU is change in internal energy, ΔT is change in temperature and CV is the heat capacity at constant volume. In constant-volume calorimetry the pressure is not held constant. If there is a pressure difference between initial and final states, the heat measured needs adjustment to provide the enthalpy change. One then has where ΔH is change in enthalpy and V is the unchanging volume of the sample chamber.
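The thermodynamic relations invoked in this and the preceding passages, written in standard notation, are

    \lambda_{V} \;=\; T\left(\frac{\partial p}{\partial T}\right)_{V},
    \qquad
    C_{p} - C_{V} \;=\; T\left(\frac{\partial p}{\partial T}\right)_{V}\left(\frac{\partial V}{\partial T}\right)_{p},

and, for bomb calorimetry with a temperature-independent heat capacity,

    \Delta U \;=\; C_{V}\,\Delta T,
    \qquad
    \Delta H \;=\; \Delta U + V\,\Delta p .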
https://en.wikipedia.org/wiki/Centripetal%20force
Centripetal force
A centripetal force (from Latin centrum, "center" and petere, "to seek") is a force that makes a body follow a curved path. The direction of the centripetal force is always orthogonal to the motion of the body and towards the fixed point of the instantaneous center of curvature of the path. Isaac Newton described it as "a force by which bodies are drawn or impelled, or in any way tend, towards a point as to a centre". In Newtonian mechanics, gravity provides the centripetal force causing astronomical orbits. One common example involving centripetal force is the case in which a body moves with uniform speed along a circular path. The centripetal force is directed at right angles to the motion and also along the radius towards the centre of the circular path. The mathematical description was derived in 1659 by the Dutch physicist Christiaan Huygens. Formula From the kinematics of curved motion it is known that an object moving at tangential speed v along a path with radius of curvature r accelerates toward the center of curvature at a rate Here, is the centripetal acceleration and is the difference between the velocity vectors at and . By Newton's second law, the cause of acceleration is a net force acting on the object, which is proportional to its mass m and its acceleration. The force, usually referred to as a centripetal force, has a magnitude and is, like centripetal acceleration, directed toward the center of curvature of the object's trajectory. Derivation The centripetal acceleration can be inferred from the diagram of the velocity vectors at two instances. In the case of uniform circular motion the velocities have constant magnitude. Because each one is perpendicular to its respective position vector, simple vector subtraction implies two similar isosceles triangles with congruent angles – one comprising a base of and a leg length of , and the other a base of (position vector difference) and a leg length of : Therefore, can be substituted with : The direction of the force is toward the center of the circle in which the object is moving, or the osculating circle (the circle that best fits the local path of the object, if the path is not circular). The speed in the formula is squared, so twice the speed needs four times the force, at a given radius. This force is also sometimes written in terms of the angular velocity ω of the object about the center of the circle, related to the tangential velocity by the formula so that Expressed using the orbital period T for one revolution of the circle, the equation becomes In particle accelerators, velocity can be very high (close to the speed of light in vacuum) so the same rest mass now exerts greater inertia (relativistic mass) thereby requiring greater force for the same centripetal acceleration, so the equation becomes: where is the Lorentz factor. Thus the centripetal force is given by: which is the rate of change of relativistic momentum . Sources In the case of an object that is swinging around on the end of a rope in a horizontal plane, the centripetal force on the object is supplied by the tension of the rope. The rope example is an example involving a 'pull' force. The centripetal force can also be supplied as a 'push' force, such as in the case where the normal reaction of a wall supplies the centripetal force for a wall of death or a Rotor rider. Newton's idea of a centripetal force corresponds to what is nowadays referred to as a central force. 
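Collecting the formulas described in this section in standard notation, the centripetal acceleration and force for speed v, radius r, angular velocity \omega = v/r and orbital period T = 2\pi/\omega are

    a_{c} \;=\; \frac{v^{2}}{r} \;=\; \omega^{2} r,
    \qquad
    F \;=\; m a_{c} \;=\; \frac{m v^{2}}{r} \;=\; m \omega^{2} r \;=\; \frac{4\pi^{2} m r}{T^{2}},

and in the relativistic case F = \gamma m v^{2}/r with \gamma = 1/\sqrt{1 - v^{2}/c^{2}}.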
When a satellite is in orbit around a planet, gravity is considered to be a centripetal force even though in the case of eccentric orbits, the gravitational force is directed towards the focus, and not towards the instantaneous center of curvature. Another example of centripetal force arises in the helix that is traced out when a charged particle moves in a uniform magnetic field in the absence of other external forces. In this case, the magnetic force is the centripetal force that acts towards the helix axis. Analysis of several cases Below are three examples of increasing complexity, with derivations of the formulas governing velocity and acceleration. Uniform circular motion Uniform circular motion refers to the case of constant rate of rotation. Here are two approaches to describing this case. Calculus derivation In two dimensions, the position vector , which has magnitude (length) and directed at an angle above the x-axis, can be expressed in Cartesian coordinates using the unit vectors and : The assumption of uniform circular motion requires three things: The object moves only on a circle. The radius of the circle does not change in time. The object moves with constant angular velocity around the circle. Therefore, where is time. The velocity and acceleration of the motion are the first and second derivatives of position with respect to time: The term in parentheses is the original expression of in Cartesian coordinates. Consequently, negative shows that the acceleration is pointed towards the center of the circle (opposite the radius), hence it is called "centripetal" (i.e. "center-seeking"). While objects naturally follow a straight path (due to inertia), this centripetal acceleration describes the circular motion path caused by a centripetal force. Derivation using vectors The image at right shows the vector relationships for uniform circular motion. The rotation itself is represented by the angular velocity vector Ω, which is normal to the plane of the orbit (using the right-hand rule) and has magnitude given by: with θ the angular position at time t. In this subsection, dθ/dt is assumed constant, independent of time. The distance traveled dℓ of the particle in time dt along the circular path is which, by properties of the vector cross product, has magnitude rdθ and is in the direction tangent to the circular path. Consequently, In other words, Differentiating with respect to time, Lagrange's formula states: Applying Lagrange's formula with the observation that Ω • r(t) = 0 at all times, In words, the acceleration is pointing directly opposite to the radial displacement r at all times, and has a magnitude: where vertical bars |...| denote the vector magnitude, which in the case of r(t) is simply the radius r of the path. This result agrees with the previous section, though the notation is slightly different. When the rate of rotation is made constant in the analysis of nonuniform circular motion, that analysis agrees with this one. A merit of the vector approach is that it is manifestly independent of any coordinate system. Example: The banked turn The upper panel in the image at right shows a ball in circular motion on a banked curve. The curve is banked at an angle θ from the horizontal, and the surface of the road is considered to be slippery. The objective is to find what angle the bank must have so the ball does not slide off the road. 
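The calculus derivation sketched above can be reproduced symbolically; the following is an illustrative check (not part of the original derivation) using SymPy.

    import sympy as sp

    t, R, w = sp.symbols('t R omega', positive=True)

    # Position on a circle of radius R traversed at constant angular velocity omega.
    x = R * sp.cos(w * t)
    y = R * sp.sin(w * t)

    ax = sp.diff(x, t, 2)
    ay = sp.diff(y, t, 2)

    # The acceleration equals -omega**2 times the position vector,
    # i.e. it points toward the centre of the circle.
    assert sp.simplify(ax + w**2 * x) == 0
    assert sp.simplify(ay + w**2 * y) == 0

    # Its magnitude is omega**2 * R, which equals v**2 / R for tangential speed v = omega * R.
    assert sp.simplify(ax**2 + ay**2 - (w**2 * R)**2) == 0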
Intuition tells us that, on a flat curve with no banking at all, the ball will simply slide off the road; while with a very steep banking, the ball will slide to the center unless it travels the curve rapidly. Apart from any acceleration that might occur in the direction of the path, the lower panel of the image above indicates the forces on the ball. There are two forces; one is the force of gravity vertically downward through the center of mass of the ball mg, where m is the mass of the ball and g is the gravitational acceleration; the second is the upward normal force exerted by the road at a right angle to the road surface man. The centripetal force demanded by the curved motion is also shown above. This centripetal force is not a third force applied to the ball, but rather must be provided by the net force on the ball resulting from vector addition of the normal force and the force of gravity. The resultant or net force on the ball found by vector addition of the normal force exerted by the road and vertical force due to gravity must equal the centripetal force dictated by the need to travel a circular path. The curved motion is maintained so long as this net force provides the centripetal force requisite to the motion. The horizontal net force on the ball is the horizontal component of the force from the road, which has magnitude . The vertical component of the force from the road must counteract the gravitational force: , which implies . Substituting into the above formula for yields a horizontal force to be: On the other hand, at velocity |v| on a circular path of radius r, kinematics says that the force needed to turn the ball continuously into the turn is the radially inward centripetal force Fc of magnitude: Consequently, the ball is in a stable path when the angle of the road is set to satisfy the condition: or, As the angle of bank θ approaches 90°, the tangent function approaches infinity, allowing larger values for |v|2/r. In words, this equation states that for greater speeds (bigger |v|) the road must be banked more steeply (a larger value for θ), and for sharper turns (smaller r) the road also must be banked more steeply, which accords with intuition. When the angle θ does not satisfy the above condition, the horizontal component of force exerted by the road does not provide the correct centripetal force, and an additional frictional force tangential to the road surface is called upon to provide the difference. If friction cannot do this (that is, the coefficient of friction is exceeded), the ball slides to a different radius where the balance can be realized. These ideas apply to air flight as well. See the FAA pilot's manual. Nonuniform circular motion As a generalization of the uniform circular motion case, suppose the angular rate of rotation is not constant. The acceleration now has a tangential component, as shown the image at right. This case is used to demonstrate a derivation strategy based on a polar coordinate system. Let r(t) be a vector that describes the position of a point mass as a function of time. Since we are assuming circular motion, let , where R is a constant (the radius of the circle) and ur is the unit vector pointing from the origin to the point mass. The direction of ur is described by θ, the angle between the x-axis and the unit vector, measured counterclockwise from the x-axis. The other unit vector for polar coordinates, uθ is perpendicular to ur and points in the direction of increasing θ. 
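The no-friction banking condition tan θ = |v|²/(gr) derived above is easy to evaluate numerically; the speed and radius below are illustrative values only.

    import math

    def ideal_bank_angle(speed_mps, radius_m, g=9.81):
        # Bank angle in degrees at which no friction is needed: tan(theta) = v**2 / (g * r).
        return math.degrees(math.atan(speed_mps**2 / (g * radius_m)))

    # A curve of radius 200 m taken at 25 m/s (90 km/h) needs a bank of roughly 18 degrees.
    print(round(ideal_bank_angle(25.0, 200.0), 1))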
These polar unit vectors can be expressed in terms of Cartesian unit vectors in the x and y directions, denoted and respectively: and One can differentiate to find velocity: where is the angular velocity . This result for the velocity matches expectations that the velocity should be directed tangentially to the circle, and that the magnitude of the velocity should be . Differentiating again, and noting that we find that the acceleration, a is: Thus, the radial and tangential components of the acceleration are: and where is the magnitude of the velocity (the speed). These equations express mathematically that, in the case of an object that moves along a circular path with a changing speed, the acceleration of the body may be decomposed into a perpendicular component that changes the direction of motion (the centripetal acceleration), and a parallel, or tangential component, that changes the speed. General planar motion Polar coordinates The above results can be derived perhaps more simply in polar coordinates, and at the same time extended to general motion within a plane, as shown next. Polar coordinates in the plane employ a radial unit vector uρ and an angular unit vector uθ, as shown above. A particle at position r is described by: where the notation ρ is used to describe the distance of the path from the origin instead of R to emphasize that this distance is not fixed, but varies with time. The unit vector uρ travels with the particle and always points in the same direction as r(t). Unit vector uθ also travels with the particle and stays orthogonal to uρ. Thus, uρ and uθ form a local Cartesian coordinate system attached to the particle, and tied to the path travelled by the particle. By moving the unit vectors so their tails coincide, as seen in the circle at the left of the image above, it is seen that uρ and uθ form a right-angled pair with tips on the unit circle that trace back and forth on the perimeter of this circle with the same angle θ(t) as r(t). When the particle moves, its velocity is To evaluate the velocity, the derivative of the unit vector uρ is needed. Because uρ is a unit vector, its magnitude is fixed, and it can change only in direction, that is, its change duρ has a component only perpendicular to uρ. When the trajectory r(t) rotates an amount dθ, uρ, which points in the same direction as r(t), also rotates by dθ. See image above. Therefore, the change in uρ is or In a similar fashion, the rate of change of uθ is found. As with uρ, uθ is a unit vector and can only rotate without changing size. To remain orthogonal to uρ while the trajectory r(t) rotates an amount dθ, uθ, which is orthogonal to r(t), also rotates by dθ. See image above. Therefore, the change duθ is orthogonal to uθ and proportional to dθ (see image above): The equation above shows the sign to be negative: to maintain orthogonality, if duρ is positive with dθ, then duθ must decrease. Substituting the derivative of uρ into the expression for velocity: To obtain the acceleration, another time differentiation is done: Substituting the derivatives of uρ and uθ, the acceleration of the particle is: As a particular example, if the particle moves in a circle of constant radius R, then dρ/dt = 0, v = vθ, and: where These results agree with those above for nonuniform circular motion.
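Written out in standard notation, the decomposition arrived at in this section is

    \mathbf{a} \;=\; \left(\ddot{\rho} - \rho\,\dot{\theta}^{2}\right)\mathbf{u}_{\rho}
    \;+\; \left(\rho\,\ddot{\theta} + 2\,\dot{\rho}\,\dot{\theta}\right)\mathbf{u}_{\theta},

which for motion on a circle of constant radius R (so \dot{\rho} = 0) reduces to a radial, centripetal component of magnitude R\dot{\theta}^{2} = v_{\theta}^{2}/R and a tangential component R\ddot{\theta}.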
https://en.wikipedia.org/wiki/Computational%20complexity%20theory
Computational complexity theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer. A computation problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. The P versus NP problem, one of the seven Millennium Prize Problems, is part of the field of computational complexity. Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically. Computational problems Problem instances A computational problem can be viewed as an infinite collection of instances together with a set (possibly empty) of solutions for every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing. The instance is a number (e.g., 15) and the solution is "yes" if the number is prime and "no" otherwise (in this case, 15 is not prime and the answer is "no"). Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the travelling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances. 
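The primality example above can be made concrete as a decision procedure; the code below is an illustrative sketch using trial division, not an efficient primality test.

    def is_prime(n: int) -> bool:
        # Decision problem: answer "yes" (True) exactly when the instance n is prime.
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    print(is_prime(15))  # False -- the answer for the instance 15 is "no"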
Representing problem instances When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary. Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently. Decision problems as formal languages Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem where the answer is either yes or no (alternatively, 1 or 0). A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input. An example of a decision problem is the following. The input is an arbitrary graph. The problem consists in deciding whether the given graph is connected or not. The formal language associated with this decision problem is then the set of all connected graphs — to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings. Function problems A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem—that is, the output is not just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples such that the relation holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers. Measuring the size of an instance To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. The input size is typically measured in bits. Complexity theory studies how algorithms scale as input size increases. For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with vertices compared to the time taken for a graph with vertices? 
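The connectivity problem used as the example above can be decided by a simple graph search; encoding an instance as a Python dictionary is an illustrative choice, standing in for a binary encoding of the adjacency list.

    from collections import deque

    def is_connected(adjacency):
        # Decide membership in the language of connected graphs: the instance maps
        # each vertex to its list of neighbours; answer True iff every vertex is
        # reachable from an arbitrary start vertex.
        if not adjacency:
            return True
        start = next(iter(adjacency))
        seen = {start}
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in adjacency[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return len(seen) == len(adjacency)

    print(is_connected({0: [1], 1: [0, 2], 2: [1], 3: []}))  # False: vertex 3 is isolated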
If the input size is , the time taken can be expressed as a function of . Since the time taken on different inputs of the same size can be different, the worst-case time complexity is defined to be the maximum time taken over all inputs of size . If is a polynomial in , then the algorithm is said to be a polynomial time algorithm. Cobham's thesis argues that a problem can be solved with a feasible amount of resources if it admits a polynomial-time algorithm. Machine models and complexity measures Turing machine A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a general model of a computing machine—anything from an advanced supercomputer to a mathematician with a pencil and paper. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of the Church–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a RAM machine, Conway's Game of Life, cellular automata, lambda calculus or any programming language can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory. Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others. A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are called randomized algorithms. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model, it is just a theoretically interesting abstract machine that gives rise to particularly interesting complexity classes. For examples, see non-deterministic algorithm. Other machine models Many machine models different from the standard multi-tape Turing machines have been proposed in the literature, for example random-access machines. Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. The time and memory consumption of these alternate models may vary. What all these models have in common is that the machines operate deterministically. However, some computational problems are easier to analyze in terms of more unusual resources. 
For example, a non-deterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The non-deterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that non-deterministic time is a very important resource in analyzing computational problems. Complexity measures For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as the deterministic Turing machine is used. The time required by a deterministic Turing machine on input is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing machine is said to operate within time if the time required by on each input of length is at most . A decision problem can be solved in time if there exists a Turing machine operating in time that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within time on a deterministic Turing machine is then denoted by DTIME(). Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, any complexity measure can be viewed as a computational resource. Complexity measures are very generally defined by the Blum complexity axioms. Other complexity measures used in complexity theory include communication complexity, circuit complexity, and decision tree complexity. The complexity of an algorithm is often expressed using big O notation. Best, worst and average case complexity The best, worst and average case complexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size may be faster to solve than others, we define the following complexities: Best-case complexity: This is the complexity of solving the problem for the best input of size . Average-case complexity: This is the complexity of solving the problem on an average. This complexity is only defined with respect to a probability distribution over the inputs. For instance, if all inputs of the same size are assumed to be equally likely to appear, the average case complexity can be defined with respect to the uniform distribution over all inputs of size . Amortized analysis: Amortized analysis considers both the costly and less costly operations together over the whole series of operations of the algorithm. Worst-case complexity: This is the complexity of solving the problem for the worst input of size . The order from cheap to costly is: Best, average (of discrete uniform distribution), amortized, worst. For example, the deterministic sorting algorithm quicksort addresses the problem of sorting a list of integers. The worst-case is when the pivot is always the largest or smallest value in the list (so the list is never divided). In this case, the algorithm takes time O(). If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is . The best case occurs when each pivoting divides the list in half, also needing time. 
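The quicksort example can be made tangible by counting comparisons on a sorted input (which forces the worst case when the first element is used as the pivot) versus a randomly ordered one. This is an illustrative sketch; the input size and pivot rule are arbitrary choices.

    import random

    def quicksort_comparisons(items):
        # First-element-pivot quicksort that also counts element comparisons.
        comparisons = 0

        def sort(lst):
            nonlocal comparisons
            if len(lst) <= 1:
                return lst
            pivot, rest = lst[0], lst[1:]
            comparisons += len(rest)
            left = [x for x in rest if x < pivot]
            right = [x for x in rest if x >= pivot]
            return sort(left) + [pivot] + sort(right)

        return sort(items), comparisons

    n = 300
    _, worst_like = quicksort_comparisons(list(range(n)))           # ~ n**2 / 2 comparisons
    _, typical = quicksort_comparisons(random.sample(range(n), n))  # ~ n log n comparisons
    print(worst_like, typical)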
Upper and lower bounds on the complexity of problems To classify the computation time (or similar resources, such as space consumption), it is helpful to demonstrate upper and lower bounds on the maximum amount of time required by the most efficient algorithm to solve a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity unless specified otherwise. Analyzing a particular algorithm falls under the field of analysis of algorithms. To show an upper bound on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at most . However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound of for a problem requires showing that no algorithm can have time complexity lower than . Upper and lower bounds are usually stated using the big O notation, which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, if , in big O notation one would write . Complexity classes Defining complexity classes A complexity class is a set of problems of related complexity. Simpler complexity classes are defined by the following factors: The type of computational problem: The most commonly used problems are decision problems. However, complexity classes can be defined based on function problems, counting problems, optimization problems, promise problems, etc. The model of computation: The most common model of computation is the deterministic Turing machine, but many complexity classes are based on non-deterministic Turing machines, Boolean circuits, quantum Turing machines, monotone circuits, etc. The resource (or resources) that is being bounded and the bound: These two properties are usually stated together, such as "polynomial time", "logarithmic space", "constant depth", etc. Some complexity classes have complicated definitions that do not fit into this framework. Thus, a typical complexity class has a definition like the following: The set of decision problems solvable by a deterministic Turing machine within time . (This complexity class is known as DTIME().) But bounding the computation time above by some concrete function often yields complexity classes that depend on the chosen machine model. For instance, the language can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time, Cobham-Edmonds thesis states that "the time complexities in any two reasonable and general models of computation are polynomially related" . This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP. Important complexity classes Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following: Logarithmic-space classes do not account for the space required to represent the problem. It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch's theorem. 
Other important complexity classes include BPP, ZPP and RP, which are defined using probabilistic Turing machines; AC and NC, which are defined using Boolean circuits; and BQP and QMA, which are defined using quantum Turing machines. #P is an important complexity class of counting problems (not decision problems). Classes like IP and AM are defined using Interactive proof systems. ALL is the class of all decision problems. Hierarchy theorems For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME() is contained in DTIME(), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much more additional time or space is needed in order to increase the number of problems that can be solved. More precisely, the time hierarchy theorem states that . The space hierarchy theorem states that . The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE. Reduction Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at most as difficult as another problem. For instance, if a problem can be solved using an algorithm for , is no more difficult than , and we say that reduces to . There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such as polynomial-time reductions or log-space reductions. The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication. This motivates the concept of a problem being hard for a complexity class. A problem is hard for a class of problems if every problem in can be reduced to . Thus no problem in is harder than , since an algorithm for allows us to solve any problem in . The notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems. If a problem is in and hard for , then is said to be complete for . This means that is the hardest problem in . (Since many problems could be equally hard, one might say that is one of the hardest problems in .) 
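The squaring-to-multiplication reduction described above is short enough to write out directly; multiply below stands in for any integer-multiplication algorithm.

    def multiply(a: int, b: int) -> int:
        # Stand-in for an arbitrary algorithm that multiplies two integers.
        return a * b

    def square(n: int) -> int:
        # The reduction: squaring is solved by a single call to the multiplication
        # algorithm, feeding the same input to both of its arguments.
        return multiply(n, n)

    print(square(12))  # 144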
Thus the class of NP-complete problems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem, , to another problem, , would indicate that there is no known polynomial-time solution for . This is because a polynomial-time solution to would yield a polynomial-time solution to . Similarly, because all NP problems can be reduced to the set, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP. Important open problems P versus NP problem The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called the Cobham–Edmonds thesis. The complexity class NP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also member of the class NP. The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution. If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types of integer programming problems in operations research, many problems in logistics, protein structure prediction in biology, and the ability to find formal proofs of pure mathematics theorems. The P versus NP problem is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem. Problems in NP not known to be in P or NP-complete It was shown by Ladner that if then there exist problems in that are neither in nor -complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in or to be -complete. The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in , -complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai and Eugene Luks has run time for graphs with vertices, although some recent work by Babai offers some potentially new perspectives on this. The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a prime factor less than . No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in and in (and even in UP and co-UP). 
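The decision version of integer factorization is easy to state even though no efficient algorithm for it is known; the naive trial-division sketch below is illustrative only (its running time is exponential in the bit length of the input) and merely makes the problem statement concrete.

```python
def has_prime_factor_below(n: int, k: int) -> bool:
    """Decision form of integer factorization: does n have a prime factor < k?

    Naive trial division: the first divisor d >= 2 found is automatically
    prime.  The loop may take on the order of k steps, which is
    exponential in the number of bits of the input, so this sketch says
    nothing about the true complexity of the problem.
    """
    d = 2
    while d < k and d <= n:
        if n % d == 0:
            return True
        d += 1
    return False

assert has_prime_factor_below(91, 10)      # 91 = 7 * 13, and 7 < 10
assert not has_prime_factor_below(91, 7)   # smallest prime factor is exactly 7
```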
If the problem is -complete, the polynomial time hierarchy will collapse to its first level (i.e., will equal ). The best known algorithm for integer factorization is the general number field sieve, which takes time to factor an odd integer . However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time. Unfortunately, this fact doesn't say much about where the problem lies with respect to non-quantum complexity classes. Separations between other complexity classes Many known complexity classes are suspected to be unequal, but this has not been proved. For instance , but it is possible that . If is not equal to , then is not equal to either. Since there are many known complexity classes between and , such as , , , , , , etc., it is possible that all these complexity classes collapse to one class. Proving that any of these classes are unequal would be a major breakthrough in complexity theory. Along the same lines, is the class containing the complement problems (i.e. problems with the yes/no answers reversed) of problems. It is believed that is not equal to ; however, it has not yet been proven. It is clear that if these two complexity classes are not equal then is not equal to , since . Thus if we would have whence . Similarly, it is not known if (the set of all problems that can be solved in logarithmic space) is strictly contained in or equal to . Again, there are many complexity classes between the two, such as and , and it is not known if they are distinct or equal classes. It is suspected that and are equal. However, it is currently open if . Intractability A problem that can theoretically be solved, but requires impractical and finite resources (e.g., time) to do so, is known as an . Conversely, a problem that can be solved in practice is called a , literally "a problem that can be handled". The term infeasible (literally "cannot be done") is sometimes used interchangeably with intractable, though this risks confusion with a feasible solution in mathematical optimization. Tractable problems are frequently identified with problems that have polynomial-time solutions (, ); this is known as the Cobham–Edmonds thesis. Problems that are known to be intractable in this sense include those that are EXPTIME-hard. If is not the same as , then NP-hard problems are also intractable in this sense. However, this identification is inexact: a polynomial-time solution with large degree or large leading coefficient grows quickly, and may be impractical for practical size problems; conversely, an exponential-time solution that grows slowly may be practical on realistic input, or a solution that takes a long time in the worst case may take a short time in most cases or the average case, and thus still be practical. Saying that a problem is not in does not imply that all large cases of the problem are hard or even that most of them are. For example, the decision problem in Presburger arithmetic has been shown not to be in , yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem. To see why exponential-time algorithms are generally unusable in practice, consider a program that makes operations before halting. 
For small , say 100, and assuming for the sake of example that the computer does operations each second, the program would run for about years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances and in that sense the intractability of a problem is somewhat independent of technological progress. However, an exponential-time algorithm that takes operations is practical until gets relatively large. Similarly, a polynomial time algorithm is not always practical. If its running time is, say, , it is unreasonable to consider it efficient and it is still useless except on small instances. Indeed, in practice even or algorithms are often impractical on realistic sizes of problems. Continuous complexity theory Continuous complexity theory can refer to complexity theory of problems that involve continuous functions that are approximated by discretizations, as studied in numerical analysis. One approach to complexity theory of numerical analysis is information based complexity. Continuous complexity theory can also refer to complexity theory of the use of analog computation, which uses continuous dynamical systems and differential equations. Control theory can be considered a form of computation and differential equations are used in the modelling of continuous-time and hybrid discrete-continuous-time systems. History An early example of algorithm complexity analysis is the running time analysis of the Euclidean algorithm done by Gabriel Lamé in 1844. Before the actual research explicitly devoted to the complexity of algorithmic problems started off, numerous foundations were laid out by various researchers. Most influential among these was the definition of Turing machines by Alan Turing in 1936, which turned out to be a very robust and flexible simplification of a computer. The beginning of systematic studies in computational complexity is attributed to the seminal 1965 paper "On the Computational Complexity of Algorithms" by Juris Hartmanis and Richard E. Stearns, which laid out the definitions of time complexity and space complexity, and proved the hierarchy theorems. In addition, in 1965 Edmonds suggested to consider a "good" algorithm to be one with running time bounded by a polynomial of the input size. Earlier papers studying problems solvable by Turing machines with specific bounded resources include John Myhill's definition of linear bounded automata (Myhill 1960), Raymond Smullyan's study of rudimentary sets (1961), as well as Hisao Yamada's paper on real-time computations (1962). Somewhat earlier, Boris Trakhtenbrot (1956), a pioneer in the field from the USSR, studied another specific complexity measure. As he remembers: In 1967, Manuel Blum formulated a set of axioms (now known as Blum axioms) specifying desirable properties of complexity measures on the set of computable functions and proved an important result, the so-called speed-up theorem. The field began to flourish in 1971 when Stephen Cook and Leonid Levin proved the existence of practically relevant problems that are NP-complete. In 1972, Richard Karp took this idea a leap forward with his landmark paper, "Reducibility Among Combinatorial Problems", in which he showed that 21 diverse combinatorial and graph theoretical problems, each infamous for its computational intractability, are NP-complete.
Mathematics
Discrete mathematics
null
7555
https://en.wikipedia.org/wiki/Casimir%20effect
Casimir effect
In quantum field theory, the Casimir effect (or Casimir force) is a physical force acting on the macroscopic boundaries of a confined space which arises from the quantum fluctuations of a field. The term Casimir pressure is sometimes used when it is described in units of force per unit area. It is named after the Dutch physicist Hendrik Casimir, who predicted the effect for electromagnetic systems in 1948. In the same year Casimir, together with Dirk Polder, described a similar effect experienced by a neutral atom in the vicinity of a macroscopic interface which is called the Casimir–Polder force. Their result is a generalization of the London–van der Waals force and includes retardation due to the finite speed of light. The fundamental principles leading to the London–van der Waals force, the Casimir force, and the Casimir–Polder force can be formulated on the same footing. In 1997 a direct experiment by Steven K. Lamoreaux quantitatively measured the Casimir force to be within 5% of the value predicted by the theory. The Casimir effect can be understood by the idea that the presence of macroscopic material interfaces, such as electrical conductors and dielectrics, alter the vacuum expectation value of the energy of the second-quantized electromagnetic field. Since the value of this energy depends on the shapes and positions of the materials, the Casimir effect manifests itself as a force between such objects. Any medium supporting oscillations has an analogue of the Casimir effect. For example, beads on a string as well as plates submerged in turbulent water or gas illustrate the Casimir force. In modern theoretical physics, the Casimir effect plays an important role in the chiral bag model of the nucleon; in applied physics it is significant in some aspects of emerging microtechnologies and nanotechnologies. Physical properties The typical example is of two uncharged conductive plates in a vacuum, placed a few nanometers apart. In a classical description, the lack of an external field means that no field exists between the plates, and no force connects them. When this field is instead studied using the quantum electrodynamic vacuum, it is seen that the plates do affect the virtual photons that constitute the field, and generate a net force – either an attraction or a repulsion depending on the plates' specific arrangement. Although the Casimir effect can be expressed in terms of virtual particles interacting with the objects, it is best described and more easily calculated in terms of the zero-point energy of a quantized field in the intervening space between the objects. This force has been measured and is a striking example of an effect captured formally by second quantization. The treatment of boundary conditions in these calculations is controversial. In fact, "Casimir's original goal was to compute the van der Waals force between polarizable molecules" of the conductive plates. Thus it can be interpreted without any reference to the zero-point energy (vacuum energy) of quantum fields. Because the strength of the force falls off rapidly with distance, it is measurable only when the distance between the objects is small. This force becomes so strong that it becomes the dominant force between uncharged conductors at submicron scales. In fact, at separations of 10 nm – about 100 times the typical size of an atom – the Casimir effect produces the equivalent of about 1 atmosphere of pressure (the precise value depends on surface geometry and other factors). 
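This order of magnitude can be checked against the ideal-plate formula derived later in this article, F/A = π²ħc/(240a⁴); the short calculation below is a sketch using rounded physical constants.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s
a    = 10e-9             # plate separation: 10 nm

# Ideal-plate Casimir pressure |F|/A = pi^2 * hbar * c / (240 * a^4)
pressure = math.pi**2 * hbar * c / (240 * a**4)

print(f"{pressure:.3g} Pa  (~{pressure / 101325:.2f} atm)")
# Roughly 1.3e+05 Pa, i.e. about 1.3 atmospheres, consistent with the
# "about 1 atmosphere" figure quoted above for idealized surfaces.
```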
History Dutch physicists Hendrik Casimir and Dirk Polder at Philips Research Labs proposed the existence of a force between two polarizable atoms and between such an atom and a conducting plate in 1947; this special form is called the Casimir–Polder force. After a conversation with Niels Bohr, who suggested it had something to do with zero-point energy, Casimir alone formulated the theory predicting a force between neutral conducting plates in 1948. This latter phenomenon is called the Casimir effect. Predictions of the force were later extended to finite-conductivity metals and dielectrics, while later calculations considered more general geometries. Experiments before 1997 observed the force qualitatively, and indirect validation of the predicted Casimir energy was made by measuring the thickness of liquid helium films. Finally, in 1997 Lamoreaux's direct experiment quantitatively measured the force to within 5% of the value predicted by the theory. Subsequent experiments approached an accuracy of a few percent. Possible causes Vacuum energy The causes of the Casimir effect are described by quantum field theory, which states that all of the various fundamental fields, such as the electromagnetic field, must be quantized at each and every point in space. In a simplified view, a "field" in physics may be envisioned as if space were filled with interconnected vibrating balls and springs, and the strength of the field can be visualized as the displacement of a ball from its rest position. Vibrations in this field propagate and are governed by the appropriate wave equation for the particular field in question. The second quantization of quantum field theory requires that each such ball-spring combination be quantized, that is, that the strength of the field be quantized at each point in space. At the most basic level, the field at each point in space is a simple harmonic oscillator, and its quantization places a quantum harmonic oscillator at each point. Excitations of the field correspond to the elementary particles of particle physics. However, even the vacuum has a vastly complex structure, so all calculations of quantum field theory must be made in relation to this model of the vacuum. The vacuum has, implicitly, all of the properties that a particle may have: spin, or polarization in the case of light, energy, and so on. On average, most of these properties cancel out: the vacuum is, after all, "empty" in this sense. One important exception is the vacuum energy or the vacuum expectation value of the energy. The quantization of a simple harmonic oscillator states that the lowest possible energy or zero-point energy that such an oscillator may have is Summing over all possible oscillators at all points in space gives an infinite quantity. Since only differences in energy are physically measurable (with the notable exception of gravitation, which remains beyond the scope of quantum field theory), this infinity may be considered a feature of the mathematics rather than of the physics. This argument is the underpinning of the theory of renormalization. Dealing with infinite quantities in this way was a cause of widespread unease among quantum field theorists before the development in the 1970s of the renormalization group, a mathematical formalism for scale transformations that provides a natural basis for the process. When the scope of the physics is widened to include gravity, the interpretation of this formally infinite quantity remains problematic. 
There is currently no compelling explanation as to why it should not result in a cosmological constant that is many orders of magnitude larger than observed. However, since we do not yet have any fully coherent quantum theory of gravity, there is likewise no compelling reason as to why it should instead actually result in the value of the cosmological constant that we observe. The Casimir effect for fermions can be understood as the spectral asymmetry of the fermion operator , where it is known as the Witten index. Relativistic van der Waals force Alternatively, a 2005 paper by Robert Jaffe of MIT states that "Casimir effects can be formulated and Casimir forces can be computed without reference to zero-point energies. They are relativistic, quantum forces between charges and currents. The Casimir force (per unit area) between parallel plates vanishes as alpha, the fine structure constant, goes to zero, and the standard result, which appears to be independent of alpha, corresponds to the alpha approaching infinity limit", and that "The Casimir force is simply the (relativistic, retarded) van der Waals force between the metal plates." Casimir and Polder's original paper used this method to derive the Casimir–Polder force. In 1978, Schwinger, DeRadd, and Milton published a similar derivation for the Casimir effect between two parallel plates. More recently, Nikolic proved from first principles of quantum electrodynamics that the Casimir force does not originate from the vacuum energy of the electromagnetic field, and explained in simple terms why the fundamental microscopic origin of Casimir force lies in van der Waals forces. Effects Casimir's observation was that the second-quantized quantum electromagnetic field, in the presence of bulk bodies such as metals or dielectrics, must obey the same boundary conditions that the classical electromagnetic field must obey. In particular, this affects the calculation of the vacuum energy in the presence of a conductor or dielectric. Consider, for example, the calculation of the vacuum expectation value of the electromagnetic field inside a metal cavity, such as, for example, a radar cavity or a microwave waveguide. In this case, the correct way to find the zero-point energy of the field is to sum the energies of the standing waves of the cavity. To each and every possible standing wave corresponds an energy; say the energy of the th standing wave is . The vacuum expectation value of the energy of the electromagnetic field in the cavity is then with the sum running over all possible values of enumerating the standing waves. The factor of is present because the zero-point energy of the th mode is , where is the energy increment for the th mode. (It is the same as appears in the equation .) Written in this way, this sum is clearly divergent; however, it can be used to create finite expressions. In particular, one may ask how the zero-point energy depends on the shape of the cavity. Each energy level depends on the shape, and so one should write for the energy level, and for the vacuum expectation value. At this point comes an important observation: The force at point on the wall of the cavity is equal to the change in the vacuum energy if the shape of the wall is perturbed a little bit, say by , at . That is, one has This value is finite in many practical calculations. Attraction between the plates can be easily understood by focusing on the one-dimensional situation. 
Suppose that a moveable conductive plate is positioned at a short distance from one of two widely separated plates (distance apart). With , the states within the slot of width are highly constrained so that the energy of any one mode is widely separated from that of the next. This is not the case in the large region where there is a large number of states (about ) with energy evenly spaced between and the next mode in the narrow slot, or in other words, all slightly larger than . Now on shortening by an amount (which is negative), the mode in the narrow slot shrinks in wavelength and therefore increases in energy proportional to , whereas all the states that lie in the large region lengthen and correspondingly decrease their energy by an amount proportional to (note the different denominator). The two effects nearly cancel, but the net change is slightly negative, because the energy of all the modes in the large region are slightly larger than the single mode in the slot. Thus the force is attractive: it tends to make slightly smaller, the plates drawing each other closer, across the thin slot. Derivation of Casimir effect assuming zeta-regularization In the original calculation done by Casimir, he considered the space between a pair of conducting metal plates at distance apart. In this case, the standing waves are particularly easy to calculate, because the transverse component of the electric field and the normal component of the magnetic field must vanish on the surface of a conductor. Assuming the plates lie parallel to the -plane, the standing waves are where stands for the electric component of the electromagnetic field, and, for brevity, the polarization and the magnetic components are ignored here. Here, and are the wavenumbers in directions parallel to the plates, and is the wavenumber perpendicular to the plates. Here, is an integer, resulting from the requirement that vanish on the metal plates. The frequency of this wave is where is the speed of light. The vacuum energy is then the sum over all possible excitation modes. Since the area of the plates is large, we may sum by integrating over two of the dimensions in -space. The assumption of periodic boundary conditions yields, where is the area of the metal plates, and a factor of 2 is introduced for the two possible polarizations of the wave. This expression is clearly infinite, and to proceed with the calculation, it is convenient to introduce a regulator (discussed in greater detail below). The regulator will serve to make the expression finite, and in the end will be removed. The zeta-regulated version of the energy per unit-area of the plate is In the end, the limit is to be taken. Here is just a complex number, not to be confused with the shape discussed previously. This integral sum is finite for real and larger than 3. The sum has a pole at , but may be analytically continued to , where the expression is finite. The above expression simplifies to: where polar coordinates were introduced to turn the double integral into a single integral. The in front is the Jacobian, and the comes from the angular integration. 
The integral converges if , resulting in The sum diverges at in the neighborhood of zero, but if the damping of large-frequency excitations corresponding to analytic continuation of the Riemann zeta function to is assumed to make sense physically in some way, then one has But and so one obtains The analytic continuation has evidently lost an additive positive infinity, somehow exactly accounting for the zero-point energy (not included above) outside the slot between the plates, but which changes upon plate movement within a closed system. The Casimir force per unit area for idealized, perfectly conducting plates with vacuum between them is where is the reduced Planck constant, is the speed of light, is the distance between the two plates The force is negative, indicating that the force is attractive: by moving the two plates closer together, the energy is lowered. The presence of shows that the Casimir force per unit area is very small, and that furthermore, the force is inherently of quantum-mechanical origin. By integrating the equation above it is possible to calculate the energy required to separate to infinity the two plates as: where is the reduced Planck constant, is the speed of light, is the area of one of the plates, is the distance between the two plates In Casimir's original derivation, a moveable conductive plate is positioned at a short distance from one of two widely separated plates (distance apart). The zero-point energy on both sides of the plate is considered. Instead of the above ad hoc analytic continuation assumption, non-convergent sums and integrals are computed using Euler–Maclaurin summation with a regularizing function (e.g., exponential regularization) not so anomalous as in the above. More recent theory Casimir's analysis of idealized metal plates was generalized to arbitrary dielectric and realistic metal plates by Evgeny Lifshitz and his students. Using this approach, complications of the bounding surfaces, such as the modifications to the Casimir force due to finite conductivity, can be calculated numerically using the tabulated complex dielectric functions of the bounding materials. Lifshitz's theory for two metal plates reduces to Casimir's idealized force law for large separations much greater than the skin depth of the metal, and conversely reduces to the force law of the London dispersion force (with a coefficient called a Hamaker constant) for small , with a more complicated dependence on for intermediate separations determined by the dispersion of the materials. Lifshitz's result was subsequently generalized to arbitrary multilayer planar geometries as well as to anisotropic and magnetic materials, but for several decades the calculation of Casimir forces for non-planar geometries remained limited to a few idealized cases admitting analytical solutions. For example, the force in the experimental sphere–plate geometry was computed with an approximation (due to Derjaguin) that the sphere radius is much larger than the separation , in which case the nearby surfaces are nearly parallel and the parallel-plate result can be adapted to obtain an approximate force (neglecting both skin-depth and higher-order curvature effects). 
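A minimal numerical sketch of the Derjaguin (proximity-force) approximation just described is given below. The parallel-plate energy per unit area from the preceding derivation is assumed, and the sphere radius and separation are illustrative values only, chosen to be of the same order as those used in the experiments discussed in the next section.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s

def parallel_plate_energy_per_area(a):
    # Magnitude of the ideal-plate interaction energy per unit area,
    # pi^2 * hbar * c / (720 * a^3), from the formulas above.
    return math.pi**2 * hbar * c / (720 * a**3)

def sphere_plate_force(R, a):
    # Derjaguin / proximity-force approximation (valid for R >> a):
    # treat the sphere locally as a stack of parallel-plate strips,
    # giving |F| ~= 2 * pi * R * |E|/A.
    return 2 * math.pi * R * parallel_plate_energy_per_area(a)

# Illustrative (assumed) numbers: a 100-micrometre-radius sphere
# held 100 nm above a plate.
print(f"{sphere_plate_force(R=100e-6, a=100e-9):.2e} N")   # ~2.7e-10 N
```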
However, in the 2010s a number of authors developed and demonstrated a variety of numerical techniques, in many cases adapted from classical computational electromagnetics, that are capable of accurately calculating Casimir forces for arbitrary geometries and materials, from simple finite-size effects of finite plates to more complicated phenomena arising for patterned surfaces or objects of various shapes. Measurement One of the first experimental tests was conducted by Marcus Sparnaay at Philips in Eindhoven (Netherlands), in 1958, in a delicate and difficult experiment with parallel plates, obtaining results not in contradiction with the Casimir theory, but with large experimental errors. The Casimir effect was measured more accurately in 1997 by Steve K. Lamoreaux of Los Alamos National Laboratory, and by Umar Mohideen and Anushree Roy of the University of California, Riverside. In practice, rather than using two parallel plates, which would require phenomenally accurate alignment to ensure they were parallel, the experiments use one plate that is flat and another plate that is a part of a sphere with a very large radius. In 2001, a group (Giacomo Bressi, Gianni Carugno, Roberto Onofrio and Giuseppe Ruoso) at the University of Padua (Italy) finally succeeded in measuring the Casimir force between parallel plates using microresonators. Numerous variations of these experiments are summarized in the 2009 review by Klimchitskaya. In 2013, a conglomerate of scientists from Hong Kong University of Science and Technology, University of Florida, Harvard University, Massachusetts Institute of Technology, and Oak Ridge National Laboratory demonstrated a compact integrated silicon chip that can measure the Casimir force. The integrated chip defined by electron-beam lithography does not need extra alignment, making it an ideal platform for measuring Casimir force between complex geometries. In 2017 and 2021, the same group from Hong Kong University of Science and Technology demonstrated the non-monotonic Casimir force and distance-independent Casimir force, respectively, using this on-chip platform. Regularization In order to be able to perform calculations in the general case, it is convenient to introduce a regulator in the summations. This is an artificial device, used to make the sums finite so that they can be more easily manipulated, followed by the taking of a limit so as to remove the regulator. The heat kernel or exponentially regulated sum is where the limit is taken in the end. The divergence of the sum is typically manifested as for three-dimensional cavities. The infinite part of the sum is associated with the bulk constant which does not depend on the shape of the cavity. The interesting part of the sum is the finite part, which is shape-dependent. The Gaussian regulator is better suited to numerical calculations because of its superior convergence properties, but is more difficult to use in theoretical calculations. Other, suitably smooth, regulators may be used as well. The zeta function regulator is completely unsuited for numerical calculations, but is quite useful in theoretical calculations. In particular, divergences show up as poles in the complex plane, with the bulk divergence at . This sum may be analytically continued past this pole, to obtain a finite part at . Not every cavity configuration necessarily leads to a finite part (the lack of a pole at ) or shape-independent infinite parts. 
In this case, it should be understood that additional physics has to be taken into account. In particular, at extremely large frequencies (above the plasma frequency), metals become transparent to photons (such as X-rays), and dielectrics show a frequency-dependent cutoff as well. This frequency dependence acts as a natural regulator. There are a variety of bulk effects in solid state physics, mathematically very similar to the Casimir effect, where the cutoff frequency comes into explicit play to keep expressions finite. (These are discussed in greater detail in Landau and Lifshitz, "Theory of Continuous Media".) Generalities The Casimir effect can also be computed using the mathematical mechanisms of functional integrals of quantum field theory, although such calculations are considerably more abstract, and thus difficult to comprehend. In addition, they can be carried out only for the simplest of geometries. However, the formalism of quantum field theory makes it clear that the vacuum expectation value summations are in a certain sense summations over so-called "virtual particles". More interesting is the understanding that the sums over the energies of standing waves should be formally understood as sums over the eigenvalues of a Hamiltonian. This allows atomic and molecular effects, such as the Van der Waals force, to be understood as a variation on the theme of the Casimir effect. Thus one considers the Hamiltonian of a system as a function of the arrangement of objects, such as atoms, in configuration space. The change in the zero-point energy as a function of changes of the configuration can be understood to result in forces acting between the objects. In the chiral bag model of the nucleon, the Casimir energy plays an important role in showing the mass of the nucleon is independent of the bag radius. In addition, the spectral asymmetry is interpreted as a non-zero vacuum expectation value of the baryon number, cancelling the topological winding number of the pion field surrounding the nucleon. A "pseudo-Casimir" effect can be found in liquid crystal systems, where the boundary conditions imposed through anchoring by rigid walls give rise to a long-range force, analogous to the force that arises between conducting plates. Dynamical Casimir effect The dynamical Casimir effect is the production of particles and energy from an accelerated moving mirror. This reaction was predicted by certain numerical solutions to quantum mechanics equations made in the 1970s. In May 2011 an announcement was made by researchers at the Chalmers University of Technology, in Gothenburg, Sweden, of the detection of the dynamical Casimir effect. In their experiment, microwave photons were generated out of the vacuum in a superconducting microwave resonator. These researchers used a modified SQUID to change the effective length of the resonator in time, mimicking a mirror moving at the required relativistic velocity. If confirmed this would be the first experimental verification of the dynamical Casimir effect. In March 2013 an article appeared on the PNAS scientific journal describing an experiment that demonstrated the dynamical Casimir effect in a Josephson metamaterial. In July 2019 an article was published describing an experiment providing evidence of optical dynamical Casimir effect in a dispersion-oscillating fibre. In 2020, Frank Wilczek et al., proposed a resolution to the information loss paradox associated with the moving mirror model of the dynamical Casimir effect. 
Constructed within the framework of quantum field theory in curved spacetime, the dynamical Casimir effect (moving mirror) has been used to help understand the Unruh effect. Repulsive forces There are few instances wherein the Casimir effect can give rise to repulsive forces between uncharged objects. Evgeny Lifshitz showed (theoretically) that in certain circumstances (most commonly involving liquids), repulsive forces can arise. This has sparked interest in applications of the Casimir effect toward the development of levitating devices. An experimental demonstration of the Casimir-based repulsion predicted by Lifshitz was carried out by Munday et al. who described it as "quantum levitation". Other scientists have also suggested the use of gain media to achieve a similar levitation effect, though this is controversial because these materials seem to violate fundamental causality constraints and the requirement of thermodynamic equilibrium (Kramers–Kronig relations). Casimir and Casimir–Polder repulsion can in fact occur for sufficiently anisotropic electrical bodies; for a review of the issues involved with repulsion see Milton et al. A notable recent development on repulsive Casimir forces relies on using chiral materials. Q.-D. Jiang at Stockholm University and Nobel Laureate Frank Wilczek at MIT show that chiral "lubricant" can generate repulsive, enhanced, and tunable Casimir interactions. Timothy Boyer showed in his work published in 1968 that a conductor with spherical symmetry will also show this repulsive force, and the result is independent of radius. Further work shows that the repulsive force can be generated with materials of carefully chosen dielectrics. Speculative applications It has been suggested that the Casimir forces have application in nanotechnology, in particular silicon integrated circuit technology based micro- and nanoelectromechanical systems, and so-called Casimir oscillators. In 1995 and 1998 Maclay et al. published the first models of a microelectromechanical system (MEMS) with Casimir forces. While not exploiting the Casimir force for useful work, the papers drew attention from the MEMS community due to the revelation that Casimir effect needs to be considered as a vital factor in the future design of MEMS. In particular, Casimir effect might be the critical factor in the stiction failure of MEMS. In 2001, Capasso et al. showed how the force can be used to control the mechanical motion of a MEMS device, The researchers suspended a polysilicon plate from a torsional rod – a twisting horizontal bar just a few microns in diameter. When they brought a metallized sphere close up to the plate, the attractive Casimir force between the two objects made the plate rotate. They also studied the dynamical behaviour of the MEMS device by making the plate oscillate. The Casimir force reduced the rate of oscillation and led to nonlinear phenomena, such as hysteresis and bistability in the frequency response of the oscillator. According to the team, the system's behaviour agreed well with theoretical calculations. The Casimir effect shows that quantum field theory allows the energy density in very small regions of space to be negative relative to the ordinary vacuum energy, and the energy densities cannot be arbitrarily negative as the theory breaks down at atomic distances. Such prominent physicists such as Stephen Hawking and Kip Thorne, have speculated that such effects might make it possible to stabilize a traversable wormhole.
Physical sciences
Quantum mechanics
Physics
7583
https://en.wikipedia.org/wiki/Cauchy%E2%80%93Riemann%20equations
Cauchy–Riemann equations
In the field of complex analysis in mathematics, the Cauchy–Riemann equations, named after Augustin Cauchy and Bernhard Riemann, consist of a system of two partial differential equations which form a necessary and sufficient condition for a complex function of a complex variable to be complex differentiable. These equations are and where and are real differentiable bivariate functions. Typically, and are respectively the real and imaginary parts of a complex-valued function of a single complex variable where and are real variables; and are real differentiable functions of the real variables. Then is complex differentiable at a complex point if and only if the partial derivatives of and satisfy the Cauchy–Riemann equations at that point. A holomorphic function is a complex function that is differentiable at every point of some open subset of the complex plane . It has been proved that holomorphic functions are analytic and analytic complex functions are complex-differentiable. In particular, holomorphic functions are infinitely complex-differentiable. This equivalence between differentiability and analyticity is the starting point of all complex analysis. History The Cauchy–Riemann equations first appeared in the work of Jean le Rond d'Alembert. Later, Leonhard Euler connected this system to the analytic functions. Cauchy then used these equations to construct his theory of functions. Riemann's dissertation on the theory of functions appeared in 1851. Simple example Suppose that . The complex-valued function is differentiable at any point in the complex plane. The real part and the imaginary part are and their partial derivatives are We see that indeed the Cauchy–Riemann equations are satisfied, and . Interpretation and reformulation The Cauchy-Riemann equations are one way of looking at the condition for a function to be differentiable in the sense of complex analysis: in other words, they encapsulate the notion of function of a complex variable by means of conventional differential calculus. In the theory there are several other major ways of looking at this notion, and the translation of the condition into other language is often needed. Conformal mappings First, the Cauchy–Riemann equations may be written in complex form In this form, the equations correspond structurally to the condition that the Jacobian matrix is of the form where and . A matrix of this form is the matrix representation of a complex number. Geometrically, such a matrix is always the composition of a rotation with a scaling, and in particular preserves angles. The Jacobian of a function takes infinitesimal line segments at the intersection of two curves in and rotates them to the corresponding segments in . Consequently, a function satisfying the Cauchy–Riemann equations, with a nonzero derivative, preserves the angle between curves in the plane. That is, the Cauchy–Riemann equations are the conditions for a function to be conformal. Moreover, because the composition of a conformal transformation with another conformal transformation is also conformal, the composition of a solution of the Cauchy–Riemann equations with a conformal map must itself solve the Cauchy–Riemann equations. Thus the Cauchy–Riemann equations are conformally invariant. Complex differentiability Let where and are real-valued functions, be a complex-valued function of a complex variable where and are real variables. so the function can also be regarded as a function of real variables and . 
Then, the complex-derivative of at a point is defined by provided this limit exists (that is, the limit exists along every path approaching , and does not depend on the chosen path). A fundamental result of complex analysis is that is complex differentiable at (that is, it has a complex-derivative), if and only if the bivariate real functions and are differentiable at and satisfy the Cauchy–Riemann equations at this point. In fact, if the complex derivative exists at , then it may be computed by taking the limit at along the real axis and the imaginary axis, and the two limits must be equal. Along the real axis, the limit is and along the imaginary axis, the limit is So, the equality of the derivatives implies which is the complex form of Cauchy–Riemann equations at . (Note that if is complex differentiable at , it is also real differentiable and the Jacobian of at is the complex scalar , regarded as a real-linear map of , since the limit as .) Conversely, if is differentiable at (in the real sense) and satisfies the Cauchy-Riemann equations there, then it is complex-differentiable at this point. Assume that as a function of two real variables and is differentiable at (real differentiable). This is equivalent to the existence of the following linear approximation where , , , and as . Since and , the above can be re-written as Now, if is real, , while if it is imaginary, then . Therefore, the second term is independent of the path of the limit when (and only when) it vanishes identically: , which is precisely the Cauchy–Riemann equations in the complex form. This proof also shows that, in that case, Note that the hypothesis of real differentiability at the point is essential and cannot be dispensed with. For example, the function , regarded as a complex function with imaginary part identically zero, has both partial derivatives at , and it moreover satisfies the Cauchy–Riemann equations at that point, but it is not differentiable in the sense of real functions (of several variables), and so the first condition, that of real differentiability, is not met. Therefore, this function is not complex differentiable. Some sources state a sufficient condition for the complex differentiability at a point as, in addition to the Cauchy–Riemann equations, the partial derivatives of and be continuous at the point because this continuity condition ensures the existence of the aforementioned linear approximation. Note that it is not a necessary condition for the complex differentiability. For example, the function is complex differentiable at 0, but its real and imaginary parts have discontinuous partial derivatives there. Since complex differentiability is usually considered in an open set, where it in fact implies continuity of all partial derivatives (see below), this distinction is often elided in the literature. Independence of the complex conjugate The above proof suggests another interpretation of the Cauchy–Riemann equations. The complex conjugate of , denoted , is defined by for real variables and . Defining the two Wirtinger derivatives as the Cauchy–Riemann equations can then be written as a single equation and the complex derivative of in that case is In this form, the Cauchy–Riemann equations can be interpreted as the statement that a complex function of a complex variable is independent of the variable . As such, we can view analytic functions as true functions of one complex variable () instead of complex functions of two real variables ( and ). 
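Both the two-equation form and the single Wirtinger-derivative form can be verified symbolically for the simple example f(z) = z² considered earlier; the following SymPy sketch is illustrative only.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

f = z**2                       # the "simple example" from earlier
u = sp.re(sp.expand(f))        # real part:      x**2 - y**2
v = sp.im(sp.expand(f))        # imaginary part: 2*x*y

# Cauchy-Riemann equations: u_x = v_y and u_y = -v_x
assert sp.simplify(sp.diff(u, x) - sp.diff(v, y)) == 0
assert sp.simplify(sp.diff(u, y) + sp.diff(v, x)) == 0

# Equivalent single-equation form: the Wirtinger derivative
# df/dzbar = (1/2)(d/dx + i*d/dy) f vanishes.
dfdzbar = sp.Rational(1, 2) * (sp.diff(f, x) + sp.I * sp.diff(f, y))
assert sp.simplify(dfdzbar) == 0
```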
Physical interpretation A standard physical interpretation of the Cauchy–Riemann equations going back to Riemann's work on function theory is that u represents a velocity potential of an incompressible steady fluid flow in the plane, and v is its stream function. Suppose that the pair of (twice continuously differentiable) functions u and v satisfies the Cauchy–Riemann equations. We will take u to be a velocity potential, meaning that we imagine a flow of fluid in the plane such that the velocity vector of the fluid at each point of the plane is equal to the gradient of u, defined by By differentiating the Cauchy–Riemann equations for the functions u and v, with the symmetry of second derivatives, one shows that u solves Laplace's equation: That is, u is a harmonic function. This means that the divergence of the gradient is zero, and so the fluid is incompressible. The function v also satisfies the Laplace equation, by a similar analysis. Also, the Cauchy–Riemann equations imply that the dot product (), i.e., the direction of the maximum slope of u and that of v are orthogonal to each other. This implies that the gradient of u must point along the curves; so these are the streamlines of the flow. The curves are the equipotential curves of the flow. A holomorphic function can therefore be visualized by plotting the two families of level curves and . Near points where the gradient of u (or, equivalently, v) is not zero, these families form an orthogonal family of curves. At the points where , the stationary points of the flow, the equipotential curves of intersect. The streamlines also intersect at the same point, bisecting the angles formed by the equipotential curves. Harmonic vector field Another interpretation of the Cauchy–Riemann equations can be found in Pólya & Szegő. Suppose that u and v satisfy the Cauchy–Riemann equations in an open subset of R2, and consider the vector field regarded as a (real) two-component vector. Then the second Cauchy–Riemann equation () asserts that is irrotational (its curl is 0): The first Cauchy–Riemann equation () asserts that the vector field is solenoidal (or divergence-free): Owing respectively to Green's theorem and the divergence theorem, such a field is necessarily a conservative one, and it is free from sources or sinks, having net flux equal to zero through any open domain without holes. (These two observations combine as real and imaginary parts in Cauchy's integral theorem.) In fluid dynamics, such a vector field is a potential flow. In magnetostatics, such vector fields model static magnetic fields on a region of the plane containing no current. In electrostatics, they model static electric fields in a region of the plane containing no electric charge. This interpretation can equivalently be restated in the language of differential forms. The pair u and v satisfy the Cauchy–Riemann equations if and only if the one-form is both closed and coclosed (a harmonic differential form). Preservation of complex structure Another formulation of the Cauchy–Riemann equations involves the complex structure in the plane, given by This is a complex structure in the sense that the square of J is the negative of the 2×2 identity matrix: . As above, if u(x,y) and v(x,y) are two functions in the plane, put The Jacobian matrix of f is the matrix of partial derivatives Then the pair of functions u, v satisfies the Cauchy–Riemann equations if and only if the 2×2 matrix Df commutes with J. 
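For the running example u = x² − y², v = 2xy (the real and imaginary parts of z²), the harmonicity of u and v, the orthogonality of their gradients, and the commutation of the Jacobian with the complex structure can all be verified symbolically. The sketch below assumes the usual matrix representation J = [[0, −1], [1, 0]].

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = x**2 - y**2          # velocity potential (real part of z**2)
v = 2 * x * y            # stream function   (imaginary part of z**2)

# u and v are harmonic: they satisfy Laplace's equation.
assert sp.diff(u, x, 2) + sp.diff(u, y, 2) == 0
assert sp.diff(v, x, 2) + sp.diff(v, y, 2) == 0

# Their gradients are orthogonal, so the streamlines v = const cross the
# equipotentials u = const at right angles (away from stationary points).
grad_u = sp.Matrix([sp.diff(u, x), sp.diff(u, y)])
grad_v = sp.Matrix([sp.diff(v, x), sp.diff(v, y)])
assert sp.simplify(grad_u.dot(grad_v)) == 0

# Preservation of the complex structure: the Jacobian Df commutes with J.
Df = sp.Matrix([[sp.diff(u, x), sp.diff(u, y)],
                [sp.diff(v, x), sp.diff(v, y)]])
J = sp.Matrix([[0, -1],
               [1,  0]])
assert sp.simplify(Df * J - J * Df) == sp.zeros(2, 2)
```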
This interpretation is useful in symplectic geometry, where it is the starting point for the study of pseudoholomorphic curves. Other representations Other representations of the Cauchy–Riemann equations occasionally arise in other coordinate systems. If () and () hold for a differentiable pair of functions u and v, then so do for any coordinate system such that the pair is orthonormal and positively oriented. As a consequence, in particular, in the system of coordinates given by the polar representation , the equations then take the form Combining these into one equation for gives The inhomogeneous Cauchy–Riemann equations consist of the two equations for a pair of unknown functions and of two real variables for some given functions and defined in an open subset of R2. These equations are usually combined into a single equation where f = u + iv and 𝜑 = (α + iβ)/2. If 𝜑 is Ck, then the inhomogeneous equation is explicitly solvable in any bounded domain D, provided 𝜑 is continuous on the closure of D. Indeed, by the Cauchy integral formula, for all ζ ∈ D. Generalizations Goursat's theorem and its generalizations Suppose that is a complex-valued function which is differentiable as a function . Then Goursat's theorem asserts that f is analytic in an open complex domain Ω if and only if it satisfies the Cauchy–Riemann equation in the domain. In particular, continuous differentiability of f need not be assumed. The hypotheses of Goursat's theorem can be weakened significantly. If is continuous in an open set Ω and the partial derivatives of f with respect to x and y exist in Ω, and satisfy the Cauchy–Riemann equations throughout Ω, then f is holomorphic (and thus analytic). This result is the Looman–Menchoff theorem. The hypothesis that f obey the Cauchy–Riemann equations throughout the domain Ω is essential. It is possible to construct a continuous function satisfying the Cauchy–Riemann equations at a point, but which is not analytic at the point (e.g., . Similarly, some additional assumption is needed besides the Cauchy–Riemann equations (such as continuity), as the following example illustrates which satisfies the Cauchy–Riemann equations everywhere, but fails to be continuous at z = 0. Nevertheless, if a function satisfies the Cauchy–Riemann equations in an open set in a weak sense, then the function is analytic. More precisely: If is locally integrable in an open domain and satisfies the Cauchy–Riemann equations weakly, then agrees almost everywhere with an analytic function in . This is in fact a special case of a more general result on the regularity of solutions of hypoelliptic partial differential equations. Several variables There are Cauchy–Riemann equations, appropriately generalized, in the theory of several complex variables. They form a significant overdetermined system of PDEs. This is done using a straightforward generalization of the Wirtinger derivative, where the function in question is required to have the (partial) Wirtinger derivative with respect to each complex variable vanish. Complex differential forms As often formulated, the d-bar operator annihilates holomorphic functions. This generalizes most directly the formulation where Bäcklund transform Viewed as conjugate harmonic functions, the Cauchy–Riemann equations are a simple example of a Bäcklund transform. More complicated, generally non-linear Bäcklund transforms, such as in the sine-Gordon equation, are of great interest in the theory of solitons and integrable systems. 
Definition in Clifford algebra In the Clifford algebra , the complex number is represented as where , (, so ). The Dirac operator in this Clifford algebra is defined as . The function is considered analytic if and only if , which can be calculated in the following way: Grouping by and : Hence, in traditional notation: Conformal mappings in higher dimensions Let Ω be an open set in the Euclidean space . The equation for an orientation-preserving mapping to be a conformal mapping (that is, angle-preserving) is that where Df is the Jacobian matrix, with transpose , and I denotes the identity matrix. For , this system is equivalent to the standard Cauchy–Riemann equations of complex variables, and the solutions are holomorphic functions. In dimension , this is still sometimes called the Cauchy–Riemann system, and Liouville's theorem implies, under suitable smoothness assumptions, that any such mapping is a Möbius transformation. Lie pseudogroups One might instead seek to generalize the Cauchy–Riemann equations by asking, more generally, when the solutions of a system of PDEs are closed under composition. The theory of Lie pseudogroups addresses these kinds of questions.
Mathematics
Complex analysis
null
7587
https://en.wikipedia.org/wiki/Cable%20television
Cable television
Cable television is a system of delivering television programming to consumers via radio frequency (RF) signals transmitted through coaxial cables, or in more recent systems, light pulses through fibre-optic cables. This contrasts with broadcast television, in which the television signal is transmitted over-the-air by radio waves and received by a television antenna, or satellite television, in which the television signal is transmitted over-the-air by radio waves from a communications satellite and received by a satellite dish on the roof. FM radio programming, high-speed Internet, telephone services, and similar non-television services may also be provided through these cables. Analog television was standard in the 20th century, but since the 2000s, cable systems have been upgraded to digital cable operation. A cable channel (sometimes known as a cable network) is a television network available via cable television. Many of the same channels are distributed through satellite television. Alternative terms include non-broadcast channel or programming service, the latter being mainly used in legal contexts. The abbreviation CATV is used in the US for cable television and originally stood for community antenna television, from cable television's origins in 1948; in areas where over-the-air TV reception was limited by distance from transmitters or mountainous terrain, large community antennas were constructed, and cable was run from them to individual homes. In 1968, 6.4% of Americans had cable television. The number increased to 7.5% in 1978. By 1988, 52.8% of all households were using cable. The number further increased to 62.4% in 1994. Distribution To receive cable television at a given location, cable distribution lines must be available on the local utility poles or underground utility lines. Coaxial cable brings the signal to the customer's building through a service drop, an overhead or underground cable. If the subscriber's building does not have a cable service drop, the cable company will install one. The standard cable used in the U.S. is RG-6, which has a 75 ohm impedance, and connects with a type F connector. The cable company's portion of the wiring usually ends at a distribution box on the building exterior, and built-in cable wiring in the walls usually distributes the signal to jacks in different rooms to which televisions are connected. Multiple cables to different rooms are split off the incoming cable with a small device called a splitter. There are two standards for cable television; older analog cable, and newer digital cable which can carry data signals used by digital television receivers such as high-definition television (HDTV) equipment. All cable companies in the United States have switched to or are in the course of switching to digital cable television since it was first introduced in the late 1990s. Most cable companies require a set-top box (cable converter box) or a slot on one's TV set for conditional access module cards to view their cable channels, even on newer televisions with digital cable QAM tuners, because most digital cable channels are now encrypted, or scrambled, to reduce cable service theft. A cable from the jack in the wall is attached to the input of the box, and an output cable from the box is attached to the television, usually the RF-IN or composite input on older TVs. Since the set-top box only decodes the single channel that is being watched, each television in the house requires a separate box. 
Some unencrypted channels, usually traditional over-the-air broadcast networks, can be displayed without a receiver box. The cable company will provide set-top boxes based on the level of service a customer purchases, from basic set-top boxes with a standard-definition picture connected through the standard coaxial connection on the TV, to high-definition wireless digital video recorder (DVR) receivers connected via HDMI or component. Older analog television sets are cable ready and can receive the old analog cable without a set-top box. To receive digital cable channels on an analog television set, even unencrypted ones, requires a different type of box, a digital television adapter supplied by the cable company or purchased by the subscriber. Another new distribution method that takes advantage of the low cost high quality DVB distribution to residential areas, uses TV gateways to convert the DVB-C, DVB-C2 stream to IP for distribution of TV over IP network in the home. Many cable companies offer internet access through DOCSIS. Principle of operation In the most common system, multiple television channels (as many as 500, although this varies depending on the provider's available channel capacity) are distributed to subscriber residences through a coaxial cable, which comes from a trunkline supported on utility poles originating at the cable company's local distribution facility, called the headend. Many channels can be transmitted through one coaxial cable by a technique called frequency division multiplexing. At the headend, each television channel is translated to a different frequency. By giving each channel a different frequency slot on the cable, the separate television signals do not interfere with each other. At an outdoor cable box on the subscriber's residence, the company's service drop cable is connected to cables distributing the signal to different rooms in the building. At each television, the subscriber's television or a set-top box provided by the cable company translates the desired channel back to its original frequency (baseband), and it is displayed onscreen. Due to widespread cable theft in earlier analog systems, the signals are typically encrypted on modern digital cable systems, and the set-top box must be activated by an activation code sent by the cable company before it will function, which is only sent after the subscriber signs up. If the subscriber fails to pay their bill, the cable company can send a signal to deactivate the subscriber's box, preventing reception. There are also usually upstream channels on the cable to send data from the customer box to the cable headend, for advanced features such as requesting pay-per-view shows or movies, cable internet access, and cable telephone service. The downstream channels occupy a band of frequencies from approximately 50 MHz to 1 GHz, while the upstream channels occupy frequencies of 5 to 42 MHz. Subscribers pay with a monthly fee. Subscribers can choose from several levels of service, with premium packages including more channels but costing a higher rate. At the local headend, the feed signals from the individual television channels are received by dish antennas from communication satellites. Additional local channels, such as local broadcast television stations, educational channels from local colleges, and community access channels devoted to local governments (PEG channels) are usually included on the cable service. 
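A rough back-of-the-envelope calculation shows how frequency-division multiplexing over this band yields the channel counts cited earlier; the 6 MHz channel width and the per-slot digital payload are assumptions typical of North American systems, not figures taken from this article.

```python
# Downstream band quoted above: roughly 50 MHz to 1 GHz.
downstream_hz = 1_000_000_000 - 50_000_000

# Assumption: 6 MHz per RF channel slot (North American practice).
slot_hz = 6_000_000
rf_slots = downstream_hz // slot_hz
print(rf_slots)                      # ~158 frequency slots

# Assumption: a 256-QAM digital slot carries roughly 38 Mbit/s, enough
# for several standard-definition programs, which is how digital systems
# reach "as many as 500" (or more) program channels.
programs_per_slot = 10               # illustrative figure for SD programs
print(rf_slots * programs_per_slot)  # on the order of 1,500 SD programs
```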
Commercial advertisements for local business are also inserted in the programming at the headend (the individual channels, which are distributed nationally, also have their own nationally oriented commercials). Hybrid fiber-coaxial Modern cable systems are large, with a single network and headend often serving an entire metropolitan area. Most systems use hybrid fiber-coaxial (HFC) distribution; this means the trunklines that carry the signal from the headend to local neighborhoods are optical fiber, providing greater bandwidth and also extra capacity for future expansion. At the headend, the electrical signal is translated into an optical signal and sent through the fiber. The fiber trunkline goes to several distribution hubs, from which multiple fibers fan out to carry the signal to boxes called optical nodes in local communities. At the optical node, the optical signal is translated back into an electrical signal and carried by coaxial cable distribution lines on utility poles, from which cables branch out to a series of signal amplifiers and line extenders. These devices carry the signal to customers via passive RF devices called taps. History The very first cable networks were operated locally, notably in 1936 by Rediffusion in London in the United Kingdom and, the same year, in Berlin in Germany for the Olympic Games, and from 1948 onwards in the United States and Switzerland. This type of local cable network was mainly used to relay terrestrial channels in geographical areas poorly served by terrestrial television signals. In the United States Cable television began in the United States as a commercial business in the 1950s. The early systems simply received weak (broadcast) channels, amplified them, and sent them over unshielded wires to the subscribers, limited to a community or to adjacent communities. The receiving antenna would be taller than any individual subscriber could afford, thus bringing in stronger signals; in hilly or mountainous terrain it would be placed at a high elevation. At the outset, cable systems only served smaller communities that had no television stations of their own and could not easily receive signals from stations in cities because of distance or hilly terrain. In Canada, however, communities with their own signals were fertile cable markets, as viewers wanted to receive American signals. Rarely, as in the college town of Alfred, New York, U.S. cable systems retransmitted Canadian channels. Although early (VHF) television receivers could receive 12 channels (2–13), the maximum number of channels that could be broadcast in one city was 7: channels 2, 4, either 5 or 6, 7, 9, 11 and 13, as receivers at the time were unable to receive strong (local) signals on adjacent channels without distortion. (There were frequency gaps between 4 and 5, and between 6 and 7, which allowed both to be used in the same city.) As equipment improved, all twelve channels could be utilized, except where a local VHF television station broadcast. Channels occupied by local broadcasters were not usable for signals deemed to be a priority, but technology allowed low-priority signals to be placed on such channels by synchronizing their blanking intervals; because TV sets could not fully reconcile these blanking intervals with the slight timing changes introduced as the signal travelled through the cable, ghosting resulted. The bandwidth of the amplifiers was also limited, meaning frequencies over 250 MHz were difficult to transmit to distant portions of the coaxial network, and UHF channels could not be used at all.
To expand beyond 12 channels, non-standard midband channels had to be used, located between the FM band and Channel 7, or superband beyond Channel 13 up to about 300 MHz; these channels initially were only accessible using separate tuner boxes that sent the chosen channel into the TV set on Channel 2, 3 or 4. Initially, UHF broadcast stations were at a disadvantage because the standard TV sets in use at the time were unable to receive their channels. With the passage of the All-Channel Receiver Act, all new television sets sold from 1964 were required to include a UHF tuner; nonetheless, it would still take a few years for UHF stations to become competitive. Before being added to the cable box itself, these midband channels were used for early incarnations of pay TV, e.g. The Z Channel (Los Angeles) and HBO, but transmitted in the clear (i.e. not scrambled), as standard TV sets of the period could not pick up the signal, nor could the average consumer de-tune the normal stations to be able to receive it. Once tuners that could receive select mid-band and super-band channels began to be incorporated into standard television sets, broadcasters were forced to either install scrambling circuitry or move these signals further out of the range of reception for early cable-ready TVs and VCRs. However, once consumer sets had the ability to receive all 181 FCC-allocated channels, premium broadcasters were left with no choice but to scramble. The descrambling circuitry was often published in electronics hobby magazines such as Popular Science and Popular Electronics, allowing anybody with more than a rudimentary knowledge of broadcast electronics to build their own and receive the programming without cost. Later, the cable operators began to carry FM radio stations, and encouraged subscribers to connect their FM stereo sets to cable. Before stereo and bilingual TV sound became common, pay-TV channel sound was added to the FM stereo cable line-ups. About this time, operators expanded beyond the 12-channel dial to use the midband and superband VHF channels adjacent to the high band 7–13 of North American television frequencies. Some operators, as in Cornwall, Ontario, used a dual distribution network with Channels 2–13 on each of the two cables. During the 1980s, United States regulations not unlike those for public, educational, and government access (PEG) channels created the beginning of cable-originated live television programming. As cable penetration increased, numerous cable-only TV stations were launched, many with their own news bureaus that could provide more immediate and more localized content than that provided by the nearest network newscast. Such stations may use on-air branding similar to that of the nearby broadcast network affiliate, but because these stations do not broadcast over the air and are not regulated by the FCC, their call signs are meaningless. These stations evolved partially into today's over-the-air digital subchannels, where a main broadcast TV station, e.g. NBC 37*, would – in the case of no local CBS or ABC station being available – rebroadcast the programming from a nearby affiliate but fill in with its own news and other community programming to suit its own locale. Many live local programs serving local interests were subsequently created all over the United States in most major television markets in the early 1980s. This evolved into today's many cable-only broadcasts of diverse programming, including cable-only produced television movies and miniseries.
Cable specialty channels, starting with channels oriented to show movies and large sporting or performance events, diversified further, and narrowcasting became common. By the late 1980s, cable-only signals outnumbered broadcast signals on cable systems, some of which by this time had expanded beyond 35 channels. By the mid-1980s in Canada, cable operators were allowed by the regulators to enter into distribution contracts with cable networks on their own. By the 1990s, tiers became common, with customers able to subscribe to different tiers to obtain different selections of additional channels above the basic selection. By subscribing to additional tiers, customers could get specialty channels, movie channels, and foreign channels. Large cable companies used addressable descramblers to limit access to premium channels for customers not subscribing to higher tiers; however, the above magazines often published workarounds for that technology as well. During the 1990s, the pressure to accommodate the growing array of offerings resulted in digital transmission that made more efficient use of the VHF signal capacity; fibre optics was commonly used to carry signals into areas near the home, where coax could carry higher frequencies over the short remaining distance. For a time in the 1980s and 1990s, television receivers and VCRs were equipped to receive the mid-band and super-band channels. However, because the descrambling circuitry was for a time present in these tuners, depriving the cable operator of much of their revenue, such cable-ready tuners are rarely used now – requiring a return to the set-top boxes used from the 1970s onward. The digital television transition in the United States has put all signals, broadcast and cable, into digital form, rendering analog cable television service a rarity, found in an ever-dwindling number of markets. Analog television sets are still accommodated, but their tuners are mostly obsolete, depending entirely on the set-top box. Deployments by continent Cable television is mostly available in North America, Europe, Australia, Asia and South America. Cable television has had little success in Africa, as it is not cost-effective to lay cables in sparsely populated areas. Multichannel multipoint distribution service, a microwave-based system, may be used instead. Other cable-based services Coaxial cables are capable of bi-directional carriage of signals as well as the transmission of large amounts of data. Cable television signals use only a portion of the bandwidth available over coaxial lines. This leaves plenty of space available for other digital services such as cable internet, cable telephony and wireless services, using both unlicensed and licensed spectra. Broadband internet access is achieved over coaxial cable by using cable modems to convert the network data into a type of digital signal that can be transferred over coaxial cable. One problem with some cable systems is that the older amplifiers placed along the cable routes are unidirectional; thus, to allow for uploading of data, the customer would need to use an analog telephone modem to provide the upstream connection. This limited the upstream speed to 31.2 kbit/s and prevented the always-on convenience that broadband internet typically provides. Many large cable systems have upgraded or are upgrading their equipment to allow for bi-directional signals, thus allowing for greater upload speed and always-on convenience, though these upgrades are expensive.
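The upstream bottleneck described above can be made concrete with a small worked comparison. The 31.2 kbit/s figure is from the text; the 10 Mbit/s two-way cable upstream used for contrast is an assumed, illustrative value rather than a quoted one.

```python
# Rough comparison of a dial-up telephone return path (31.2 kbit/s, as
# quoted in the text for unidirectional cable plants) with an assumed
# 10 Mbit/s upstream on an upgraded bi-directional system.
# Protocol overhead is ignored; this is only an order-of-magnitude sketch.

def upload_seconds(file_megabytes: float, upstream_kbit_per_s: float) -> float:
    """Seconds needed to send a file of the given size up the return path."""
    bits = file_megabytes * 8_000_000          # 1 MB treated as 10^6 bytes
    return bits / (upstream_kbit_per_s * 1_000)

DIALUP_RETURN_KBIT_S = 31.2        # limit quoted in the text
ASSUMED_CABLE_UP_KBIT_S = 10_000   # hypothetical 10 Mbit/s upstream

print(round(upload_seconds(5, DIALUP_RETURN_KBIT_S)))     # ~1282 s for 5 MB
print(round(upload_seconds(5, ASSUMED_CABLE_UP_KBIT_S)))  # ~4 s
```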
In North America, Australia and Europe, many cable operators have already introduced cable telephone service, which operates just like the service of existing fixed-line operators. This service involves installing a special telephone interface at the customer's premises that converts the analog signals from the customer's in-home wiring into a digital signal, which is then sent on the local loop (replacing the analog last mile, or plain old telephone service (POTS)) to the company's switching center, where it is connected to the public switched telephone network (PSTN). The biggest obstacle to cable telephone service is the need for nearly 100% reliable service for emergency calls. One of the standards available for digital cable telephony, PacketCable, seems to be the most promising and able to work with the quality of service (QoS) demands of traditional analog plain old telephone service (POTS). The biggest advantage of digital cable telephone service is similar to the advantage of digital cable, namely that data can be compressed, resulting in much less bandwidth used than a dedicated analog circuit-switched service. Other advantages include better voice quality and integration with a Voice over Internet Protocol (VoIP) network providing cheap or unlimited nationwide and international calling. In many cases, digital cable telephone service is separate from the cable modem service offered by many cable companies and does not rely on Internet Protocol (IP) traffic or the Internet. Traditional cable television providers and traditional telecommunication companies increasingly compete in providing voice, video and data services to residences. The combination of television, telephone and Internet access is commonly called triple play, regardless of whether CATV or telcos offer it.
Technology
Media and communication
null
7591
https://en.wikipedia.org/wiki/Cholera
Cholera
Cholera is an infection of the small intestine by some strains of the bacterium Vibrio cholerae. Symptoms may range from none to mild to severe. The classic symptom is large amounts of watery diarrhea lasting a few days. Vomiting and muscle cramps may also occur. Diarrhea can be so severe that it leads within hours to severe dehydration and electrolyte imbalance. This may result in sunken eyes, cold skin, decreased skin elasticity, and wrinkling of the hands and feet. Dehydration can cause the skin to turn bluish. Symptoms start two hours to five days after exposure. Cholera is caused by a number of types of Vibrio cholerae, with some types producing more severe disease than others. It is spread mostly by unsafe water and unsafe food that has been contaminated with human feces containing the bacteria. Undercooked shellfish is a common source. Humans are the only known host for the bacteria. Risk factors for the disease include poor sanitation, insufficient clean drinking water, and poverty. Cholera can be diagnosed by a stool test, or a rapid dipstick test, although the dipstick test is less accurate. Prevention methods against cholera include improved sanitation and access to clean water. Cholera vaccines that are given by mouth provide reasonable protection for about six months, and confer the added benefit of protecting against another type of diarrhea caused by E. coli. In 2017, the US Food and Drug Administration (FDA) approved a single-dose, live, oral cholera vaccine called Vaxchora for adults aged 18–64 who are travelling to an area of active cholera transmission. It offers limited protection to young children. People who survive an episode of cholera have long-lasting immunity for at least three years (the period tested). The primary treatment for affected individuals is oral rehydration salts (ORS), the replacement of fluids and electrolytes by using slightly sweet and salty solutions. Rice-based solutions are preferred. In children, zinc supplementation has also been found to improve outcomes. In severe cases, intravenous fluids, such as Ringer's lactate, may be required, and antibiotics may be beneficial. The choice of antibiotic is aided by antibiotic sensitivity testing. Cholera continues to affect an estimated 3–5 million people worldwide and causes 28,800–130,000 deaths a year. To date, seven cholera pandemics have occurred, with the most recent beginning in 1961 and continuing today. The illness is rare in high-income countries, and affects children most severely. Cholera occurs as both outbreaks and chronically in certain areas. Areas with an ongoing risk of disease include Africa and Southeast Asia. The risk of death among those affected is usually less than 5% given improved treatment, but may be as high as 50% without access to such treatment. Descriptions of cholera are found as early as the 5th century BCE in Sanskrit literature. In Europe, cholera was a term initially used to describe any kind of gastroenteritis, and was not used for this disease until the early 19th century. The study of cholera in England by John Snow between 1849 and 1854 led to significant advances in the field of epidemiology because of his insights about transmission via contaminated water, and his map of the outbreak was the first recorded instance of epidemiological tracking. Signs and symptoms The primary symptoms of cholera are profuse diarrhea and vomiting of clear fluid. These symptoms usually start suddenly, half a day to five days after ingestion of the bacteria.
The diarrhea is frequently described as "rice water" in nature and may have a fishy odor. An untreated person with cholera may produce 10 to 20 litres of diarrhea a day. Severe cholera, without treatment, kills about half of affected individuals. If the severe diarrhea is not treated, it can result in life-threatening dehydration and electrolyte imbalances. Estimates of the ratio of asymptomatic to symptomatic infections have ranged from 3 to 100. Cholera has been nicknamed the "blue death" because a person's skin may turn bluish-gray from extreme loss of fluids. Fever is rare and should raise suspicion for secondary infection. Patients can be lethargic and might have sunken eyes, dry mouth, cold clammy skin, or wrinkled hands and feet. Kussmaul breathing, a deep and labored breathing pattern, can occur because of acidosis from stool bicarbonate losses and lactic acidosis associated with poor perfusion. Blood pressure drops due to dehydration, the peripheral pulse is rapid and thready, and urine output decreases with time. Muscle cramping and weakness, altered consciousness, seizures, or even coma due to electrolyte imbalances are common, especially in children. Cause Transmission Cholera bacteria have been found in shellfish and plankton. Transmission is usually through the fecal-oral route of contaminated food or water caused by poor sanitation. Most cholera cases in developed countries are a result of transmission by food, while in developing countries it is more often water. Food transmission can occur when people harvest seafood such as oysters in waters infected with sewage, as Vibrio cholerae accumulates in planktonic crustaceans and the oysters eat the zooplankton. People infected with cholera often have diarrhea, and disease transmission may occur if this highly liquid stool, colloquially referred to as "rice-water", contaminates water used by others. A single diarrheal event can cause a one-million-fold increase in numbers of V. cholerae in the environment. The source of the contamination is typically other people with cholera when their untreated diarrheal discharge is allowed to get into waterways, groundwater or drinking water supplies. Drinking any contaminated water and eating any foods washed in the water, as well as shellfish living in the affected waterway, can cause a person to contract an infection. Cholera is rarely spread directly from person to person. V. cholerae also exists outside the human body in natural water sources, either by itself or through interacting with phytoplankton, zooplankton, or biotic and abiotic detritus. Drinking such water can also result in the disease, even without prior contamination through fecal matter. Selective pressures exist, however, in the aquatic environment that may reduce the virulence of V. cholerae. Specifically, animal models indicate that the transcriptional profile of the pathogen changes as it prepares to enter an aquatic environment. This transcriptional change results in a loss of ability of V. cholerae to be cultured on standard media, a phenotype referred to as 'viable but non-culturable' (VBNC) or, more conservatively, 'active but non-culturable' (ABNC). One study indicates that the culturability of V. cholerae drops 90% within 24 hours of entering the water, and furthermore that this loss in culturability is associated with a loss in virulence. Both toxic and non-toxic strains exist. Non-toxic strains can acquire toxicity through a temperate bacteriophage.
Susceptibility About 100million bacteria must typically be ingested to cause cholera in a normal healthy adult. This dose, however, is less in those with lowered gastric acidity (for instance those using proton pump inhibitors). Children are also more susceptible, with two- to four-year-olds having the highest rates of infection. Individuals' susceptibility to cholera is also affected by their blood type, with those with type O blood being the most susceptible. Persons with lowered immunity, such as persons with AIDS or malnourished children, are more likely to develop a severe case if they become infected. Any individual, even a healthy adult in middle age, can undergo a severe case, and each person's case should be measured by the loss of fluids, preferably in consultation with a professional health care provider. The cystic fibrosis genetic mutation known as delta-F508 in humans has been said to maintain a selective heterozygous advantage: heterozygous carriers of the mutation (who are not affected by cystic fibrosis) are more resistant to V. cholerae infections. In this model, the genetic deficiency in the cystic fibrosis transmembrane conductance regulator channel proteins interferes with bacteria binding to the intestinal epithelium, thus reducing the effects of an infection. Mechanism When consumed, most bacteria do not survive the acidic conditions of the human stomach. The few surviving bacteria conserve their energy and stored nutrients during the passage through the stomach by shutting down protein production. When the surviving bacteria exit the stomach and reach the small intestine, they must propel themselves through the thick mucus that lines the small intestine to reach the intestinal walls where they can attach and thrive. Once the cholera bacteria reach the intestinal wall, they no longer need the flagella to move. The bacteria stop producing the protein flagellin to conserve energy and nutrients by changing the mix of proteins that they express in response to the changed chemical surroundings. On reaching the intestinal wall, V. cholerae start producing the toxic proteins that give the infected person a watery diarrhea. This carries the multiplying new generations of V. cholerae bacteria out into the drinking water of the next host if proper sanitation measures are not in place. The cholera toxin (CTX or CT) is an oligomeric complex made up of six protein subunits: a single copy of the A subunit (part A), and five copies of the B subunit (part B), connected by a disulfide bond. The five B subunits form a five-membered ring that binds to GM1 gangliosides on the surface of the intestinal epithelium cells. The A1 portion of the A subunit is an enzyme that ADP-ribosylates G proteins, while the A2 chain fits into the central pore of the B subunit ring. Upon binding, the complex is taken into the cell via receptor-mediated endocytosis. Once inside the cell, the disulfide bond is reduced, and the A1 subunit is freed to bind with a human partner protein called ADP-ribosylation factor 6 (Arf6). Binding exposes its active site, allowing it to permanently ribosylate the Gs alpha subunit of the heterotrimeric G protein. This results in constitutive cAMP production, which in turn leads to the secretion of water, sodium, potassium, and bicarbonate into the lumen of the small intestine and rapid dehydration. The gene encoding the cholera toxin was introduced into V. cholerae by horizontal gene transfer. Virulent strains of V. 
cholerae carry a variant of a temperate bacteriophage called CTXφ. Microbiologists have studied the genetic mechanisms by which the V. cholerae bacteria turn off the production of some proteins and turn on the production of other proteins as they respond to the series of chemical environments they encounter, passing through the stomach, through the mucous layer of the small intestine, and on to the intestinal wall. Of particular interest have been the genetic mechanisms by which cholera bacteria turn on the protein production of the toxins that interact with host cell mechanisms to pump chloride ions into the small intestine, creating an ionic pressure which prevents sodium ions from entering the cell. The chloride and sodium ions create a salt-water environment in the small intestines, which through osmosis can pull up to six liters of water per day through the intestinal cells, creating the massive amounts of diarrhea. The host can become rapidly dehydrated unless treated properly. By inserting separate, successive sections of V. cholerae DNA into the DNA of other bacteria, such as E. coli that would not naturally produce the protein toxins, researchers have investigated the mechanisms by which V. cholerae responds to the changing chemical environments of the stomach, mucous layers, and intestinal wall. Researchers have discovered a complex cascade of regulatory proteins controls expression of V. cholerae virulence determinants. In responding to the chemical environment at the intestinal wall, the V. cholerae bacteria produce the TcpP/TcpH proteins, which, together with the ToxR/ToxS proteins, activate the expression of the ToxT regulatory protein. ToxT then directly activates expression of virulence genes that produce the toxins, causing diarrhea in the infected person and allowing the bacteria to colonize the intestine. Current research aims at discovering "the signal that makes the cholera bacteria stop swimming and start to colonize (that is, adhere to the cells of) the small intestine." Genetic structure Amplified fragment length polymorphism fingerprinting of the pandemic isolates of V. cholerae has revealed variation in the genetic structure. Two clusters have been identified: Cluster I and Cluster II. For the most part, Cluster I consists of strains from the 1960s and 1970s, while Cluster II largely contains strains from the 1980s and 1990s, based on the change in the clone structure. This grouping of strains is best seen in the strains from the African continent. Antibiotic resistance In many areas of the world, antibiotic resistance is increasing within cholera bacteria. In Bangladesh, for example, most cases are resistant to tetracycline, trimethoprim-sulfamethoxazole, and erythromycin. Rapid diagnostic assay methods are available for the identification of multi-drug resistant cases. New generation antimicrobials have been discovered which are effective against cholera bacteria in in vitro studies. Diagnosis A rapid dipstick test is available to determine the presence of V. cholerae. In those samples that test positive, further testing should be done to determine antibiotic resistance. In epidemic situations, a clinical diagnosis may be made by taking a patient history and doing a brief examination. Treatment via hydration and over-the-counter hydration solutions can be started without or before confirmation by laboratory analysis, especially where cholera is a common problem. 
Stool and swab samples collected in the acute stage of the disease, before antibiotics have been administered, are the most useful specimens for laboratory diagnosis. If an epidemic of cholera is suspected, the most common causative agent is V. cholerae O1. If V. cholerae serogroup O1 is not isolated, the laboratory should test for V. cholerae O139. However, if neither of these organisms is isolated, it is necessary to send stool specimens to a reference laboratory. Infection with V. cholerae O139 should be reported and handled in the same manner as that caused by V. cholerae O1. The associated diarrheal illness should be referred to as cholera and must be reported in the United States. Prevention The World Health Organization (WHO) recommends focusing on prevention, preparedness, and response to combat the spread of cholera. They also stress the importance of an effective surveillance system. Governments can play a role in all of these areas. Water, sanitation and hygiene Although cholera may be life-threatening, prevention of the disease is normally straightforward if proper sanitation practices are followed. In developed countries, due to their nearly universal advanced water treatment and sanitation practices, cholera is rare. For example, the last major outbreak of cholera in the United States occurred in 1910–1911. Cholera is mainly a risk in developing countries in those areas where access to WASH (water, sanitation and hygiene) infrastructure is still inadequate. Effective sanitation practices, if instituted and adhered to in time, are usually sufficient to stop an epidemic. There are several points along the cholera transmission path at which its spread may be halted: Sterilization: Proper disposal and treatment of all materials that may have come into contact with the feces of other people with cholera (e.g., clothing, bedding, etc.) are essential. These should be sanitized by washing in hot water, using chlorine bleach if possible. Hands that touch cholera patients or their clothing, bedding, etc., should be thoroughly cleaned and disinfected with chlorinated water or other effective antimicrobial agents. Sewage and fecal sludge management: In cholera-affected areas, sewage and fecal sludge need to be treated and managed carefully in order to stop the spread of this disease via human excreta. Provision of sanitation and hygiene is an important preventative measure. Open defecation, release of untreated sewage, or dumping of fecal sludge from pit latrines or septic tanks into the environment need to be prevented. In many cholera affected zones, there is a low degree of sewage treatment. Therefore, the implementation of dry toilets that do not contribute to water pollution, as they do not flush with water, may be an interesting alternative to flush toilets. Sources: Warnings about possible cholera contamination should be posted around contaminated water sources with directions on how to decontaminate the water (boiling, chlorination etc.) for possible use. Water purification: All water used for drinking, washing, or cooking should be sterilized by either boiling, chlorination, ozone water treatment, ultraviolet light sterilization (e.g., by solar water disinfection), or antimicrobial filtration in any area where cholera may be present. Chlorination and boiling are often the least expensive and most effective means of halting transmission. 
Cloth filters or sari filtration, though very basic, have significantly reduced the occurrence of cholera when used in poor villages in Bangladesh that rely on untreated surface water. Better antimicrobial filters, like those present in advanced individual water treatment hiking kits, are most effective. Public health education and adherence to appropriate sanitation practices are of primary importance to help prevent and control transmission of cholera and other diseases. Handwashing with soap or ash after using a toilet and before handling food or eating is also recommended for cholera prevention by WHO Africa. Surveillance Surveillance and prompt reporting allow for containing cholera epidemics rapidly. Cholera exists as a seasonal disease in many endemic countries, occurring annually mostly during rainy seasons. Surveillance systems can provide early alerts to outbreaks, leading to a coordinated response and assisting in the preparation of preparedness plans. Efficient surveillance systems can also improve the risk assessment for potential cholera outbreaks. Understanding the seasonality and location of outbreaks provides guidance for improving cholera control activities for the most vulnerable. For prevention to be effective, it is important that cases be reported to national health authorities. Vaccination Spanish physician Jaume Ferran i Clua developed the first successful cholera inoculation in 1885, the first to immunize humans against a bacterial disease. His vaccine and inoculation method were rather controversial and were rejected by his peers and several investigation commissions, but they ultimately demonstrated their effectiveness and were recognized for it: out of the 30 thousand people he vaccinated, only 54 died. Russian-Jewish bacteriologist Waldemar Haffkine also developed a human cholera vaccine in July 1892. He conducted a massive inoculation program in British India. Persons who survive an episode of cholera have long-lasting immunity for at least 3 years (the period tested). A number of safe and effective oral vaccines for cholera are available. The World Health Organization (WHO) has three prequalified oral cholera vaccines (OCVs): Dukoral, Shanchol, and Euvichol. Dukoral, an orally administered, inactivated whole-cell vaccine, has an overall efficacy of about 52% during the first year after being given and 62% in the second year, with minimal side effects. It is available in over 60 countries. However, it is not currently recommended by the Centers for Disease Control and Prevention (CDC) for most people traveling from the United States to endemic countries. The vaccine that the US Food and Drug Administration (FDA) recommends, Vaxchora, is an oral attenuated live vaccine that is effective for adults aged 18–64 as a single dose. One injectable vaccine was found to be effective for two to three years. The protective efficacy was 28% lower in children less than five years old. However, it has limited availability. Work is under way to investigate the role of mass vaccination. The WHO recommends immunization of high-risk groups, such as children and people with HIV, in countries where this disease is endemic. If people are immunized broadly, herd immunity results, with a decrease in the amount of contamination in the environment. WHO recommends that oral cholera vaccination be considered in areas where the disease is endemic (with seasonal peaks), as part of the response to outbreaks, or in a humanitarian crisis during which the risk of cholera is high.
OCV has been recognized as an adjunct tool for prevention and control of cholera. The WHO has prequalified three bivalent cholera vaccines—Dukoral (SBL Vaccines), containing a non-toxic B-subunit of cholera toxin and providing protection against V. cholerae O1; and two vaccines developed using the same transfer of technology—ShanChol (Shantha Biotec) and Euvichol (EuBiologics Co.), which are bivalent O1 and O139 oral killed cholera vaccines. Oral cholera vaccination could be deployed in a diverse range of situations, from cholera-endemic areas to locations of humanitarian crises, but no clear consensus exists. Sari filtration Developed for use in Bangladesh, the "sari filter" is a simple and cost-effective appropriate technology method for reducing the contamination of drinking water. Used sari cloth is preferable but other types of used cloth can be used with some effect, though the effectiveness will vary significantly. Used cloth is more effective than new cloth, as the repeated washing reduces the space between the fibers. Water collected in this way has a greatly reduced pathogen count—though it will not necessarily be perfectly safe, it is an improvement for poor people with limited options. In Bangladesh this practice was found to decrease rates of cholera by nearly half. It involves folding a sari four to eight times. Between uses the cloth should be rinsed in clean water and dried in the sun to kill any bacteria on it. A nylon cloth appears to work as well but is not as affordable. Treatment Continued eating speeds the recovery of normal intestinal function. The WHO recommends this generally for cases of diarrhea no matter what the underlying cause. A CDC training manual specifically for cholera states: "Continue to breastfeed your baby if the baby has watery diarrhea, even when traveling to get treatment. Adults and older children should continue to eat frequently." Fluids The most common error in caring for patients with cholera is to underestimate the speed and volume of fluids required. In most cases, cholera can be successfully treated with oral rehydration therapy (ORT), which is highly effective, safe, and simple to administer. Rice-based solutions are preferred to glucose-based ones due to greater efficiency. In severe cases with significant dehydration, intravenous rehydration may be necessary. Ringer's lactate is the preferred solution, often with added potassium. Large volumes and continued replacement until diarrhea has subsided may be needed. Ten percent of a person's body weight in fluid may need to be given in the first two to four hours. This method was first tried on a mass scale during the Bangladesh Liberation War, and was found to have much success. Despite widespread belief, fruit juices and commercial fizzy drinks like cola are not ideal for rehydration of people with serious infections of the intestines, and their excessive sugar content may even harm water uptake. If commercially produced oral rehydration solutions are too expensive or difficult to obtain, solutions can be made. One such recipe calls for 1 liter of boiled water, 1/2 teaspoon of salt, 6 teaspoons of sugar, and added mashed banana for potassium and to improve taste. Electrolytes As acidosis is frequently present initially, the potassium level may be normal, even though large losses have occurred. As the dehydration is corrected, potassium levels may decrease rapidly, and thus need to be replaced. This is best done with oral rehydration solution (ORS).
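The fluid-replacement figures above lend themselves to a small worked example: roughly ten percent of body weight in the first two to four hours, and the one-litre home ORS recipe scaled linearly. The function names and the kilogram-to-litre equivalence are illustrative assumptions; this is a teaching sketch, not clinical guidance, and actual treatment should follow WHO protocols and clinical judgment.

```python
# Worked example of the rehydration arithmetic quoted in the text:
# about 10% of body weight in fluid over the first 2-4 hours for severe
# dehydration, plus the home-made ORS mix of 1 litre of boiled water,
# 1/2 teaspoon of salt and 6 teaspoons of sugar. Not clinical guidance.

def initial_rehydration_litres(body_weight_kg: float) -> float:
    """Approximate fluid need for the first 2-4 hours, assuming
    1 kg of body weight is roughly equivalent to 1 litre of fluid."""
    return 0.10 * body_weight_kg

def home_ors_mix(litres_of_water: float) -> dict[str, float]:
    """Scale the 1-litre home recipe (0.5 tsp salt, 6 tsp sugar) linearly."""
    return {
        "boiled_water_l": litres_of_water,
        "salt_tsp": 0.5 * litres_of_water,
        "sugar_tsp": 6.0 * litres_of_water,
    }

print(initial_rehydration_litres(60))  # a 60 kg adult: about 6 litres
print(home_ors_mix(3))                 # 3 litres: 1.5 tsp salt, 18 tsp sugar
```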
Antibiotics Antibiotic treatments for one to three days shorten the course of the disease and reduce the severity of the symptoms. Use of antibiotics also reduces fluid requirements. People will recover without them, however, if sufficient hydration is maintained. The WHO only recommends antibiotics in those with severe dehydration. Doxycycline is typically used as the first-line treatment, although some strains of V. cholerae have shown resistance. Testing for resistance during an outbreak can help determine appropriate future choices. Other antibiotics proven to be effective include cotrimoxazole, erythromycin, tetracycline, chloramphenicol, and furazolidone. Fluoroquinolones, such as ciprofloxacin, also may be used, but resistance has been reported. Antibiotics improve outcomes in those who are both severely and not severely dehydrated. Azithromycin and tetracycline may work better than doxycycline or ciprofloxacin. Zinc supplementation In Bangladesh, zinc supplementation reduced the duration and severity of diarrhea in children with cholera when given with antibiotics and rehydration therapy as needed. It reduced the length of disease by eight hours and the amount of diarrhea stool by 10%. Supplementation also appears to be effective in both treating and preventing infectious diarrhea due to other causes among children in the developing world. Prognosis If people with cholera are treated quickly and properly, the mortality rate is less than 1%; however, with untreated cholera, the mortality rate rises to 50–60%. For certain genetic strains of cholera, such as the one present during the 2010 epidemic in Haiti and the 2004 outbreak in India, death can occur within two hours of becoming ill. Epidemiology Cholera affects an estimated 2.8 million people worldwide and causes approximately 95,000 deaths a year (uncertainty range: 21,000–143,000). This occurs mainly in the developing world. In the early 1980s, annual deaths are believed to have still been higher than three million. It is difficult to calculate exact numbers of cases, as many go unreported due to concerns that an outbreak may have a negative impact on the tourism of a country. As of 2004, cholera remained both epidemic and endemic in many areas of the world. Recent major outbreaks include the 2010s Haiti cholera outbreak and the 2016–2022 Yemen cholera outbreak. In October 2016, an outbreak of cholera began in war-ravaged Yemen. WHO called it "the worst cholera outbreak in the world". In 2019, 93% of the reported 923,037 cholera cases were from Yemen (with 1,911 deaths reported). Between September 2019 and September 2020, a global total of over 450,000 cases and over 900 deaths was reported; however, the accuracy of these numbers suffers from over-reporting from countries that report suspected cases (and not laboratory-confirmed cases), as well as under-reporting from countries that do not report official cases (such as Bangladesh, India and the Philippines). Although much is known about the mechanisms behind the spread of cholera, researchers still do not have a full understanding of what makes cholera outbreaks happen in some places and not others. Lack of treatment of human feces and lack of treatment of drinking water greatly facilitate its spread. Bodies of water have been found to serve as a reservoir of infection, and seafood shipped long distances can spread the disease. Cholera had disappeared from the Americas for most of the 20th century, but it reappeared toward the end of that century, beginning with a severe outbreak in Peru.
This was followed by the 2010s Haiti cholera outbreak and another outbreak of cholera in Haiti amid the 2018–2023 Haitian crisis. The disease is endemic in Africa and some areas of eastern and western Asia (Bangladesh, India and Yemen). Cholera is not endemic in Europe; all reported cases had a travel history to endemic areas. History of outbreaks The word cholera is from Greek kholera, from χολή kholē, "bile". Cholera likely has its origins in the Indian subcontinent, as evidenced by its prevalence in the region for centuries.
Biology and health sciences
Infectious disease
null
7592
https://en.wikipedia.org/wiki/Caldera
Caldera
A caldera is a large cauldron-like hollow that forms shortly after the emptying of a magma chamber in a volcanic eruption. An eruption that ejects large volumes of magma over a short period of time can severely compromise the structural integrity of such a chamber, greatly diminishing its capacity to support its own roof and any substrate or rock resting above. The ground surface then collapses into the emptied or partially emptied magma chamber, leaving a large depression at the surface (from one to dozens of kilometers in diameter). Although sometimes described as a crater, the feature is actually a type of sinkhole, as it is formed through subsidence and collapse rather than an explosion or impact. Compared to the thousands of volcanic eruptions that occur over the course of a century, the formation of a caldera is a rare event, occurring only a few times within a given window of 100 years. Only eight caldera-forming collapses are known to have occurred between 1911 and 2018, the most recent being the caldera collapse at Kīlauea, Hawaii, in 2018. Volcanoes that have formed a caldera are sometimes described as "caldera volcanoes". Etymology The term caldera comes from the Spanish caldera, and the Latin caldaria, meaning "cooking pot". In some texts the English term cauldron is also used, though in more recent work the term cauldron refers to a caldera that has been deeply eroded to expose the beds under the caldera floor. The term caldera was introduced into the geological vocabulary by the German geologist Leopold von Buch when he published his memoirs of his 1815 visit to the Canary Islands, where he first saw the Las Cañadas caldera on Tenerife, with Mount Teide dominating the landscape, and then the Caldera de Taburiente on La Palma. Caldera formation A collapse is triggered by the emptying of the magma chamber beneath the volcano, sometimes as the result of a large explosive volcanic eruption (see Tambora in 1815), but also during effusive eruptions on the flanks of a volcano (see Piton de la Fournaise in 2007) or in a connected fissure system (see Bárðarbunga in 2014–2015). If enough magma is ejected, the emptied chamber is unable to support the weight of the volcanic edifice above it. A roughly circular fracture, the "ring fault", develops around the edge of the chamber. Ring fractures serve as feeders for fault intrusions, which are also known as ring dikes. Secondary volcanic vents may form above the ring fracture. As the magma chamber empties, the center of the volcano within the ring fracture begins to collapse. The collapse may occur as the result of a single cataclysmic eruption, or it may occur in stages as the result of a series of eruptions. The total area that collapses may be hundreds of square kilometers. Mineralization in calderas Some calderas are known to host rich ore deposits. Metal-rich fluids can circulate through the caldera, forming hydrothermal ore deposits of metals such as lead, silver, gold, mercury, lithium, and uranium. One of the world's best-preserved mineralized calderas is the Sturgeon Lake Caldera in northwestern Ontario, Canada, which formed during the Neoarchean era about 2.7 billion years ago. In the San Juan volcanic field, ore veins were emplaced in fractures associated with several calderas, with the greatest mineralization taking place near the youngest and most silicic intrusions associated with each caldera. Types of caldera Explosive caldera eruptions Explosive caldera eruptions are produced by a magma chamber whose magma is rich in silica.
Silica-rich magma has a high viscosity, and therefore does not flow easily like basalt. The magma typically also contains a large amount of dissolved gases, up to 7 wt% for the most silica-rich magmas. When the magma approaches the surface of the Earth, the drop in confining pressure causes the trapped gases to rapidly bubble out of the magma, fragmenting the magma to produce a mixture of volcanic ash and other tephra with the very hot gases. The mixture of ash and volcanic gases initially rises into the atmosphere as an eruption column. However, as the volume of erupted material increases, the eruption column is unable to entrain enough air to remain buoyant, and the eruption column collapses into a tephra fountain that falls back to the surface to form pyroclastic flows. Eruptions of this type can spread ash over vast areas, so that ash flow tuffs emplaced by silicic caldera eruptions are the only volcanic product with volumes rivaling those of flood basalts. For example, when Yellowstone Caldera last erupted some 650,000 years ago, it released about 1,000 km3 of material (as measured in dense rock equivalent (DRE)), covering a substantial part of North America in up to two metres of debris. Eruptions forming even larger calderas are known, such as the La Garita Caldera in the San Juan Mountains of Colorado, where the Fish Canyon Tuff was blasted out in eruptions about 27.8 million years ago. The caldera produced by such eruptions is typically filled in with tuff, rhyolite, and other igneous rocks. The caldera is surrounded by an outflow sheet of ash flow tuff (also called an ash flow sheet). If magma continues to be injected into the collapsed magma chamber, the center of the caldera may be uplifted in the form of a resurgent dome such as is seen at the Valles Caldera, Lake Toba, the San Juan volcanic field, Cerro Galán, Yellowstone, and many other calderas. Because a silicic caldera may erupt hundreds or even thousands of cubic kilometers of material in a single event, it can cause catastrophic environmental effects. Even small caldera-forming eruptions, such as Krakatoa in 1883 or Mount Pinatubo in 1991, may result in significant local destruction and a noticeable drop in temperature around the world. Large calderas may have even greater effects. The ecological effects of the eruption of a large caldera can be seen in the record of the Lake Toba eruption in Indonesia. At some points in geological time, rhyolitic calderas have appeared in distinct clusters. The remnants of such clusters may be found in places such as the Eocene Rum Complex of Scotland, the San Juan Mountains of Colorado (formed during the Oligocene, Miocene, and Pliocene epochs) or the Saint Francois Mountain Range of Missouri (erupted during the Proterozoic eon). Valles For their 1968 paper that first introduced the concept of a resurgent caldera to geology, R.L. Smith and R.A. Bailey chose the Valles caldera as their model. Although the Valles caldera is not unusually large, it is relatively young (1.25 million years old) and unusually well preserved, and it remains one of the best studied examples of a resurgent caldera. The ash flow tuffs of the Valles caldera, such as the Bandelier Tuff, were among the first to be thoroughly characterized. Toba About 74,000 years ago, this Indonesian volcano released about 2,800 cubic kilometres of ejecta (dense-rock equivalent). This was the largest known eruption during the ongoing Quaternary period (the last 2.6 million years) and the largest known explosive eruption during the last 25 million years.
In the late 1990s, anthropologist Stanley Ambrose proposed that a volcanic winter induced by this eruption reduced the human population to about 2,000–20,000 individuals, resulting in a population bottleneck. More recently, Lynn Jorde and Henry Harpending proposed that the human species was reduced to approximately 5,000–10,000 people. There is no direct evidence, however, that either theory is correct, and there is no evidence for any other animal decline or extinction, even in environmentally sensitive species. There is evidence that human habitation continued in India after the eruption. Non-explosive calderas Some volcanoes, such as the large shield volcanoes Kīlauea and Mauna Loa on the island of Hawaii, form calderas in a different fashion. The magma feeding these volcanoes is basalt, which is silica poor. As a result, the magma is much less viscous than the magma of a rhyolitic volcano, and the magma chamber is drained by large lava flows rather than by explosive events. The resulting calderas are also known as subsidence calderas and can form more gradually than explosive calderas. For instance, the caldera atop Fernandina Island collapsed in 1968 when parts of the caldera floor dropped . Extraterrestrial calderas Since the early 1960s, it has been known that volcanism has occurred on other planets and moons in the Solar System. Through the use of crewed and uncrewed spacecraft, volcanism has been discovered on Venus, Mars, the Moon, and Io, a satellite of Jupiter. None of these worlds have plate tectonics, which contributes approximately 60% of the Earth's volcanic activity (the other 40% is attributed to hotspot volcanism). Caldera structure is similar on all of these planetary bodies, though the size varies considerably. The average caldera diameter on Venus is . The average caldera diameter on Io is close to , and the mode is ; Tvashtar Paterae is likely the largest caldera with a diameter of . The average caldera diameter on Mars is , smaller than Venus. Calderas on Earth are the smallest of all planetary bodies and vary from as a maximum. The Moon The Moon has an outer shell of low-density crystalline rock that is a few hundred kilometers thick, which formed due to a rapid creation. The craters of the Moon have been well preserved through time and were once thought to have been the result of extreme volcanic activity, but are currently believed to have been formed by meteorites, nearly all of which took place in the first few hundred million years after the Moon formed. Around 500 million years afterward, the Moon's mantle was able to be extensively melted due to the decay of radioactive elements. Massive basaltic eruptions took place generally at the base of large impact craters. Also, eruptions may have taken place due to a magma reservoir at the base of the crust. This forms a dome, possibly the same morphology of a shield volcano where calderas universally are known to form. Although caldera-like structures are rare on the Moon, they are not completely absent. The Compton-Belkovich Volcanic Complex on the far side of the Moon is thought to be a caldera, possibly an ash-flow caldera. Mars The volcanic activity of Mars is concentrated in two major provinces: Tharsis and Elysium. Each province contains a series of giant shield volcanoes that are similar to what we see on Earth and likely are the result of mantle hot spots. The surfaces are dominated by lava flows, and all have one or more collapse calderas. 
Mars has the tallest volcano in the Solar System, Olympus Mons, which is more than three times the height of Mount Everest, with a diameter of 520 km (323 miles). The summit of the mountain has six nested calderas. Venus Because there is no plate tectonics on Venus, heat is mainly lost by conduction through the lithosphere. This causes enormous lava flows, accounting for 80% of Venus' surface area. Many of the mountains are large shield volcanoes that range in size from in diameter and high. More than 80 of these large shield volcanoes have summit calderas averaging across. Io Io, unusually, is heated by solid flexing due to the tidal influence of Jupiter and Io's orbital resonance with neighboring large moons Europa and Ganymede, which keep its orbit slightly eccentric. Unlike any of the planets mentioned, Io is continuously volcanically active. For example, the NASA Voyager 1 and Voyager 2 spacecraft detected nine erupting volcanoes while passing Io in 1979. Io has many calderas with diameters tens of kilometers across. List of volcanic calderas Africa Ngorongoro Crater (Tanzania) Menengai Crater (Kenya) Mount Elgon (Uganda/Kenya) Mount Fogo (Cape Verde) Mount Longonot (Kenya) Mount Meru (Tanzania) Erta Ale (Ethiopia) Nabro Volcano (Eritrea) Mallahle (Eritrea) See Europe for calderas in the Canary Islands and the Azores Antarctica Deception Island Kemp Caldera Asia China Dakantou Caldera (大墈头) (Shanhuyan Village, Taozhu Town, Linhai, Zhejiang) Ma'anshan Caldera (马鞍山) (Shishan Town (石山镇), Xiuying, Hainan) Yiyang Caldera (宜洋) (Shuangxi Town (双溪镇宜洋村), Pingnan County, Fujian) Indonesia Batur (Bali) Krakatoa (Sunda Strait) Lake Maninjau (Sumatra) Lake Toba (Sumatra) Mount Rinjani (Lombok) Mount Tondano (Sulawesi) Mount Tambora (Sumbawa) Tengger Caldera (Java) Japan Aira Caldera (Kagoshima Prefecture) Kussharo (Hokkaido) Kuttara (Hokkaido) Mashū (Hokkaido) Aso Caldera, Mount Aso (Kumamoto Prefecture) Kikai Caldera (Kagoshima Prefecture) Towada (Aomori Prefecture) Tazawa (Akita Prefecture) Hakone (Kanagawa Prefecture) Korean Peninsula Mount Halla (Jeju-do, South Korea) Heaven Lake (Baekdu Mountain, North Korea/Changbai Mountains, China) Philippines Apolaki Caldera (Benham Rise) Corregidor Caldera (Manila Bay) Mount Pinatubo (Luzon) Taal Volcano (Luzon) Laguna Caldera (Luzon) Irosin Caldera (Luzon) Turkey Derik (Mardin) Nemrut (volcano) Russia Akademia Nauk (Kamchatka Peninsula) Golovnin (Kuril Islands) Karymsky Caldera (Kamchatka Peninsula) Karymshina (Kamchatka Peninsula) Khangar (Kamchatka Peninsula) Ksudach (Kamchatka Peninsula) Kurile Lake (Kamchatka Peninsula) Pauzhetka caldera (hosts Kurile Lake caldera, Kamchatka Peninsula) Lvinaya Past (Kuril Islands) Tao-Rusyr Caldera (Kuril Islands) Uzon (Kamchatka Peninsula) Zavaritski Caldera (Kuril Islands) Yankicha/Ushishir (Kuril Islands) Chegem Caldera (Kabardino-Balkarian Republic, North Caucasus) Europe Georgia Bakuriani/Didveli Caldera Samsari Germany Laacher See Greece Santorini Nisyros Iceland Askja Grímsvötn Bárðarbunga Katla Krafla Italy Phlegraean Fields Lake Bracciano Lake Bolsena Mount Somma which contains Mount Vesuvius Portugal Lagoa das Sete Cidades & Furnas (São Miguel, the Azores) Caldeira do Faial (Faial) Caldeirão do Corvo (Corvo) United Kingdom Glen Coe (Scotland) Scafell Caldera (Lake District, England) Slovakia Banská Štiavnica Spain Las Cañadas (Tenerife, Canary Islands) North and Central America Canada Silverthrone Caldera (British Columbia) Mount Edziza (British Columbia) Bennett Lake Volcanic Complex (British 
Columbia/Yukon) Mount Pleasant Caldera (New Brunswick) Sturgeon Lake Caldera (Ontario) Mount Skukum Volcanic Complex (Yukon) Blake River Megacaldera Complex (Quebec/Ontario) New Senator Caldera (Quebec) Misema Caldera (Ontario/Quebec) Noranda Caldera (Quebec) Mexico La primavera Caldera (Jalisco) Amealco Caldera (Querétaro) Las Cumbres Caldera (Veracruz-Puebla) Los Azufres Caldera (Michoacán) Los Humeros Caldera (Veracruz-Puebla) Mazahua Caldera (Mexico State) El Salvador Lake Ilopango Lake Coatepeque Guatemala Lake Amatitlán Lake Atitlán Xela Barahona Nicaragua Masaya (Nicaragua) United States Mount Aniakchak (Aniakchak National Monument and Preserve) (Alaska) Cochetopa Caldera (Colorado) Crater Lake on Mount Mazama (Crater Lake National Park, Oregon) Mount Katmai (Alaska) Kīlauea (Hawaii) Mauna Loa (Hawaii) La Garita Caldera (Colorado) Long Valley (California) Henry's Fork Caldera (Idaho) Island Park Caldera (Idaho, Wyoming) Newberry Volcano (Oregon) McDermitt Caldera (Oregon) Medicine Lake Volcano (California) Mount Okmok (Alaska) Valles Caldera (New Mexico) Yellowstone Caldera (Wyoming) Indian Ocean Cirque de Cilaos (Réunion) Cirque de Mafate (Réunion) Cirque de Salazie (Réunion) Enclos Fouqué (Réunion) Oceania Australia Cerberean Cauldron Mount Warning Prospect Hill Hawaii Kilauea (Hawaii, US) Moku‘āweoweo Caldera on Mauna Loa (Hawaii, US) New Zealand Kapenga Lake Ohakuri Lake Okataina Lake Rotorua Lake Taupō Maroa Otago Harbour Reporoa caldera Papua New Guinea Dakataua Polynesia Rano Kau (Easter Island, Chile) South America Argentina Aguas Calientes, Salta Province Caldera del Atuel, Mendoza Province Galán, Catamarca Province Bolivia Pastos Grandes Colombia Arenas crater caldera, Nevado del Ruiz volcano, Caldas Department Laguna Verde caldera, Azufral volcano, Narino Department Chile Chaitén Cordillera Nevada Caldera Laguna del Maule Pacana Caldera Sollipulli Ecuador Pululahua Geobotanical Reserve Cuicocha Quilotoa Fernandina Island, Galápagos Islands Sierra Negra (Galápagos) Chacana Caldera Extraterrestrial volcanic calderas Mars Olympus Mons caldera Venus Maat Mons caldera Erosion calderas Americas Guaichane-Mamuta (Chile) Mount Tehama (California, US) Europe Caldera de Taburiente (Spain) Oceania Tweed Valley (New South Wales, Queensland, Australia) Asia Chegem Caldera (Kabardino-Balkarian Republic, Northern Caucasus Region, Russia) Taal volcano (Philippines) Batangas Province
Physical sciences
Volcanic landforms
Earth science
7593
https://en.wikipedia.org/wiki/Calculator
Calculator
An electronic calculator is typically a portable electronic device used to perform calculations, ranging from basic arithmetic to complex mathematics. The first solid-state electronic calculator was created in the early 1960s. Pocket-sized devices became available in the 1970s, especially after the Intel 4004, the first microprocessor, was developed by Intel for the Japanese calculator company Busicom. Modern electronic calculators vary from cheap, give-away, credit-card-sized models to sturdy desktop models with built-in printers. They became popular in the mid-1970s as the incorporation of integrated circuits reduced their size and cost. By the end of that decade, prices had dropped to the point where a basic calculator was affordable to most and they became common in schools. In addition to general purpose calculators, there are those designed for specific markets. For example, there are scientific calculators, which include trigonometric and statistical calculations. Some calculators even have the ability to do computer algebra. Graphing calculators can be used to graph functions defined on the real line, or higher-dimensional Euclidean space. Basic calculators cost little, but scientific and graphing models tend to cost more. Computer operating systems as far back as early Unix have included interactive calculator programs such as dc and hoc, and interactive BASIC could be used to do calculations on most 1970s and 1980s home computers. Calculator functions are included in most smartphones, tablets, and personal digital assistant (PDA) type devices. With the very wide availability of smartphones and the like, dedicated hardware calculators, while still widely used, are less common than they once were. In 1986, calculators still represented an estimated 41% of the world's general-purpose hardware capacity to compute information. By 2007, this had diminished to less than 0.05%. Design Input Electronic calculators contain a keyboard with buttons for digits and arithmetical operations; some even contain "00" and "000" buttons to make larger or smaller numbers easier to enter. Most basic calculators assign only one digit or operation to each button; however, in more specific calculators, a single button can perform multiple functions via key combinations. Display output Calculators usually have liquid-crystal displays (LCD) as output in place of historical light-emitting diode (LED) displays and vacuum fluorescent displays (VFD); details are provided in the section Technical improvements. Large-sized figures are often used to improve readability; a decimal separator (usually a point rather than a comma) is used instead of, or in addition to, vulgar fractions. Various symbols for function commands may also be shown on the display. Fractions such as are displayed as decimal approximations, for example rounded to . Also, some fractions (such as , which is ; to 14 significant figures) can be difficult to recognize in decimal form; as a result, many scientific calculators are able to work in vulgar fractions or mixed numbers. Memory Calculators also have the ability to save numbers into computer memory. Basic calculators usually store only one number at a time; more specific types are able to store many numbers represented in variables. Usually these variables are named ans or ans(0). The variables can also be used for constructing formulas. Some models have the ability to extend memory capacity to store more numbers; the extended memory address is termed an array index.
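The memory behaviour described above can be pictured with a minimal sketch. The following Python model is purely illustrative and assumed rather than taken from any real calculator's firmware; the names CalculatorMemory, ans, m and array simply mirror the conventions mentioned in the text (an "ans" value, a single M+/MR register, and an array-indexed extended memory).

```python
# Illustrative sketch (not any real calculator's firmware) of the memory
# model described above: a last-answer variable, a single memory register,
# and an extended memory addressed by an array index.

class CalculatorMemory:
    def __init__(self, extended_slots=10):
        self.ans = 0.0                       # last result, like the "ans" variable
        self.m = 0.0                         # classic single memory register (M+, MR, MC)
        self.array = [0.0] * extended_slots  # extended memory, addressed by array index

    def store_ans(self, value):
        self.ans = value
        return value

    def memory_plus(self, value):
        self.m += value                      # M+ adds to the memory register

    def memory_recall(self):
        return self.m                        # MR

    def memory_clear(self):
        self.m = 0.0                         # MC

    def store_at(self, index, value):
        self.array[index] = value            # extended memory: store by index

    def recall_at(self, index):
        return self.array[index]

mem = CalculatorMemory()
mem.store_ans(7 * 6)       # ans = 42
mem.memory_plus(mem.ans)   # M = 42
mem.store_at(3, 2.5)       # extended slot 3 = 2.5
print(mem.memory_recall(), mem.recall_at(3))   # 42.0 2.5
```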
Power source Power sources of calculators are batteries, solar cells or mains electricity (for old models), turning on with a switch or button. Some models even have no turn-off button but provide some other way to switch off (for example, leaving no operation for a moment, covering the solar cell, or closing the lid). Crank-powered calculators were also common in the early computer era. Key layout The following keys are common to most pocket calculators. While the arrangement of the digits is standard, the positions of other keys vary from model to model. The arrangement of digits on calculator and other numeric keypads, with the 7-8-9 keys two rows above the 1-2-3 keys, is derived from calculators and cash registers. It is notably different from the layout of telephone Touch-Tone keypads, which have the 1-2-3 keys on top and the 7-8-9 keys on the third row. Internal workings In general, a basic electronic calculator consists of the following components:
Power source (mains electricity, battery and/or solar cell)
Keypad (input device) – consists of keys used to input numbers and function commands (addition, multiplication, square root, etc.)
Display panel (output device) – displays input numbers, commands and results. Liquid-crystal displays (LCDs), vacuum fluorescent displays (VFDs), and light-emitting diode (LED) displays use seven segments to represent each digit in a basic calculator. Advanced calculators may use dot matrix displays. A printing calculator, in addition to a display panel, has a printing unit that prints results in ink onto a roll of paper, using a printing mechanism.
Processor chip (microprocessor or central processing unit). The clock rate of a processor chip refers to the frequency at which the central processing unit (CPU) is running. It is used as an indicator of the processor's speed, and is measured in clock cycles per second or hertz (Hz). For basic calculators, the speed can vary from a few hundred hertz to the kilohertz range.
Example A basic explanation as to how calculations are performed in a simple four-function calculator: To perform the calculation 25 + 9, one presses keys in the following sequence on most calculators: 2 5 + 9 =. When 25 is entered, it is picked up by the scanning unit; the number 25 is encoded and sent to the X register. Next, when the + key is pressed, the "addition" instruction is also encoded and sent to the flag or the status register. The second number, 9, is encoded and sent to the X register; this "pushes" (shifts) the first number out into the Y register. When the = key is pressed, a "message" (signal) from the flag or status register tells the permanent or non-volatile memory that the operation to be done is "addition". The numbers in the X and Y registers are then loaded into the ALU and the calculation is carried out following instructions from the permanent or non-volatile memory. The answer, 34, is sent (shifted) back to the X register. From there, it is converted by the binary decoder unit into a decimal number (usually binary-coded decimal), and then shown on the display panel. Other functions are usually performed using repeated additions or subtractions. Numeric representation Most pocket calculators do all their calculations in binary-coded decimal (BCD) rather than binary. BCD is common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic, and not containing a microprocessor.
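A minimal software sketch can make the register-and-flag flow of the example above, and the BCD representation it feeds, more concrete. The class and method names below (FourFunctionCalc, press_digit, press_op, press_equals) are invented for illustration and do not model any particular calculator chip; division and multi-digit carry handling are deliberately omitted for brevity.

```python
# Minimal sketch of the register flow described in the example above:
# digits accumulate in the X register, an operator press shifts X into Y and
# records the pending operation in a flag register, and "=" applies the
# operation and puts the result back in X. Values are kept as lists of
# decimal digits to mirror the binary-coded decimal (BCD) idea discussed here.
# Illustrative only; not a model of any specific calculator chip.

def to_bcd(n):
    return [int(d) for d in str(n)]          # e.g. 25 -> [2, 5]

def from_bcd(digits):
    return int("".join(str(d) for d in digits)) if digits else 0

class FourFunctionCalc:
    def __init__(self):
        self.x = []        # X register (digits being entered / result)
        self.y = []        # Y register (first operand)
        self.flag = None   # pending operation ("flag" or status register)

    def press_digit(self, d):
        self.x.append(d)   # the scanning unit appends the keyed digit to X

    def press_op(self, op):
        self.y, self.x = self.x, []   # push X into Y, clear X for the next operand
        self.flag = op                # remember the pending operation

    def press_equals(self):
        a, b = from_bcd(self.y), from_bcd(self.x)
        result = {"+": a + b, "-": a - b, "*": a * b}[self.flag]  # division omitted
        self.x = to_bcd(result)       # the result goes back to the X register
        return from_bcd(self.x)

calc = FourFunctionCalc()
for key in "25+9=":
    if key.isdigit():
        calc.press_digit(int(key))
    elif key == "=":
        print(calc.press_equals())    # prints 34
    else:
        calc.press_op(key)
```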
By employing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit. This matches much more closely the physical reality of display hardware—a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example. If the numeric quantity were stored and manipulated as pure binary, interfacing to such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to a simpler overall system than converting to and from binary. (For example, CDs keep the track number in BCD, limiting them to 99 tracks.) The same argument applies when hardware of this type uses an embedded microcontroller or other small processor. Often, smaller code results when representing numbers internally in BCD format, since a conversion from or to binary representation can be expensive on such limited processors. For these applications, some small processors feature BCD arithmetic modes, which assist when writing routines that manipulate BCD quantities. Where calculators have added functions (such as square root, or trigonometric functions), software algorithms are required to produce high precision results. Sometimes significant design effort is needed to fit all the desired functions in the limited memory space available in the calculator chip, with acceptable calculation time. History Precursors to the electronic calculator The first known tools used to aid arithmetic calculations were bones (used to tally items), pebbles, counting boards, and the abacus, known to have been used by Sumerians and Egyptians before 2000 BC. Except for the Antikythera mechanism (an "out of its time" astronomical device), development of computing tools arrived near the start of the 17th century: the geometric-military compass (by Galileo), logarithms and Napier's bones (by Napier), and the slide rule (by Edmund Gunter). The Renaissance saw the invention of the mechanical calculator by Wilhelm Schickard in 1623, and later by Blaise Pascal in 1642, a device that was at times somewhat over-promoted as being able to perform all four arithmetic operations with minimal human intervention. Pascal's calculator could add and subtract two numbers directly and thus, if the tedium could be borne, multiply and divide by repetition. Schickard's machine, constructed several decades earlier, used a clever set of mechanised multiplication tables to ease the process of multiplication and division with the adding machine as a means of completing this operation. There is a debate about whether Pascal or Schickard should be credited as the true inventor of a calculating machine, owing to the differences (such as the different aims) of the two inventions. Schickard and Pascal were followed by Gottfried Leibniz, who spent forty years designing a four-operation mechanical calculator, the stepped reckoner, inventing in the process his Leibniz wheel, but who could not design a fully operational machine. There were also five unsuccessful attempts to design a calculating clock in the 17th century. The 18th century saw the arrival of some notable improvements, first by Poleni with the first fully functional calculating clock and four-operation machine, but these machines were almost always one of a kind.
Luigi Torchi invented the first direct multiplication machine in 1834: this was also the second key-driven machine in the world, following that of James White (1822). It was not until the 19th century and the Industrial Revolution that real developments began to occur. Although machines capable of performing all four arithmetic functions existed prior to the 19th century, the refinement of manufacturing and fabrication processes on the eve of the Industrial Revolution made large-scale production of more compact and modern units possible. The Arithmometer, invented in 1820 as a four-operation mechanical calculator, was released to production in 1851 as an adding machine and became the first commercially successful unit; forty years later, by 1890, about 2,500 arithmometers had been sold, plus a few hundred more from two arithmometer clone makers (Burkhardt, Germany, 1878 and Layton, UK, 1883), and Felt and Tarrant, the only other competitor in true commercial production, had sold 100 comptometers. It was not until 1902 that the familiar push-button user interface was developed, with the introduction of the Dalton Adding Machine, developed by James L. Dalton in the United States. In 1921, Edith Clarke invented the "Clarke calculator", a simple graph-based calculator for solving line equations involving hyperbolic functions. This allowed electrical engineers to simplify calculations for inductance and capacitance in power transmission lines. The Curta calculator was developed in 1948 and, although costly, became popular for its portability. This purely mechanical hand-held device could do addition, subtraction, multiplication and division. By the early 1970s electronic pocket calculators had ended the manufacture of mechanical calculators, although the Curta remains a popular collectable item. Development of electronic calculators The first mainframe computers, initially using vacuum tubes and later transistors in the logic circuits, appeared in the 1940s and 1950s. Electronic circuits developed for computers also had application to electronic calculators. The Casio Computer Company, in Japan, released the Model 14-A calculator in 1957, which was the world's first all-electric (relatively) compact calculator. It did not use electronic logic but was based on relay technology, and was built into a desk. The IBM 608 plugboard programmable calculator was IBM's first all-transistor product, released in 1957; this was a console-type system, with input and output on punched cards, and replaced the earlier, larger, vacuum-tube IBM 603. In October 1961, the world's first all-electronic desktop calculator, the British Bell Punch/Sumlock Comptometer ANITA (A New Inspiration To Arithmetic/Accounting), was announced. This machine used vacuum tubes, cold-cathode tubes and Dekatrons in its circuits, with 12 cold-cathode "Nixie" tubes for its display. Two models were displayed, the Mk VII for continental Europe and the Mk VIII for Britain and the rest of the world, both for delivery from early 1962. The Mk VII was a slightly earlier design with a more complicated mode of multiplication, and was soon dropped in favour of the simpler Mark VIII. The ANITA had a full keyboard, similar to mechanical comptometers of the time, a feature that was unique to it and the later Sharp CS-10A among electronic calculators. The ANITA weighed roughly due to its large tube system.
Bell Punch had been producing key-driven mechanical calculators of the comptometer type under the names "Plus" and "Sumlock", and had realised in the mid-1950s that the future of calculators lay in electronics. They employed the young graduate Norbert Kitz, who had worked on the early British Pilot ACE computer project, to lead the development. The ANITA sold well since it was the only electronic desktop calculator available, and was silent and quick. The tube technology of the ANITA was superseded in June 1963 by the U.S.-manufactured Friden EC-130, which had an all-transistor design, a stack of four 13-digit numbers displayed on a cathode-ray tube (CRT), and introduced Reverse Polish Notation (RPN) to the calculator market for a price of $2200, which was about three times the cost of an electromechanical calculator of the time. Like Bell Punch, Friden was a manufacturer of mechanical calculators that had decided that the future lay in electronics. In 1964 more all-transistor electronic calculators were introduced: Sharp introduced the CS-10A, which weighed and cost 500,000 yen ($), and Industria Macchine Elettroniche of Italy introduced the IME 84, to which several extra keyboard and display units could be connected so that several people could make use of it (but apparently not at the same time). The Victor 3900 was the first to use integrated circuits in place of individual transistors, but production problems delayed sales until 1966. There followed a series of electronic calculator models from these and other manufacturers, including Canon, Mathatronics, Olivetti, SCM (Smith-Corona-Marchant), Sony, Toshiba, and Wang. The early calculators used hundreds of germanium transistors, which were cheaper than silicon transistors, on multiple circuit boards. Display types used were CRT, cold-cathode Nixie tubes, and filament lamps. Memory technology was usually based on the delay-line memory or the magnetic-core memory, though the Toshiba "Toscal" BC-1411 appears to have used an early form of dynamic RAM built from discrete components. Already there was a desire for smaller and less power-hungry machines. Bulgaria's ELKA 6521, introduced in 1965, was developed by the Central Institute for Calculation Technologies and built at the Elektronika factory in Sofia. The name derives from ELektronen KAlkulator, and it weighed around . It was the first calculator in the world to include a square root function. Later that same year were released the ELKA 22 (with a luminescent display) and the ELKA 25, with a built-in printer. Several other models were developed until the first pocket model, the ELKA 101, was released in 1974. The writing on it was in Roman script, and it was exported to western countries. Programmable calculators The first desktop programmable calculators were produced in the mid-1960s. They included the Mathatronics Mathatron (1964) and the Olivetti Programma 101 (late 1965), which were solid-state, desktop, printing, floating point, algebraic entry, programmable, stored-program electronic calculators. Both could be programmed by the end user and print out their results. The Programma 101 saw much wider distribution and had the added feature of offline storage of programs via magnetic cards. Another early programmable desktop calculator (and perhaps the first Japanese one) was the Casio AL-1000, produced in 1967. It featured a Nixie tube display and had transistor electronics and ferrite core memory. The Monroe Epic programmable calculator came on the market in 1967.
A large, printing, desk-top unit, with an attached floor-standing logic tower, it could be programmed to perform many computer-like functions. However, the only branch instruction was an implied unconditional branch (GOTO) at the end of the operation stack, returning the program to its starting instruction. Thus, it was not possible to include any conditional branch (IF-THEN-ELSE) logic. During this era, the absence of the conditional branch was sometimes used to distinguish a programmable calculator from a computer. The first Soviet programmable desktop calculator ISKRA 123, powered by the power grid, was released at the start of the 1970s. 1970s to mid-1980s The electronic calculators of the mid-1960s were large and heavy desktop machines due to their use of hundreds of transistors on several circuit boards with a large power consumption that required an AC power supply. There were great efforts to put the logic required for a calculator into fewer and fewer integrated circuits (chips) and calculator electronics was one of the leading edges of semiconductor development. U.S. semiconductor manufacturers led the world in large scale integration (LSI) semiconductor development, squeezing more and more functions into individual integrated circuits. This led to alliances between Japanese calculator manufacturers and U.S. semiconductor companies: Canon Inc. with Texas Instruments, Hayakawa Electric (later renamed Sharp Corporation) with North-American Rockwell Microelectronics (later renamed Rockwell International), Busicom with Mostek and Intel, and General Instrument with Sanyo. Pocket calculators By 1970, a calculator could be made using just a few chips of low power consumption, allowing portable models powered from rechargeable batteries. The first handheld calculator was a 1967 prototype called Cal Tech, whose development was led by Jack Kilby at Texas Instruments in a research project to produce a portable calculator. It could add, multiply, subtract, and divide, and its output device was a paper tape. As a result of the "Cal-Tech" project, Texas Instruments was granted master patents on portable calculators. The first commercially produced portable calculators appeared in Japan in 1970, and were soon marketed around the world. These included the Sanyo ICC-0081 "Mini Calculator", the Canon Pocketronic, and the Sharp QT-8B "micro Compet". The Canon Pocketronic was a development from the "Cal-Tech" project. It had no traditional display; numerical output was on thermal paper tape. Sharp put in great efforts in size and power reduction and introduced in January 1971 the Sharp EL-8, also marketed as the Facit 1111, which was close to being a pocket calculator. It weighed 1.59 pounds (721 grams), had a vacuum fluorescent display, rechargeable NiCad batteries, and initially sold for US$395. However, integrated circuit development efforts culminated in early 1971 with the introduction of the first "calculator on a chip", the MK6010 by Mostek, followed by Texas Instruments later in the year. Although these early hand-held calculators were very costly, these advances in electronics, together with developments in display technology (such as the vacuum fluorescent display, LED, and LCD), led within a few years to the cheap pocket calculator available to all. In 1971, Pico Electronics and General Instrument also introduced their first collaboration in ICs, a full single chip calculator IC for the Monroe Royal Digital III calculator. 
Pico was a spinout by five GI design engineers whose vision was to create single chip calculator ICs. Pico and GI went on to have significant success in the burgeoning handheld calculator market. The first truly pocket-sized electronic calculator was the Busicom LE-120A "HANDY", which was marketed early in 1971. Made in Japan, this was also the first calculator to use an LED display, the first hand-held calculator to use a single integrated circuit (then proclaimed as a "calculator on a chip"), the Mostek MK6010, and the first electronic calculator to run off replaceable batteries. Using four AA-size cells the LE-120A measures . The first European-made pocket-sized calculator, the DB 800, was made in May 1971 by Digitron in Buje, Croatia (former Yugoslavia), with four functions and an eight-digit display and special characters for a negative number and a warning that the calculation has too many digits to display. The first American-made pocket-sized calculator, the Bowmar 901B (popularly termed The Bowmar Brain), measuring , came out in the Autumn of 1971, with four functions and an eight-digit red LED display, for , while in August 1972 the four-function Sinclair Executive became the first slimline pocket calculator, measuring and weighing . It retailed for around £79 ( at the time). By the end of the decade, similar calculators were priced less than £5 ($). Following protracted development over the course of two years, including a botched partnership with Texas Instruments, Eldorado Electrodata released five pocket calculators in 1972. One, called the Touch Magic, was "no bigger than a pack of cigarettes" according to Administrative Management. The first Soviet-made pocket-sized calculator, the Elektronika B3-04, was developed by the end of 1973 and sold at the start of 1974. One of the first low-cost calculators was the Sinclair Cambridge, launched in August 1973. It retailed for £29.95 ($), or £5 ($) less in kit form, and later models included some scientific functions. The Sinclair calculators were successful because they were far cheaper than the competition; however, their design led to slow and less accurate computations of transcendental functions (maximum three decimal places of accuracy). Scientific pocket calculators Meanwhile, Hewlett-Packard (HP) had been developing a pocket calculator. Launched in early 1972, it was unlike the other basic four-function pocket calculators then available in that it was the first pocket calculator with scientific functions that could replace a slide rule. The $395 HP-35, along with nearly all later HP engineering calculators, uses reverse Polish notation (RPN), also called postfix notation. A calculation like "8 plus 5" is, using RPN, performed by pressing 8, Enter, 5, and +, instead of the algebraic infix notation sequence 8, +, 5, =. It had 35 buttons and was based on the Mostek Mk6020 chip. The first Soviet scientific pocket-sized calculator, the "B3-18", was completed by the end of 1975. In 1973, Texas Instruments (TI) introduced the SR-10 (SR signifying slide rule), an algebraic entry pocket calculator using scientific notation, for $150. Shortly after, the SR-11 featured an added key for entering pi (π). It was followed the next year by the SR-50, which added log and trig functions to compete with the HP-35, and in 1977 by the mass-marketed TI-30 line, which is still produced. In 1978, a new company, Calculated Industries, arose which focused on specialized markets.
Their first calculator, the Loan Arranger (1978), was a pocket calculator marketed to the real estate industry with preprogrammed functions to simplify the process of calculating payments and future values. In 1985, CI launched a calculator for the construction industry called the Construction Master, which came preprogrammed with common construction calculations (such as angles, stairs, roofing math, pitch, rise, run, and feet-inch fraction conversions). This would be the first in a line of construction-related calculators. Programmable pocket calculators The first programmable pocket calculator was the HP-65, in 1974; it had a capacity of 100 instructions, and could store and retrieve programs with a built-in magnetic card reader. Two years later the HP-25C introduced continuous memory, i.e., programs and data were retained in CMOS memory during power-off. In 1979, HP released the first alphanumeric, programmable, expandable calculator, the HP-41C. It could be expanded with random-access memory (RAM, for memory) and read-only memory (ROM, for software) modules, and peripherals like bar code readers, microcassette and floppy disk drives, paper-roll thermal printers, and miscellaneous communication interfaces (RS-232, HP-IL, HP-IB). The first Soviet pocket battery-powered programmable calculator, the Elektronika B3-21, was developed by the end of 1976 and released at the start of 1977. The successor of the B3-21, the Elektronika B3-34, was not backward compatible with the B3-21, even though it kept the reverse Polish notation (RPN). Thus the B3-34 defined a new command set, which was later used in a series of later programmable Soviet calculators. Despite very limited abilities (98 bytes of instruction memory and about 19 stack and addressable registers), people managed to write all kinds of programs for them, including adventure games and libraries of calculus-related functions for engineers. Hundreds, perhaps thousands, of programs were written for these machines, from practical scientific and business software, which were used in real-life offices and labs, to fun games for children. The Elektronika MK-52 calculator (using the extended B3-34 command set, and featuring internal EEPROM memory for storing programs and an external interface for EEPROM cards and other peripherals) was used in the Soviet spacecraft program (for the Soyuz TM-7 flight) as a backup for the on-board computer. This series of calculators was also noted for a large number of highly counter-intuitive, mysterious, undocumented features, somewhat similar to the "synthetic programming" of the American HP-41, which were exploited by applying normal arithmetic operations to error messages, jumping to nonexistent addresses and other methods. A number of respected monthly publications, including the popular science magazine Nauka i Zhizn (Наука и жизнь, Science and Life), featured special columns dedicated to optimization methods for calculator programmers and updates on undocumented features for hackers, which grew into a whole esoteric science with many branches, named "yeggogology" ("еггогология"). The error messages on those calculators appear as the Russian word "YEGGOG" ("ЕГГОГ"), which, unsurprisingly, translates to "Error". A similar hacker culture in the US revolved around the HP-41, which was also noted for a large number of undocumented features and was much more powerful than the B3-34. Technical improvements Through the 1970s the hand-held electronic calculator underwent rapid development.
The red LED and blue/green vacuum fluorescent displays consumed a lot of power, and the calculators either had a short battery life (often measured in hours, so rechargeable nickel-cadmium batteries were common) or were large so that they could take larger, higher-capacity batteries. In the early 1970s liquid-crystal displays (LCDs) were in their infancy and there was a great deal of concern that they only had a short operating lifetime. Busicom introduced the Busicom LE-120A "HANDY" calculator, the first pocket-sized calculator and the first with an LED display, and announced the Busicom LC with an LCD. However, there were problems with this display and the calculator never went on sale. The first successful calculators with LCDs were manufactured by Rockwell International and sold from 1972 by other companies under such names as: Dataking LC-800, Harden DT/12, Ibico 086, Lloyds 40, Lloyds 100, Prismatic 500 (a.k.a. P500), Rapid Data Rapidman 1208LC. The LCDs were an early form using the dynamic scattering mode (DSM), with the numbers appearing as bright against a dark background. To present a high-contrast display these models illuminated the LCD using a filament lamp and solid plastic light guide, which negated the low power consumption of the display. These models appear to have been sold only for a year or two. A more successful series of calculators using a reflective DSM-LCD was launched in 1972 by Sharp Inc with the Sharp EL-805, which was a slim pocket calculator. This, and a few other similar models, used Sharp's Calculator On Substrate (COS) technology. An extension of one glass plate needed for the liquid crystal display was used as a substrate to mount the needed chips based on a new hybrid technology. The COS technology may have been too costly, since it was only used in a few models before Sharp reverted to conventional circuit boards. In the mid-1970s the first calculators appeared with field-effect, twisted nematic (TN) LCDs with dark numerals against a grey background, though the early ones often had a yellow filter over them to cut out damaging ultraviolet rays. The advantage of LCDs is that they are passive light modulators reflecting light, which require much less power than light-emitting displays such as LEDs or VFDs. This led the way to the first credit-card-sized calculators, such as the Casio Mini Card LC-78 of 1978, which could run for months of normal use on button cells. There were also improvements to the electronics inside the calculators. All of the logic functions of a calculator had been squeezed into the first "calculator on a chip" integrated circuits (ICs) in 1971, but this was leading-edge technology of the time and yields were low and costs were high. Many calculators continued to use two or more ICs, especially the scientific and the programmable ones, into the late 1970s. The power consumption of the integrated circuits was also reduced, especially with the introduction of CMOS technology. First appearing in the Sharp "EL-801" in 1972, CMOS logic cells draw appreciable power only when their transistors change state. The LED and VFD displays often required added driver transistors or ICs, whereas the LCDs were more amenable to being driven directly by the calculator IC itself. With this low power consumption came the possibility of using solar cells as the power source, realised around 1978 by calculators such as the Royal Solar 1, Sharp EL-8026, and Teal Photon.
Mass-market phase At the start of the 1970s, hand-held electronic calculators were very costly, at two or three weeks' wages, and so were a luxury item. The high price was due to their construction requiring many mechanical and electronic components which were costly to produce, and production runs that were too small to exploit economies of scale. Many firms saw that there were good profits to be made in the calculator business with the margin on such high prices. However, the cost of calculators fell as components and their production methods improved, and the effect of economies of scale was felt. By 1976, the cost of the cheapest four-function pocket calculator had dropped to a few dollars, about 1/20 of the cost five years before. The results of this were that the pocket calculator was affordable, and that it was now difficult for the manufacturers to make a profit from calculators, leading to many firms dropping out of the business or closing. The firms that survived making calculators tended to be those with high outputs of higher quality calculators, or producing high-specification scientific and programmable calculators. Mid-1980s to present The first calculator capable of symbolic computing was the HP-28C, released in 1987. It could, for example, solve quadratic equations symbolically. The first graphing calculator was the Casio fx-7000G released in 1985. The two leading manufacturers, HP and TI, released increasingly feature-laden calculators during the 1980s and 1990s. At the turn of the millennium, the line between a graphing calculator and a handheld computer was not always clear, as some very advanced calculators such as the TI-89, the Voyage 200 and HP-49G could differentiate and integrate functions, solve differential equations, run word processing and PIM software, and connect by wire or IR to other calculators/computers. The HP 12c financial calculator is still produced. It was introduced in 1981 and is still being made with few changes. The HP 12c featured the reverse Polish notation mode of data entry. In 2003 several new models were released, including an improved version of the HP 12c, the "HP 12c platinum edition" which added more memory, more built-in functions, and the addition of the algebraic mode of data entry. Calculated Industries competed with the HP 12c in the mortgage and real estate markets by differentiating the key labeling; changing the "I", "PV", "FV" to easier labeling terms such as "Int", "Term", "Pmt", and not using the reverse Polish notation. However, CI's more successful calculators involved a line of construction calculators, which evolved and expanded in the 1990s to present. According to Mark Bollman, a mathematics and calculator historian and associate professor of mathematics at Albion College, the "Construction Master is the first in a long and profitable line of CI construction calculators" which carried them through the 1980s, 1990s, and to the present. Use in education In most countries, students use calculators for schoolwork. There was some initial resistance to the idea out of fear that basic or elementary arithmetic skills would suffer. There remains disagreement about the importance of the ability to perform calculations in the head, with some curricula restricting calculator use until a certain level of proficiency has been obtained, while others concentrate more on teaching estimation methods and problem-solving. 
Research suggests that inadequate guidance in the use of calculating tools can restrict the kind of mathematical thinking that students engage in. Others have argued that calculator use can even cause core mathematical skills to atrophy, or that such use can prevent understanding of advanced algebraic concepts. In December 2011 the UK's Minister of State for Schools, Nick Gibb, voiced concern that children can become "too dependent" on the use of calculators. As a result, the use of calculators is to be included as part of a review of the curriculum. In the United States, many math educators and boards of education have enthusiastically endorsed the National Council of Teachers of Mathematics (NCTM) standards and actively promoted the use of classroom calculators from kindergarten through high school. Calculators may in some circumstances be used within school and college examinations. In the United Kingdom there are limitations on the type of calculator which may be used in an examination, to avoid malpractice. Some calculators which offer additional functionality have an "exam mode" setting which makes them compliant with examination regulations. Personal computers Personal computers often come with a calculator utility program that emulates the appearance and functions of a calculator, using the graphical user interface to portray a calculator. Examples include the Windows Calculator, Apple's Calculator, and KDE's KCalc. Most personal digital assistants (PDAs) and smartphones also have such a feature. Calculators compared to computers The fundamental difference between a calculator and a computer is that a computer can be programmed in a way that allows the program to take different branches according to intermediate results, while calculators are pre-designed with specific functions (such as addition, multiplication, and logarithms) built in. The distinction is not clear-cut: some devices classed as programmable calculators have programming functions, sometimes with support for programming languages (such as RPL or TI-BASIC). For instance, instead of a hardware multiplier, a calculator might implement floating-point mathematics with code in read-only memory (ROM), and compute trigonometric functions with the CORDIC algorithm because CORDIC does not require much multiplication (a brief sketch of CORDIC follows below). Bit-serial logic designs are more common in calculators whereas bit-parallel designs dominate general-purpose computers, because a bit-serial design minimizes chip complexity, but takes many more clock cycles. This distinction blurs with high-end calculators, which use processor chips associated with computer and embedded systems design, such as the Z80, MC68000, and ARM architectures, and some custom designs specialized for the calculator market.
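To make the CORDIC remark above concrete, here is a minimal sketch of CORDIC in rotation mode. It is illustrative only: the function name cordic_sin_cos and the iteration count are arbitrary choices, and Python floats are used for clarity, whereas a real calculator chip would use fixed-point (or BCD) arithmetic in which multiplying by 2^-i is just a shift, so the loop needs no general multiplier.

```python
import math

# Sketch of CORDIC in rotation mode: the angle is decomposed into a sum of
# arctan(2^-i) "micro-rotations", each applied with shifts and adds only.
# Floats are used here for readability; hardware would use fixed-point shifts.

N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]   # arctan(2^-i) lookup table

K = 1.0
for i in range(N):                                   # cumulative CORDIC gain, pre-applied below
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(theta):
    """Return (sin(theta), cos(theta)) for theta in roughly [-pi/2, pi/2] radians."""
    x, y, z = K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0                   # rotate toward the remaining angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x

s, c = cordic_sin_cos(math.radians(30))
print(round(s, 6), round(c, 6))   # approximately 0.5 and 0.866025
```

Arguments outside the convergence range would first be reduced using symmetry (for example, sin(π − θ) = sin θ), which is how such routines are typically wrapped.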
Technology
Basics_3
null
7626
https://en.wikipedia.org/wiki/Cetacea
Cetacea
Cetacea is an infraorder of aquatic mammals belonging to the order Artiodactyla that includes whales, dolphins and porpoises. Key characteristics are their fully aquatic lifestyle, streamlined body shape, often large size and exclusively carnivorous diet. They propel themselves through the water with powerful up-and-down movement of their tail, which ends in a paddle-like fluke, using their flipper-shaped forelimbs to maneuver. While the majority of cetaceans live in marine environments, a small number reside solely in brackish water or fresh water. Having a cosmopolitan distribution, they can be found in some rivers and all of Earth's oceans, and many species inhabit vast ranges where they migrate with the changing of the seasons. Cetaceans are famous for their high intelligence, complex social behaviour, and the enormous size of some of the group's members. For example, the blue whale reaches a maximum confirmed length of and a weight of 173 tonnes (190 short tons), making it the largest animal ever known to have existed. There are approximately 89 living species split into two parvorders: Odontoceti or toothed whales (containing porpoises, dolphins, other predatory whales like the beluga and the sperm whale, and the poorly understood beaked whales) and the filter-feeding Mysticeti or baleen whales (which includes species like the blue whale, the humpback whale and the bowhead whale). Despite their highly modified bodies and carnivorous lifestyle, genetic and fossil evidence places cetaceans as nested within even-toed ungulates, most closely related to hippopotamus within the clade Whippomorpha. Cetaceans have been extensively hunted for their meat, blubber and oil by commercial operations. Although the International Whaling Commission has agreed on putting a halt to commercial whaling, whale hunting is still going on, either under IWC quotas to assist the subsistence of Arctic native people or in the name of scientific research, although a large spectrum of non-lethal methods are now available to study marine mammals in the wild. Cetaceans also face severe environmental hazards, ranging from underwater noise pollution, entanglement in abandoned ropes and nets, collisions with ships, and the build-up of plastics and heavy metals, to accelerating climate change, but how much they are affected varies widely from species to species, from minimally in the case of the southern bottlenose whale to the baiji (Chinese river dolphin), which is considered to be functionally extinct due to human activity. Baleen whales and toothed whales The two parvorders, baleen whales (Mysticeti) and toothed whales (Odontoceti), are thought to have diverged around thirty-four million years ago. Baleen whales have bristles made of keratin instead of teeth. The bristles filter krill and other small invertebrates from seawater. Grey whales feed on bottom-dwelling mollusks. Rorquals (balaenopterids) use throat pleats to expand their mouths to take in food and sieve out the water. Balaenids (right whales and bowhead whales) have massive heads that can make up 40% of their body mass. Most mysticetes prefer the food-rich colder waters of the Northern and Southern Hemispheres, migrating to the Equator to give birth. During this process, they are capable of fasting for several months, relying on their fat reserves. The parvorder of Odontocetes – the toothed whales – includes sperm whales, beaked whales, orcas, dolphins and porpoises.
Generally their teeth have evolved to catch fish, squid or other marine invertebrates, not for chewing them, so prey is swallowed whole. Teeth are shaped like cones (dolphins and sperm whales), spades (porpoises), pegs (belugas), tusks (narwhals) or are variable (beaked whale males). Female beaked whales' teeth are hidden in the gums and are not visible, and most male beaked whales have only two short tusks. Narwhals have vestigial teeth other than their tusk, which is present in males and 15% of females and has millions of nerves to sense water temperature, pressure and salinity. A few toothed whales, such as some orcas, feed on mammals, such as pinnipeds and other whales. Toothed whales have well-developed senses – their eyesight and hearing are adapted for both air and water, and they have advanced sonar capabilities using their melon. Their hearing is so well-adapted for both air and water that some blind specimens can survive. Some species, such as sperm whales, are well adapted for diving to great depths. Several species of toothed whales show sexual dimorphism, in which the males differ from the females, usually for purposes of sexual display or aggression. Anatomy Cetacean bodies are generally similar to those of fish, which can be attributed to their lifestyle and the habitat conditions. Their body is well-adapted to their habitat, although they share essential characteristics with other higher mammals (Eutheria). They have a streamlined shape, and their forelimbs are flippers. Almost all have a dorsal fin on their backs, but this can take on many forms, depending on the species. A few species, such as the beluga whale, lack them. Both the flipper and the fin are for stabilization and steering in the water. The male genitals and the mammary glands of females are sunken into the body. The male genitals are attached to a vestigial pelvis. The body is wrapped in a thick layer of fat, known as blubber. This provides thermal insulation and gives cetaceans their smooth, streamlined body shape. In larger species, it can reach a thickness up to . Sexual dimorphism evolved in many toothed whales. Sperm whales, narwhals, many members of the beaked whale family, several species of the porpoise family, orcas, pilot whales, eastern spinner dolphins and northern right whale dolphins show this characteristic. Males in these species developed external features absent in females that are advantageous in combat or display. For example, male sperm whales are up to 63% larger than females, and many beaked whales possess tusks used in competition among males. Hind legs are not present in cetaceans, nor are any other external body attachments such as a pinna and hair. Head Whales have an elongated head, especially baleen whales, due to the wide overhanging jaw. Bowhead whale plates can be long. Their nostril(s) make up the blowhole, with one in toothed whales and two in baleen whales. The nostrils are located on top of the head above the eyes so that the rest of the body can remain submerged while surfacing for air. The back of the skull is significantly shortened and deformed. By shifting the nostrils to the top of the head, the nasal passages extend perpendicularly through the skull. The teeth or baleen in the upper jaw sit exclusively on the maxilla. The braincase is concentrated through the nasal passage to the front and is correspondingly higher, with individual cranial bones that overlap. In toothed whales, connective tissue exists in the melon as a head buckle.
This is filled with air sacs and fat that aid in buoyancy and biosonar. The sperm whale has a particularly pronounced melon; this is called the spermaceti organ and contains the eponymous spermaceti, hence the name "sperm whale". Even the long tusk of the narwhal is a vice-formed tooth. In many toothed whales, the depression in their skull is due to the formation of a large melon and multiple, asymmetric air bags. River dolphins, unlike most other cetaceans, can turn their head 90°. Most other cetaceans have fused neck vertebrae and are unable to turn their head at all. The baleen of baleen whales consists of long, fibrous strands of keratin. Located in place of the teeth, it has the appearance of a huge fringe and is used to sieve the water for plankton and krill. Brain Sperm whales have the largest brain mass of any animal on Earth, averaging and in mature males. The brain-to-body mass ratio in some odontocetes, such as belugas and narwhals, is second only to humans. In some whales, however, it is less than half that of humans: 0.9% versus 2.1%. In cetaceans, evolution in the water has caused changes to the head that have modified brain shape such that the brain folds around the insula and expands more laterally than in terrestrial mammals. As a result, the cetacean prefrontal cortex (compared to that in humans) is positioned laterally rather than frontally. Brain size was previously considered a major indicator of intelligence. Since most of the brain is used for maintaining bodily functions, greater ratios of brain to body mass may increase the amount of brain mass available for cognitive tasks. Allometric analysis of the relationship between mammalian brain mass and body mass for different species of mammals shows that larger species generally have larger brains. However, this increase is not fully proportional: typically the brain mass increases only in proportion to somewhere between the two-thirds power (the square of the cube root) and the three-quarters power (the cube of the fourth root) of the body mass, that is, m_brain ∝ (m_body)^k, where k lies between 2/3 and 3/4. Thus if Species B is twice the mass of Species A, its brain mass will typically be only about 60% to 70% greater. Comparison of a particular animal's brain size with the expected brain size based on such an analysis provides an encephalization quotient that can be used as an indication of animal intelligence. The neocortex of many cetaceans is home to elongated spindle neurons that, prior to 2019, were known only in hominids. In humans, these cells are thought to be involved in social conduct, emotions, judgment and theory of mind. Cetacean spindle neurons are found in areas of the brain homologous to where they are found in humans, suggesting they perform a similar function. Skeleton The cetacean skeleton is largely made up of cortical bone, which stabilizes the animal in the water. For this reason, the usual terrestrial compact bones, which are finely woven cancellous bone, are replaced with lighter and more elastic material. In many places, bone elements are replaced by cartilage and even fat, thereby improving their hydrostatic qualities. The ear and the muzzle contain a high-density bone, exclusive to cetaceans and resembling porcelain, that conducts sound better than other bones, thus aiding biosonar. The number of vertebrae that make up the spine varies by species, ranging from forty to ninety-three.
The cervical spine, found in all mammals, consists of seven vertebrae which, however, are reduced or fused. This fusion provides stability during swimming at the expense of mobility. The fins are carried by the thoracic vertebrae, ranging from nine to seventeen individual vertebrae. The sternum is cartilaginous. The last two to three pairs of ribs are not connected and hang freely in the body wall. The stable lumbar and tail include the other vertebrae. Below the caudal vertebrae is the chevron bone. The front limbs are paddle-shaped with shortened arms and elongated finger bones, to support movement. They are connected by cartilage. The second and third fingers display a proliferation of the finger members, a so-called hyperphalangy. The shoulder joint is the only functional joint in all cetaceans except for the Amazon river dolphin. The collarbone is completely absent. Fluke Cetaceans have a cartilaginous fluke at the end of their tails that is used for propulsion. The fluke is set horizontally on the body and used with vertical movements, unlike fish and ichthyosaurs, which have vertical tails which move horizontally. Physiology Circulation Cetaceans have powerful hearts. Blood oxygen is distributed effectively throughout the body. They are warm-blooded, i.e., they hold a nearly constant body temperature. Respiration Cetaceans have lungs, meaning they breathe air. An individual can last without a breath from a few minutes to over two hours depending on the species. Cetacea are deliberate breathers who must be awake to inhale and exhale. When stale air, warmed from the lungs, is exhaled, it condenses as it meets colder external air. As with a terrestrial mammal breathing out on a cold day, a small cloud of 'steam' appears. This is called the 'spout' and varies across species in shape, angle and height. Species can be identified at a distance using this characteristic. The structure of the respiratory and circulatory systems is of particular importance for the life of marine mammals. The oxygen balance is effective. Each breath can replace up to 90% of the total lung volume. For land mammals, in comparison, this value is usually about 15%. During inhalation, about twice as much oxygen is absorbed by the lung tissue as in a land mammal. As with all mammals, the oxygen is stored in the blood and the lungs, but in cetaceans, it is also stored in various tissues, mainly in the muscles. The muscle pigment, myoglobin, provides an effective bond. This additional oxygen storage is vital for deep diving, since beyond a depth around , the lung tissue is almost completely compressed by the water pressure. Abdominal organs The stomach consists of three chambers. The first region is formed by a loose gland and a muscular forestomach (missing in beaked whales); this is followed by the main stomach and the pylorus. Both are equipped with glands to help digestion. A bowel adjoins the stomachs, whose individual sections can only be distinguished histologically. The liver is large and separate from the gall bladder. The kidneys are long and flattened. The salt concentration in cetacean blood is lower than that in seawater, requiring kidneys to excrete salt. This allows the animals to drink seawater. The urinary bladder is proportionally smaller in cetaceans than in land mammals. The testes are located internally, without an external scrotum. The uterus is bicornuate. Senses Cetacean eyes are set on the sides rather than the front of the head. 
This means only species with pointed 'beaks' (such as dolphins) have good binocular vision forward and downward. Tear glands secrete greasy tears, which protect the eyes from the salt in the water. The lens is almost spherical, which is most efficient at focusing the minimal light that reaches deep water. Odontocetes have little to no ability to taste or smell, while mysticetes are believed to have some ability to smell because of their reduced, but functional, olfactory system. Cetaceans are known to possess excellent hearing. At least one species, the tucuxi or Guiana dolphin, is able to use electroreception to sense prey. Ears The external ear has lost the pinna (visible ear), but still retains a narrow ear canal. The three small bones or ossicles that transmit sound within each ear are dense and compact, and differently shaped from those of land mammals. The semicircular canals are much smaller relative to body size than in other mammals. A bony structure of the middle and inner ear, the auditory bulla, is composed of two compact and dense bones (the periotic and tympanic). It is housed in a cavity in the middle ear; in the Odontoceti (apart from the physeterids), this cavity is filled with dense foam and completely surrounds the bulla, which is connected to the skull only by ligaments. This may isolate the ear from sounds transmitted through the bones of the skull, something that also happens in bats. Cetaceans use sound to communicate, using groans, moans, whistles, clicks or the 'singing' of the humpback whale. Echolocation Odontoceti are generally capable of echolocation. They can discern the size, shape, surface characteristics, distance and movement of an object. They can search for, chase and catch fast-swimming prey in total darkness. Most Odontoceti can distinguish between prey and nonprey (such as humans or boats); captive Odontoceti can be trained to distinguish between, for example, balls of different sizes or shapes. Echolocation clicks also contain characteristic details unique to each animal, which may suggest that toothed whales can discern between their own click and that of others. While differences in ear structure associated with echolocating abilities are found amongst Cetacea, cranial asymmetry has also been found to be a factor in the ability to produce sounds used in echolocation. Mysticeti, which do not have the ability to echolocate, possess general symmetry of the skull and facial region, while Odontoceti display a nasofacial asymmetry that is linked to their echolocating abilities. Differences in the level of asymmetry also seem to correlate with differences in the types of sounds produced. Mysticeti have exceptionally thin, wide basilar membranes in their cochleae without stiffening agents, making their ears adapted for processing low to infrasonic frequencies. Chromosomes The ancestral karyotype consists of 2n = 44 chromosomes. Cetaceans have four pairs of telocentric chromosomes (whose centromeres sit at one of the telomeres), two to four pairs of subtelocentric and one or two large pairs of submetacentric chromosomes. The remaining chromosomes are metacentric—the centromere is approximately in the middle—and are rather small. All cetaceans have 2n = 44 chromosomes, except the sperm whales and pygmy sperm whales, which have 2n = 42. Ecology Range and habitat Cetaceans are found in many aquatic habitats.
While many marine species, such as the blue whale, the humpback whale and the orca, have a distribution area that includes nearly the entire ocean, some species occur only locally or in broken populations. These include the vaquita, which inhabits a small part of the Gulf of California, and Hector's dolphin, which lives in some coastal waters in New Zealand. Most river dolphin species live exclusively in fresh water. Many species inhabit specific latitudes, often in tropical or subtropical waters, such as Bryde's whale or Risso's dolphin. Others are found only in a specific body of water. The southern right whale dolphin and the hourglass dolphin live only in the Southern Ocean. The narwhal and the beluga live only in the Arctic Ocean. Sowerby's beaked whale and the Clymene dolphin exist only in the Atlantic, and the Pacific white-sided dolphin and the northern right whale dolphin live only in the North Pacific. Cosmopolitan species may be found in the Pacific, Atlantic and Indian Oceans. However, northern and southern populations become genetically separated over time. In some species, this separation leads eventually to a divergence of the species, such as produced the southern right whale, North Pacific right whale and North Atlantic right whale. Migratory species' reproductive sites often lie in the tropics and their feeding grounds in polar regions. Thirty-two species are found in European waters, including twenty-five toothed and seven baleen species. Whale migration Many species of whales migrate on a latitudinal basis to move between seasonal habitats. For example, the gray whale migrates round trip. The journey begins at winter birthing grounds in warm lagoons along Baja California, and traverses of coastline to summer feeding grounds in the Bering, Chukchi and Beaufort seas off the coast of Alaska. Behaviour Sleep Conscious-breathing cetaceans sleep but cannot afford to be unconscious for long, because they may drown. While knowledge of sleep in wild cetaceans is limited, toothed cetaceans in captivity have been recorded to exhibit unihemispheric slow-wave sleep (USWS), which means they sleep with one side of their brain at a time, so that they may swim, breathe consciously and avoid both predators and social contact during their period of rest. A 2008 study found that sperm whales sleep in vertical postures just under the surface in passive shallow 'drift-dives', generally during the day, during which whales do not respond to passing vessels unless they are in contact, leading to the suggestion that whales possibly sleep during such dives. Diving While diving, the animals reduce their oxygen consumption by lowering the heart activity and blood circulation; individual organs receive no oxygen during this time. Some rorquals can dive for up to 40 minutes, sperm whales between 60 and 90 minutes and bottlenose whales for two hours. Diving depths average about . Species such as sperm whales can dive to , although more commonly . Social relations Most cetaceans are social animals, although a few species live in pairs or are solitary. A group, known as a pod, usually consists of ten to fifty animals, but on occasion, such as mass availability of food or during mating season, groups may encompass more than one thousand individuals. Inter-species socialization can occur. Pods have a fixed hierarchy, with the priority positions determined by biting, pushing or ramming. The behavior in the group is aggressive only in situations of stress such as lack of food, but usually it is peaceful.
Contact swimming, mutual fondling and nudging are common. The playful behavior of the animals, which is manifested in air jumps, somersaults, surfing, or fin hitting, occurs more often than not in smaller cetaceans, such as dolphins and porpoises. Whale song Males in some baleen species communicate via whale song, sequences of high pitched sounds. These "songs" can be heard for hundreds of kilometers. Each population generally shares a distinct song, which evolves over time. Sometimes, an individual can be identified by its distinctive vocals, such as the 52-hertz whale that sings at a higher frequency than other whales. Some individuals are capable of generating over 600 distinct sounds. In baleen species such as humpbacks, blues and fins, male-specific song is believed to be used to attract and display fitness to females. Hunting Pod groups also hunt, often with other species. Many species of dolphins accompany large tunas on hunting expeditions, following large schools of fish. The orca hunts in pods and targets belugas and even larger whales. Humpback whales, among others, form in collaboration bubble carpets to herd krill or plankton into bait balls before lunging at them. Intelligence Cetacea are known to teach, learn, cooperate, scheme and grieve. Smaller cetaceans, such as dolphins and porpoises, engage in complex play behavior, including such things as producing stable underwater toroidal air-core vortex rings or "bubble rings". The two main methods of bubble ring production are rapid puffing of air into the water and allowing it to rise to the surface, forming a ring, or swimming repeatedly in a circle and then stopping to inject air into the helical vortex currents thus formed. They also appear to enjoy biting the vortex rings, so that they burst into many separate bubbles and then rise quickly to the surface. Whales produce bubble nets to aid in herding prey. Larger whales are also thought to engage in play. The southern right whale elevates its tail fluke above the water, remaining in the same position for a considerable time. This is known as "sailing". It appears to be a form of play and is most commonly seen off the coast of Argentina and South Africa. Humpback whales also display this behaviour. Self-awareness appears to be a sign of abstract thinking. Self-awareness, although not well-defined, is believed to be a precursor to more advanced processes such as metacognitive reasoning (thinking about thinking) that humans exploit. Dolphins appear to possess self-awareness. The most widely used test for self-awareness in animals is the mirror test, in which a temporary dye is placed on an animal's body and the animal is then presented with a mirror. Researchers then explore whether the animal shows signs of self-recognition. Critics claim that the results of these tests are susceptible to the Clever Hans effect. This test is much less definitive than when used for primates. Primates can touch the mark or the mirror, while dolphins cannot, making their alleged self-recognition behavior less certain. Skeptics argue that behaviors said to identify self-awareness resemble existing social behaviors, so researchers could be misinterpreting self-awareness for social responses. Advocates counter that the behaviors are different from normal responses to another individual. Dolphins show less definitive behavior of self-awareness, because they have no pointing ability. In 1995, Marten and Psarakos used video to test dolphin self-awareness. 
They showed dolphins real-time footage of themselves, recorded footage and another dolphin. They concluded that their evidence suggested self-awareness rather than social behavior. While this particular study has not been replicated, dolphins later "passed" the mirror test. Decision-making Collective decisions are an important part of life as a cetacean for the many species that spend time in groups (whether these be temporary such as the fission-fusion dynamics of many smaller dolphin species or long-term stable associations as are seen in killer whale and sperm whale matrilines). Little is known about how these decisions work, though studies have found evidence messy consensus decisions in groups of sperm whales and leadership in other species like bottlenose dolphins and killer whales. Life history Reproduction and brooding Most cetaceans sexually mature at seven to 10 years. An exception to this is the La Plata dolphin, which is sexually mature at two years, but lives only to about 20. The sperm whale reaches sexual maturity within about 20 years and has a lifespan between 50 and 100 years. For most species, reproduction is seasonal. Ovulation coincides with male fertility. This cycle is usually coupled with seasonal movements that can be observed in many species. Most toothed whales have no fixed bonds. In many species, females choose several partners during a season. Baleen whales are largely monogamous within each reproductive period. Gestation ranges from 9 to 16 months. Duration is not necessarily a function of size. Porpoises and blue whales gestate for about 11 months. As with all mammals other than marsupials and monotremes, the embryo is fed by the placenta, an organ that draws nutrients from the mother's bloodstream. Mammals without placentas either lay minuscule eggs (monotremes) or bear minuscule offspring (marsupials). Cetaceans usually bear one calf. In the case of twins, one usually dies, because the mother cannot produce sufficient milk for both. In modern cetaceans, the fetus is usually positioned for a tail-first delivery. Contrary to popular belief, this is not to minimize the risk of drowning during delivery. More likely it has to do with the mechanics of birthing and the shape of the fetus. After birth, the mother carries the infant to the surface for its first breath. At birth, they are about one-third of their adult length and tend to be independently active, comparable to terrestrial mammals. Suckling Like other placental mammals, cetaceans give birth to well-developed calves and nurse them with milk from their mammary glands. When suckling, the mother actively splashes milk into the mouth of the calf, using the muscles of her mammary glands, as the calf has no lips. This milk usually has a high-fat content, ranging from 16 to 46%, causing the calf to increase rapidly in size and weight. In many small cetaceans, suckling lasts for about four months. In large species, it lasts for over a year and involves a strong bond between mother and offspring. The mother is solely responsible for brooding. In some species, so-called "aunts" occasionally suckle the young. This reproductive strategy provides a few offspring that have a high survival rate. Lifespan Among cetaceans, whales are distinguished by an unusual longevity compared to other higher mammals. Some species, such as the bowhead whale (Balaena mysticetus), can reach over 200 years. 
Based on the annual rings of the bony otic capsule, the age of the oldest known specimen is a male determined to be 211 years at the time of death. Death Upon death, whale carcasses fall to the deep ocean and provide a substantial habitat for marine life. Evidence of whale falls in present-day and fossil records shows that deep-sea whale falls support a rich assemblage of creatures, with a global diversity of 407 species, comparable to other neritic biodiversity hotspots, such as cold seeps and hydrothermal vents. Deterioration of whale carcasses happens through three stages. Initially, organisms such as sharks and hagfish scavenge the soft tissues at a rapid rate over a period of months and as long as two years. This is followed by the colonization of bones and surrounding sediments (which contain organic matter) by enrichment opportunists, such as crustaceans and polychaetes, throughout a period of years. Finally, sulfophilic bacteria reduce the bones releasing hydrogen sulfide enabling the growth of chemoautotrophic organisms, which in turn, support organisms such as mussels, clams, limpets and sea snails. This stage may last for decades and supports a rich assemblage of species, averaging 185 per site. Disease Brucellosis affects almost all mammals. It is distributed worldwide, while fishing and pollution have caused porpoise population density pockets, which risks further infection and disease spreading. Brucella ceti, most prevalent in dolphins, has been shown to cause chronic disease, increasing the chance of failed birth and miscarriages, male infertility, neurobrucellosis, cardiopathies, bone and skin lesions, strandings and death. Until 2008, no case had ever been reported in porpoises, but isolated populations have an increased risk and consequentially a high mortality rate. Evolution Fossil history Origins The direct ancestors of today's cetaceans are probably found within the Dorudontidae whose most famous member, Dorudon, lived at the same time as Basilosaurus. Both groups had already developed some of the typical anatomical features of today's whales, such as the fixed bulla, which replaces the mammalian eardrum, as well as sound-conducting elements for submerged directional hearing. Their wrists were stiffened and probably contributed to the typical build of flippers. The hind legs existed, however, but were significantly reduced in size and with a vestigial pelvis connection. Transition from land to sea The fossil record traces the gradual transition from terrestrial to aquatic life. The regression of the hind limbs allowed greater flexibility of the spine. This made it possible for whales to move around with the vertical tail hitting the water. The front legs transformed into flippers, costing them their mobility on land. One of the oldest members of ancient cetaceans (Archaeoceti) is Pakicetus from the Middle Eocene of Pakistan. This is an animal the size of a wolf, whose skeleton is known only partially. It had functioning legs and lived near the shore. This suggests the animal could still move on land. The long snout had carnivorous dentition. The transition from land to sea dates to about 49 million years ago, with the Ambulocetus ("running whale"), also discovered in Pakistan. It was up to long. The limbs of this archaeocete were leg-like, but it was already fully aquatic, indicating that a switch to a lifestyle independent from land happened extraordinarily quickly. The snout was elongated with overhead nostrils and eyes. 
The tail was strong and supported movement through water. Ambulocetus probably lived in mangroves in brackish water and fed in the riparian zone as a predator of fish and other vertebrates. Dating from about 45 million years ago are species such as Indocetus, Kutchicetus, Rodhocetus and Andrewsiphius, all of which were adapted to life in water. The hind limbs of these species were regressed and their body shapes resembled those of modern whales. Protocetidae family member Rodhocetus is considered the first to be fully aquatic. The body was streamlined and delicate with extended hand and foot bones. A pelvis fused to the lumbar spine was still present, making it possible to support the swimming movement of the tail. It was likely a good swimmer, but could probably move only clumsily on land, much like a modern seal. Marine animals From the late Eocene, about 40 million years ago, cetaceans populated the subtropical oceans and no longer emerged onto land. An example is the 18-metre-long Basilosaurus, sometimes called Zeuglodon. The transition from land to water was completed in about 10 million years. The Wadi Al-Hitan ("Whale Valley") in Egypt contains numerous skeletons of Basilosaurus, as well as other marine vertebrates. External phylogeny Molecular biology, immunology, and fossils show that cetaceans are phylogenetically closely related to the even-toed ungulates (Artiodactyla). Whales' direct lineage began in the early Eocene, around 55.8 million years ago, with early artiodactyls. Most molecular biological evidence suggests that hippos are the closest living relatives. Common anatomical features include similarities in the morphology of the posterior molars, as well as the bony ring on the temporal bone (bulla) and the involucre, a skull feature that was previously associated only with cetaceans. Since the fossil record suggests that the morphologically distinct hippo lineage dates back only about 15 million years, Cetacea and hippos apparently diverged from a common ancestor that was morphologically distinct from either. The most striking common feature is the talus, a bone in the upper ankle. Early cetaceans, the archaeocetes, show a double-pulley talus (astragalus), a feature otherwise found only in even-toed ungulates. Corresponding fossil finds come from Tethys Sea deposits in northern India and Pakistan. The Tethys Sea was a shallow sea between the Asian continent and the northward-bound Indian plate. Molecular and morphological evidence suggests that artiodactyls as traditionally defined are paraphyletic with respect to cetaceans. Cetaceans are deeply nested within the artiodactyls; the two groups together form a clade, a natural group with a common ancestor, for which the name Cetartiodactyla is sometimes used. Modern nomenclature divides Artiodactyla (or Cetartiodactyla) into four subordinate taxa: camelids (Tylopoda), pigs and peccaries (Suina), ruminants (Ruminantia), and hippos plus whales (Whippomorpha). Within this scheme, Cetacea sits inside Whippomorpha as the sister group of the hippopotamuses. Internal phylogeny Within Cetacea, the two parvorders are baleen whales (Mysticeti), which owe their name to their baleen, and toothed whales (Odontoceti), which have teeth shaped like cones, spades, pegs, or tusks, and can perceive their environment through biosonar.
The terms whale and dolphin are informal: Mysticeti: Whales, with four families: Balaenidae (right and bowhead whales), Cetotheriidae (pygmy right whales), Balaenopteridae (rorquals), Eschrichtiidae (grey whales) Odontoceti: Whales: with four families: Monodontidae (belugas and narwhals), Physeteridae (sperm whales), Kogiidae (dwarf and pygmy sperm whales), and Ziphiidae (beaked whales) Dolphins, with five families: Delphinidae (oceanic dolphins), Platanistidae (South Asian river dolphins), Lipotidae (old world river dolphins) Iniidae (new world river dolphins), and Pontoporiidae (La Plata dolphins) Porpoises, with one family: Phocoenidae The term 'great whales' covers those currently regulated by the International Whaling Commission: the Odontoceti families Physeteridae (sperm whales), Ziphiidae (beaked whales), and Kogiidae (pygmy and dwarf sperm whales); and Mysticeti families Balaenidae (right and bowhead whales), Cetotheriidae (pygmy right whales), Eschrichtiidae (grey whales), as well as part of the family Balaenopteridae (minke, Bryde's, sei, blue and fin; not Eden's and Omura's whales). Threats The primary threats to cetaceans come from people, both directly from whaling or drive hunting and indirect threats from fishing and pollution. Whaling Whaling is the practice of hunting whales, mainly baleen and sperm whales. This activity has gone on since the Stone Age. In the Middle Ages, reasons for whaling included their meat, oil usable as fuel and the jawbone, which was used in house construction. At the end of the Middle Ages, early whaling fleets aimed at baleen whales, such as bowheads. In the 16th and 17th centuries, the Dutch fleet had about 300 whaling ships with 18,000 crewmen. In the 18th and 19th centuries, baleen whales especially were hunted for their baleen, which was used as a replacement for wood, or in products requiring strength and flexibility such as corsets and crinoline skirts. In addition, the spermaceti found in the sperm whale was used as a machine lubricant and the ambergris as a material for pharmaceutical and perfume industries. In the second half of the 19th century, the explosive harpoon was invented, leading to a massive increase in the catch size. Large ships were used as "mother" ships for the whale handlers. In the first half of the 20th century, whales were of great importance as a supplier of raw materials. Whales were intensively hunted during this time; in the 1930s, 30,000 whales were killed. This increased to over 40,000 animals per year up to the 1960s, when stocks of large baleen whales collapsed. Most hunted whales are now threatened, with some great whale populations exploited to the brink of extinction. Atlantic and Korean gray whale populations were completely eradicated and the North Atlantic right whale population fell to some 300–600. The blue whale population is estimated to be around 14,000. The first efforts to protect whales came in 1931. Some particularly endangered species, such as the humpback whale (which then numbered about 100 animals), were placed under international protection and the first protected areas were established. In 1946, the International Whaling Commission (IWC) was established, to monitor and secure whale stocks. Whaling of 14 large species for commercial purposes was prohibited worldwide by this organization from 1985 to 2005, though some countries do not honor the prohibition. The stocks of species such as humpback and blue whales have recovered, though they are still threatened. 
The United States Congress passed the Marine Mammal Protection Act of 1972 to sustain marine mammal populations. It prohibits the taking of marine mammals except for several hundred per year taken in Alaska. Japanese whaling ships are allowed to hunt whales of different species for ostensibly scientific purposes. Aboriginal whaling is still permitted. About 1,200 pilot whales were taken in the Faroe Islands in 2017, and about 900 narwhals and 800 belugas per year are taken in Alaska, Canada, Greenland, and Siberia. About 150 minke are taken in Greenland per year, 120 gray whales in Siberia and 50 bowheads in Alaska, as aboriginal whaling, besides the 600 minke taken commercially by Norway, 300 minke and 100 sei taken by Japan and up to 100 fin whales taken by Iceland. Iceland and Norway do not recognize the ban and conduct commercial whaling. Norway and Japan are committed to ending the ban. Dolphins and other smaller cetaceans are sometimes hunted in an activity known as dolphin drive hunting. This is accomplished by driving a pod together with boats, usually into a bay or onto a beach. Their escape is prevented by closing off the route to the ocean with other boats or nets. Dolphins are hunted this way in several places around the world, including the Solomon Islands, the Faroe Islands, Peru and Japan (the best-known practitioner). Dolphins are mostly hunted for their meat, though some end up in dolphinaria. Despite the controversy, thousands of dolphins are caught in drive hunts each year. Fishing Dolphin pods often reside near large tuna shoals. This is known to fishermen, who look for dolphins to catch tuna. Dolphins are much easier to spot from a distance than tuna, since they regularly surface to breathe. The fishermen draw nets hundreds of meters wide in a circle around the dolphin groups, in the expectation that they will also net a tuna shoal. When the nets are pulled together, the dolphins become entangled under water and drown. Line fisheries in larger rivers are threats to river dolphins. For small cetaceans, targeted hunting is a greater threat than by-catch. In Southeast Asia, they are sold to locals as a replacement for fish, since the region's edible fish fetch higher revenues as exports. In the Mediterranean, small cetaceans are targeted to ease pressure on edible fish. Strandings A stranding occurs when a cetacean leaves the water and lies on a beach. In some cases, groups of whales strand together. The best known are mass strandings of pilot whales and sperm whales. Stranded cetaceans usually die because their body weight, no longer supported by water, compresses their lungs or breaks their ribs. Smaller whales can die of heatstroke because of their thermal insulation. The causes are not clear. Possible reasons for mass beachings include: toxic contaminants; debilitating parasites (in the respiratory tract, brain or middle ear); infections (bacterial or viral); flight from predators (including humans); social bonds within a group, so that the pod follows a stranded animal; disturbance of the magnetic sense by natural anomalies in the Earth's magnetic field; injuries; and noise pollution from shipping traffic, seismic surveys and military sonar experiments. Since 2000, whale strandings have frequently occurred following military sonar testing. In December 2001, the US Navy admitted partial responsibility for the beaching and the deaths of several marine mammals in March 2000. The coauthor of the interim report stated that the animals killed had been injured by the active sonar of some Navy ships.
Generally, underwater noise, which is still on the increase, is increasingly tied to strandings; because it impairs communication and sense of direction. Climate change influences the major wind systems and ocean currents, which also lead to cetacean strandings. Researchers studying strandings on the Tasmanian coast from 1920 to 2002 found that greater strandings occurred at certain time intervals. Years with increased strandings were associated with severe storms, which initiated cold water flows close to the coast. In nutrient-rich, cold water, cetaceans expect large prey animals, so they follow the cold water currents into shallower waters, where the risk is higher for strandings. Whales and dolphins who live in pods may accompany sick or debilitated pod members into shallow water, stranding them at low tide. Environmental hazards Heavy metals, residues of many plant and insect venoms and plastic waste flotsam are not biodegradable. Sometimes, cetaceans consume these hazardous materials, mistaking them for food items. As a result, the animals are more susceptible to disease and have fewer offspring. Damage to the ozone layer reduces plankton reproduction because of its resulting radiation. This shrinks the food supply for many marine animals, but the filter-feeding baleen whales are most impacted. Even the Nekton is, in addition to intensive exploitation, damaged by the radiation. Food supplies are also reduced long-term by ocean acidification due to increased absorption of increased atmospheric carbon dioxide. The CO2 reacts with water to form carbonic acid, which reduces the construction of the calcium carbonate skeletons of food supplies for zooplankton that baleen whales depend on. The military and resource extraction industries operate strong sonar and blasting operations. Marine seismic surveys use loud, low-frequency sound that show what is lying underneath the Earth's surface. Vessel traffic also increases noise in the oceans. Such noise can disrupt cetacean behavior such as their use of biosonar for orientation and communication. Severe instances can panic them, driving them to the surface. This leads to bubbles in blood gases and can cause decompression sickness. Naval exercises with sonar regularly results in fallen cetaceans that wash up with fatal decompression. Sounds can be disruptive at distances of more than . Damage varies across frequency and species. Relationship to humans Research history In Aristotle's time, the fourth century BCE, whales were regarded as fish due to their superficial similarity. Aristotle, however, observed many physiological and anatomical similarities with the terrestrial vertebrates, such as blood (circulation), lungs, uterus and fin anatomy. His detailed descriptions were assimilated by the Romans, but mixed with a more accurate knowledge of the dolphins, as mentioned by Pliny the Elder in his Natural history. In the art of this and subsequent periods, dolphins are portrayed with a high-arched head (typical of porpoises) and a long snout. The harbour porpoise was one of the most accessible species for early cetologists; because it could be seen close to land, inhabiting shallow coastal areas of Europe. Much of the findings that apply to all cetaceans were first discovered in porpoises. One of the first anatomical descriptions of the airways of a harbor porpoise dates from 1671 by John Ray. It nevertheless referred to the porpoise as a fish. 
In the 10th edition of Systema Naturae (1758), Swedish biologist and taxonomist Carl Linnaeus asserted that cetaceans were mammals and not fish. His groundbreaking binomial system formed the basis of modern whale classification. Culture Cetaceans have played a role in human culture through history. Prehistoric Stone Age petroglyphs, such as those in Roddoy and Reppa (Norway), and the Bangudae Petroglyphs in South Korea, depict them. Whale bones were used for many purposes. In the Neolithic settlement of Skara Brae on Orkney, sauce pans were made from whale vertebrae. Antiquity The whale was first mentioned in ancient Greece by Homer. There, it is called Ketos, a term that initially included all large marine animals. From this was derived the Roman word for whale, Cetus. Other names were phálaina (used by Aristotle; the Latin form is ballaena) for the female and, with characteristic irony, musculus ('little mouse') for the male. North Sea whales were called Physeter, a name that referred to the sperm whale, Physeter macrocephalus. Whales are described in particular by Aristotle, Pliny and Ambrose. All mention both live birth and suckling. Pliny describes the blowhole ('spray tube') and its connection with the lungs, and Ambrose claimed that large whales would take their young into their mouth to protect them. In the Bible especially, the leviathan plays a role as a sea monster. The creature, depicted variously as a giant crocodile, a dragon or a whale, was, according to the Bible, created by God and is to be destroyed by him again. In the Book of Job, the leviathan is described in more detail. The Book of Jonah contains a more recognizable description of a whale: the prophet Jonah, on his flight from the city of Nineveh, is swallowed by one. Dolphins are mentioned far more often than whales. Aristotle discusses dolphins, sacred animals of the Greeks, in his Historia Animalium and gives details of their life as aquatic animals. The Greeks admired the dolphin as a "king of the aquatic animals", though they erroneously classed it as a fish. Its intelligence was apparent both in its ability to escape from fishnets and in its collaboration with fishermen. River dolphins are known from the Ganges and—erroneously—the Nile. In the latter case the animal was equated with sharks and catfish. Supposedly they attacked even crocodiles. Dolphins appear in Greek mythology. Celebrated for their intelligence, they were said to have rescued multiple people from drowning. They were said to love music, probably because of their own song, and in the legends they saved famous musicians, such as Arion of Methymna on Lesbos. Dolphins belong to the domain of Poseidon and led him to his wife Amphitrite. Dolphins are associated with other gods, such as Apollo, Dionysus and Aphrodite. The Greeks paid tribute to both whales and dolphins with their own constellations. The constellation of the Whale (Ketos, Latin Cetus) lies south of the zodiac, while that of the Dolphin (Delphis, Latin Delphinus) lies north of it. Ancient art, including that of the Cretan Minoans, often included dolphin representations. Later they appeared on reliefs, gems, lamps, coins, mosaics and gravestones. A particularly popular representation is that of Arion or Taras riding on a dolphin. In early Christian art, the dolphin is a popular motif, at times used as a symbol of Christ. Middle Ages to the 19th century In his travel story Navigatio Sancti Brendani, St. Brendan described an encounter with a whale said to have taken place between the years 565 and 573.
He described how he and his companions entered a treeless island, which turned out to be a giant whale, which he called Jasconicus. He met this whale seven years later and rested on his back. Most descriptions of large whales from this time until the whaling era, beginning in the 17th century, were of beached whales, which resembled no other animal. This was particularly true for the sperm whale, the most frequently stranded in larger groups. Raymond Gilmore documented seventeen sperm whales in the estuary of the Elbe from 1723 to 1959 and thirty-one animals on the coast of Great Britain in 1784. In 1827, a blue whale beached itself off the coast of Ostend. Whales were used as attractions in museums and traveling exhibitions. Whalers from the 17th to 19th centuries depicted whales in drawings and recounted tales of their occupation. Although they knew that whales were harmless giants, they described battles with harpooned animals. These included descriptions of sea monsters, including huge whales, sharks, sea snakes, giant squid and octopuses. Among the first whalers who described their experiences on whaling trips was Captain William Scoresby from Great Britain, who published the book Northern Whale Fishery, describing the hunt for northern baleen whales. This was followed by Thomas Beale, a British surgeon, in his book Some observations on the natural history of the sperm whale in 1835; and Frederick Debell Bennett's The tale of a whale hunt in 1840. Whales were described in narrative literature and paintings, most famously in the novels Moby Dick by Herman Melville and Twenty Thousand Leagues Under the Seas by Jules Verne. Baleen was used to make vessel components such as the bottom of a bucket in the Scottish National Museum. The Norsemen crafted ornamented plates from baleen, sometimes interpreted as ironing boards. In the Canadian Arctic (east coast) in Punuk and Thule culture (1000–1600 C.E.), baleen was used to construct houses in place of wood as roof support for winter houses, with half of the building buried under the ground. The actual roof was probably made of animal skins that were covered with soil and moss. Modern culture In the 20th century, perceptions of cetaceans changed. They transformed from monsters into creatures of wonder, as science revealed them to be intelligent and peaceful animals. Hunting was replaced by whale and dolphin tourism. This change is reflected in films and novels. For example, the protagonist of the series Flipper was a bottle-nose dolphin. The TV series SeaQuest DSV (1993–1996), the movies Free Willy and Star Trek IV: The Voyage Home, and the book series The Hitchhiker's Guide to the Galaxy by Douglas Adams are examples. The study of whale song also produced a popular album, Songs of the Humpback Whale. Captivity Whales and dolphins have been kept in captivity for use in education, research and entertainment since the 19th century. Belugas Beluga whales were the first whales to be kept in captivity. Other species were too rare, too shy or too big. The first was shown at Barnum's Museum in New York City in 1861. For most of the 20th century, Canada was the predominant source. They were taken from the St. Lawrence River estuary until the late 1960s, after which they were predominantly taken from the Churchill River estuary until capture was banned in 1992. Russia then became the largest provider. Belugas are caught in the Amur Darya delta and their eastern coast and are transported domestically to aquaria or dolphinaria in Moscow, St. 
Petersburg and Sochi, or exported to countries such as Canada. They have not been domesticated. As of 2006, 30 belugas lived in Canada and 28 in the United States. 42 deaths in captivity had been reported. A single specimen can reportedly fetch up to US$100,000 (£64,160). The beluga's popularity is due to its unique color and its facial expressions. The latter is possible because while most cetacean "smiles" are fixed, the extra movement afforded by the beluga's unfused cervical vertebrae allows a greater range of apparent expression. Orcas The orca's intelligence, trainability, striking appearance, playfulness in captivity and sheer size have made it a popular exhibit at aquaria and aquatic theme parks. From 1976 to 1997, fifty-five whales were taken from the wild in Iceland, nineteen from Japan and three from Argentina. These figures exclude animals that died during capture. Live captures fell dramatically in the 1990s and by 1999, about 40% of the forty-eight animals on display in the world were captive-born. Organizations such as World Animal Protection and the Whale and Dolphin Conservation campaign against the practice of keeping them in captivity. In captivity, they often develop pathologies, such as the dorsal fin collapse seen in 60–90% of captive males. Captives have reduced life expectancy, on average only living into their 20s, although some live longer, including several over 30 years old and two, Corky II and Lolita, in their mid-40s. In the wild, females who survive infancy live 46 years on average and up to 70–80 years. Wild males who survive infancy live 31 years on average and can reach 50–60 years. Captivity usually bears little resemblance to wild habitat and captive whales' social groups are foreign to those found in the wild. Critics claim captive life is stressful due to these factors and the requirement to perform circus tricks that are not part of wild orca behavior. Wild orca may travel up to in a day and critics say the animals are too big and intelligent to be suitable for captivity. Captives occasionally act aggressively towards themselves, their tankmates, or humans, which critics say is a result of stress. Orcas are well known for their performances in shows, but the number of orcas kept in captivity is small, especially when compared to the number of bottlenose dolphins, with only forty-four captive orcas being held in aquaria as of 2012. Each country has its own tank requirements; in the US, the minimum enclosure size is set by the Code of Federal Regulations, 9 CFR E § 3.104, under the Specifications for the Humane Handling, Care, Treatment and Transportation of Marine Mammals. Aggression among captive orcas is common. They attack each other and their trainers as well. In 2013, SeaWorld's treatment of orcas in captivity was the basis of the movie Blackfish, which documents the history of Tilikum, an orca at SeaWorld Orlando, who had been involved in the deaths of three people. The film led to proposals by some lawmakers to ban captivity of cetaceans, and led SeaWorld to announce in 2016 that it would phase out its orca program after various unsuccessful attempts to restore its revenues, reputation, and stock price. Others Dolphins and porpoises are kept in captivity. Bottlenose dolphins are the most common, as they are relatively easy to train, have a long lifespan in captivity and have a friendly appearance. Bottlenose dolphins live in captivity across the world, though exact numbers are hard to determine. 
Other species kept in captivity include spotted dolphins, false killer whales, common dolphins, Commerson's dolphins and rough-toothed dolphins, but all in much lower numbers. There are also fewer than ten pilot whales, Amazon river dolphins, Risso's dolphins, spinner dolphins, or tucuxi in captivity. Two unusual and rare hybrid dolphins known as wolphins, crosses between a bottlenose dolphin and a false killer whale, are kept at Sea Life Park in Hawaii. Also, two common/bottlenose hybrids reside in captivity at Discovery Cove and SeaWorld San Diego. In repeated attempts in the 1960s and 1970s, narwhals kept in captivity died within months. A breeding pair of pygmy right whales were retained in a netted area. They were eventually released in South Africa. In 1971, SeaWorld captured a California gray whale calf in Mexico at Scammon's Lagoon. The calf, later named Gigi, was separated from her mother using a form of lasso attached to her flukes. Gigi was displayed at SeaWorld San Diego for a year. She was then released with a radio beacon affixed to her back; however, contact was lost after three weeks. Gigi was the first captive baleen whale. JJ, another gray whale calf, was kept at SeaWorld San Diego. JJ was an orphaned calf that beached itself in April 1997 and was transported two miles to SeaWorld. The calf was a popular attraction and behaved normally, despite separation from its mother. A year later, the whale, though still smaller than average, had grown too big to keep in captivity and was released on April 1, 1998. A captive Amazon river dolphin housed at Acuario de Valencia is the only trained river dolphin in captivity. The following cetaceans have been taken into captivity, currently or in the past, temporarily or permanently, for conservation, research, or human entertainment and education purposes: Atlantic white-sided dolphin, baiji, beluga whale, boto, bottlenose dolphin, Commerson's dolphin, common dolphin, false killer whale, finless porpoise, gray whale, harbour porpoise, Indo-Pacific humpback dolphin, Irrawaddy dolphin, long-finned pilot whale, melon-headed whale, minke whale, narwhal, orca, Pacific white-sided dolphin, pygmy killer whale, pygmy sperm whale, Risso's dolphin, rough-toothed dolphin, short-finned pilot whale, South Asian river dolphin, spinner dolphin, spotted dolphin, tucuxi, vaquita and wholphin.
Biology and health sciences
Cetaceans
null
7632
https://en.wikipedia.org/wiki/Cerebrospinal%20fluid
Cerebrospinal fluid
Cerebrospinal fluid (CSF) is a clear, colorless body fluid found within the tissue that surrounds the brain and spinal cord of all vertebrates. CSF is produced by specialized ependymal cells in the choroid plexus of the ventricles of the brain, and absorbed in the arachnoid granulations. In humans, there is about 125 mL of CSF at any one time, and about 500 mL is generated every day. CSF acts as a shock absorber, cushion or buffer, providing basic mechanical and immunological protection to the brain inside the skull. CSF also serves a vital function in the cerebral autoregulation of cerebral blood flow. CSF occupies the subarachnoid space (between the arachnoid mater and the pia mater) and the ventricular system around and inside the brain and spinal cord. It fills the ventricles of the brain, cisterns, and sulci, as well as the central canal of the spinal cord. There is also a connection from the subarachnoid space to the bony labyrinth of the inner ear via the perilymphatic duct, where the perilymph is continuous with the cerebrospinal fluid. The ependymal cells of the choroid plexus have multiple motile cilia on their apical surfaces that beat to move the CSF through the ventricles. A sample of CSF can be taken from around the spinal cord via lumbar puncture. This can be used to measure the intracranial pressure, as well as to indicate diseases including infections of the brain or the surrounding meninges. Although noted by Hippocrates, it was forgotten for centuries, before being described again in the 18th century by Emanuel Swedenborg. In 1914, Harvey Cushing demonstrated that CSF is secreted by the choroid plexus. Structure Circulation In humans, there is about 125–150 mL of CSF at any one time. This CSF circulates within the ventricular system of the brain. The ventricles are a series of cavities filled with CSF. The majority of CSF is produced from within the two lateral ventricles. From here, CSF passes through the interventricular foramina to the third ventricle, then the cerebral aqueduct to the fourth ventricle. From the fourth ventricle, the fluid passes into the subarachnoid space through four openings: the central canal of the spinal cord, the median aperture, and the two lateral apertures. CSF is present within the subarachnoid space, which covers the brain and spinal cord, and stretches below the end of the spinal cord to the sacrum. There is a connection from the subarachnoid space to the bony labyrinth of the inner ear making the cerebrospinal fluid continuous with the perilymph in 93% of people. CSF moves in a single outward direction from the ventricles, but multidirectionally in the subarachnoid space. The flow of cerebrospinal fluid is pulsatile, driven by the cardiac cycle. The flow of CSF through perivascular spaces in the brain (surrounding the cerebral arteries) is driven by the pumping movements of the arterial walls. Contents CSF is derived from blood plasma and is largely similar to it, except that CSF is nearly protein-free compared with plasma and has some different electrolyte levels. Due to the way it is produced, CSF has a higher chloride level than plasma, and a similar sodium level. CSF contains approximately 0.59% plasma proteins, or approximately 15 to 40 mg/dL, depending on sampling site. In general, globular proteins and albumin are in lower concentration in ventricular CSF compared to lumbar or cisternal fluid. This continuous flow into the venous system dilutes the concentration of larger, lipid-insoluble molecules penetrating the brain and CSF.
CSF is normally free of red blood cells and contains fewer than 5 white blood cells per mm3 (if the white cell count is higher than this, it constitutes pleocytosis and can indicate inflammation or infection). Development At around the third week of its development, the embryo is a three-layered disc, covered with ectoderm, mesoderm and endoderm. A tube-like formation develops in the midline, called the notochord. The notochord releases extracellular molecules that affect the transformation of the overlying ectoderm into nervous tissue. The neural tube, forming from the ectoderm, contains CSF prior to the development of the choroid plexuses. The open neuropores of the neural tube close after the first month of development, and CSF pressure gradually increases. By the fourth week of embryonic development the brain has begun to develop. Three swellings (primary brain vesicles) have formed within the embryo around the canal, near to where the head will develop. These swellings represent different components of the central nervous system: the prosencephalon (forebrain), mesencephalon (midbrain), and rhombencephalon (hindbrain). Subarachnoid spaces are first evident around the 32nd day of development near the rhombencephalon; circulation is visible from the 41st day. At this time, the first choroid plexus can be seen, found in the fourth ventricle, although the time at which it first secretes CSF is not yet known. The developing forebrain surrounds the neural cord. As the forebrain develops, the neural cord within it becomes a ventricle, ultimately forming the lateral ventricles. Along the inner surface of both ventricles, the ventricular wall remains thin, and a choroid plexus develops, producing and releasing CSF. CSF quickly fills the neural canal. Arachnoid villi are formed around the 35th week of development, with arachnoid granulations noted around the 39th, and continuing to develop until 18 months of age. The subcommissural organ secretes SCO-spondin, which forms Reissner's fiber within CSF, assisting movement through the cerebral aqueduct. It is present in early intrauterine life but disappears during early development. Physiology Function CSF serves several purposes: Buoyancy: The actual mass of the human brain is about 1400–1500 grams, but its net weight suspended in CSF is equivalent to a mass of 25–50 g. The brain therefore exists in neutral buoyancy, which allows it to maintain its density without being impaired by its own weight; without CSF, that weight would cut off blood supply and kill neurons in the lower sections. Protection: CSF protects the brain tissue from injury when jolted or hit, by providing a fluid buffer that acts as a shock absorber from some forms of mechanical injury. Prevention of brain ischemia: The prevention of brain ischemia is aided by decreasing the amount of CSF in the limited space inside the skull. This decreases total intracranial pressure and facilitates blood perfusion. Regulation: CSF allows for the homeostatic regulation of the distribution of substances between cells of the brain and of neuroendocrine factors, slight changes in which can cause problems or damage to the nervous system. For example, high glycine concentration disrupts temperature and blood pressure control, and high CSF pH causes dizziness and fainting. Clearing waste: CSF allows for the removal of waste products from the brain, and is critical in the brain's lymphatic system, called the glymphatic system.
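As a minimal sketch of the buoyancy figure above, assuming typical densities of about 1.040 g/mL for brain tissue and 1.007 g/mL for CSF (values not given in the text), the 25–50 g apparent weight follows from Archimedes' principle:

# Rough check of the brain's apparent weight when suspended in CSF (Archimedes' principle).
# The density values below are assumed typical figures, not quantities stated in the article.
brain_mass_g = 1400.0                 # article: about 1400-1500 g
rho_brain_g_per_ml = 1.040            # assumed density of brain tissue
rho_csf_g_per_ml = 1.007              # assumed density of CSF

brain_volume_ml = brain_mass_g / rho_brain_g_per_ml
displaced_csf_mass_g = rho_csf_g_per_ml * brain_volume_ml
apparent_mass_g = brain_mass_g - displaced_csf_mass_g
print(f"apparent mass in CSF: {apparent_mass_g:.0f} g")   # ~44 g, within the quoted 25-50 g range

With these assumed densities a 1400 g brain "weighs" roughly 44 g when immersed, consistent with the range quoted above.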
Metabolic waste products diffuse rapidly into CSF and are removed into the bloodstream as CSF is absorbed. When this goes awry, CSF can become toxic, such as in amyotrophic lateral sclerosis, the most common form of motor neuron disease. Production The brain produces roughly 500 mL of cerebrospinal fluid per day, at a rate of about 20 mL an hour. This transcellular fluid is constantly reabsorbed, so that only 125–150 mL is present at any one time. CSF volume is higher on a mL per kg body weight basis in children compared to adults. Infants have a CSF volume of 4 mL/kg, children have a CSF volume of 3 mL/kg, and adults have a CSF volume of 1.5–2 mL/kg. This higher CSF volume is why a larger dose of local anesthetic, on a mL/kg basis, is needed in infants. Additionally, the larger CSF volume may be one reason why children have lower rates of postdural puncture headache. Most (about two-thirds to 80%) of CSF is produced by the choroid plexus. The choroid plexus is a network of blood vessels present within sections of the four ventricles of the brain. It is present throughout the ventricular system except for the cerebral aqueduct and the frontal and occipital horns of the lateral ventricles. Most CSF is produced by the choroid plexuses of the lateral ventricles. CSF is also produced by the single layer of column-shaped ependymal cells which line the ventricles; by the lining surrounding the subarachnoid space; and, in small amounts, directly from the tiny spaces surrounding blood vessels around the brain. CSF is produced by the choroid plexus in two steps. Firstly, a filtered form of plasma moves from fenestrated capillaries in the choroid plexus into an interstitial space, with movement guided by a difference in pressure between the blood in the capillaries and the interstitial fluid. This fluid then needs to pass through the epithelial cells lining the choroid plexus into the ventricles, an active process requiring the transport of sodium, potassium and chloride that draws water into CSF by creating osmotic pressure. Unlike blood passing from the capillaries into the choroid plexus, the epithelial cells lining the choroid plexus contain tight junctions between cells, which act to prevent most substances flowing freely into CSF. Cilia on the apical surfaces of the ependymal cells beat to help transport the CSF. Water and carbon dioxide from the interstitial fluid diffuse into the epithelial cells. Within these cells, carbonic anhydrase converts the substances into bicarbonate and hydrogen ions. These are exchanged for sodium and chloride on the cell surface facing the interstitium. Sodium, chloride, bicarbonate and potassium are then actively secreted into the ventricular lumen. This creates osmotic pressure and draws water into CSF, facilitated by aquaporins. CSF contains far fewer protein anions than blood plasma. Protein in the blood behaves largely as anions, each molecule carrying many negative charges. As a result, to maintain electroneutrality, blood plasma has a much lower concentration of chloride anions than of sodium cations. CSF contains a similar concentration of sodium ions to blood plasma but far fewer protein anions, and therefore a smaller imbalance between sodium and chloride, resulting in a higher concentration of chloride ions than in plasma. This creates an osmotic pressure difference with the plasma. CSF has less potassium, calcium, glucose and protein.
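As a minimal arithmetic sketch using only figures quoted in the text, the production rate and resident volume imply the daily turnover rate mentioned in the reabsorption discussion below:

# Daily CSF turnover implied by the production and volume figures quoted above.
production_ml_per_day = 500.0                # "roughly 500 mL ... per day"
resident_volume_ml = (125.0 + 150.0) / 2     # "only 125-150 mL is present at any one time"
turnovers_per_day = production_ml_per_day / resident_volume_ml
print(f"~{turnovers_per_day:.1f} turnovers per day")   # ~3.6, matching "three to four times a day"

# CSF volume per kilogram of body weight, as quoted for different age groups.
csf_ml_per_kg = {"infant": 4.0, "child": 3.0, "adult": 1.75}   # adult figure quoted as 1.5-2 mL/kg
for group, volume in csf_ml_per_kg.items():
    print(group, volume, "mL/kg")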
Choroid plexuses also secrete growth factors, iodine, vitamins B1, B12, C, folate, beta-2 microglobulin, arginine vasopressin and nitric oxide into CSF. A Na-K-Cl cotransporter and Na/K ATPase found on the surface of the choroid endothelium, appears to play a role in regulating CSF secretion and composition. It has been hypothesised that CSF is not primarily produced by the choroid plexus, but is being permanently produced inside the entire CSF system, as a consequence of water filtration through the capillary walls into the interstitial fluid of the surrounding brain tissue, regulated by AQP-4. There are circadian variations in CSF secretion, with the mechanisms not fully understood, but potentially relating to differences in the activation of the autonomic nervous system over the course of the day. Choroid plexus of the lateral ventricle produces CSF from the arterial blood provided by the anterior choroidal artery. In the fourth ventricle, CSF is produced from the arterial blood from the anterior inferior cerebellar artery (cerebellopontine angle and the adjacent part of the lateral recess), the posterior inferior cerebellar artery (roof and median opening), and the superior cerebellar artery. Reabsorption CSF returns to the vascular system by entering the dural venous sinuses via arachnoid granulations. These are outpouchings of the arachnoid mater into the venous sinuses around the brain, with valves to ensure one-way drainage. This occurs because of a pressure difference between the arachnoid mater and venous sinuses. CSF has also been seen to drain into lymphatic vessels, particularly those surrounding the nose via drainage along the olfactory nerve through the cribriform plate. The pathway and extent are currently not known, but may involve CSF flow along some cranial nerves and be more prominent in the neonate. CSF turns over at a rate of three to four times a day. CSF has also been seen to be reabsorbed through the sheathes of cranial and spinal nerve sheathes, and through the ependyma. Regulation The composition and rate of CSF generation are influenced by hormones and the content and pressure of blood and CSF. For example, when CSF pressure is higher, there is less of a pressure difference between the capillary blood in choroid plexuses and CSF, decreasing the rate at which fluids move into the choroid plexus and CSF generation. The autonomic nervous system influences choroid plexus CSF secretion, with activation of the sympathetic nervous system decreasing secretion and the parasympathetic nervous system increasing it. Changes in the pH of the blood can affect the activity of carbonic anhydrase, and some drugs (such as furosemide, acting on the Na-K-Cl cotransporter) have the potential to impact membrane channels. Clinical significance Pressure CSF pressure, as measured by lumbar puncture, is 10–18 cmH2O (8–15 mmHg or 1.1–2 kPa) with the patient lying on the side and 20–30 cmH2O (16–24 mmHg or 2.1–3.2 kPa) with the patient sitting up. In newborns, CSF pressure ranges from 8 to 10 cmH2O (4.4–7.3 mmHg or 0.78–0.98 kPa). Most variations are due to coughing or internal compression of jugular veins in the neck. When lying down, the CSF pressure as estimated by lumbar puncture is similar to the intracranial pressure. Hydrocephalus is an abnormal accumulation of CSF in the ventricles of the brain. Hydrocephalus can occur because of obstruction of the passage of CSF, such as from an infection, injury, mass, or congenital abnormality. 
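The lumbar-puncture pressure ranges quoted above in three different units are related by standard conversion factors; a minimal sketch (the conversion constants are standard physical values, not taken from the article):

# Converting the CSF pressure ranges between the units used above.
# Standard conversion constants: 1 cmH2O = 98.0665 Pa, 1 mmHg = 133.322 Pa.
CMH2O_TO_PA = 98.0665
MMHG_TO_PA = 133.322

def cmh2o_to_mmhg(p_cmh2o):
    return p_cmh2o * CMH2O_TO_PA / MMHG_TO_PA

def cmh2o_to_kpa(p_cmh2o):
    return p_cmh2o * CMH2O_TO_PA / 1000.0

for p in (10, 18, 20, 30):   # lying-down (10-18) and sitting (20-30) ranges
    print(f"{p} cmH2O ~= {cmh2o_to_mmhg(p):.1f} mmHg ~= {cmh2o_to_kpa(p):.2f} kPa")
# Gives roughly 7.4-13.2 mmHg / 0.98-1.77 kPa lying down and 14.7-22.1 mmHg / 1.96-2.94 kPa sitting;
# the rounded ranges quoted in the text (8-15 mmHg, 1.1-2 kPa, and so on) are in broad agreement.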
Hydrocephalus without obstruction associated with normal CSF pressure may also occur. Symptoms can include problems with gait and coordination, urinary incontinence, nausea and vomiting, and progressively impaired cognition. In infants, hydrocephalus can cause an enlarged head, as the bones of the skull have not yet fused, seizures, irritability and drowsiness. A CT scan or MRI scan may reveal enlargement of one or both lateral ventricles, or causative masses or lesions, and lumbar puncture may be used to demonstrate and in some circumstances relieve high intracranial pressure. Hydrocephalus is usually treated through the insertion of a shunt, such as a ventriculo-peritoneal shunt, which diverts fluid to another part of the body. Idiopathic intracranial hypertension is a condition of unknown cause characterized by a rise in CSF pressure. It is associated with headaches, double vision, difficulties seeing, and a swollen optic disc. It can occur in association with the use of vitamin A and tetracycline antibiotics, or without any identifiable cause at all, particularly in younger obese women. Management may include ceasing any known causes, a carbonic anhydrase inhibitor such as acetazolamide, repeated drainage via lumbar puncture, or the insertion of a shunt such as a ventriculo-peritoneal shunt. CSF leak CSF can leak from the dura as a result of different causes such as physical trauma or a lumbar puncture, or from no known cause when it is termed a spontaneous cerebrospinal fluid leak. It is usually associated with intracranial hypotension: low CSF pressure. It can cause headaches, made worse by standing, moving and coughing, as the low CSF pressure causes the brain to "sag" downwards and put pressure on its lower structures. If a leak is identified, a beta-2 transferrin test of the leaking fluid, when positive, is highly specific and sensitive for the detection for CSF leakage. Medical imaging such as CT scans and MRI scans can be used to investigate for a presumed CSF leak when no obvious leak is found but low CSF pressure is identified. Caffeine, given either orally or intravenously, often offers symptomatic relief. Treatment of an identified leak may include injection of a person's blood into the epidural space (an epidural blood patch), spinal surgery, or fibrin glue. Lumbar puncture CSF can be tested for the diagnosis of a variety of neurological diseases, usually obtained by a procedure called lumbar puncture. Lumbar puncture is carried out under sterile conditions by inserting a needle into the subarachnoid space, usually between the third and fourth lumbar vertebrae. CSF is extracted through the needle, and tested. About one third of people experience a headache after lumbar puncture, and pain or discomfort at the needle entry site is common. Rarer complications may include bruising, meningitis or ongoing post lumbar-puncture leakage of CSF. Testing often includes observing the colour of the fluid, measuring CSF pressure, and counting and identifying white and red blood cells within the fluid; measuring protein and glucose levels; and culturing the fluid. The presence of red blood cells and xanthochromia may indicate subarachnoid hemorrhage; whereas central nervous system infections such as meningitis, may be indicated by elevated white blood cell levels. A CSF culture may yield the microorganism that has caused the infection, or PCR may be used to identify a viral cause. 
Investigations of the type and nature of the proteins present can point to specific diseases, including multiple sclerosis, paraneoplastic syndromes, systemic lupus erythematosus, neurosarcoidosis and cerebral angiitis; and specific antibodies, such as those against aquaporin-4, may be tested for to assist in the diagnosis of autoimmune conditions. A lumbar puncture that drains CSF may also be used as part of treatment for some conditions, including idiopathic intracranial hypertension and normal pressure hydrocephalus. Lumbar puncture can also be performed to measure the intracranial pressure, which might be increased in certain types of hydrocephalus. However, a lumbar puncture should never be performed if increased intracranial pressure is suspected from certain causes, such as a tumour, because it can lead to fatal brain herniation. Anaesthesia and chemotherapy Some anaesthetics and chemotherapy agents are injected intrathecally into the subarachnoid space, where they spread through the CSF, meaning that substances unable to cross the blood–brain barrier can still be active throughout the central nervous system. Baricity refers to the density of a substance compared to the density of human cerebrospinal fluid and is used in regional anesthesia to determine the manner in which a particular drug will spread in the intrathecal space. Liquorpheresis Liquorpheresis is the process of filtering the CSF in order to clear it of endogenous or exogenous pathogens. It can be achieved by means of fully implantable or extracorporeal devices, though the technique remains experimental today. CSF drug delivery CSF drug delivery refers to a number of methods designed to administer therapeutic agents directly into the CSF, bypassing the blood–brain barrier (BBB) to achieve higher drug concentrations in the CNS. This technique is particularly beneficial for treating neurological disorders such as brain tumors, infections, and neurodegenerative diseases. Intrathecal injection, where drugs are injected directly into the CSF via the lumbar region, and intracerebroventricular injection, targeting the brain's ventricles, are common approaches. These methods ensure that drugs can reach the CNS more effectively than with systemic administration, potentially improving therapeutic outcomes and reducing systemic side effects. Advances in this field are driven by ongoing research into novel delivery systems and drug formulations, enhancing the precision and efficacy of treatments. Intrathecal pseudodelivery refers to a particular drug delivery method where the therapeutic agent is introduced into a reservoir connected to the intrathecal space, rather than being released into the CSF and distributed throughout the CNS. In this approach, the drug interacts with its target within the reservoir, allowing the composition of the CSF to be changed without systemic release. This method can be advantageous for maximizing efficacy and minimizing systemic side effects. History Various comments by ancient physicians have been read as referring to CSF. Hippocrates discussed "water" surrounding the brain when describing congenital hydrocephalus, and Galen referred to "excremental liquid" in the ventricles of the brain, which he believed was purged into the nose. But for some 16 intervening centuries of ongoing anatomical study, CSF remained unmentioned in the literature. This is perhaps because of the prevailing autopsy technique, which involved cutting off the head, thereby removing evidence of CSF before the brain was examined.
The modern rediscovery of CSF is credited to Emanuel Swedenborg. In a manuscript written between 1741 and 1744, unpublished in his lifetime, Swedenborg referred to CSF as "spirituous lymph" secreted from the roof of the fourth ventricle down to the medulla oblongata and spinal cord. This manuscript was eventually published in translation in 1887. Albrecht von Haller, a Swiss physician and physiologist, made note in his 1747 book on physiology that the "water" in the brain was secreted into the ventricles and absorbed in the veins, and when secreted in excess, could lead to hydrocephalus. François Magendie studied the properties of CSF by vivisection. He discovered the foramen of Magendie, the opening in the roof of the fourth ventricle, but mistakenly believed that CSF was secreted by the pia mater. Thomas Willis (noted as the discoverer of the circle of Willis) made note of the fact that the consistency of CSF is altered in meningitis. In 1869 Gustav Schwalbe proposed that CSF drainage could occur via lymphatic vessels. In 1891, W. Essex Wynter began treating tubercular meningitis by removing CSF from the subarachnoid space, and Heinrich Quincke began to popularize lumbar puncture, which he advocated for both diagnostic and therapeutic purposes. In 1912, the neurologist William Mestrezat gave the first accurate description of the chemical composition of CSF. In 1914, Harvey W. Cushing published conclusive evidence that CSF is secreted by the choroid plexus. Other animals During phylogenesis, CSF is present within the neuraxis before it circulates. The CSF of Teleostei fish, which do not have a subarachnoid space, is contained within the ventricles of their brains. In mammals, where a subarachnoid space is present, CSF is present in it. Absorption of CSF is seen in amniotes and more complex species; as species become progressively more complex, the system of absorption becomes progressively more enhanced, and the spinal epidural veins play a progressively smaller role in absorption. The amount of cerebrospinal fluid varies by size and species. In humans and other mammals, cerebrospinal fluid turns over at a rate of 3–5 times a day. Problems with CSF circulation, leading to hydrocephalus, can occur in other animals as well as humans.
Biology and health sciences
Nervous system
Biology
7669
https://en.wikipedia.org/wiki/Centimetre
Centimetre
A centimetre or centimeter (US/Philippine spelling), with SI symbol cm, is a unit of length in the International System of Units (SI) equal to one hundredth of a metre, centi being the SI prefix for a factor of 1/100. Equivalently, there are 100 centimetres in 1 metre. The centimetre was the base unit of length in the now deprecated centimetre–gram–second (CGS) system of units. Though for many physical quantities, SI prefixes for factors of 10³—like milli- and kilo-—are often preferred by technicians, the centimetre remains a practical unit of length for many everyday measurements; for instance, human height is commonly measured in centimetres. A centimetre is approximately the width of the fingernail of an average adult person. Equivalence to other units of length One millilitre is defined as one cubic centimetre, under the SI system of units. Other uses In addition to its use in the measurement of length, the centimetre is used: sometimes, to report the level of rainfall as measured by a rain gauge in the CGS system, the centimetre is used to measure capacitance, where 1 cm of capacitance ≈ 1.113 × 10⁻¹² farads in maps, centimetres are used to make conversions from map scale to real world scale (kilometres) to represent second moment of areas (cm⁴) as the inverse of the kayser, a CGS unit, and thus a non-SI metric unit of wavenumber: 1 kayser = 1 wave per centimetre; or, more generally, (wavenumber in kaysers) = 1/(wavelength in centimetres). The SI unit of wavenumber is the inverse metre, m⁻¹. Unicode symbols For the purposes of compatibility with Chinese, Japanese and Korean (CJK) characters, Unicode has symbols for: centimetre – ㎝ (U+339D) square centimetre – ㎠ (U+33A0) cubic centimetre – ㎤ (U+33A4) These characters are each equal in size to one Chinese character and are typically used only with East Asian, fixed-width CJK fonts.
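As a brief illustration of the conversions mentioned above, the following Python sketch (illustrative only; the function names are not taken from any standard library) converts centimetres to metres and a wavelength given in centimetres to a wavenumber in kaysers:

    def centimetres_to_metres(length_cm):
        # 1 cm is one hundredth of a metre.
        return length_cm / 100.0

    def wavelength_cm_to_kaysers(wavelength_cm):
        # Wavenumber in kaysers (cm^-1) is the reciprocal of the wavelength in centimetres.
        return 1.0 / wavelength_cm

    # Example usage: 180 cm expressed in metres, and a 500 nm (5e-5 cm) wavelength in kaysers.
    print(centimetres_to_metres(180.0))      # 1.8
    print(wavelength_cm_to_kaysers(5e-5))    # 20000.0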
Physical sciences
Metric
Basics and measurement
7674
https://en.wikipedia.org/wiki/Cable%20car%20%28railway%29
Cable car (railway)
A cable car (usually known as a cable tram outside North America) is a type of cable railway used for mass transit in which rail cars are hauled by a continuously moving cable running at a constant speed. Individual cars stop and start by releasing and gripping this cable as required. Cable cars are distinct from funiculars, where the cars are permanently attached to the cable. History The first cable-operated railway to use a moving rope that could be picked up or released by a grip on the cars was the Fawdon Wagonway, a colliery railway line that opened in 1826. Another began operation in 1840: the London and Blackwall Railway, which hauled passengers in east London, England. The rope available at the time proved too susceptible to wear and the system was abandoned in favour of steam locomotives after eight years. In America, the first cable car installation in operation was probably the West Side and Yonkers Patent Railway, New York City's first-ever elevated railway, which ran from 1 July 1868 to 1870. The collar-equipped cables and claw-equipped cars proved cumbersome, and the line was closed and rebuilt to operate with steam locomotives. In 1869, P. G. T. Beauregard demonstrated a cable car at New Orleans and was issued a patent. In 1873, the Clay Street Hill Railroad, which later became part of the San Francisco cable car system, was first tested. Promoted by Andrew Smith Hallidie with design work by William Eppelsheimer, the line's grips became the model for other cable car transit systems, whose cars were often known as the Hallidie Cable Car. In 1881, the first such system opened outside San Francisco: the Dunedin cable tramway system in Dunedin, New Zealand. For Dunedin, George Smith Duncan further developed the Hallidie model, introducing the pull curve and the slot brake; the former was a way to pull cars through a curve, since Dunedin's curves were too sharp to allow coasting, while the latter forced a wedge down into the cable slot to stop the car. Both of these innovations were generally adopted by other cities, including San Francisco. In Australia: the Melbourne cable tramway system operated from 1885 to 1940 and was one of the most extensive in the world with 1200 trams and trailers operating over 15 routes with 103 km (64 miles) of track; while Sydney had two cable tram routes - Milsons Point to North Sydney (1886-1905) and King Street Wharf to Edgecliff (1894-1905). Cable cars rapidly spread to other cities, although the major attraction for most was the ability to displace horsecar (or mule-drawn) systems rather than the ability to climb hills. Many people at the time viewed horse-drawn transit as unnecessarily cruel, and the fact that a typical horse could work only four or five hours per day necessitated the maintenance of large stables of draft animals that had to be fed, housed, groomed, medicated and rested. Thus, for a period, economics worked in favour of cable cars even in relatively flat cities. For example, the Chicago City Railway, also designed by Eppelsheimer, opened in Chicago in 1882 and went on to become the largest and most profitable cable car system. As with many cities, the problem in flat Chicago was not one of incline, but of transportation capacity. This led to a different approach to the combination of grip car and trailer. Rather than using a grip car and single trailer, as many cities did, or combining the grip and trailer into a single car, like San Francisco's California Cars, Chicago used grip cars to pull trains of up to three trailers. 
In 1883 the New York and Brooklyn Bridge Railway was opened, which had a most curious feature: though it was a cable car system, it used steam locomotives to get the cars into and out of the terminals. After 1896 the system was changed to one on which a motor car was added to each train to maneuver at the terminals, while en route the trains were still propelled by the cable. On 25 September 1883, a test of a cable car system was held by Liverpool Tramways Company in Kirkdale, Liverpool. This would have been the first cable car system in Europe, but the company decided against implementing it. Instead, the distinction went to the 1884 Highgate Hill Cable Tramway, a route from Archway to Highgate, north London, which used a continuous cable and grip system on the 1 in 11 (9%) climb of Highgate Hill. The installation was not reliable and was replaced by electric traction in 1909. Other cable car systems were implemented in Europe, though, among which was the Glasgow District Subway, the first underground cable car system, in 1896. (London, England's first deep-level tube railway, the City & South London Railway, had earlier also been built for cable haulage but had been converted to electric traction before opening in 1890.) A few more cable car systems were built in the United Kingdom, Portugal, and France. European cities, having many more curves in their streets, were ultimately less suitable for cable cars than American cities. Though some new cable car systems were still being built, by 1890 the cheaper-to-construct and simpler-to-operate electrically powered trolley or tram started to become the norm, and eventually started to replace existing cable car systems. For a while hybrid cable/electric systems operated, for example in Chicago where electric cars had to be pulled by grip cars through the loop area, due to the lack of trolley wires there. Eventually, San Francisco became the only street-running manually operated system to survive. Dunedin, the second city with such cars, was also the second-last city to operate them, closing down in 1957. Recent revival In the last decades of the 20th century and the early 21st century, cable traction in general has seen a limited revival as automatic people movers, used in resort areas, airports (for example, the Terminal Link at Toronto Pearson International Airport, which opened in 2006, and the Oakland Airport Connector at Oakland International Airport in California), large hospital centers and some urban settings. While many of these systems involve cars permanently attached to the cable, the Minimetro system from Poma/Leitner Group and the Cable Liner system from DCC Doppelmayr Cable Car both have variants that allow the cars to be automatically decoupled from the cable under computer control, and can thus be considered a modern interpretation of the cable car. Operation The cable is itself powered by a stationary engine or motor situated in a cable house or power house. The speed at which it moves is relatively constant, varying slightly with the number of units gripping the cable at any given time. The cable car begins moving when a clamping device attached to the car, called a grip, applies pressure to ("grips") the moving cable. Conversely, the car is stopped by releasing pressure on the cable (with or without completely detaching) and applying the brakes. This gripping and releasing action may be manual, as was the case in all early cable car systems, or automatic, as is the case in some recent cable operated people mover type systems. 
Gripping must be applied evenly and gradually in order to avoid bringing the car to cable speed too quickly and unacceptably jarring passengers. In the case of manual systems, the grip resembles a very large pair of pliers, and considerable strength and skill are required to operate the car. As many early cable car operators discovered the hard way, if the grip is not applied properly, it can damage the cable, or even worse, become entangled in the cable. In the latter case, the cable car may not be able to stop and can wreak havoc along its route until the cable house realizes the mishap and halts the cable. One apparent advantage of the cable car is its relative energy efficiency. This is due to the economy of centrally located power stations, and the ability of descending cars to transfer energy to ascending cars. However, this advantage is totally negated by the relatively large energy consumption required to simply move the cable over and under the numerous guide rollers and around the many sheaves. Approximately 95% of the tractive effort in the San Francisco system is expended in simply moving the four cables at their operating speed. Electric cars with regenerative braking offer the same advantages, without the problem of moving a cable. In the case of steep grades, however, cable traction has the major advantage of not depending on adhesion between wheels and rails. There is also the advantage that keeping the car gripped to the cable limits the downhill speed of the car to that of the cable. Because of the constant and relatively low speed, a cable car's potential to cause harm in an accident can be underestimated. Even at its modest operating speed, the mass of the cable car and the combined strength and speed of the cable can cause extensive damage in a collision. Relation to funiculars A cable car is superficially similar to a funicular, but differs from such a system in that its cars are not permanently attached to the cable and can stop independently, whereas a funicular has cars that are permanently attached to the propulsion cable, which is itself stopped and started. A cable car cannot climb as steep a grade as a funicular, but many more cars can be operated with a single cable, making it more flexible, and allowing a higher capacity. During the rush hour on San Francisco's Market Street Railway in 1883, a car would leave the terminal every 15 seconds. A few funicular railways operate in street traffic, and because of this operation are often incorrectly described as cable cars. Examples of such operation, and the consequent confusion, are: The Great Orme Tramway in Llandudno, Wales. Several street funiculars in Lisbon, Portugal. Even more confusingly, a hybrid cable car/funicular line once existed in the form of the original Wellington Cable Car, in the New Zealand city of Wellington. This line had both a continuous loop haulage cable that the cars gripped using a cable car gripper, and a balance cable permanently attached to both cars over an undriven pulley at the top of the line. The descending car gripped the haulage cable and was pulled downhill, in turn pulling the ascending car (which remained ungripped) uphill by the balance cable. This line was rebuilt in 1979 and is now a standard funicular, although it retains its old cable car name. List of cable car systems Cities currently operating cable cars Traditional cable car systems The only known existing traditional cable car system is the San Francisco cable car system in the city of San Francisco, California. 
San Francisco's cable cars constitute the oldest and largest such system in permanent operation, and it is one of the few still functioning in the traditional manner, with manually operated cars running in street traffic. Other examples of cable powered street running systems can be found on the Great Orme in North Wales, and in Lisbon in Portugal. Both of these, however, are funiculars. Modern cable car systems Several cities operate a modern version of the cable car system. These systems are fully automated and run on their own reserved right of way. They are commonly referred to as people movers, although that term is also applied to systems with other forms of propulsion, including funicular style cable propulsion. These cities include: Oakland, California, United States – The Oakland Airport Connector system between the BART rapid transit system and Oakland International Airport, based on Doppelmayr Cable Car's Cable Liner Pinched Loop Perugia, Italy – The Perugia People Mover, based on Leitner's MiniMetro Shanghai, China - The Bund Sightseeing Tunnel, based on Soulé's SK Caracas, Venezuela - The Cabletren Bolivariano, based on Doppelmayr Cable Car's Cable Liner Pinched Loop Zürich, Switzerland - The Skymetro connects the Zurich Airport's main Airside Center, Gates A, B and C with its mid-field Gates E, based on OTIS's Otis Hovair Cities previously operating cable cars Australia Melbourne (1885–1940). Main article: Melbourne cable tramway system Sydney (1886–1905). Milsons Point to North Sydney (1886-1905) and King Street Wharf to Edgecliff (1894-1905). France Laon – The Poma 2000 (service ended in 2016) Paris (Tramway funiculaire de Belleville 1873–1935) Lebanon Beirut (Late 1880s until destruction during the Lebanese Civil War) New Zealand Dunedin (1881–1957, the Dunedin cable tramway system) Wellington (1902–1979, the original Wellington Cable Car hybrid system) Philippines Manila (Early 1900s-1930s, the Manila-Malabon railway.) Portugal Lisbon (converted to regular tram lines in the early 20th century: São Sebastião, Estrela, and Graça) United Kingdom Birmingham (City of Birmingham Tramways Company Ltd, 1888–1911, converted to electric traction) Edinburgh (Edinburgh Corporation Tramways, 1899–1923, converted to electric traction) Glasgow (Glasgow Subway, 1896–1935, converted to electric traction) Hastings Liverpool (trial in 1883) London, England (1884–1909, Highgate Hill Cable Tramway connecting Archway with Highgate, the first cable car in regular operation in Europe) Matlock (1893–1927, the Matlock Cable Tramway) Isle of Man Douglas (1896–1929, the Upper Douglas Cable Tramway) United States Baltimore, Maryland (1890–1897) Binghamton, New York (trial in 1885) Brooklyn, New York New York and Brooklyn Bridge Railway Brooklyn Cable Company's Park Avenue Line Brooklyn Heights Railroad's Montague Street Line Butte, Montana (1889–1897) Chicago, Illinois (1882–1906) Chicago City Railway North Chicago Street Railroad West Chicago Street Railroad Cincinnati, Ohio Cleveland, Ohio Denver, Colorado (1886–1900, the Denver Tramway) Grand Rapids, Michigan Hoboken, New Jersey (1886–1892, the North Hudson County Railway's Hoboken Elevated) Kansas City, Missouri (1885–1913), including 9th St Incline (1888–1902), 8th St. 
Tunnel in use (1887–1956) Los Angeles, California (1885–1889) Second Street Cable Railway, (1886–1902) Temple Street Cable Railway, (1889–1896) Los Angeles Cable Railway New York City West Side and Yonkers Patent Railway's Ninth Avenue Line New York and Brooklyn Bridge Railway Third Avenue Railroad's 125th Street Crosstown Line Third Avenue Railroad's Third Avenue Line Metropolitan Street Railway's Broadway Line Metropolitan Street Railway's Broadway and Columbus Avenue Line Metropolitan Street Railway's Broadway and Lexington Avenue Line IRT Ninth Avenue Line (defunct) Newark, New Jersey (1888–1889) Oakland, California Oakland Cable Railway (1886–1899) Piedmont Cable Company (1890–1898) Omaha, Nebraska Philadelphia, Pennsylvania Pittsburgh, Pennsylvania Portland, Oregon (1890–1904) Providence, Rhode Island (1888–1895) St. Louis, Missouri Saint Paul, Minnesota San Diego, California (1890–1892) Seattle, Washington (1888–1940) Sioux City, Iowa Spokane, Washington (1899–1936) Tacoma, Washington (1891–1938) Tulsa, Oklahoma Washington, D.C. (1890–1899, part of the Washington streetcar system) Wichita, Kansas
Technology
Rail and cable transport
null
7677
https://en.wikipedia.org/wiki/Computer%20monitor
Computer monitor
A computer monitor is an output device that displays information in pictorial or textual form. A discrete monitor comprises a visual display, support electronics, power supply, housing, electrical connectors, and external user controls. The display in modern monitors is typically an LCD with LED backlight, having by the 2010s replaced CCFL backlit LCDs. Before the mid-2000s, most monitors used a cathode-ray tube (CRT) as the image output technology. A monitor is typically connected to its host computer via DisplayPort, HDMI, USB-C, DVI, or VGA. Less commonly, monitors use other proprietary connectors and signals to connect to a computer. Originally computer monitors were used for data processing while television sets were used for video. From the 1980s onward, computers (and their monitors) have been used for both data processing and video, while televisions have implemented some computer functionality. Since 2010, the typical display aspect ratio of both televisions and computer monitors has changed from 4:3 to 16:9. Modern computer monitors are often functionally interchangeable with television sets and vice versa. As most computer monitors do not include integrated speakers, TV tuners, or remote controls, external components such as a DTA box may be needed to use a computer monitor as a TV set. History Early electronic computer front panels were fitted with an array of light bulbs where the state of each particular bulb would indicate the on/off state of a particular register bit inside the computer. This allowed the engineers operating the computer to monitor the internal state of the machine, so this panel of lights came to be known as the 'monitor'. As early monitors were only capable of displaying a very limited amount of information and were very transient, they were rarely considered for program output. Instead, a line printer was the primary output device, while the monitor was limited to keeping track of the program's operation. Computer monitors were formerly known as visual display units (VDU), particularly in British English. This term mostly fell out of use by the 1990s. Technologies Multiple technologies have been used for computer monitors. Until the 21st century most used cathode-ray tubes, but they have largely been superseded by LCD monitors. Cathode-ray tube The first computer monitors used cathode-ray tubes (CRTs). Prior to the advent of home computers in the late 1970s, it was common for a video display terminal (VDT) using a CRT to be physically integrated with a keyboard and other components of the workstation in a single large chassis, typically limiting them to emulation of a paper teletypewriter, thus the early epithet of 'glass TTY'. The display was monochromatic and far less sharp and detailed than on a modern monitor, necessitating the use of relatively large text and severely limiting the amount of information that could be displayed at one time. High-resolution CRT displays were developed for specialized military, industrial and scientific applications but they were far too costly for general use; wider commercial use became possible after the release of the slow but affordable Tektronix 4010 terminal in 1972. 
Some of the earliest home computers (such as the TRS-80 and Commodore PET) were limited to monochrome CRT displays, but color display capability was already a possible feature for a few MOS 6500 series-based machines (such as the Apple II computer and the Atari 2600 console, both introduced in 1977), and color output was a specialty of the more graphically sophisticated Atari 8-bit computers, introduced in 1979. These computers could be connected to the antenna terminals of an ordinary color TV set or used with a purpose-made CRT color monitor for optimum resolution and color quality. Lagging several years behind, in 1981 IBM introduced the Color Graphics Adapter, which could display four colors at a resolution of 320 × 200 pixels, or two colors at 640 × 200 pixels. In 1984 IBM introduced the Enhanced Graphics Adapter, which was capable of producing 16 colors at a resolution of 640 × 350 pixels. By the end of the 1980s color progressive scan CRT monitors were widely available and increasingly affordable, while the sharpest prosumer monitors could clearly display high-definition video, against the backdrop of efforts at HDTV standardization from the 1970s to the 1980s failing continuously, leaving consumer SDTVs to stagnate increasingly far behind the capabilities of computer CRT monitors well into the 2000s. During the following decade, maximum display resolutions gradually increased and prices continued to fall as CRT technology remained dominant in the PC monitor market into the new millennium, partly because it remained cheaper to produce. CRTs still offer color, grayscale, motion, and latency advantages over today's LCDs, but improvements to the latter have made them much less obvious. The dynamic range of early LCD panels was very poor, and although text and other motionless graphics were sharper than on a CRT, an LCD characteristic known as pixel lag caused moving graphics to appear noticeably smeared and blurry. Liquid-crystal display There are multiple technologies that have been used to implement liquid-crystal displays (LCD). Throughout the 1990s, the primary use of LCD technology as computer monitors was in laptops where the lower power consumption, lighter weight, and smaller physical size of LCDs justified the higher price versus a CRT. Commonly, the same laptop would be offered with an assortment of display options at increasing price points: (active or passive) monochrome, passive color, or active matrix color (TFT). As volume and manufacturing capability have improved, the monochrome and passive color technologies were dropped from most product lines. TFT-LCD is a variant of LCD which is now the dominant technology used for computer monitors. The first standalone LCDs appeared in the mid-1990s selling for high prices. As prices declined they became more popular, and by 1997 were competing with CRT monitors. Among the first desktop LCD computer monitors were the Eizo FlexScan L66 in the mid-1990s, the SGI 1600SW, Apple Studio Display and the ViewSonic VP140 in 1998. In 2003, LCDs outsold CRTs for the first time, becoming the primary technology used for computer monitors. The physical advantages of LCD over CRT monitors are that LCDs are lighter, smaller, and consume less power. In terms of performance, LCDs produce little or no flicker (reducing eyestrain), a sharper image at native resolution, and better checkerboard contrast. 
On the other hand, CRT monitors have superior blacks, viewing angles, and response time, can use arbitrary lower resolutions without aliasing, and flicker can be reduced with higher refresh rates, though this flicker can also be used to reduce motion blur compared to less flickery displays such as most LCDs. Many specialized fields such as vision science remain dependent on CRTs, the best LCD monitors having achieved moderate temporal accuracy, and so can be used only if their poor spatial accuracy is unimportant. High dynamic range (HDR) has been implemented into high-end LCD monitors to improve grayscale accuracy. Since around the late 2000s, widescreen LCD monitors have become popular, in part due to television series, motion pictures and video games transitioning to widescreen, which makes squarer monitors unsuited to displaying them correctly. Organic light-emitting diode Organic light-emitting diode (OLED) monitors provide most of the benefits of both LCD and CRT monitors with few of their drawbacks, though much like plasma panels or very early CRTs they suffer from burn-in, and remain very expensive. Measurements of performance The performance of a monitor is measured by the following parameters: Display geometry: Viewable image size – is usually measured diagonally, but the actual widths and heights are more informative since they are not affected by the aspect ratio in the same way. For CRTs, the viewable size is typically smaller than the tube itself. Aspect ratio – is the ratio of the horizontal length to the vertical length. Monitors usually have the aspect ratio 4:3, 5:4, 16:10 or 16:9. Radius of curvature (for curved monitors) – is the radius that a circle would have if it had the same curvature as the display. This value is typically given in millimeters, but expressed with the letter "R" instead of a unit (for example, a display with "3800R curvature" has a 3800 mm radius of curvature). Display resolution is the number of distinct pixels in each dimension that can be displayed natively. For a given display size, maximum resolution is limited by dot pitch or DPI. Dot pitch represents the distance between the primary elements of the display, typically averaged across it in nonuniform displays. A related unit is pixel pitch. In LCDs, pixel pitch is the distance between the center of two adjacent pixels. In CRTs, pixel pitch is defined as the distance between subpixels of the same color. Dot pitch is the reciprocal of pixel density. Pixel density is a measure of how densely packed the pixels on a display are. In LCDs, pixel density is the number of pixels in one linear unit along the display, typically measured in pixels per inch (px/in or ppi). Color characteristics: Luminance – measured in candelas per square meter (cd/m², also called a nit). Contrast ratio is the ratio of the luminosity of the brightest color (white) to that of the darkest color (black) that the monitor is capable of producing simultaneously. For example, a ratio of 20,000:1 means that the brightest shade (white) is 20,000 times brighter than its darkest shade (black). Dynamic contrast ratio is measured with the LCD backlight turned off. ANSI contrast is with both black and white simultaneously adjacent onscreen. Color depth – measured in bits per primary color or bits for all colors. 
Those with 10bpc (bits per channel) or more can display more shades of color (approximately 1 billion shades) than traditional 8bpc monitors (approximately 16.8 million shades or colors), and can do so more precisely without having to resort to dithering. Gamut – measured as coordinates in the CIE 1931 color space. The names sRGB or Adobe RGB are shorthand notations. Color accuracy – measured in ΔE (delta-E); the lower the ΔE, the more accurate the color representation. A ΔE of below 1 is imperceptible to the human eye. A ΔE of 2–4 is considered good and requires a sensitive eye to spot the difference. Viewing angle is the maximum angle at which images on the monitor can be viewed, without subjectively excessive degradation to the image. It is measured in degrees horizontally and vertically. Input speed characteristics: Refresh rate is (in CRTs) the number of times in a second that the display is illuminated (the number of times a second a raster scan is completed). In LCDs it is the number of times the image can be changed per second, expressed in hertz (Hz). It determines the maximum number of frames per second (FPS) a monitor is capable of showing. Maximum refresh rate is limited by response time. Response time is the time a pixel in a monitor takes to change between two shades. The particular shades depend on the test procedure, which differs between manufacturers. In general, lower numbers mean faster transitions and therefore fewer visible image artifacts such as ghosting. Grey to grey (GtG) response time is measured in milliseconds (ms). Input latency is the time it takes for a monitor to display an image after receiving it, typically measured in milliseconds (ms). Power consumption is measured in watts. Size On two-dimensional display devices such as computer monitors the display size or viewable image size is the actual amount of screen space that is available to display a picture, video or working space, without obstruction from the bezel or other aspects of the unit's design. The main measurements for display devices are width, height, total area and the diagonal. The size of a display is usually given by manufacturers diagonally, i.e. as the distance between two opposite screen corners. This method of measurement is inherited from the method used for the first generation of CRT television when picture tubes with circular faces were in common use. Being circular, it was the external diameter of the glass envelope that described their size. Since these circular tubes were used to display rectangular images, the diagonal measurement of the rectangular image was smaller than the diameter of the tube's face (due to the thickness of the glass). This method continued even when cathode-ray tubes were manufactured as rounded rectangles; it had the advantage of being a single number specifying the size and was not confusing when the aspect ratio was universally 4:3. With the introduction of flat-panel technology, the diagonal measurement became the actual diagonal of the visible display. This meant that an eighteen-inch LCD had a larger viewable area than an eighteen-inch cathode-ray tube. Estimation of monitor size by the distance between opposite corners does not take into account the display aspect ratio, so that, for example, a 16:9 widescreen display has less area than a 4:3 screen of the same diagonal. For the same diagonal, the 4:3 screen is taller and has a larger total area, while the 16:9 widescreen is wider but covers less area (a numerical sketch of this calculation appears at the end of this article). Aspect ratio Until about 2003, most computer monitors had a 4:3 aspect ratio and some had 5:4. 
Between 2003 and 2006, monitors with 16:9 and mostly 16:10 (8:5) aspect ratios became commonly available, first in laptops and later also in standalone monitors. Reasons for this transition included gaming and movie viewing, as well as productive uses such as displaying two standard letter pages side by side in a word processor, or large CAD drawings and application menus at the same time. In 2008, 16:10 became the most commonly sold aspect ratio for LCD monitors, and the same year 16:10 was the mainstream standard for laptops and notebook computers. In 2010, the computer industry started to move over from 16:10 to 16:9 because 16:9 was chosen to be the standard high-definition television display size, and because 16:9 panels were cheaper to manufacture. In 2011, non-widescreen displays with 4:3 aspect ratios were only being manufactured in small quantities. According to Samsung, this was because the "Demand for the old 'Square monitors' has decreased rapidly over the last couple of years," and "I predict that by the end of 2011, production on all 4:3 or similar panels will be halted due to a lack of demand." Resolution The resolution of computer monitors has increased over time, from the low resolutions typical of the late 1970s to much higher resolutions by the late 1990s. Since 2009, the most commonly sold resolution for computer monitors is 1920 × 1080, shared with the 1080p resolution of HDTV. Before 2013, mass-market LCD monitors were limited to resolutions of about 2560 × 1600, excluding niche professional monitors. By 2015 most major display manufacturers had released 3840 × 2160 (4K UHD) displays, and the first 7680 × 4320 (8K) monitors had begun shipping. Gamut Every RGB monitor has its own color gamut, bounded in chromaticity by a color triangle. Some of these triangles are smaller than the sRGB triangle, some are larger. Colors are typically encoded by 8 bits per primary color. The RGB value [255, 0, 0] represents red, but slightly different colors in different color spaces such as Adobe RGB and sRGB. Displaying sRGB-encoded data on wide-gamut devices can give an unrealistic result. The gamut is a property of the monitor; the image color space can be forwarded as Exif metadata in the picture. As long as the monitor gamut is wider than the color space gamut, correct display is possible, if the monitor is calibrated. A picture that uses colors that are outside the sRGB color space will display on an sRGB color space monitor with limitations. Still today, many monitors that can display the sRGB color space are neither factory- nor user-calibrated to display it correctly. Color management is needed both in electronic publishing (via the Internet for display in browsers) and in desktop publishing targeted to print. Additional features Universal features Power saving Most modern monitors will switch to a power-saving mode if no video-input signal is received. This allows modern operating systems to turn off a monitor after a specified period of inactivity. This also extends the monitor's service life. Some monitors will also switch themselves off after a time period on standby. Most modern laptops provide a method of screen dimming after periods of inactivity or when the battery is in use. This extends battery life and reduces wear. Indicator light Most modern monitors have two indicator light colors: when a video-input signal is detected, the indicator light is green; when the monitor is in power-saving mode, the screen is black and the indicator light is orange. 
Some monitors have different indicator light colors and some monitors have a blinking indicator light when in power-saving mode. Integrated accessories Many monitors have other accessories (or connections for them) integrated. This places standard ports within easy reach and eliminates the need for another separate hub, camera, microphone, or set of speakers. These monitors have advanced microprocessors which contain codec information, Windows interface drivers and other small programs which help these features function properly. Ultrawide screens Ultrawide monitors feature an aspect ratio greater than 2:1 (for instance, 21:9 or 32:9, as opposed to the more common 16:9, which resolves to approximately 1.78:1). Monitors with an aspect ratio greater than 3:1 are marketed as super ultrawide monitors. These are typically massive curved screens intended to replace a multi-monitor deployment. Touch screen These monitors use touching of the screen as an input method. Items can be selected or moved with a finger, and finger gestures may be used to convey commands. The screen will need frequent cleaning due to image degradation from fingerprints. Sensors Ambient light sensors for automatically adjusting screen brightness and/or color temperature. Infrared camera for biometrics, eye and/or face recognition. Eye tracking as a user input device. As a lidar receiver for 3D scanning. Consumer features Glossy screen Some displays, especially newer flat-panel monitors, replace the traditional anti-glare matte finish with a glossy one. This increases color saturation and sharpness but reflections from lights and windows are more visible. Anti-reflective coatings are sometimes applied to help reduce reflections, although this only partly mitigates the problem. Curved designs Most often using nominally flat-panel display technology such as LCD or OLED, a concave rather than convex curve is imparted, reducing geometric distortion, especially in extremely large and wide seamless desktop monitors intended for close viewing range. 3D Newer monitors are able to display a different image for each eye, often with the help of special glasses and polarizers, giving the perception of depth. An autostereoscopic screen can generate 3D images without headgear. Professional features Anti-glare and anti-reflection screens Features for medical use or for outdoor placement. Directional screen Narrow viewing angle screens are used in some security-conscious applications. Integrated professional accessories Integrated screen calibration tools, screen hoods, signal transmitters; protective screens. Tablet screens A combination of a monitor with a graphics tablet. Such devices are typically unresponsive to touch without the use of one or more special tools that apply pressure. Newer models however are now able to detect touch from any pressure and often have the ability to detect tool tilt and rotation as well. Touch and tablet sensors are often used on sample and hold displays such as LCDs to substitute for the light pen, which can only work on CRTs. Integrated display LUT and 3D LUT tables The option of using the display as a reference monitor; these calibration features can give advanced color management control to achieve a near-perfect image. Local dimming backlight An option for professional LCD monitors, inherent to OLED and CRT; a professional feature that is trending toward the mainstream. Backlight brightness/color uniformity compensation A near-mainstream professional feature; an advanced hardware driver for backlight modules with local zones of uniformity correction. 
Mounting Computer monitors are provided with a variety of methods for mounting them depending on the application and environment. Desktop A desktop monitor is typically provided with a stand from the manufacturer which lifts the monitor up to a more ergonomic viewing height. The stand may be attached to the monitor using a proprietary method or may use, or be adaptable to, a VESA mount. A VESA standard mount allows the monitor to be used with more after-market stands if the original stand is removed. Stands may be fixed or offer a variety of features such as height adjustment, horizontal swivel, and landscape or portrait screen orientation. VESA mount The Flat Display Mounting Interface (FDMI), also known as VESA Mounting Interface Standard (MIS) or colloquially as a VESA mount, is a family of standards defined by the Video Electronics Standards Association for mounting flat-panel displays to stands or wall mounts. It is implemented on most modern flat-panel monitors and TVs. For computer monitors, the VESA Mount typically consists of four threaded holes on the rear of the display that will mate with an adapter bracket. Rack mount Rack mount computer monitors are available in two styles and are intended to be mounted into a 19-inch rack: Fixed A fixed rack mount monitor is mounted directly to the rack with the flat-panel or CRT visible at all times. The height of the unit is measured in rack units (RU) and 8U or 9U are most common to fit 17-inch or 19-inch screens. The front sides of the unit are provided with flanges to mount to the rack, providing appropriately spaced holes or slots for the rack mounting screws. A 19-inch diagonal screen is the largest size that will fit within the rails of a 19-inch rack. Larger flat-panels may be accommodated but are 'mount-on-rack' and extend forward of the rack. There are smaller display units, typically used in broadcast environments, which fit multiple smaller screens side by side into one rack mount. Stowable A stowable rack mount monitor is 1U, 2U or 3U high and is mounted on rack slides allowing the display to be folded down and the unit slid into the rack for storage as a drawer. The flat display is visible only when pulled out of the rack and deployed. These units may include only a display or may be equipped with a keyboard creating a KVM (Keyboard Video Monitor). Most common are systems with a single LCD but there are systems providing two or three displays in a single rack mount system. Panel mount A panel mount computer monitor is intended for mounting into a flat surface with the front of the display unit protruding just slightly. They may also be mounted to the rear of the panel. A flange is provided around the screen, sides, top and bottom, to allow mounting. This contrasts with a rack mount display where the flanges are only on the sides. The flanges will be provided with holes for thru-bolts or may have studs welded to the rear surface to secure the unit in the hole in the panel. Often a gasket is provided to provide a water-tight seal to the panel and the front of the screen will be sealed to the back of the front panel to prevent water and dirt contamination. Open frame An open frame monitor provides the display and enough supporting structure to hold associated electronics and to minimally support the display. Provision will be made for attaching the unit to some external structure for support and protection. Open frame monitors are intended to be built into some other piece of equipment providing its own case. 
An arcade video game would be a good example with the display mounted inside the cabinet. There is usually an open frame display inside all end-use displays with the end-use display simply providing an attractive protective enclosure. Some rack mount monitor manufacturers will purchase desktop displays, take them apart, and discard the outer plastic parts, keeping the inner open-frame display for inclusion into their product. Security vulnerabilities According to a leaked NSA document, the NSA sometimes swaps the monitor cables on targeted computers with a bugged monitor cable to allow the NSA to remotely see what is being displayed on the targeted computer monitor. Van Eck phreaking is the process of remotely displaying the contents of a CRT or LCD by detecting its electromagnetic emissions. It is named after Dutch computer researcher Wim van Eck, who in 1985 published the first paper on it, including proof of concept. Phreaking more generally is the process of exploiting telephone networks.
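To make the size and pixel-density arithmetic from the measurement and size sections above concrete, here is a minimal Python sketch (illustrative only; the function names are not from any standard library) that derives width, height, area, pixel density, and color counts from a stated diagonal, aspect ratio, resolution, and color depth:

    import math

    def screen_dimensions(diagonal_in, aspect_w, aspect_h):
        # The diagonal of a w:h rectangle scales with sqrt(w^2 + h^2),
        # so width and height follow from the stated diagonal and aspect ratio.
        unit_diagonal = math.hypot(aspect_w, aspect_h)
        width = diagonal_in * aspect_w / unit_diagonal
        height = diagonal_in * aspect_h / unit_diagonal
        return width, height, width * height  # inches, inches, square inches

    def pixels_per_inch(h_pixels, v_pixels, diagonal_in):
        # Pixel density: pixels along the diagonal divided by the diagonal length.
        return math.hypot(h_pixels, v_pixels) / diagonal_in

    def total_colors(bits_per_channel):
        # An RGB display with n bits per channel can show (2^n)^3 distinct colors.
        return (2 ** bits_per_channel) ** 3

    # A 21-inch 4:3 screen has more area than a 21-inch 16:9 screen.
    print(screen_dimensions(21, 4, 3))    # approx. (16.8, 12.6, 211.7)
    print(screen_dimensions(21, 16, 9))   # approx. (18.3, 10.3, 188.5)
    # A 24-inch 1920 x 1080 monitor works out to roughly 92 ppi.
    print(pixels_per_inch(1920, 1080, 24))
    print(total_colors(8))    # 16,777,216 (about 16.8 million)
    print(total_colors(10))   # 1,073,741,824 (about 1 billion)

The 4:3 versus 16:9 comparison reproduces the area difference described in the Size section, and the color counts correspond to the 8bpc and 10bpc figures given under color depth.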
Technology
User interface
null
7682
https://en.wikipedia.org/wiki/Centriole
Centriole
In cell biology a centriole is a cylindrical organelle composed mainly of a protein called tubulin. Centrioles are found in most eukaryotic cells, but are not present in conifers (Pinophyta), flowering plants (angiosperms) and most fungi, and are only present in the male gametes of charophytes, bryophytes, seedless vascular plants, cycads, and Ginkgo. A bound pair of centrioles, surrounded by a highly ordered mass of dense material, called the pericentriolar material (PCM), makes up a structure called a centrosome. Centrioles are typically made up of nine sets of short microtubule triplets, arranged in a cylinder. Deviations from this structure include crabs and Drosophila melanogaster embryos, with nine doublets, and Caenorhabditis elegans sperm cells and early embryos, with nine singlets. Additional proteins include centrin, cenexin and tektin. The main function of centrioles is to produce cilia during interphase and the aster and the spindle during cell division. History The centrosome was discovered jointly by Walther Flemming in 1875 and Edouard Van Beneden in 1876. Edouard Van Beneden made the first observation of centrosomes as composed of two orthogonal centrioles in 1883. Theodor Boveri introduced the term "centrosome" in 1888 and the term "centriole" in 1895. The basal body was named by Theodor Wilhelm Engelmann in 1880. The pattern of centriole duplication was first worked out independently by Étienne de Harven and Joseph G. Gall c. 1950. Role in cell division Centrioles are involved in the organization of the mitotic spindle and in the completion of cytokinesis. Centrioles were previously thought to be required for the formation of a mitotic spindle in animal cells. However, more recent experiments have demonstrated that cells whose centrioles have been removed via laser ablation can still progress through the G1 stage of interphase before centrioles can be synthesized later in a de novo fashion. Additionally, mutant flies lacking centrioles develop normally, although the adult flies' cells lack flagella and cilia and as a result, they die shortly after birth. The centrioles can self replicate during cell division. Cellular organization Centrioles are a very important part of centrosomes, which are involved in organizing microtubules in the cytoplasm. The position of the centriole determines the position of the nucleus and plays a crucial role in the spatial arrangement of the cell. Fertility Sperm centrioles are important for 2 functions: (1) to form the sperm flagellum and sperm movement and (2) for the development of the embryo after fertilization. The sperm supplies the centriole that creates the centrosome and microtubule system of the zygote. Ciliogenesis In flagellates and ciliates, the position of the flagellum or cilium is determined by the mother centriole, which becomes the basal body. An inability of cells to use centrioles to make functional flagella and cilia has been linked to a number of genetic and developmental diseases. In particular, the inability of centrioles to properly migrate prior to ciliary assembly has recently been linked to Meckel–Gruber syndrome. Animal development Proper orientation of cilia via centriole positioning toward the posterior of embryonic node cells is critical for establishing left-right asymmetry, during mammalian development. Centriole duplication Before DNA replication, cells contain two centrioles, an older mother centriole, and a younger daughter centriole. 
During cell division, a new centriole grows at the proximal end of both mother and daughter centrioles. After duplication, the two centriole pairs (the freshly assembled centriole is now a daughter centriole in each pair) will remain attached to each other orthogonally until mitosis. At that point the mother and daughter centrioles separate, in a process dependent on an enzyme called separase. The two centrioles in the centrosome are tied to one another. The mother centriole has radiating appendages at the distal end of its long axis and is attached to its daughter at the proximal end. Each daughter cell formed after cell division will inherit one of these pairs. Centrioles start duplicating when DNA replicates. Origin LECA, the last common ancestor of all eukaryotes, was a ciliated cell with centrioles. Some lineages of eukaryotes, such as land plants, do not have centrioles except in their motile male gametes. Centrioles are completely absent from all cells of conifers and flowering plants, which do not have ciliate or flagellate gametes. It is unclear if the last common ancestor had one or two cilia. Important genes such as those coding for centrins, required for centriole growth, are only found in eukaryotes, and not in bacteria or archaea. Etymology and pronunciation The word centriole uses combining forms of centri- and -ole, yielding "little central part", which describes a centriole's typical location near the center of the cell. Atypical centrioles Typical centrioles are made of 9 triplets of microtubules organized with radial symmetry. Centrioles can vary the number of microtubules and can be made of 9 doublets of microtubules (as in Drosophila melanogaster) or 9 singlets of microtubules as in C. elegans. Atypical centrioles are centrioles that do not have microtubules, such as the Proximal Centriole-Like found in D. melanogaster sperm, or that have microtubules with no radial symmetry, such as in the distal centriole of human spermatozoon. Atypical centrioles may have evolved at least eight times independently during vertebrate evolution and may evolve in the sperm after internal fertilization evolves. Until recently, it was not clear why centrioles become atypical. The atypical distal centriole forms a dynamic basal complex (DBC) that, together with other structures in the sperm neck, facilitates a cascade of internal sliding, coupling tail beating with head kinking. The atypical distal centriole's properties suggest that it evolved into a transmission system that couples the sperm tail motors to the whole sperm, thereby enhancing sperm function.
Biology and health sciences
Organelles and other cell parts
null
7697
https://en.wikipedia.org/wiki/Lockheed%20C-130%20Hercules
Lockheed C-130 Hercules
The Lockheed C-130 Hercules is an American four-engine turboprop military transport aircraft designed and built by Lockheed (now Lockheed Martin). Capable of using unprepared runways for takeoffs and landings, the C-130 was originally designed as a troop, medevac, and cargo transport aircraft. The versatile airframe has found uses in other roles, including as a gunship (AC-130), for airborne assault, search and rescue, scientific research support, weather reconnaissance, aerial refueling, maritime patrol, and aerial firefighting. It is now the main tactical airlifter for many military forces worldwide. More than 40 variants of the Hercules, including civilian versions marketed as the Lockheed L-100, operate in more than 60 nations. The C-130 entered service with the U.S. in 1956, followed by Australia and many other nations. During its years of service, the Hercules has participated in numerous military, civilian and humanitarian aid operations. In 2007, the transport became the fifth aircraft to mark 50 years of continuous service with its original primary customer, which for the C-130 is the United States Air Force (USAF). The C-130 is the longest continuously produced military aircraft, having achieved 70 years of production in 2024. The updated Lockheed Martin C-130J Super Hercules remains in production. Design and development Background and requirements The Korean War showed that World War II-era piston-engine transports—Fairchild C-119 Flying Boxcars, Douglas C-47 Skytrains and Curtiss C-46 Commandos—were no longer adequate. On 2 February 1951, the United States Air Force issued a General Operating Requirement (GOR) for a new transport to Boeing, Douglas, Fairchild, Lockheed, Martin, Chase Aircraft, North American, Northrop, and Airlifts Inc. The new transport would have a capacity of 92 passengers, 72 combat troops or 64 paratroopers in a large cargo compartment. Unlike transports derived from passenger airliners, it was to be designed specifically as a combat transport with loading from a hinged loading ramp at the rear of the fuselage. A notable advance for large aircraft was the introduction of a turboprop powerplant, the Allison T56, which was developed for the C-130. It gave the aircraft greater range than a turbojet engine as it used less fuel. Turboprop engines also produced much more power for their weight than piston engines. However, the turboprop configuration chosen for the T56, with the propeller connected to the compressor, had the potential to cause structural failure of the aircraft if an engine failed. Safety devices had to be incorporated to reduce the excessive drag from a windmilling propeller. Design phase The Hercules resembles a larger, four-engine version of the Fairchild C-123 Provider with a similar wing and cargo ramp layout. The C-123 had evolved from the Chase XCG-20 Avitruc first flown in 1950. The Boeing C-97 Stratofreighter had rear ramps, which made it possible to drive vehicles onto the airplane (also possible with the forward ramp on a C-124). The ramp on the Hercules was also used to airdrop cargo, which included a low-altitude parachute-extraction system for Sheridan tanks and even dropping large improvised "daisy cutter" bombs. The new Lockheed cargo plane had a long range and could operate from short and unprepared strips. Fairchild, North American, Martin, and Northrop declined to participate. 
The remaining five companies tendered a total of ten designs: Lockheed two, Boeing one, Chase three, Douglas three, and Airlifts Inc. one. The contest was a close affair between the lighter of the two Lockheed (preliminary project designation L-206) proposals and a four-turboprop Douglas design. The Lockheed design team was led by Willis Hawkins, starting with a 130-page proposal for the Lockheed L-206. Hall Hibbard, Lockheed vice president and chief engineer, saw the proposal and directed it to Kelly Johnson, who did not care for the low-speed, unarmed aircraft, and remarked, "If you sign that letter, you will destroy the Lockheed Company." Both Hibbard and Johnson signed the proposal and the company won the contract for the now-designated Model 82 on 2 July 1951. The first flight of the YC-130 prototype was made on 23 August 1954 from the Lockheed plant in Burbank, California. The aircraft, serial number 53-3397, was the second prototype, but the first of the two to fly. The YC-130 was piloted by Stanley Beltz and Roy Wimmer on its 61-minute flight to Edwards Air Force Base; Jack Real and Dick Stanton served as flight engineers. Kelly Johnson flew chase in a Lockheed P2V Neptune. After the two prototypes were completed, production began in Marietta, Georgia, where over 2,300 C-130s have been built through 2009. The initial production model, the C-130A, was powered by Allison T56-A-9 turboprops with three-blade propellers and originally equipped with the blunt nose of the prototypes. Deliveries began in December 1956, continuing until the introduction of the C-130B model in 1959. Some A-models were equipped with skis and re-designated C-130D. As the C-130A became operational with Tactical Air Command (TAC), the C-130's lack of range became apparent, and additional fuel capacity was added in the form of wing pylon-mounted tanks outboard of the engines, substantially increasing the total fuel capacity. Improved versions The C-130B model was developed to complement the A-models that had previously been delivered, and incorporated new features, particularly increased fuel capacity in the form of auxiliary tanks built into the center wing section and an AC electrical system. Four-bladed Hamilton Standard propellers replaced the Aero Products' three-blade propellers that distinguished the earlier A-models. The C-130B had ailerons operated at increased hydraulic boost pressure, as well as uprated engines and four-blade propellers that were standard until the J-model. The B model was originally intended to have "blown controls", a system that blows high-pressure air over the control surfaces to improve their effectiveness during slow flight. It was tested on an NC-130B prototype aircraft with a pair of T-56 turbines providing high-pressure air through a duct system to the control surfaces and flaps during landing. This greatly reduced landing speed to just 63 knots and cut landing distance in half. The system never entered service because it did not improve takeoff performance by the same margin, making the landing performance pointless if the aircraft could not also take off from where it had landed. An electronic reconnaissance variant of the C-130B was designated C-130B-II. A total of 13 aircraft were converted. 
The C-130B-II was distinguished by its false external wing fuel tanks, which were disguised signals intelligence (SIGINT) receiver antennas. These pods were slightly larger than the standard wing tanks found on other C-130Bs. Most aircraft featured a swept blade antenna on the upper fuselage, as well as extra wire antennas between the vertical fin and upper fuselage not found on other C-130s. Radio call numbers on the tail of these aircraft were regularly changed to confuse observers and disguise their true mission. The extended-range C-130E model entered service in 1962 after it was developed as an interim long-range transport for the Military Air Transport Service. Essentially a B-model, the new designation was the result of the installation of Sargent Fletcher external fuel tanks under each wing's midsection and more powerful Allison T56-A-7A turboprops. The hydraulic boost pressure to the ailerons was reduced back to as a consequence of the external tanks' weight in the middle of the wingspan. The E model also featured structural improvements, avionics upgrades, and a higher gross weight. Australia took delivery of 12 C130E Hercules during 1966–67 to supplement the 12 C-130A models already in service with the RAAF. Sweden and Spain fly the TP-84T version of the C-130E fitted for aerial refueling capability. The KC-130 tankers, originally C-130F procured for the US Marine Corps (USMC) in 1958 (under the designation GV-1) are equipped with a removable stainless steel fuel tank carried inside the cargo compartment. The two wing-mounted hose and drogue aerial refueling pods each transfer up to to two aircraft simultaneously, allowing for rapid cycle times of multiple-receiver aircraft formations, (a typical tanker formation of four aircraft in less than 30 minutes). The US Navy's C-130G has increased structural strength allowing higher gross weight operation. Further developments The C-130H model has updated Allison T56-A-15 turboprops, a redesigned outer wing, updated avionics, and other minor improvements. Later H models had a new, fatigue-life-improved, center wing that was retrofitted to many earlier H-models. For structural reasons, some models are required to land with reduced amounts of fuel when carrying heavy cargo, reducing usable range. The H model remains in widespread use with the United States Air Force (USAF) and many foreign air forces. Initial deliveries began in 1964 (to the RNZAF), remaining in production until 1996. An improved C-130H was introduced in 1974, with Australia purchasing 12 of the type in 1978 to replace the original 12 C-130A models, which had first entered Royal Australian Air Force (RAAF) service in 1958. The U.S. Coast Guard employs the HC-130H for long-range search and rescue, drug interdiction, illegal migrant patrols, homeland security, and logistics. C-130H models produced from 1992 to 1996 were designated as C-130H3 by the USAF, with the "3" denoting the third variation in design for the H series. Improvements included ring laser gyros for the INUs, GPS receivers, a partial glass cockpit (ADI and HSI instruments), a more capable APN-241 color radar, night vision device compatible instrument lighting, and an integrated radar and missile warning system. The electrical system upgrade included Generator Control Units (GCU) and Bus Switching units (BSU) to provide stable power to the more sensitive upgraded components. The equivalent model for export to the UK is the C-130K, known by the Royal Air Force (RAF) as the Hercules C.1. 
The C-130H-30 (Hercules C.3 in RAF service) is a stretched version of the original Hercules, achieved by inserting a plug aft of the cockpit and an plug at the rear of the fuselage. A single C-130K was purchased by the Met Office for use by its Meteorological Research Flight, where it was classified as the Hercules W.2. This aircraft was heavily modified, with its most prominent feature being the long red and white striped atmospheric probe on the nose and the move of the weather radar into a pod above the forward fuselage. This aircraft, named Snoopy, was withdrawn in 2001 and was then modified by Marshall of Cambridge Aerospace as a flight testbed for the A400M turbine engine, the TP400. The C-130K is used by the RAF Falcons for parachute drops. Three C-130Ks (Hercules C Mk.1P) were upgraded and sold to the Austrian Air Force in 2002. Enhanced models The MC-130E Combat Talon was developed for the USAF during the Vietnam War to support special operations missions in Southeast Asia, and led to both the MC-130H Combat Talon II as well as a family of other special missions aircraft. 37 of the earliest models currently operating with the Air Force Special Operations Command (AFSOC) are scheduled to be replaced by new-production MC-130J versions. The EC-130 Commando Solo is another special missions variant within AFSOC, albeit operated solely by an AFSOC-gained wing in the Pennsylvania Air National Guard, and is a psychological operations/information operations (PSYOP/IO) platform equipped as an aerial radio station and television stations able to transmit messaging over commercial frequencies. Other versions of the EC-130, most notably the EC-130H Compass Call, are also special variants, but are assigned to the Air Combat Command (ACC). The AC-130 gunship was first developed during the Vietnam War to provide close air support and other ground-attack duties. The HC-130 is a family of long-range search and rescue variants used by the USAF and the U.S. Coast Guard. Equipped for the deep deployment of Pararescuemen (PJs), survival equipment, and (in the case of USAF versions) aerial refueling of combat rescue helicopters, HC-130s are usually the on-scene command aircraft for combat SAR missions (USAF only) and non-combat SAR (USAF and USCG). Early USAF versions were also equipped with the Fulton surface-to-air recovery system, designed to pull a person off the ground using a wire strung from a helium balloon. The John Wayne movie The Green Berets features its use. The Fulton system was later removed when aerial refueling of helicopters proved safer and more versatile. The movie The Perfect Storm depicts a real-life SAR mission involving aerial refueling of a New York Air National Guard HH-60G by a New York Air National Guard HC-130P. The C-130R and C-130T are U.S. Navy and USMC models, both equipped with underwing external fuel tanks. The USN C-130T is similar but has additional avionics improvements. In both models, aircraft are equipped with Allison T56-A-16 engines. The USMC versions are designated KC-130R or KC-130T when equipped with underwing refueling pods and pylons and are fully night vision system compatible. The RC-130 is a reconnaissance version developed during the Cold War. Sometimes called "ferret" aircraft, these planes were initially retrofitted standard C-130s. The Lockheed L-100 (L-382) is a civilian variant, equivalent to a C-130E model without military equipment. The L-100 also has two stretched versions. 
Next generation In the 1970s, Lockheed proposed a C-130 variant with turbofan engines rather than turboprops, but the U.S. Air Force preferred the takeoff performance of the existing aircraft. In the 1980s, the C-130 was intended to be replaced by the Advanced Medium STOL Transport project. The project was canceled and the C-130 has remained in production. Building on lessons learned, Lockheed Martin modified a commercial variant of the C-130 into a High Technology Test Bed (HTTB). This test aircraft set numerous short takeoff and landing performance records and significantly expanded the database for future derivatives of the C-130. Modifications made to the HTTB included extended chord ailerons, a long chord rudder, fast-acting double-slotted trailing edge flaps, a high-camber wing leading edge extension, a larger dorsal fin and dorsal fins, the addition of three spoiler panels to each wing upper surface, a long-stroke main and nose landing gear system, and changes to the flight controls and a change from direct mechanical linkages assisted by hydraulic boost, to fully powered controls, in which the mechanical linkages from the flight station controls operated only the hydraulic control valves of the appropriate boost unit. The HTTB first flew on 19 June 1984, with civil registration of N130X. After demonstrating many new technologies, some of which were applied to the C-130J, the HTTB was lost in a fatal accident on 3 February 1993, at Dobbins Air Reserve Base, in Marietta, Georgia. The crash was attributed to disengagement of the rudder fly-by-wire flight control system, resulting in a total loss of rudder control capability while conducting ground minimum control speed tests (Vmcg). The disengagement was a result of the inadequate design of the rudder's integrated actuator package by its manufacturer; the operator's insufficient system safety review failed to consider the consequences of the inadequate design to all operating regimes. A factor that contributed to the accident was the flight crew's lack of engineering flight test training. In the 1990s, the improved C-130J Super Hercules was developed by Lockheed (later Lockheed Martin). This model is the newest version and the only model in production. Externally similar to the classic Hercules in general appearance, the J model has new turboprop engines, six-bladed propellers, digital avionics, and other new systems. Upgrades and changes In 2000, Boeing was awarded a contract to develop an Avionics Modernization Program kit for the C-130. The program was beset with delays and cost overruns until project restructuring in 2007. In September 2009, it was reported that the planned Avionics Modernization Program (AMP) upgrade to the older C-130s would be dropped to provide more funds for the F-35, CV-22 and airborne tanker replacement programs. However, in June 2010, Department of Defense approved funding for the initial production of the AMP upgrade kits."Boeing C-130 Avionics Modernization Program to Enter Production". Boeing, 24 June 2010. Under the terms of this agreement, the USAF has cleared Boeing to begin low-rate initial production (LRIP) for the C-130 AMP. A total of 198 aircraft are expected to feature the AMP upgrade. The current cost per aircraft is , although Boeing expects that this price will drop to US$7 million for the 69th aircraft. In the 2000s, Lockheed Martin and the U.S. Air Force began outfitting and retrofitting C-130s with the eight-blade UTC Aerospace Systems NP2000 propellers. 
An engine enhancement program saving fuel and providing lower temperatures in the T56 engine has been approved, and the US Air Force expects to save $2 billion and extend the fleet life. In 2021, the Air Force Research Laboratory demonstrated the Rapid Dragon system, which transforms the C-130 into a lethal strike platform capable of launching 12 JASSM-ER with 500 kg warheads from a standoff distance of . Anticipated future improvements include support for JDAM-ER, mine laying, and drone dispersal, as well as improved standoff range when the JASSM-XR becomes available in 2024. Replacement In October 2010, the U.S. Air Force released a capability request for information (CRFI) for the development of a new airlifter to replace the C-130. The new aircraft was to carry a 190% greater payload and assume the mission of mounted vertical maneuver (MVM). The greater payload and mission would enable it to carry medium-weight armored vehicles and unload them at locations without long runways. Various options were under consideration, including new or upgraded fixed-wing designs, rotorcraft, tiltrotors, or even an airship. The C-130 fleet of around 450 planes would be replaced by only 250 aircraft. The Air Force had attempted to replace the C-130 in the 1970s through the Advanced Medium STOL Transport project, which resulted in the C-17 Globemaster III that instead replaced the C-141 Starlifter. The Air Force Research Laboratory funded Lockheed Martin and Boeing demonstrators for the Speed Agile concept, which had the goal of making a STOL aircraft that could take off and land at speeds as low as on airfields less than long and cruise at Mach 0.8-plus. Boeing's design used upper-surface blowing from embedded engines on the inboard wing and blown flaps for circulation control on the outboard wing. Lockheed's design also used blown flaps outboard, but inboard used patented reversing ejector nozzles. Boeing's design completed over 2,000 hours of wind tunnel tests in late 2009. It was a 5 percent-scale model of a narrow body design with a payload. When the AFRL increased the payload requirement to , they tested a 5 percent-scale model of a widebody design with a take-off gross weight and an "A400M-size" wide cargo box. It would be powered by four IAE V2533 turbofans. In August 2011, the AFRL released pictures of the Lockheed Speed Agile concept demonstrator. A 23% scale model went through wind tunnel tests to demonstrate its hybrid powered lift, which combined a low drag airframe with simple mechanical assembly to reduce weight and improve aerodynamics. The model had four engines, including two Williams FJ44 turbofans. Lockheed's New STOL Airlifter Design – Defensetech.org, 15 September 2011. On 26 March 2013, Boeing was granted a patent for its swept-wing powered lift aircraft. In January 2014, Air Mobility Command, Air Force Materiel Command and the Air Force Research Lab were in the early stages of defining requirements for the C-X next generation airlifter program to replace both the C-130 and C-17. The aircraft would be produced from the early 2030s to the 2040s. Operational history Military The first production batch of C-130A aircraft was delivered beginning in 1956 to the 463d Troop Carrier Wing at Ardmore AFB, Oklahoma, and the 314th Troop Carrier Wing at Sewart AFB, Tennessee. Six additional squadrons were assigned to the 322d Air Division in Europe and the 315th Air Division in the Far East.
Additional aircraft were modified for electronics intelligence work and assigned to Rhein-Main Air Base, Germany while modified RC-130As were assigned to the Military Air Transport Service (MATS) photo-mapping division. The C-130A entered service with the U.S. Air Force in December 1956. In 1958, a U.S. reconnaissance C-130A-II of the 7406th Support Squadron was shot down over Armenia by four Soviet MiG-17s along the Turkish-Armenian border during a routine mission. Australia became the first non-American operator of the Hercules with 12 examples being delivered from late 1958. The Royal Canadian Air Force became another early user with the delivery of four B-models (Canadian designation CC-130 Mk I) in October / November 1960. In 1963, a Hercules achieved and still holds the record for the largest and heaviest aircraft to land on an aircraft carrier. During October and November that year, a USMC KC-130F (BuNo 149798), loaned to the U.S. Naval Air Test Center, made 29 touch-and-go landings, 21 unarrested full-stop landings and 21 unassisted take-offs on at a number of different weights. The pilot, Lieutenant (later Rear Admiral) James H. Flatley III, USN, was awarded the Distinguished Flying Cross for his role in this test series. The tests were highly successful, but the aircraft was not deployed this way. Flatley denied that C-130 was tested for carrier onboard delivery (COD) operations, or for delivering nuclear weapons. He said that the intention was to support the Lockheed U-2, also being tested on carriers. The Hercules used in the test, most recently in service with Marine Aerial Refueler Squadron 352 (VMGR-352) until 2005, is now part of the collection of the National Museum of Naval Aviation at NAS Pensacola, Florida. In 1964, C-130 crews from the 6315th Operations Group at Naha Air Base, Okinawa commenced forward air control (FAC; "Flare") missions over the Ho Chi Minh Trail in Laos supporting USAF strike aircraft. In April 1965 the mission was expanded to North Vietnam where C-130 crews led formations of Martin B-57 Canberra bombers on night reconnaissance/strike missions against communist supply routes leading to South Vietnam. In early 1966 Project Blind Bat/Lamplighter was established at Ubon Royal Thai Air Force Base, Thailand. After the move to Ubon, the mission became a four-engine FAC mission with the C-130 crew searching for targets and then calling in strike aircraft. Another little-known C-130 mission flown by Naha-based crews was Operation Commando Scarf (or Operation Commando Lava), which involved the delivery of chemicals onto sections of the Ho Chi Minh Trail in Laos that were designed to produce mud and landslides in hopes of making the truck routes impassable. In November 1964, on the other side of the globe, C-130Es from the 464th Troop Carrier Wing but loaned to 322d Air Division in France, took part in Operation Dragon Rouge, one of the most dramatic missions in history in the former Belgian Congo. After communist Simba rebels took white residents of the city of Stanleyville hostage, the U.S. and Belgium developed a joint rescue mission that used the C-130s to drop, air-land, and air-lift a force of Belgian paratroopers to rescue the hostages. Two missions were flown, one over Stanleyville and another over Paulis during Thanksgiving week. The headline-making mission resulted in the first award of the prestigious MacKay Trophy to C-130 crews. In the Indo-Pakistani War of 1965, the No. 
6 Transport Squadron of the Pakistan Air Force modified its C-130Bs for use as bombers to carry up to of bombs on pallets. These improvised bombers were used to hit Indian targets such as bridges, heavy artillery positions, tank formations, and troop concentrations, though they were not particularly successful. Group Captain (Retd) Sultan M Hali's "PAF's Gallant Christian Heroes Carry Quaid's Message" Defence Journal, December 1998. Retrieved 5 September 2015. In October 1968, C-130Bs from the 463rd Tactical Airlift Wing dropped a pair of M-121 bombs that had been developed for the massive Convair B-36 Peacemaker bomber but had never been used. The U.S. Army and U.S. Air Force resurrected the huge weapons as a means of clearing landing zones for helicopters, and in early 1969 the 463rd commenced Commando Vault missions. Although the stated purpose of Commando Vault was to clear LZs, they were also used on enemy base camps and other targets. During the late 1960s, the U.S. was eager to get information on Chinese nuclear capabilities. After the failure of the Black Cat Squadron to plant operating sensor pods near the Lop Nur Nuclear Weapons Test Base using a U-2, the CIA developed a plan, named Heavy Tea, to deploy two battery-powered sensor pallets near the base. To deploy the pallets, a Black Bat Squadron crew was trained in the U.S. to fly the C-130 Hercules. The crew of 12, led by Col Sun Pei Zhen, took off from Takhli Royal Thai Air Force Base in an unmarked U.S. Air Force C-130E on 17 May 1969. Flying for six and a half hours at low altitude in the dark, they arrived over the target and the sensor pallets were dropped by parachute near Anxi in Gansu province. After another six and a half hours of low-altitude flight, they arrived back at Takhli. The sensors worked and uploaded data to a U.S. intelligence satellite for six months before their batteries failed. The Chinese conducted two nuclear tests, on 22 September 1969 and 29 September 1969, during the operating life of the sensor pallets. Another mission to the area was planned as Operation Golden Whip, but it was called off in 1970. It is most likely that the aircraft used on this mission was either C-130E serial number 64-0506 or 64-0507 (cn 382-3990 and 382-3991). These two aircraft were delivered to Air America in 1964. After being returned to the U.S. Air Force sometime between 1966 and 1970, they were assigned the serial numbers of C-130s that had been destroyed in accidents. 64-0506 is now flying as 62-1843, a C-130E that crashed in Vietnam on 20 December 1965, and 64-0507 is now flying as 63-7785, a C-130E that had crashed in Vietnam on 17 June 1966. The A-model continued in service through the Vietnam War, where the aircraft assigned to the four squadrons at Naha AB, Okinawa, and one at Tachikawa Air Base, Japan, performed yeoman's service, including operating highly classified special operations missions such as the BLIND BAT FAC/Flare mission and the Fact Sheet leaflet mission over Laos and North Vietnam. The A-model was also provided to the Republic of Vietnam Air Force as part of the Vietnamization program at the end of the war, and equipped three squadrons based at Tan Son Nhut Air Base. The last operator in the world is the Honduran Air Force, which is still flying one of five A model Hercules (FAH 558, c/n 3042) as of October 2009.
As the Vietnam War wound down, the 463rd Troop Carrier/Tactical Airlift Wing B-models and A-models of the 374th Tactical Airlift Wing were transferred back to the United States where most were assigned to Air Force Reserve and Air National Guard units. Another prominent role for the B model was with the United States Marine Corps, where Hercules initially designated as GV-1s replaced C-119s. After Air Force C-130Ds proved the type's usefulness in Antarctica, the U.S. Navy purchased several B-models equipped with skis that were designated as LC-130s. C-130B-II electronic reconnaissance aircraft were operated under the SUN VALLEY program name primarily from Yokota Air Base, Japan. All reverted to standard C-130B cargo aircraft after their replacement in the reconnaissance role by other aircraft. The C-130 was also used in the 1976 Entebbe raid in which Israeli commando forces performed a surprise operation to rescue 103 passengers of an airliner hijacked by Palestinian and German terrorists at Entebbe Airport, Uganda. The rescue force—200 soldiers, jeeps, and a black Mercedes-Benz (intended to resemble Ugandan Dictator Idi Amin's vehicle of state)—was flown over almost entirely at an altitude of less than from Israel to Entebbe by four Israeli Air Force (IAF) Hercules aircraft without mid-air refueling (on the way back, the aircraft refueled in Nairobi, Kenya). During the Falklands War () of 1982, Argentine Air Force C-130s undertook dangerous re-supply night flights as blockade runners to the Argentine garrison on the Falkland Islands. They also performed daylight maritime survey flights. One was shot down by a Royal Navy Sea Harrier using AIM-9 Sidewinders and cannon. The crew of seven were killed. Argentina also operated two KC-130 tankers during the war, and these refueled both the Douglas A-4 Skyhawks and Navy Dassault-Breguet Super Étendards; some C-130s were modified to operate as bombers with bomb-racks under their wings. The British also used RAF C-130s to support their logistical operations. During the Gulf War of 1991 (Operation Desert Storm), the C-130 Hercules was used operationally by the U.S. Air Force, U.S. Navy, and U.S. Marine Corps, along with the air forces of Australia, New Zealand, Saudi Arabia, South Korea, and the UK. The MC-130 Combat Talon variant also made the first attacks using the largest conventional bombs in the world, the BLU-82 "Daisy Cutter" and GBU-43/B "Massive Ordnance Air Blast" (MOAB) bomb. Daisy Cutters were used to primarily clear landing zones and to eliminate mine fields. The weight and size of the weapons make it impossible or impractical to load them on conventional bombers. The GBU-43/B MOAB is a successor to the BLU-82 and can perform the same function, as well as perform strike functions against hardened targets in a low air threat environment. Since 1992, two successive C-130 aircraft named Fat Albert have served as the support aircraft for the U.S. Navy Blue Angels flight demonstration team. Fat Albert I was a TC-130G (151891) a former U.S. Navy TACAMO aircraft serving with Fleet Air Reconnaissance Squadron Three (VQ-3) before being transferred to the BLUES, while Fat Albert II is a C-130T (164763). Although Fat Albert supports a Navy squadron, it is operated by the U.S. Marine Corps (USMC) and its crew consists solely of USMC personnel. At some air shows featuring the team, Fat Albert takes part, performing flyovers. 
Until 2009, it also demonstrated its rocket-assisted takeoff (RATO) capabilities; these ended due to dwindling supplies of rockets. The AC-130 also holds the record for the longest sustained flight by a C-130. From 22 to 24 October 1997, two AC-130U gunships flew 36 hours nonstop from Hurlburt Field, Florida to Daegu International Airport, South Korea, being refueled seven times by KC-135 tanker aircraft. This record flight beat the previous longest-flight record by over 10 hours, and the two gunships took on of fuel. The gunship has been used in every major U.S. combat operation since Vietnam, except for Operation El Dorado Canyon, the 1986 attack on Libya. During the invasion of Afghanistan in 2001 and the ongoing support of the International Security Assistance Force (Operation Enduring Freedom), the C-130 Hercules has been used operationally by Australia, Belgium, Canada, Denmark, France, Italy, the Netherlands, New Zealand, Norway, Portugal, Romania, South Korea, Spain, the UK, and the United States. During the 2003 invasion of Iraq (Operation Iraqi Freedom), the C-130 Hercules was used operationally by Australia, the UK, and the United States. After the initial invasion, C-130 operators as part of the Multinational force in Iraq used their C-130s to support their forces in Iraq. Since 2004, the Pakistan Air Force has employed C-130s in the War in North-West Pakistan. Some variants had forward looking infrared (FLIR Systems Star Safire III EO/IR) sensor balls to enable close tracking of militants. In 2017, France and Germany announced that they would build up a joint air transport squadron at Evreux Air Base, France, comprising ten C-130J aircraft. Six of these will be operated by Germany. Initial operational capability is expected for 2021, while full operational capability is scheduled for 2024. The Argentine Air Force has five C-130H aircraft that are part of a US-funded security assistance donation. The US has been leasing the aircraft to the Argentine Air Force through the Georgia Air National Guard since June 2023. Deepwater Horizon Oil Spill For almost two decades, the USAF 910th Airlift Wing's 757th Airlift Squadron and the U.S. Coast Guard have participated in oil spill cleanup exercises to ensure the U.S. military has a capable response in the event of a national emergency. The 757th Airlift Squadron operates the DOD's only fixed-wing Aerial Spray System, which is certified by the EPA to disperse pesticides on DOD property; in 2010 it was used to spread oil dispersants onto the Deepwater Horizon oil spill along the Gulf Coast. During the 5-week mission, the aircrews flew 92 sorties and sprayed approximately 30,000 acres with nearly 149,000 gallons of oil dispersant to break up the oil. The Deepwater Horizon mission was the first time the US used the oil dispersing capability of the 910th Airlift Wing—its only large area, fixed-wing aerial spray program—in an actual spill of national significance. The Air Force Reserve Command announced that the 910th Airlift Wing had been selected as a recipient of the Air Force Outstanding Unit Award for its outstanding achievement from 28 April 2010 through 4 June 2010. Hurricane Harvey (2017) C-130s temporarily based at Kelly Field conducted mosquito control aerial spray applications over areas of eastern Texas devastated by Hurricane Harvey.
This special mission treated more than 2.3 million acres at the direction of the Federal Emergency Management Agency (FEMA) and the Texas Department of State Health Services (DSHS) to assist in recovery efforts by helping contain the significant increase in pest insects caused by large amounts of standing, stagnant water. The 910th Airlift Wing operates the Department of Defense's only aerial spray capability to control pest insect populations, eliminate undesired and invasive vegetation, and disperse oil spills in large bodies of water. The aerial spray flight is also now able to operate at night with NVGs, which increases the flight's best-case spray capacity from approximately 60 thousand acres per day to approximately 190 thousand acres per day. Spray missions are normally conducted at dusk and nighttime hours, when pest insects are most active, the U.S. Air Force Reserve reports. Aerial firefighting In the early 1970s, Congress authorized the Modular Airborne Firefighting System (MAFFS), a joint operation between the U.S. Forest Service and the Department of Defense. MAFFS is a roll-on/roll-off device that allows a C-130 to be temporarily converted into a 3,000-gallon airtanker for fighting wildfires when demand exceeds the supply of privately contracted and publicly available airtankers. In the late 1980s, 22 retired USAF C-130As were removed from storage and transferred to the U.S. Forest Service, which then transferred them to six private companies to be converted into airtankers. One of these C-130s crashed in June 2002 while operating near Walker, California. The crash was attributed to wing separation caused by fatigue stress cracking and contributed to the grounding of the entire large-airtanker fleet. After an extensive review, the US Forest Service and the Bureau of Land Management declined to renew the leases on nine C-130As over concerns about the age of the aircraft, which had been in service since the 1950s, and their ability to handle the forces generated by aerial firefighting. More recently, an updated Retardant Aerial Delivery System known as RADS XL was developed by Coulson Aviation USA. That system consists of a C-130H/Q retrofitted with an in-floor discharge system, combined with a removable 3,500- or 4,000-gallon water tank. The combined system is FAA certified. On 23 January 2020, Coulson's Tanker 134, an EC-130Q registered N134CG, crashed during aerial firefighting operations in New South Wales, Australia, killing all three crew members. The aircraft had taken off from RAAF Base Richmond and was supporting firefighting operations during Australia's 2019–20 fire season. Variants Significant military variants of the C-130 include: C-130A Initial production model with four Allison T56-A-11/9 turboprop engines. 219 were ordered and deliveries to the USAF began in December 1956. C-130B Variant with four Allison T56-A-7 engines. 134 were ordered and entered USAF service in May 1959. C-130E Same engines as the C-130B but with two external fuel tanks and an increased maximum takeoff weight capability. Introduced in August 1962; 389 were ordered. C-130F/G Variants procured by the U.S. Navy for Marine Corps refueling missions, and other support/transport operations. C-130H Identical to the C-130E but with more powerful Allison T56-A-15 turboprop engines. Introduced in June 1964 with 308 ordered. C-130K Designation for RAF Hercules C1/W2/C3 aircraft (C-130Js in RAF service are the Hercules C.4 and Hercules C.5) C-130T Improved variants procured by the U.S.
Navy for Marine Corps refueling, and other support/transport operations. C-130A-II Dreamboat Early version Electronic Intelligence/Signals Intelligence (ELINT/SIGINT) aircraft C-130J Super Hercules Tactical airlifter, with new engines, avionics, and updated systems C-130B BLC A one-off conversion of C-130B 58–0712, modified with a double Allison YT56 gas generator pod under each outer wing, to provide bleed air for all the control surfaces and flaps. AC-130A/E/H/J/U/W Gunship variants C-130D/D-6 Ski-equipped version for snow and ice operations United States Air Force / Air National Guard CC-130E/H/J Hercules Designation for Canadian Armed Forces / Royal Canadian Air Force Hercules aircraft. U.S. Air Force used the CC-130J designation to differentiate the standard C-130J variant from the "stretched" C-130J (company designation C-130J-30). CC-130H(T) is the Canadian tanker variant of the KC-130H. C-130M Designation used by the Brazilian Air Force for locally modified C-130H aircraft. DC-130A/E/H USAF and USN Drone control EC-130 EC-130E/J Commando Solo – USAF / Air National Guard psychological operations version EC-130E Airborne Battlefield Command and Control Center (ABCCC) – USAF procedural air-to-ground attack control, also provided NRT threat updates EC-130E Rivet Rider – Airborne psychological warfare aircraft EC-130H Compass Call – Electronic warfare and electronic attack. EC-130V – Airborne early warning and control (AEW&C) variant used by USCG for counter-narcotics missions GC-130 Permanently grounded instructional airframes HC-130 HC-130B/E/H – Early model combat search and rescue HC-130P/N Combat King – USAF aerial refueling tanker and combat search and rescue HC-130J Combat King II – Next generation combat search and rescue tanker HC-130H/J – USCG long-range surveillance and search and rescue, USAFR Aerial Spray & Airlift JC-130 Temporary conversion for flight test operations; used to recover drones and spy satellite film capsules. KC-130F/R/T/J United States Marine Corps aerial refueling tanker and tactical airlifter LC-130F/H/R USAF / Air National Guard – Ski-equipped version for Arctic and Antarctic support operations; LC-130F and R previously operated by USN MC-130 MC-130E/H Combat Talon I/II – Special operations infiltration/extraction variant MC-130W Combat Spear/Dragon Spear – Special operations tanker/gunship MC-130P Combat Shadow – Special operations tanker – all operational aircraft converted to HC-130P standard MC-130J Commando II (formerly Combat Shadow II) – Special operations tanker Air Force Special Operations Command YMC-130H – Modified aircraft under Operation Credible Sport for second Iran hostage crisis rescue attempt NC-130 Permanent conversion for flight test operations PC-130/C-130-MP Maritime patrol RC-130A/S Surveillance aircraft for reconnaissance SC-130J Sea Herc Proposed maritime patrol version of the C-130J, designed for coastal surveillance and anti-submarine warfare. TC-130 Aircrew training VC-130H VIP transport WC-130A/B/E/H/J Weather reconnaissance ("Hurricane Hunter") version for USAF / Air Force Reserve Command's 53d Weather Reconnaissance Squadron in support of the National Weather Service's National Hurricane Center C-130(EM/BM) Erciyes Turkey's Erciyes modernization program covers modernization of the avionics of C-130B/E variants of the aircraft. 
In scope of modernization the aircraft is equipped with Digital Cockpit (four-color Multifunctional Display with moving map capability-MFD), two Central Display Units (CDU) and two multifunction Central Control Computers compatible with international navigational requirements, as well as with a multifunction Mission Computer with high operational capability, Flight Management System (FMS), Link-16, Ground Mission Planning Unit compatible with the Air Force Information System, and display and lighting systems compatible with Night Vision Goggles. Other components such as GPS, indicator, anti-collision system, air radar, advanced military and civilian navigation systems, night-time invisible lighting for military missions, black box voice recorder, communication systems, advanced automated flight systems (military and civilian), systems enabling operation in the military network, digital moving map and ground mission planning systems are also included. Operators Former operators Accidents The C-130 Hercules has had a low accident rate in general. The Royal Air Force recorded an accident rate of about one aircraft loss per 250,000 flying hours over the last 40 years, placing it behind Vickers VC10s and Lockheed TriStars with no flying losses. USAF C-130A/B/E-models had an overall attrition rate of 5% as of 1989 as compared to 1–2% for commercial airliners in the U.S., according to the NTSB, 10% for B-52 bombers, and 20% for fighters (F-4, F-111), trainers (T-37, T-38), and helicopters (H-3). Aircraft on display Argentina C-130B FAA TC-60. ex USAF 61-0964 received in February 1992 now at Museo Nacional de Aeronáutica since September 2011. Australia C-130A RAAF A97-214 used by 36 Squadron from early 1959, withdrawn from use late 1978. Stored at RAAF Museum, RAAF Base Williams, Point Cook. Airframe scrapped in February 2022. Cockpit section preserved and gifted to National Vietnam Veterans Museum, Phillip Island. C-130E RAAF A97-160 used by 37 Squadron from August 1966, withdrawn from use November 2000; to RAAF Museum, 14 November 2000, cocooned as of September 2005. C-130H A97-011 delivered in October 1978, withdrawn from use December 2012 to RAAF Museum, Point Cook where it is currently on display. Belgium C-130H Belgian Air Component tailnumber CH13 in service from 2009 until May 2021 is on display at the Beauvechain Air Base at the First Wing Historical Center. Brazil C-130H Brazilian Air Force FAB-2453 is on display at the Museu Aeroespacial in Rio de Janeiro since 2014. Canada CC-130E RCAF 10313 (later 130313) is on display at the National Air Force Museum of Canada, CFB Trenton CC-130E RCAF 10307 (later 130307) is on display in the Reserve Hangar at the Canada Aviation and Space Museum, Ottawa, Ontario CC-130E RCAF 130328 is on display at the Greenwood Aviation Museum, CFB Greenwood Colombia C-130B FAC 1010 (serial number 3521) moved on 14 January 2016 to the Colombian Aerospace Museum in Tocancipá, Cundinamarca, for static display. C-130B FAC1011 (serial number 3585, ex 59–1535) preserved at the Colombian Air and Space Museum within CATAM AFB, Bogotá. Indonesia C-130B Indonesian Air Force A-1301 preserved at Sulaeman Airstrip, Bandung. Also occasionally used for Paskhas Training. The airplane is relocated to Air Force Museum in Yogyakarta in 2017. Norway C-130H Royal Norwegian Air Force 953 was retired on 10 June 2007 and moved to the Air Force museum at Oslo Gardermoen in May 2008. Philippines C-130B 4512 Philippine Air Force on display at Mactan Air Base aircraft park. 
Saudi Arabia C-130H RSAF 460 was operated by 4 Squadron Royal Saudi Air Force from December 1974 until January 1987. It was damaged in a fire at Jeddah in December 1989. Restored for ground training by August 1993. At Royal Saudi Air Force Museum, November 2002, restored for ground display by using a tail from another C-130H. United Kingdom Hercules C3 XV202 that served with the Royal Air Force from 1967 to 2011, is on display at the Royal Air Force Museum Cosford. United States GC-130A, AF Ser. No. 55-037 used by the 773 TCS, 483 TCW, 315 AD, 374 TCW, 815 TAS, 35 TAS, 109 TAS, belly-landed at Duluth, Minnesota, April 1973, repaired; 167 TAS, 180 TAS, to Chanute Technical Training Center as GC-130A, May 1984; now displayed at Museum of Missouri Military History, Missouri National Guard Ike Skelton Training Center, Jefferson City, Missouri. Previously displayed at Octave Chanute Aerospace Museum, (former) Chanute AFB, Rantoul, Illinois until museum closed. C-130A, AF Ser. No. 56-0518 used by the 314 TCW, 315 AD, 41 ATS, 328 TAS; to Republic of Vietnam Air Force 435 Transport Squadron, November 1972; holds the C-130 record for taking off with the most personnel on board, during the evacuation of SVN, 29 April 1975, with 452. Returned to USAF, 185 TAS, 105 TAS; Flown to Little Rock AFB on 28 June 1989. It was converted to a static display at the LRAFB Visitor Center, Arkansas by Sept. 1989. C-130A, AF Ser. No. 57-0453 was operated from 1958 to 1991, last duty with 155th TAS, 164th TAG, Tennessee Air National Guard, Memphis International Airport/ANGB, Tennessee, 1976–1991, named "Nite Train to Memphis"; to AMARC in December 1991, then sent to Texas for modification into a replica of C-130A-II Dreamboat aircraft, AF Ser. No. 56-0528, shot down by Soviet fighters in Soviet airspace near Yerevan, Armenia on 2 September 1958, while on ELINT mission with loss of all crew, displayed in National Vigilance Park, National Security Agency grounds, Fort George Meade, Maryland. C-130B, AF Ser. No. 59-0528 was operated by 145th Airlift Wing, North Carolina Air National Guard; placed on static display at Charlotte Air National Guard Base, North Carolina in 2010. C-130D, AF Ser. No. 57-0490 used by the 61st TCS, 17th TCS, 139th TAS with skis, July 1975 – April 1983; to MASDC, 1984–1985, GC-130D ground trainer, Chanute AFB, Illinois, 1986–1990; When Chanute AFB closed in September 1993, it moved to the Octave Chanute Aerospace Museum (former Chanute AFB), Rantoul, Illinois. In July 1994, it moved to the Empire State Aerosciences Museum, Schenectady County Airport, New York, until placed on the gate at Stratton Air National Guard Base in October 1994. NC-130B, AF Ser. No. 57-0526 was the second B model manufactured, initially delivered as JC-130B; assigned to 6515th Organizational Maintenance Squadron for flight testing at Edwards AFB, California on 29 November 1960; turned over to 6593rd Test Squadron's Operating Location No. 1 at Edwards AFB and spent next seven years supporting Corona Program; "J" status and prefix removed from aircraft in October 1967; transferred to 6593rd Test Squadron at Hickam AFB, Hawaii and modified for mid-air retrieval of satellites; acquired by 6514th Test Squadron at Hill AFB, Utah in Jan. 1987 and used as electronic testbed and cargo transport; aircraft retired January 1994 with 11,000+ flight hours and moved to Hill Aerospace Museum at Hill AFB by January 1994. C-130E, AF Ser. No. 
62-1787, on display at the National Museum of the United States Air Force, Wright-Patterson AFB, Ohio, was flown to the museum on 18 August 2011. One of the greatest feats of heroism during the Vietnam War involved the C-130E, call sign "Spare 617". The C-130E attempted to airdrop ammunition to surrounded South Vietnamese forces at An Loc, Vietnam. Approaching the drop zone, Spare 617 received heavy enemy ground fire that damaged two engines, ruptured a bleed air duct in the cargo compartment, and set the ammunition on fire. Flight engineer TSgt Sanders was killed, and navigator 1st Lt Lenz and co-pilot 1st Lt Hering were both wounded. Despite receiving severe burns from hot air escaping from the damaged air bleed duct, loadmaster TSgt Shaub extinguished a fire in the cargo compartment and successfully jettisoned the cargo pallets, which exploded in mid-air. Despite losing a third engine on the final approach, pilot Capt Caldwell landed Spare 617 safely. For their actions, Caldwell and Shaub received the Air Force Cross, the U.S. Air Force's second highest award for valor. TSgt Shaub also received the William H. Pitsenbarger Award for Heroism from the Air Force Sergeants Association. KC-130F, USN/USMC BuNo 149798, used in tests in October–November 1963 by the U.S. Navy for unarrested landings and unassisted take-offs from the carrier USS Forrestal (CV-59); it remains the record holder for the largest aircraft to operate from a carrier flight deck, and carried the name "Look Ma, No Hook" during the tests. Retired to the National Museum of Naval Aviation, NAS Pensacola, Florida in May 2003. C-130G, USN/USMC BuNo 151891; modified to EC-130G, 1966, then testbed for EC-130Q TACAMO in 1981, then changed to TC-130G and used by Fleet Air Reconnaissance Squadron Three (VQ-3) for flight proficiency (bounce bird). In early 1991 it was transferred to AMARC at Davis-Monthan AFB, Tucson, AZ. In May 1991 it was assigned as the U.S. Navy's Blue Angels USMC support aircraft, serving as "Fat Albert Airlines" from 1991 to 2002. Retired to the National Museum of Naval Aviation at NAS Pensacola, Florida in November 2002, where it remains on outside static display reflecting the BLUES colors. C-130E, AF Ser. No. 64-0525 was on display at the 82nd Airborne Division War Memorial Museum at Fort Liberty, North Carolina. The aircraft was the last assigned to the 43rd AW at Pope AFB, North Carolina before retirement from the USAF. C-130E-LM, AF Ser. No. 64-0533 – Taken in December 1964 by the 314th Troop Carrier Wing, Sewart AFB, TN. Last assigned to the 37th Airlift Squadron, Rhein-Main AB, Germany. Transferred to Elmendorf AFB for display, May 2004. Marked as 53-2453. C-130E, AF Ser. No. 69-6579 operated by the 61st TAS, 314th TAW, 50th AS, 61st AS; at Dyess AFB as maintenance trainer as GC-130E, March 1998; to Dyess AFB Linear Air Park, January 2004. MC-130E Combat Talon I, AF Ser. No. 64-0567, unofficially known as "Wild Thing". It transported captured Panamanian dictator Manuel Noriega in 1989 during Operation Just Cause and participated in Operation Eagle Claw, the unsuccessful attempt to rescue U.S. hostages from Iran in 1980. Wild Thing was also the first fixed-wing aircraft to employ night-vision goggles. On display at Hurlburt Field in Florida. C-130E, AF Ser. No. 69-6580 operated by the 61st TAS, 314th TAW, 317th TAW, 314th TAW, 317th TAW, 40th AS, 41st AS, 43rd AW, retired after center wing cracks were detected in April 2002; to the Air Mobility Command Museum, Dover AFB, Delaware on 2 February 2004. C-130E, AF Ser. No.
70-1269 was used by the 43rd AW and is on display at the Pope Air Park, Pope AFB, North Carolina as of 2006. C-130H, AF Ser. No. 74-1686 used by the 463rd TAW; one of three C-130H airframes modified to YMC-130H for an aborted rescue attempt of Iranian hostages, Operation Credible Sport, with rocket packages blistered onto fuselage in 1980, but these were removed after the mission was canceled. Subsequent duty with the 4950th Test Wing, then donated to the Museum of Aviation at Robins AFB, Georgia, in March 1988. C-130H, AF Ser. No. 88-4401 operated by the Ohio 179th Airlift Wing has been retired and is on display at the MAPS Air Museum in Canton, Ohio Specifications (C-130H)
Cocaine
Cocaine is a tropane alkaloid that acts as a central nervous system stimulant. As an extract, it is mainly used recreationally and often illegally for its euphoric and rewarding effects. It is also used in medicine by Indigenous South Americans for various purposes and rarely, but more formally, as a local anaesthetic or diagnostic tool by medical practitioners in more developed countries. It is primarily obtained from the leaves of two Coca species native to South America: Erythroxylum coca and E. novogranatense. After extraction from the plant, and further processing into cocaine hydrochloride (powdered cocaine), the drug is administered by being either snorted, applied topically to the mouth, or dissolved and injected into a vein. It can also then be turned into free base form (typically crack cocaine), in which it can be heated until sublimated and then the vapours can be inhaled. Cocaine stimulates the mesolimbic pathway in the brain. Mental effects may include an intense feeling of happiness, sexual arousal, loss of contact with reality, or agitation. Physical effects may include a fast heart rate, sweating, and dilated pupils. High doses can result in high blood pressure or high body temperature. Onset of effects can begin within seconds to minutes of use, depending on method of delivery, and can last between five and ninety minutes. As cocaine also has numbing and blood vessel constriction properties, it is occasionally used during surgery on the throat or inside of the nose to control pain, bleeding, and vocal cord spasm. Cocaine crosses the blood–brain barrier via a proton-coupled organic cation antiporter and (to a lesser extent) via passive diffusion across cell membranes. Cocaine blocks the dopamine transporter, inhibiting reuptake of dopamine from the synaptic cleft into the pre-synaptic axon terminal; the higher dopamine levels in the synaptic cleft increase dopamine receptor activation in the post-synaptic neuron, causing euphoria and arousal. Cocaine also blocks the serotonin transporter and norepinephrine transporter, inhibiting reuptake of serotonin and norepinephrine from the synaptic cleft into the pre-synaptic axon terminal and increasing activation of serotonin receptors and norepinephrine receptors in the post-synaptic neuron, contributing to the mental and physical effects of cocaine exposure. A single dose of cocaine induces tolerance to the drug's effects. Repeated use is likely to result in addiction. Addicts who abstain from cocaine may experience prolonged craving lasting for many months. Abstaining addicts also experience modest drug withdrawal symptoms lasting up to 24 hours, with sleep disruption, anxiety, irritability, crashing, depression, decreased libido, decreased ability to feel pleasure, and fatigue being common. Use of cocaine increases the overall risk of death, and intravenous use potentially increases the risk of trauma and infectious diseases such as blood infections and HIV through the use of shared paraphernalia. It also increases the risk of stroke, heart attack, cardiac arrhythmia, lung injury (when smoked), and sudden cardiac death. Illicitly sold cocaine can be adulterated with fentanyl, local anesthetics, levamisole, cornstarch, quinine, or sugar, which can result in additional toxicity. In 2017, the Global Burden of Disease study found that cocaine use caused around 7,300 deaths annually. Uses Coca leaves have been used by Andean civilizations since ancient times.
In ancient Wari culture, Inca culture, and through modern successor indigenous cultures of the Andes mountains, coca leaves are chewed, taken orally in the form of a tea, or alternatively, prepared in a sachet wrapped around alkaline burnt ashes, and held in the mouth against the inner cheek; it has traditionally been used to combat the effects of cold, hunger, and altitude sickness. Cocaine was first isolated from the leaves in 1860. Globally, in 2019, cocaine was used by an estimated 20 million people (0.4% of adults aged 15 to 64 years). The highest prevalence of cocaine use was in Australia and New Zealand (2.1%), followed by North America (2.1%), Western and Central Europe (1.4%), and South and Central America (1.0%). Since 1961, the Single Convention on Narcotic Drugs has required countries to make recreational use of cocaine a crime. In the United States, cocaine is regulated as a Schedule II drug under the Controlled Substances Act, meaning that it has a high potential for abuse but has an accepted medical use. While rarely used medically today, its accepted uses are as a topical local anesthetic for the upper respiratory tract as well as to reduce bleeding in the mouth, throat and nasal cavities. Medical Cocaine eye drops are frequently used by neurologists when examining people suspected of having Horner syndrome. In Horner syndrome, sympathetic innervation to the eye is blocked. In a healthy eye, cocaine will stimulate the sympathetic nerves by inhibiting norepinephrine reuptake, and the pupil will dilate; if the patient has Horner syndrome, the sympathetic nerves are blocked, and the affected eye will remain constricted or dilate to a lesser extent than the opposing (unaffected) eye which also receives the eye drop test. If both eyes dilate equally, the patient does not have Horner syndrome. Topical cocaine is sometimes used as a local numbing agent and vasoconstrictor to help control pain and bleeding with surgery of the nose, mouth, throat or lacrimal duct. Although some absorption and systemic effects may occur, the use of cocaine as a topical anesthetic and vasoconstrictor is generally safe, rarely causing cardiovascular toxicity, glaucoma, and pupil dilation. Occasionally, cocaine is mixed with adrenaline and sodium bicarbonate and used topically for surgery, a formulation called Moffett's solution. Cocaine hydrochloride (Goprelto), an ester local anesthetic, was approved for medical use in the United States in December 2017, and is indicated for the introduction of local anesthesia of the mucous membranes for diagnostic procedures and surgeries on or through the nasal cavities of adults. Cocaine hydrochloride (Numbrino) was approved for medical use in the United States in January 2020. The most common adverse reactions in people treated with Goprelto are headache and epistaxis. The most common adverse reactions in people treated with Numbrino are hypertension, tachycardia, and sinus tachycardia. Recreational Cocaine is a central nervous system stimulant. Its effects can last from 15 minutes to an hour. The duration of cocaine's effects depends on the amount taken and the route of administration. Cocaine can be in the form of fine white powder and has a bitter taste. Crack cocaine is a smokeable form of cocaine made into small "rocks" by processing cocaine with sodium bicarbonate (baking soda) and water. Crack cocaine is referred to as "crack" because of the crackling sounds it makes when heated. 
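The cocaine eye-drop test for Horner syndrome described above amounts to a simple comparison of how the two pupils respond after the drops are applied: a healthy pupil dilates, while a pupil with interrupted sympathetic innervation dilates little or not at all. The short sketch below illustrates only that decision logic; the function name, the millimetre threshold, and the example measurements are illustrative assumptions, not clinical guidance.

```python
# Illustrative sketch of the comparison behind the cocaine eye-drop test for
# Horner syndrome: after the drops, a healthy pupil dilates, while an affected
# pupil dilates little or not at all. The threshold is a hypothetical example
# value for this sketch, not a clinical criterion.

HYPOTHETICAL_ANISOCORIA_THRESHOLD_MM = 1.0  # assumed cut-off, illustration only


def horner_suspected(post_drop_right_mm: float, post_drop_left_mm: float) -> bool:
    """Return True if the post-drop difference in pupil diameters suggests
    Horner syndrome on the less-dilated side (illustrative logic only)."""
    difference = abs(post_drop_right_mm - post_drop_left_mm)
    return difference >= HYPOTHETICAL_ANISOCORIA_THRESHOLD_MM


# Example: right pupil dilates normally, left pupil barely responds.
print(horner_suspected(post_drop_right_mm=6.5, post_drop_left_mm=4.0))  # True
# Example: both pupils dilate roughly equally, so Horner syndrome is not suggested.
print(horner_suspected(post_drop_right_mm=6.5, post_drop_left_mm=6.3))  # False
```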
Cocaine use leads to increases in alertness, feelings of well-being and euphoria, increased energy and motor activity, and increased feelings of competence and sexuality. Analysis of the correlation between the use of 18 various psychoactive substances shows that cocaine use correlates with other "party drugs" (such as ecstasy or amphetamines), as well as with heroin and benzodiazepines use, and can be considered as a bridge between the use of different groups of drugs. Coca leaves It is legal for people to use coca leaves in some Andean nations, such as Peru and Bolivia, where they are chewed, consumed in the form of tea, or are sometimes incorporated into food products. Coca leaves are typically mixed with an alkaline substance (such as lime) and chewed into a wad that is retained in the buccal pouch (mouth between gum and cheek, much the same as chewing tobacco is chewed) and sucked of its juices. The juices are absorbed slowly by the mucous membrane of the inner cheek and by the gastrointestinal tract when swallowed. Alternatively, coca leaves can be infused in liquid and consumed like tea. Coca tea, an infusion of coca leaves, is also a traditional method of consumption. The tea has often been recommended for travelers in the Andes to prevent altitude sickness. Its actual effectiveness has never been systematically studied. In 1986 an article in the Journal of the American Medical Association revealed that U.S. health food stores were selling dried coca leaves to be prepared as an infusion as "Health Inca Tea". While the packaging claimed it had been "decocainized", no such process had actually taken place. The article stated that drinking two cups of the tea per day gave a mild stimulation, increased heart rate, and mood elevation, and the tea was essentially harmless. Insufflation Nasal insufflation (known colloquially as "snorting", "sniffing", or "blowing") is a common method of ingestion of recreational powdered cocaine. The drug coats and is absorbed through the mucous membranes lining the nasal passages. Cocaine's desired euphoric effects are delayed when snorted through the nose by about five minutes. This occurs because cocaine's absorption is slowed by its constricting effect on the blood vessels of the nose. Insufflation of cocaine also leads to the longest duration of its effects (60–90 minutes). When insufflating cocaine, absorption through the nasal membranes is approximately 30–60% In a study of cocaine users, the average time taken to reach peak subjective effects was 14.6 minutes. Any damage to the inside of the nose is due to cocaine constricting blood vessels — and therefore restricting blood and oxygen/nutrient flow — to that area. Rolled up banknotes, hollowed-out pens, cut straws, pointed ends of keys, specialized spoons, long fingernails, and (clean) tampon applicators are often used to insufflate cocaine. The cocaine typically is poured onto a flat, hard surface (such as a mobile phone screen, mirror, CD case or book) and divided into "bumps", "lines" or "rails", and then insufflated. A 2001 study reported that the sharing of straws used to "snort" cocaine can spread blood diseases such as hepatitis C. Injection Subjective effects not commonly shared with other methods of administration include a ringing in the ears moments after injection (usually when over 120 milligrams) lasting 2 to 5 minutes including tinnitus and audio distortion. This is colloquially referred to as a "bell ringer". 
In a study of cocaine users, the average time taken to reach peak subjective effects was 3.1 minutes. The euphoria passes quickly. Aside from the toxic effects of cocaine, there is also the danger of circulatory emboli from the insoluble substances that may be used to cut the drug. As with all injected illicit substances, there is a risk of the user contracting blood-borne infections if sterile injecting equipment is not available or used. An injected mixture of cocaine and heroin, known as "speedball", is a particularly dangerous combination, as the converse effects of the drugs actually complement each other, but may also mask the symptoms of an overdose. It has been responsible for numerous deaths, including celebrities such as comedians/actors John Belushi and Chris Farley, Mitch Hedberg, River Phoenix, grunge singer Layne Staley and actor Philip Seymour Hoffman. Experimentally, cocaine injections can be delivered to animals such as fruit flies to study the mechanisms of cocaine addiction. Inhalation The onset of cocaine's euphoric effects is fastest with inhalation, beginning after 3–5 seconds. This gives the briefest euphoria (5–15 minutes). Cocaine is smoked by inhaling the vapor produced when crack cocaine is heated to the point of sublimation. In a 2000 Brookhaven National Laboratory medical department study, based on self-reports of 32 people who used cocaine and participated in the study, "peak high" was found at a mean of 1.4 ± 0.5 minutes. Pyrolysis products of cocaine that occur only when heated/smoked have been shown to change the effect profile; for example, anhydroecgonine methyl ester, when co-administered with cocaine, increases dopamine in the CPu and NAc brain regions and has M1- and M3-receptor affinity. People often smoke crack with a pipe made from a small glass tube, often taken from "love roses", small glass tubes with a paper rose that are promoted as romantic gifts. These are sometimes called "stems", "horns", "blasters" and "straight shooters". A small piece of clean heavy copper or occasionally stainless steel scouring pad, often called a "brillo" (actual Brillo Pads contain soap, and are not used) or "chore" (named for Chore Boy brand copper scouring pads), serves as a reduction base and flow modulator in which the "rock" can be melted and boiled to vapor. Crack is smoked by placing it at the end of the pipe; a flame held close to it produces vapor, which is then inhaled by the smoker. The effects, felt almost immediately after smoking, are very intense and do not last long, usually 2 to 10 minutes. When smoked, cocaine is sometimes combined with other drugs, such as cannabis, often rolled into a joint or blunt. Effects Acute Acute exposure to cocaine has many effects on humans, including euphoria, increases in heart rate and blood pressure, and increases in cortisol secretion from the adrenal gland. In humans with acute exposure followed by continuous exposure to cocaine at a constant blood concentration, the acute tolerance to the chronotropic cardiac effects of cocaine begins after about 10 minutes, while acute tolerance to the euphoric effects of cocaine begins after about one hour. With excessive or prolonged use, the drug can cause itching, fast heart rate, and paranoid delusions or sensations of insects crawling on the skin. Intranasal cocaine and crack use are both associated with pharmacological violence. Aggressive behavior may be displayed by both addicts and casual users.
Cocaine can induce psychosis characterized by paranoia, impaired reality testing, hallucinations, irritability, and physical aggression. Cocaine intoxication can cause hyperawareness, hypervigilance, and psychomotor agitation and delirium. Consumption of large doses of cocaine can cause violent outbursts, especially by those with preexisting psychosis. Crack-related violence is also systemic, relating to disputes between crack dealers and users. Acute exposure may induce cardiac arrhythmias, including atrial fibrillation, supraventricular tachycardia, ventricular tachycardia, and ventricular fibrillation. Acute exposure may also lead to angina, heart attack, and congestive heart failure. Cocaine overdose may cause seizures, abnormally high body temperature and a marked elevation of blood pressure, which can be life-threatening, abnormal heart rhythms, and death. Anxiety, paranoia, and restlessness can also occur, especially during the comedown. With excessive dosage, tremors, convulsions and increased body temperature are observed. Severe cardiac adverse events, particularly sudden cardiac death, become a serious risk at high doses due to cocaine's blocking effect on cardiac sodium channels. Incidental exposure of the eye to sublimated cocaine while smoking crack cocaine can cause serious injury to the cornea and long-term loss of visual acuity. Chronic Although it has been commonly asserted, the available evidence does not show that chronic use of cocaine is associated with broad cognitive deficits. Research is inconclusive on age-related loss of striatal dopamine transporter (DAT) sites, suggesting cocaine has neuroprotective or neurodegenerative properties for dopamine neurons. Exposure to cocaine may lead to the breakdown of the blood–brain barrier. Physical side effects from chronic smoking of cocaine include coughing up blood, bronchospasm, itching, fever, diffuse alveolar infiltrates without effusions, pulmonary and systemic eosinophilia, chest pain, lung trauma, sore throat, asthma, hoarse voice, dyspnea (shortness of breath), and an aching, flu-like syndrome. Cocaine constricts blood vessels, dilates pupils, and increases body temperature, heart rate, and blood pressure. It can also cause headaches and gastrointestinal complications such as abdominal pain and nausea. A common but untrue belief is that the smoking of cocaine chemically breaks down tooth enamel and causes tooth decay. Cocaine can cause involuntary tooth grinding, known as bruxism, which can deteriorate tooth enamel and lead to gingivitis. Additionally, stimulants like cocaine, methamphetamine, and even caffeine cause dehydration and dry mouth. Since saliva is an important mechanism in maintaining one's oral pH level, people who use cocaine over a long period of time who do not hydrate sufficiently may experience demineralization of their teeth due to the pH of the tooth surface dropping too low (below 5.5). Cocaine use also promotes the formation of blood clots. This increase in blood clot formation is attributed to cocaine-associated increases in the activity of plasminogen activator inhibitor, and an increase in the number, activation, and aggregation of platelets. Chronic intranasal usage can degrade the cartilage separating the nostrils (the septum nasi), leading eventually to its complete disappearance. Due to the absorption of the cocaine from cocaine hydrochloride, the remaining hydrochloride forms a dilute hydrochloric acid. Illicitly-sold cocaine may be contaminated with levamisole. 
Levamisole may accentuate cocaine's effects. Levamisole-adulterated cocaine has been associated with autoimmune disease. Cocaine use leads to an increased risk of hemorrhagic and ischemic strokes. Cocaine use also increases the risk of having a heart attack. Addiction Relatives of persons with cocaine addiction have an increased risk of cocaine addiction. Cocaine addiction occurs through ΔFosB overexpression in the nucleus accumbens, which results in altered transcriptional regulation in neurons within the nucleus accumbens. ΔFosB levels have been found to increase upon the use of cocaine. Each subsequent dose of cocaine continues to increase ΔFosB levels with no ceiling of tolerance. Elevated levels of ΔFosB lead to increases in brain-derived neurotrophic factor (BDNF) levels, which in turn increases the number of dendritic branches and spines present on neurons involved with the nucleus accumbens and prefrontal cortex areas of the brain. This change can be identified rather quickly, and may be sustained weeks after the last dose of the drug. Transgenic mice exhibiting inducible expression of ΔFosB primarily in the nucleus accumbens and dorsal striatum exhibit sensitized behavioural responses to cocaine. They self-administer cocaine at lower doses than control, but have a greater likelihood of relapse when the drug is withheld. ΔFosB increases the expression of AMPA receptor subunit GluR2 and also decreases expression of dynorphin, thereby enhancing sensitivity to reward. DNA damage is increased in the brain of rodents by administration of cocaine. During DNA repair of such damages, persistent chromatin alterations may occur such as methylation of DNA or the acetylation or methylation of histones at the sites of repair. These alterations can be epigenetic scars in the chromatin that contribute to the persistent epigenetic changes found in cocaine addiction. In humans, cocaine abuse may cause structural changes in brain connectivity, though it is unclear to what extent these changes are permanent. Dependence and withdrawal Cocaine dependence develops after even brief periods of regular cocaine use and produces a withdrawal state with emotional-motivational deficits upon cessation of cocaine use. During pregnancy Crack baby is a term for a child born to a mother who used crack cocaine during her pregnancy. The threat that cocaine use during pregnancy poses to the fetus is now considered exaggerated. Studies show that prenatal cocaine exposure (independent of other effects such as, for example, alcohol, tobacco, or physical environment) has no appreciable effect on childhood growth and development. In 2007, the National Institute on Drug Abuse of the United States warned about health risks while cautioning against stereotyping. There are also warnings about breastfeeding: the March of Dimes said "it is likely that cocaine will reach the baby through breast milk," and has issued advice regarding cocaine use during pregnancy. Mortality Persons with regular or problematic use of cocaine have a significantly higher rate of death, and are specifically at higher risk of traumatic deaths and deaths attributable to infectious disease. Pharmacology Pharmacokinetics The extent of absorption of cocaine into the systemic circulation after nasal insufflation is similar to that after oral ingestion. The rate of absorption after nasal insufflation is limited by cocaine-induced vasoconstriction of capillaries in the nasal mucosa.
Onset of absorption after oral ingestion is delayed because cocaine is a weak base with a pKa of 8.6, and is thus in an ionized form that is poorly absorbed from the acidic stomach and easily absorbed from the alkaline duodenum. The rate and extent of absorption from inhalation of cocaine is similar or greater than with intravenous injection, as inhalation provides access directly to the pulmonary capillary bed. The delay in absorption after oral ingestion may account for the popular belief that cocaine bioavailability from the stomach is lower than after insufflation. Compared with ingestion, the faster absorption of insufflated cocaine results in quicker attainment of maximum drug effects. Snorting cocaine produces maximum physiological effects within 40 minutes and maximum psychotropic effects within 20 minutes. Physiological and psychotropic effects from nasally insufflated cocaine are sustained for approximately 40–60 minutes after the peak effects are attained. Cocaine crosses the blood–brain barrier via both a proton-coupled organic cation antiporter and (to a lesser extent) via passive diffusion across cell membranes. As of September 2022, the gene or genes encoding the human proton-organic cation antiporter had not been identified. Cocaine has a short elimination half life of 0.7–1.5 hours and is extensively metabolized by plasma esterases and also by liver cholinesterases, with only about 1% excreted unchanged in the urine. The metabolism is dominated by hydrolytic ester cleavage, so the eliminated metabolites consist mostly of benzoylecgonine (BE), the major metabolite, and other metabolites in lesser amounts such as ecgonine methyl ester (EME) and ecgonine. Further minor metabolites of cocaine include norcocaine, p-hydroxycocaine, m-hydroxycocaine, p-hydroxybenzoylecgonine (), and m-hydroxybenzoylecgonine. If consumed with alcohol, cocaine combines with alcohol in the liver to form cocaethylene. Studies have suggested cocaethylene is more euphoric, and has a higher cardiovascular toxicity than cocaine by itself. Depending on liver and kidney functions, cocaine metabolites are detectable in urine between three and eight days. Generally speaking benzoylecgonine is eliminated from someone's urine between three and five days. In urine from heavy cocaine users, benzoylecgonine can be detected within four hours after intake and in concentrations greater than 150 ng/mL for up to eight days later. Detection of cocaine metabolites in hair is possible in regular users until after the sections of hair grown during the period of cocaine use are cut or fall out. Pharmacodynamics The pharmacodynamics of cocaine involve the complex relationships of neurotransmitters (inhibiting monoamine uptake in rats with ratios of about: serotonin:dopamine = 2:3, serotonin:norepinephrine = 2:5). The most extensively studied effect of cocaine on the central nervous system is the blockade of the dopamine transporter protein. Dopamine neurotransmitter released during neural signaling is normally recycled via the transporter; i.e., the transporter binds the transmitter and pumps it out of the synaptic cleft back into the presynaptic neuron, where it is taken up into storage vesicles. Cocaine binds tightly at the dopamine transporter forming a complex that blocks the transporter's function. The dopamine transporter can no longer perform its reuptake function, and thus dopamine accumulates in the synaptic cleft. 
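The absorption and elimination behavior described in the pharmacokinetics passage above can be made concrete with a short numerical sketch. The Henderson–Hasselbalch relation gives the fraction of a weak base in the ionized form at a given pH, and first-order kinetics give the fraction of an absorbed dose remaining over time. The pH values and the 1-hour half-life below are illustrative assumptions, not figures from this article; only the pKa of 8.6 and the 0.7–1.5 hour half-life range come from the text.

```python
def ionized_fraction_weak_base(pka: float, ph: float) -> float:
    """Henderson-Hasselbalch for a weak base: fraction in the protonated (ionized) form."""
    ratio = 10 ** (pka - ph)          # [BH+]/[B]
    return ratio / (1.0 + ratio)

def fraction_remaining(hours: float, half_life_hours: float) -> float:
    """Fraction of an absorbed dose remaining after first-order elimination."""
    return 0.5 ** (hours / half_life_hours)

PKA = 8.6  # pKa of cocaine, from the text
for label, ph in [("stomach, pH ~2 (assumed)", 2.0), ("duodenum, pH ~8 (assumed)", 8.0)]:
    print(f"{label}: {ionized_fraction_weak_base(PKA, ph):.1%} ionized")

# Elimination half-life of 0.7-1.5 h per the text; 1.0 h is assumed here for illustration.
print(f"remaining after 4 h: {fraction_remaining(4.0, 1.0):.1%}")
```

Run as written, this prints essentially 100% ionized at the assumed gastric pH but only about 80% at the assumed duodenal pH, which is the qualitative point made above: the less ionized the drug, the more readily it crosses membranes.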
The increased concentration of dopamine in the synapse activates post-synaptic dopamine receptors, which makes the drug rewarding and promotes the compulsive use of cocaine. Cocaine affects certain serotonin (5-HT) receptors; in particular, it has been shown to antagonize the 5-HT3 receptor, which is a ligand-gated ion channel. An overabundance of 5-HT3 receptors is reported in cocaine-conditioned rats, though 5-HT3's role is unclear. The 5-HT2 receptors (particularly the subtypes 5-HT2A, 5-HT2B and 5-HT2C) are involved in the locomotor-activating effects of cocaine. Cocaine has been demonstrated to bind so as to directly stabilize the DAT transporter in the open outward-facing conformation. Further, cocaine binds in such a way as to inhibit a hydrogen bond innate to DAT. Cocaine binds in such a way that this hydrogen bond cannot form, its formation being blocked by the tightly locked orientation of the cocaine molecule. Research studies have suggested that the affinity for the transporter is not what is involved in the habituation of the substance so much as the conformation and binding properties of where and how on the transporter the molecule binds. Conflicting findings have challenged the widely accepted view that cocaine functions solely as a reuptake inhibitor. To induce euphoria, an intravenous dose of 0.3–0.6 mg/kg of cocaine is required, which blocks 66–70% of dopamine transporters (DAT) in the brain. Re-administering cocaine beyond this threshold does not significantly increase DAT occupancy but still results in an increase in euphoria, which cannot be explained by reuptake inhibition alone. This discrepancy is not shared with other dopamine reuptake inhibitors like bupropion, sibutramine, mazindol or tesofensine, which have similar or higher potencies than cocaine as dopamine reuptake inhibitors. These findings have evoked a hypothesis that cocaine may also function as a so-called "DAT inverse agonist" or "negative allosteric modifier of DAT" resulting in dopamine transporter reversal, and subsequent dopamine release into the synaptic cleft from the axon terminal in a manner similar to but distinct from amphetamines. Sigma receptors are affected by cocaine, as cocaine functions as a sigma ligand agonist. Further specific receptors it has been demonstrated to function on are NMDA and the D1 dopamine receptor. Cocaine also blocks sodium channels, thereby interfering with the propagation of action potentials; thus, like lignocaine and novocaine, it acts as a local anesthetic. It also binds, through mechanisms separate from reuptake inhibition, to sites on the sodium-dependent dopamine and serotonin transporters; this local anesthetic activity places it in a different functional class from its derived phenyltropane analogues, which lack that property. In addition to this, cocaine has some target binding to the site of the κ-opioid receptor. Cocaine also causes vasoconstriction, thus reducing bleeding during minor surgical procedures. Recent research points to an important role of circadian mechanisms and clock genes in behavioral actions of cocaine. Cocaine is known to suppress hunger and appetite by increasing co-localization of sigma σ1R receptors and ghrelin GHS-R1a receptors at the neuronal cell surface, thereby increasing ghrelin-mediated signaling of satiety and possibly via other effects on appetitive hormones.
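The saturation of DAT occupancy described above can be illustrated with a generic one-site binding model. The D50 value in this sketch is an assumed constant chosen so that 0.3–0.6 mg/kg lands near the reported 66–70% occupancy; this is a qualitative illustration only, not a published pharmacokinetic–pharmacodynamic model of cocaine.

```python
def dat_occupancy(dose_mg_per_kg: float, d50_mg_per_kg: float = 0.2) -> float:
    """One-site saturable binding: occupancy = dose / (dose + D50).

    D50 (the dose giving 50% occupancy) is an assumed, illustrative constant
    chosen so that 0.3-0.6 mg/kg falls near the 66-70% occupancy mentioned in
    the text; it is not a measured pharmacological parameter.
    """
    return dose_mg_per_kg / (dose_mg_per_kg + d50_mg_per_kg)

for dose in (0.3, 0.6, 1.2, 2.4):
    print(f"{dose:.1f} mg/kg -> {dat_occupancy(dose):.0%} DAT occupancy")

# Doubling the dose repeatedly adds less and less occupancy (60% -> 75% -> 86% -> 92%),
# which is why further increases in euphoria past this point are hard to explain by
# reuptake blockade alone, as the text notes.
```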
Chronic users may lose their appetite and can experience severe malnutrition and significant weight loss. Cocaine effects, further, are shown to be potentiated for the user when used in conjunction with new surroundings and stimuli, and otherwise novel environs. Chemistry Appearance Cocaine in its purest form is a white, pearly product. Cocaine appearing in powder form is a salt, typically cocaine hydrochloride. Street cocaine is often adulterated or "cut" with cheaper substances to increase bulk, including talc, lactose, sucrose, glucose, mannitol, inositol, caffeine, procaine, phencyclidine, phenytoin, lignocaine, strychnine, levamisole, and amphetamine. Fentanyl has been increasingly found in cocaine samples, although it is unclear if this is primarily due to intentional adulteration or cross contamination. Crack cocaine looks like irregularly shaped white rocks. Forms Salts Cocaine, a tropane alkaloid, is a weakly alkaline compound, and can therefore combine with acidic compounds to form salts. The hydrochloride (HCl) salt of cocaine is by far the most commonly encountered, although the sulfate (SO₄²⁻) and the nitrate (NO₃⁻) salts are occasionally seen. Different salts dissolve to a greater or lesser extent in various solvents; the hydrochloride salt is polar in character and is quite soluble in water. Base As the name implies, "freebase" is the base form of cocaine, as opposed to the salt form. It is practically insoluble in water whereas the hydrochloride salt is water-soluble. Smoking freebase cocaine has the additional effect of releasing methylecgonidine into the user's system due to the pyrolysis of the substance (a side effect which insufflating or injecting powder cocaine does not create). Some research suggests that smoking freebase cocaine can be even more cardiotoxic than other routes of administration because of methylecgonidine's effects on lung tissue and liver tissue. Pure cocaine is prepared by neutralizing its compounding salt with an alkaline solution, which will precipitate non-polar basic cocaine. It is further refined through aqueous-solvent liquid–liquid extraction. Crack cocaine Crack is usually smoked in a glass pipe, and once inhaled, it passes from the lungs directly to the central nervous system, producing an almost immediate "high" that can be very powerful – this initial crescendo of stimulation is known as a "rush". This is followed by an equally intense low, leaving the user craving more of the drug. Addiction to crack usually occurs within four to six weeks, much more rapidly than with regular cocaine. Powder cocaine (cocaine hydrochloride) must be heated to a high temperature (about 197 °C), and considerable decomposition/burning occurs at these high temperatures. This effectively destroys some of the cocaine and yields a sharp, acrid, and foul-tasting smoke. Cocaine base/crack can be smoked because it vaporizes with little or no decomposition at temperatures below the boiling point of water. Crack is a lower purity form of free-base cocaine that is usually produced by neutralization of cocaine hydrochloride with a solution of baking soda (sodium bicarbonate, NaHCO₃) and water, producing a very hard/brittle, off-white-to-brown colored, amorphous material that contains sodium carbonate, entrapped water, and other by-products as the main impurities. The name "crack" derives from the "crackling" sound that is produced when the cocaine and its impurities (i.e.
water, sodium bicarbonate) are heated past the point of vaporization. Coca leaf infusions Coca herbal infusion (also referred to as coca tea) is used in coca-leaf producing countries much as any herbal medicinal infusion would elsewhere in the world. The free and legal commercialization of dried coca leaves in the form of filtration bags to be used as "coca tea" has been actively promoted by the governments of Peru and Bolivia for many years as a drink having medicinal powers. In Peru, the National Coca Company, a state-run corporation, sells cocaine-infused teas and other medicinal products and also exports leaves to the U.S. for medicinal use. Visitors to the city of Cuzco in Peru and to La Paz in Bolivia are greeted with the offering of coca leaf infusions (prepared in teapots with whole coca leaves) purportedly to help the newly arrived traveler overcome the malaise of high altitude sickness. The effects of drinking coca tea are mild stimulation and mood lift. It has also been promoted as an adjuvant for the treatment of cocaine dependence. One study on coca leaf infusion used with counseling in the treatment of 23 addicted coca-paste smokers in Lima, Peru, found that the relapse rate fell from 4.35 times per month on average before coca tea treatment to one during treatment. The duration of abstinence increased from an average of 32 days before treatment to 217.2 days during treatment. This suggests that coca leaf infusion plus counseling may be effective at preventing relapse during cocaine addiction treatment. There is little information on the pharmacological and toxicological effects of consuming coca tea. A chemical analysis by solid-phase extraction and gas chromatography–mass spectrometry (SPE-GC/MS) of Peruvian and Bolivian tea bags indicated the presence of significant amounts of cocaine, the metabolite benzoylecgonine, ecgonine methyl ester and trans-cinnamoylcocaine in coca tea bags and coca tea. Urine specimens were also analyzed from an individual who consumed one cup of coca tea and it was determined that enough cocaine and cocaine-related metabolites were present to produce a positive drug test. Synthesis Biosynthesis The first synthesis and elucidation of the cocaine molecule was by Richard Willstätter in 1898. Willstätter's synthesis derived cocaine from tropinone. Since then, Robert Robinson and Edward Leete have made significant contributions to the mechanism of the synthesis. The additional carbon atoms required for the synthesis of cocaine are derived from acetyl-CoA, by addition of two acetyl-CoA units to the N-methyl-Δ1-pyrrolinium cation. The first addition is a Mannich-like reaction with the enolate anion from acetyl-CoA acting as a nucleophile toward the pyrrolinium cation. The second addition occurs through a Claisen condensation. This produces a racemic mixture of the 2-substituted pyrrolidine, with the retention of the thioester from the Claisen condensation. In the formation of tropinone from racemic ethyl [2,3-13C2]-4-(N-methyl-2-pyrrolidinyl)-3-oxobutanoate there is no preference for either stereoisomer. In cocaine biosynthesis, only the (S)-enantiomer can cyclize to form the tropane ring system of cocaine. The stereoselectivity of this reaction was further investigated through study of prochiral methylene hydrogen discrimination. This is due to the extra chiral center at C-2. This process occurs through an oxidation, which regenerates the pyrrolinium cation and formation of an enolate anion, and an intramolecular Mannich reaction.
The tropane ring system undergoes hydrolysis, SAM-dependent methylation, and reduction via NADPH for the formation of methylecgonine. The benzoyl moiety required for the formation of the cocaine diester is synthesized from phenylalanine via cinnamic acid. Benzoyl-CoA then combines the two units to form cocaine. N-methyl-pyrrolinium cation The biosynthesis begins with L-Glutamine, which is derived to L-ornithine in plants. The major contribution of L-ornithine and L-arginine as a precursor to the tropane ring was confirmed by Edward Leete. Ornithine then undergoes a pyridoxal phosphate-dependent decarboxylation to form putrescine. In some animals, the urea cycle derives putrescine from ornithine. L-ornithine is converted to L-arginine, which is then decarboxylated via PLP to form agmatine. Hydrolysis of the imine derives N-carbamoylputrescine followed with hydrolysis of the urea to form putrescine. The separate pathways of converting ornithine to putrescine in plants and animals have converged. A SAM-dependent N-methylation of putrescine gives the N-methylputrescine product, which then undergoes oxidative deamination by the action of diamine oxidase to yield the aminoaldehyde. Schiff base formation confirms the biosynthesis of the N-methyl-Δ1-pyrrolinium cation. Robert Robinson's acetonedicarboxylate The biosynthesis of the tropane alkaloid is still not understood. Hemscheidt proposes that Robinson's acetonedicarboxylate emerges as a potential intermediate for this reaction. Condensation of N-methylpyrrolinium and acetonedicarboxylate would generate the oxobutyrate. Decarboxylation leads to tropane alkaloid formation. Reduction of tropinone The reduction of tropinone is mediated by NADPH-dependent reductase enzymes, which have been characterized in multiple plant species. These plant species all contain two types of the reductase enzymes, tropinone reductase I and tropinone reductase II. TRI produces tropine and TRII produces pseudotropine. Due to differing kinetic and pH/activity characteristics of the enzymes and by the 25-fold higher activity of TRI over TRII, the majority of the tropinone reduction is from TRI to form tropine. 
Illegal clandestine chemistry In 1991, the United States Department of Justice released a report detailing the typical process in which leaves from coca plants were ultimately converted into cocaine hydrochloride by Latin American drug cartels:
- the exact species of coca to be planted was determined by the location of its cultivation, with Erythroxylum coca being grown in tropical high altitude climates of the eastern Andes in Peru and Bolivia, while Erythroxylum novogranatense was favoured in drier lowland areas of Colombia
- the average cocaine alkaloid content of a sample of coca leaf varied between 0.1 and 0.8 percent, with coca from higher altitudes containing the largest percentages of cocaine alkaloids
- the typical farmer will plant coca on a sloping hill so rainfall will not drown the plants as they reach full maturity over 12 to 24 months after being planted
- the main harvest of coca leaves takes place after the traditional wet season in March, with additional harvesting also taking place in July and November
- the leaves are then taken to a flat area and spread out on tarpaulins to dry in the hot sun for approximately 6 hours, and afterwards placed in sacks to be transported to market or to a cocaine processing facility depending on location
- in the early 1990s, Peru and Bolivia were the main locations for converting coca leaf to coca paste and cocaine base, while Colombia was the primary location for the final conversion of these products into cocaine hydrochloride
- the conversion of coca leaf into coca paste was typically done very close to the coca fields to minimize the need to transport the coca leaves, with a plastic-lined pit in the ground used as a "pozo"
- the leaves are added to the pozo along with fresh water from a nearby river, along with kerosene and sodium carbonate, then a team of several people will repeatedly stomp on the mixture in their bare feet for several hours to help turn the leaves into paste
- the cocaine alkaloids and kerosene eventually separate from the water and coca leaves, which are then drained off / scooped out of the mixture
- the cocaine alkaloids are then extracted from the kerosene and added into a dilute acidic solution, to which more sodium carbonate is added to cause a precipitate to form
- the acid and water are afterwards drained off and the precipitate is filtered and dried to produce an off-white putty-like substance, which is coca paste ready for transportation to a cocaine base processing facility
- at the processing facility, coca paste is dissolved in a mixture of sulfuric acid and water, to which potassium permanganate is then added and the solution is left to stand for 6 hours to allow unwanted alkaloids to break down
- the solution is then filtered and the precipitate is discarded, after which ammonia water is added and another precipitate is formed when the solution has finished reacting
- the liquid is drained, then the remaining precipitate is dried under heating lamps, and the resulting powder is cocaine base ready for transfer to a cocaine hydrochloride laboratory
- at the laboratory, acetone is added to the cocaine base and after it has dissolved the solution is filtered to remove undesired material
- hydrochloric acid diluted in ether is added to the solution, which causes the cocaine to precipitate out of the solution as cocaine hydrochloride crystals
- the cocaine hydrochloride crystals are finally dried under lamps or in microwave ovens, then pressed into blocks and wrapped in plastic ready for export
GMO synthesis Research In 2022, genetically modified
N. benthamiana plants were discovered that were able to produce 25% of the amount of cocaine found in a coca plant. Detection in body fluids Cocaine and its major metabolites may be quantified in blood, plasma, or urine to monitor for use, confirm a diagnosis of poisoning, or assist in the forensic investigation of a traffic or other criminal violation or sudden death. Most commercial cocaine immunoassay screening tests cross-react appreciably with the major cocaine metabolites, but chromatographic techniques can easily distinguish and separately measure each of these substances. When interpreting the results of a test, it is important to consider the cocaine usage history of the individual, since a chronic user can develop tolerance to doses that would incapacitate a cocaine-naive individual, and the chronic user often has high baseline values of the metabolites in his system. Cautious interpretation of testing results may allow a distinction between passive or active usage, and between smoking versus other routes of administration. Field analysis Cocaine may be detected by law enforcement using the Scott reagent. The test can easily generate false positives for common substances and must be confirmed with a laboratory test. Approximate cocaine purity can be determined using 1 mL 2% cupric sulfate pentahydrate in dilute HCl, 1 mL 2% potassium thiocyanate and 2 mL of chloroform. The shade of brown shown by the chloroform is proportional to the cocaine content. This test is not cross-sensitive to heroin, methamphetamine, benzocaine, procaine and a number of other drugs but other chemicals could cause false positives. Usage According to a 2016 United Nations report, England and Wales are the countries with the highest rate of cocaine usage (2.4% of adults in the previous year). Other countries where the usage rate meets or exceeds 1.5% are Spain and Scotland (2.2%), the United States (2.1%), Australia (2.1%), Uruguay (1.8%), Brazil (1.75%), Chile (1.73%), the Netherlands (1.5%) and Ireland (1.5%). Europe Cocaine is the second most popular illegal recreational drug in Europe (behind cannabis). Since the mid-1990s, overall cocaine usage in Europe has been on the rise, but usage rates and attitudes tend to vary between countries. European countries with the highest usage rates are the United Kingdom, Spain, Italy, and the Republic of Ireland. Approximately 17 million Europeans (5.1%) have used cocaine at least once and 3.5 million (1.1%) in the last year. About 1.9% (2.3 million) of young adults (15–34 years old) have used cocaine in the last year (latest data available as of 2018). Usage is particularly prevalent among this demographic: 4% to 7% of males have used cocaine in the last year in Spain, Denmark, the Republic of Ireland, Italy, and the United Kingdom. The ratio of male to female users is approximately 3.8:1, but this statistic varies from 1:1 to 13:1 depending on country. In 2014 London had the highest amount of cocaine in its sewage out of 50 European cities. United States Cocaine is the second most popular illegal recreational drug in the United States (behind cannabis) and the U.S. is the world's largest consumer of cocaine. Its users span different ages, races, and professions. In the 1970s and 1980s, the drug became particularly popular in the disco culture as cocaine usage was very common and popular in many discos such as Studio 54.
Dependence treatment History Discovery Indigenous peoples of South America have chewed the leaves of Erythroxylon coca (a plant that contains vital nutrients as well as numerous alkaloids, including cocaine) for over a thousand years. The coca leaf was, and still is, chewed almost universally by some indigenous communities. The remains of coca leaves have been found with ancient Peruvian mummies, and pottery from the time period depicts humans with bulged cheeks, indicating the presence of something on which they were chewing. There is also evidence that these cultures used a mixture of coca leaves and saliva as an anesthetic for the performance of trepanation. When the Spanish arrived in South America, the conquistadors at first banned coca as an "evil agent of devil". But after discovering that without the coca the locals were barely able to work, the conquistadors legalized and taxed the leaf, taking 10% off the value of each crop. In 1569, Spanish botanist Nicolás Monardes described the indigenous peoples' practice of chewing a mixture of tobacco and coca leaves to induce "great contentment". In 1609, Padre Blas Valera also wrote about the practice. Isolation and naming Although the stimulant and hunger-suppressant properties of coca leaves had been known for many centuries, the isolation of the cocaine alkaloid was not achieved until 1855. Various European scientists had attempted to isolate cocaine, but none had been successful for two reasons: the knowledge of chemistry required was insufficient, and conditions of sea-shipping from South America at the time would often degrade the quality of the cocaine in the plant samples available to European chemists by the time they arrived. However, in 1855, the German chemist Friedrich Gaedcke successfully isolated the cocaine alkaloid for the first time. Gaedcke named the alkaloid "erythroxyline", and published a description in the journal Archiv der Pharmazie. In 1856, Friedrich Wöhler asked Dr. Carl Scherzer, a scientist aboard the Novara (an Austrian frigate sent by Emperor Franz Joseph to circle the globe), to bring him a large amount of coca leaves from South America. In 1859, the ship finished its travels and Wöhler received a trunk full of coca. Wöhler passed on the leaves to Albert Niemann, a PhD student at the University of Göttingen in Germany, who then developed an improved purification process. Niemann described every step he took to isolate cocaine in his dissertation titled Über eine neue organische Base in den Cocablättern (On a New Organic Base in the Coca Leaves), which was published in 1860 and earned him his Ph.D. He wrote of the alkaloid's "colourless transparent prisms" and said that "Its solutions have an alkaline reaction, a bitter taste, promote the flow of saliva and leave a peculiar numbness, followed by a sense of cold when applied to the tongue." Niemann named the alkaloid "cocaine" from "coca" (from Quechua "kúka") plus the suffix "-ine". The first synthesis and elucidation of the structure of the cocaine molecule was by Richard Willstätter in 1898. It was the first biomimetic synthesis of an organic structure recorded in academic chemical literature. The synthesis started from tropinone, a related natural product, and took five steps. Because of the former use of cocaine as a local anesthetic, a suffix "-caine" was later extracted and used to form names of synthetic local anesthetics. Medicalization With the discovery of this new alkaloid, Western medicine was quick to exploit the possible uses of this plant.
In 1879, Vassili von Anrep, of the University of Würzburg, devised an experiment to demonstrate the analgesic properties of the newly discovered alkaloid. He prepared two separate jars, one containing a cocaine-salt solution and the other containing merely saltwater. He then submerged a frog's legs into the two jars, one leg in the treatment and one in the control solution, and proceeded to stimulate the legs in several different ways. The leg that had been immersed in the cocaine solution reacted very differently from the leg that had been immersed in saltwater. Karl Koller (a close associate of Sigmund Freud, who would write about cocaine later) experimented with cocaine for ophthalmic usage. In an infamous experiment in 1884, he experimented upon himself by applying a cocaine solution to his own eye and then pricking it with pins. His findings were presented to the Heidelberg Ophthalmological Society. Also in 1884, Jellinek demonstrated the effects of cocaine as a respiratory system anesthetic. In 1885, William Halsted demonstrated nerve-block anesthesia, and James Leonard Corning demonstrated peridural anesthesia. 1898 saw Heinrich Quincke use cocaine for spinal anesthesia. Popularization In 1859, an Italian doctor, Paolo Mantegazza, returned from Peru, where he had witnessed first-hand the use of coca by the local indigenous peoples. He proceeded to experiment on himself, and upon his return to Milan he wrote a paper in which he described the effects. In this paper, he declared coca and cocaine (at the time they were assumed to be the same) as being useful medicinally, in the treatment of "a furred tongue in the morning, flatulence, and whitening of the teeth." A chemist named Angelo Mariani, who read Mantegazza's paper, became immediately intrigued with coca and its economic potential. In 1863, Mariani started marketing a wine called Vin Mariani, which had been treated with coca leaves, to become coca wine. The ethanol in wine acted as a solvent and extracted the cocaine from the coca leaves, altering the drink's effect. It contained 6 mg cocaine per ounce of wine, but the Vin Mariani that was to be exported contained 7.2 mg per ounce, to compete with the higher cocaine content of similar drinks in the United States. A "pinch of coca leaves" was included in John Styth Pemberton's original 1886 recipe for Coca-Cola, though the company began using decocainized leaves in 1906 when the Pure Food and Drug Act was passed. In 1879 cocaine began to be used to treat morphine addiction. Cocaine was introduced into clinical use as a local anesthetic in Germany in 1884, about the same time as Sigmund Freud published his work Über Coca, in which he wrote about the effects of cocaine. By 1885 the U.S. manufacturer Parke-Davis sold coca-leaf cigarettes and cheroots, a cocaine inhalant, a Coca Cordial, cocaine crystals, and cocaine solution for intravenous injection. The company promised that its cocaine products would "supply the place of food, make the coward brave, the silent eloquent and render the sufferer insensitive to pain." By the late Victorian era, cocaine use had appeared as a vice in literature. For example, it was injected by Arthur Conan Doyle's fictional Sherlock Holmes, generally to offset the boredom he felt when he was not working on a case. In early 20th-century Memphis, Tennessee, cocaine was sold in neighborhood drugstores on Beale Street, costing five or ten cents for a small boxful.
Stevedores along the Mississippi River used the drug as a stimulant, and white employers encouraged its use by black laborers. In 1909, Ernest Shackleton took "Forced March" brand cocaine tablets to Antarctica, as did Captain Scott a year later on his ill-fated journey to the South Pole. In the 1931 song "Minnie the Moocher", Cab Calloway heavily references cocaine use. He uses the phrase "kicking the gong around", slang for cocaine use; describes titular character Minnie as "tall and skinny;" and describes Smokey Joe as "cokey". In the 1932 comedy musical film The Big Broadcast, Cab Calloway performs the song with his orchestra and mimes snorting cocaine in between verses. During the mid-1940s, amidst World War II, cocaine was considered for inclusion as an ingredient of a future generation of 'pep pills' for the German military, code named D-IX. In modern popular culture, references to cocaine are common. The drug has a glamorous image associated with the wealthy, famous and powerful, and is said to make users "feel rich and beautiful". In addition, the pace of modern society, such as in finance, gives many the incentive to make use of the drug. Modern usage In many countries, cocaine is a popular recreational drug. Cocaine use is prevalent across all socioeconomic strata and across age, demographic, economic, social, political, religious, and livelihood groups. In the United States, the development of "crack" cocaine introduced the substance to a generally poorer inner-city market. The use of the powder form has stayed relatively constant, experiencing a new height of use across the 1980s and 1990s in the U.S. However, from 2006 to 2010 cocaine use in the US declined by roughly half, before rising again from 2017 onwards. In the UK, cocaine use increased significantly between the 1990s and late 2000s, with a similar high consumption in some other European countries, including Spain. The estimated U.S. cocaine market exceeded US$70 billion in street value for the year 2005, exceeding the revenue of corporations such as Starbucks. Cocaine's status as a club drug shows its immense popularity among the "party crowd". In 1995 the World Health Organization (WHO) and the United Nations Interregional Crime and Justice Research Institute (UNICRI) announced in a press release the publication of the results of the largest global study on cocaine use ever undertaken. An American representative in the World Health Assembly banned the publication of the study, because it seemed to make a case for the positive uses of cocaine. An excerpt of the report strongly conflicted with accepted paradigms, for example, "that occasional cocaine use does not typically lead to severe or even minor physical or social problems." In the sixth meeting of the B committee, the US representative threatened that "If World Health Organization activities relating to drugs failed to reinforce proven drug control approaches, funds for the relevant programs should be curtailed". This led to the decision to discontinue publication. A part of the study was recovered and published in 2010, including profiles of cocaine use in 20 countries, but it is unavailable. In October 2010 it was reported that the use of cocaine in Australia had doubled since monitoring began in 2003. A problem with illegal cocaine use, especially in the higher volumes used to combat fatigue (rather than increase euphoria) by long-term users, is the risk of ill effects or damage caused by the compounds used in adulteration.
Cutting or "stepping on" the drug is commonplace, using compounds which simulate ingestion effects, such as Novocain (procaine), which produces temporary numbness (many users believe a strong numbing effect is the sign of strong and/or pure cocaine), or ephedrine and similar stimulants, which produce an increased heart rate. The normal adulterants for profit are inactive sugars, usually mannitol, creatine, or glucose; introducing active adulterants gives the illusion of purity and 'stretches' the product so a dealer can sell more than would otherwise be possible, although the purity of the cocaine is subsequently lowered. Adulteration with sugars allows the dealer to sell the product for a higher price because of the illusion of purity, and to sell more of the product at that higher price, enabling dealers to significantly increase revenue with little additional cost for the adulterants. A 2007 study by the European Monitoring Centre for Drugs and Drug Addiction showed that purity levels for street-purchased cocaine were often under 5% and on average under 50%. Society and culture Legal status The production, distribution, and sale of cocaine products are restricted (and illegal in most contexts) in most countries as regulated by the Single Convention on Narcotic Drugs, and the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. In the United States the manufacture, importation, possession, and distribution of cocaine are additionally regulated by the 1970 Controlled Substances Act. Some countries, such as Peru and Bolivia, permit the cultivation of coca leaf for traditional consumption by the local indigenous population, but nevertheless, prohibit the production, sale, and consumption of cocaine. The provisions as to how much a coca farmer can yield annually are protected by laws such as the Bolivian Cato accord. In addition, some parts of Europe, the United States, and Australia allow processed cocaine for medicinal uses only. Australia Cocaine is a Schedule 8 controlled drug in Australia under the Poisons Standard. It is the second most popular illicit recreational drug in Australia behind cannabis. In Western Australia, under the Misuse of Drugs Act 1981, 4.0 g of cocaine is the amount of prohibited drug determining a court of trial, 2.0 g is the amount required for the presumption of intention to sell or supply, and 28.0 g is the amount required for purposes of drug trafficking. United States The US federal government instituted a national labeling requirement for cocaine and cocaine-containing products through the Pure Food and Drug Act of 1906. The next important federal regulation was the Harrison Narcotics Tax Act of 1914. While this act is often seen as the start of prohibition, the act itself was not actually a prohibition on cocaine, but instead set up a regulatory and licensing regime. The Harrison Act did not recognize addiction as a treatable condition, and therefore the therapeutic provision of cocaine, heroin, or morphine to such individuals was outlawed, leading a 1915 editorial in the journal American Medicine to remark that the addict "is denied the medical care he urgently needs, open, above-board sources from which he formerly obtained his drug supply are closed to him, and he is driven to the underworld where he can get his drug, but of course, surreptitiously and in violation of the law."
The Harrison Act left manufacturers of cocaine untouched so long as they met certain purity and labeling standards. Despite that cocaine was typically illegal to sell and legal outlets were rarer, the quantities of legal cocaine produced declined very little. Legal cocaine quantities did not decrease until the Jones–Miller Act of 1922 put serious restrictions on cocaine manufactures. Before the early 1900s, the primary problem caused by cocaine use was portrayed by newspapers to be addiction, not violence or crime, and the cocaine user was represented as an upper or middle class White person. In 1914, The New York Times published an article titled "Negro Cocaine 'Fiends' Are a New Southern Menace", portraying Black cocaine users as dangerous and able to withstand wounds that would normally be fatal. The Anti-Drug Abuse Act of 1986 mandated the same prison sentences for distributing 500 grams of powdered cocaine and just 5 grams of crack cocaine. In the National Survey on Drug Use and Health, white respondents reported a higher rate of powdered cocaine use, and Black respondents reported a higher rate of crack cocaine use. Interdiction In 2004, according to the United Nations, 589 tonnes of cocaine were seized globally by law enforcement authorities. Colombia seized 188 t, the United States 166 t, Europe 79 t, Peru 14 t, Bolivia 9 t, and the rest of the world 133 t. Production Colombia is as of 2019 the world's largest cocaine producer, with production more than tripling since 2013. Three-quarters of the world's annual yield of cocaine has been produced in Colombia, both from cocaine base imported from Peru (primarily the Huallaga Valley) and Bolivia and from locally grown coca. There was a 28% increase in the amount of potentially harvestable coca plants which were grown in Colombia in 1998. This, combined with crop reductions in Bolivia and Peru, made Colombia the nation with the largest area of coca under cultivation after the mid-1990s. Coca grown for traditional purposes by indigenous communities, a use which is still present and is permitted by Colombian laws, only makes up a small fragment of total coca production, most of which is used for the illegal drug trade. An interview with a coca farmer published in 2003 described a mode of production by acid-base extraction that has changed little since 1905. Roughly of leaves were harvested per hectare, six times per year. The leaves were dried for half a day, then chopped into small pieces with a string trimmer and sprinkled with a small amount of powdered cement (replacing sodium carbonate from former times). Several hundred pounds of this mixture were soaked in of gasoline for a day, then the gasoline was removed and the leaves were pressed for the remaining liquid, after which they could be discarded. Then battery acid (weak sulfuric acid) was used, one bucket per of leaves, to create a phase separation in which the cocaine free base in the gasoline was acidified and extracted into a few buckets of "murky-looking smelly liquid". Once powdered caustic soda was added to this, the cocaine precipitated and could be removed by filtration through a cloth. The resulting material, when dried, was termed pasta and sold by the farmer. The yearly harvest of leaves from a hectare produced of pasta, approximately 40–60% cocaine. Repeated recrystallization from solvents, producing pasta lavada and eventually crystalline cocaine were performed at specialized laboratories after the sale. 
Attempts to eradicate coca fields through the use of defoliants have devastated part of the farming economy in some coca-growing regions of Colombia, and strains appear to have been developed that are more resistant or immune to their use. Whether these strains are natural mutations or the product of human tampering is unclear. These strains have also been shown to be more potent than those previously grown, increasing profits for the drug cartels responsible for the exporting of cocaine. Although production fell temporarily, coca crops rebounded in numerous smaller fields in Colombia, rather than the larger plantations. The cultivation of coca has become an attractive economic decision for many growers due to the combination of several factors, including the lack of other employment alternatives, the lower profitability of alternative crops in official crop substitution programs, the eradication-related damages to non-drug farms, and the spread of new strains of the coca plant, together with persistent worldwide demand. The latest estimate provided by the U.S. authorities on the annual production of cocaine in Colombia refers to 290 metric tons. As of the end of 2011, the seizure operations of Colombian cocaine carried out in different countries have totaled 351.8 metric tons of cocaine, i.e. 121.3% of Colombia's annual production according to the U.S. Department of State's estimates. Synthesis Synthesizing cocaine could eliminate the high visibility and low reliability of offshore sources and international smuggling, replacing them with clandestine domestic laboratories, as are common for illicit methamphetamine, but this is rarely done. Natural cocaine remains the lowest cost and highest quality supply of cocaine. Formation of inactive stereoisomers (cocaine has four chiral centres – 1R, 2R, 3S, and 5S, two of them dependent, hence eight possible stereoisomers) plus synthetic by-products limits the yield and purity. Trafficking and distribution Organized criminal gangs operating on a large scale dominate the cocaine trade. Most cocaine is grown and processed in South America, particularly in Colombia, Bolivia, and Peru, and smuggled into the United States and Europe, the United States being the world's largest consumer of cocaine, where it is sold at huge markups; usually in the US at $80–120 for 1 gram, and $250–300 for 3.5 grams (one-eighth of an ounce, or an "eight ball"). Caribbean and Mexican routes The primary cocaine importation points in the United States have been in Arizona, southern California, southern Florida, and Texas. Typically, land vehicles are driven across the U.S.–Mexico border. Sixty-five percent of cocaine enters the United States through Mexico, and the vast majority of the rest enters through Florida. The Sinaloa Cartel is the most active drug cartel involved in smuggling illicit drugs like cocaine into the United States and trafficking them throughout the United States. Cocaine traffickers from Colombia and Mexico have established a labyrinth of smuggling routes throughout the Caribbean, the Bahama Island chain, and South Florida. They often hire traffickers from Mexico or the Dominican Republic to transport the drug using a variety of smuggling techniques to U.S. markets. These include airdrops in the Bahama Islands or off the coast of Puerto Rico, mid-ocean boat-to-boat transfers, and the commercial shipment of tonnes of cocaine through the port of Miami.
Chilean route Another route of cocaine traffic goes through Chile, which is primarily used for cocaine produced in Bolivia since the nearest seaports lie in northern Chile. The arid Bolivia–Chile border is easily crossed by 4×4 vehicles that then head to the seaports of Iquique and Antofagasta. While the price of cocaine is higher in Chile than in Peru and Bolivia, the final destination is usually Europe, especially Spain, where drug dealing networks exist among South American immigrants. Techniques Cocaine is also carried in small, concealed, kilogram quantities across the border by couriers known as "mules" (or "mulas"), who cross a border either legally, for example, through a port or airport, or illegally elsewhere. The drugs may be strapped to the waist or legs or hidden in bags, or hidden in the body (by swallowing or placement inside an orifice), typically known as "bodypacking". If the mule gets through without being caught, the gangs will receive most of the profits. If the mule is caught, gangs may sever all links and the mule will usually stand trial for trafficking alone. In many cases, mules are often forced into the role, as a result of coercion, violence, threats or extreme poverty. Bulk cargo ships are also used to smuggle cocaine to staging sites in the western Caribbean–Gulf of Mexico area. These vessels are typically 150–250-foot (50–80 m) coastal freighters that carry an average cocaine load of approximately 2.5 tonnes. Commercial fishing vessels are also used for smuggling operations. In areas with a high volume of recreational traffic, smugglers use the same types of vessels, such as go-fast boats, as those used by the local populations. Sophisticated drug subs are the latest tool drug runners are using to bring cocaine north from Colombia, it was reported on 20 March 2008. Although the vessels were once viewed as a quirky sideshow in the drug war, they are becoming faster, more seaworthy, and capable of carrying bigger loads of drugs than earlier models, according to those charged with catching them. Sales to consumers Cocaine is readily available in all major countries' metropolitan areas. According to the Summer 1998 Pulse Check, published by the U.S. Office of National Drug Control Policy, cocaine use had stabilized across the country, with a few increases reported in San Diego, Bridgeport, Miami, and Boston. In the West, cocaine usage was lower, which was thought to be due to a switch to methamphetamine among some users; methamphetamine is cheaper, three and a half times more powerful, and lasts 12–24 times longer with each dose. Nevertheless, the number of cocaine users remains high, with a large concentration among urban youth. In addition to the amounts previously mentioned, cocaine can be sold in "bill sizes": for example, $10 might purchase a "dime bag", a very small amount (0.1–0.15 g) of cocaine. These amounts and prices are very popular among young people because they are inexpensive and easily concealed on one's body. Quality and price can vary dramatically depending on supply and demand, and on geographic region. In 2008, the European Monitoring Centre for Drugs and Drug Addiction reported that the typical retail price of cocaine varied between €50 and €75 per gram in most European countries, although Cyprus, Romania, Sweden, and Turkey reported much higher values.
Consumption World annual cocaine consumption, as of 2000, stood at around 600 tonnes, with the United States consuming around 300 t, 50% of the total, Europe about 150 t, 25% of the total, and the rest of the world the remaining 150 t or 25%. It is estimated that 1.5 million people in the United States used cocaine in 2010, down from 2.4 million in 2006. Conversely, cocaine use appears to be increasing in Europe with the highest prevalences in Spain, the United Kingdom, Italy, and Ireland. The 2010 UN World Drug Report concluded that "it appears that the North American cocaine market has declined in value from US$47 billion in 1998 to US$38 billion in 2008. Between 2006 and 2008, the value of the market remained basically stable".
Cartesian coordinate system
In geometry, a Cartesian coordinate system in a plane is a coordinate system that specifies each point uniquely by a pair of real numbers called coordinates, which are the signed distances to the point from two fixed perpendicular oriented lines, called coordinate lines, coordinate axes or just axes (plural of axis) of the system. The point where the axes meet is called the origin and has (0, 0) as coordinates. The axes directions represent an orthogonal basis. The combination of origin and basis forms a coordinate frame called the Cartesian frame. Similarly, the position of any point in three-dimensional space can be specified by three Cartesian coordinates, which are the signed distances from the point to three mutually perpendicular planes. More generally, Cartesian coordinates specify the point in an n-dimensional Euclidean space for any dimension n. These coordinates are the signed distances from the point to n mutually perpendicular fixed hyperplanes. Cartesian coordinates are named for René Descartes, whose invention of them in the 17th century revolutionized mathematics by allowing the expression of problems of geometry in terms of algebra and calculus. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by equations involving the coordinates of points of the shape. For example, a circle of radius 2, centered at the origin of the plane, may be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4; the area, the perimeter and the tangent line at any point can be computed from this equation by using integrals and derivatives, in a way that can be applied to any curve. Cartesian coordinates are the foundation of analytic geometry, and provide enlightening geometric interpretations for many other branches of mathematics, such as linear algebra, complex analysis, differential geometry, multivariate calculus, group theory and more. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, engineering and many more. They are the most common coordinate system used in computer graphics, computer-aided geometric design and other geometry-related data processing. History The adjective Cartesian refers to the French mathematician and philosopher René Descartes, who published this idea in 1637 while he was resident in the Netherlands. It was independently discovered by Pierre de Fermat, who also worked in three dimensions, although Fermat did not publish the discovery. The French cleric Nicole Oresme used constructions similar to Cartesian coordinates well before the time of Descartes and Fermat. Both Descartes and Fermat used a single axis in their treatments and had a variable length measured in reference to this axis. The concept of using a pair of axes was introduced later, after Descartes' La Géométrie was translated into Latin in 1649 by Frans van Schooten and his students. These commentators introduced several concepts while trying to clarify the ideas contained in Descartes's work. The development of the Cartesian coordinate system would play a fundamental role in the development of the calculus by Isaac Newton and Gottfried Wilhelm Leibniz. The two-coordinate description of the plane was later generalized into the concept of vector spaces.
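Returning to the circle x² + y² = 4 mentioned above, the tangent line at a point of the circle can be obtained by implicit differentiation, as the following worked example shows; the evaluation point (√2, √2) is chosen purely for illustration.

```latex
% Circle of radius 2 centered at the origin
x^2 + y^2 = 4
% Implicit differentiation gives the slope of the tangent line
2x + 2y\,\frac{dy}{dx} = 0 \quad\Longrightarrow\quad \frac{dy}{dx} = -\frac{x}{y} \qquad (y \neq 0)
% Example: at the point (\sqrt{2},\,\sqrt{2}), which lies on the circle
\left.\frac{dy}{dx}\right|_{(\sqrt{2},\,\sqrt{2})} = -1,
\qquad \text{tangent line: } y - \sqrt{2} = -(x - \sqrt{2}), \text{ i.e. } x + y = 2\sqrt{2}
% Area and perimeter follow from the radius r = 2
A = \pi r^2 = 4\pi, \qquad P = 2\pi r = 4\pi
```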
Many other coordinate systems have been developed since Descartes, such as the polar coordinates for the plane, and the spherical and cylindrical coordinates for three-dimensional space. Description One dimension An affine line with a chosen Cartesian coordinate system is called a number line. Every point on the line has a real-number coordinate, and every real number represents some point on the line. There are two degrees of freedom in the choice of Cartesian coordinate system for a line, which can be specified by choosing two distinct points along the line and assigning them to two distinct real numbers (most commonly zero and one). Other points can then be uniquely assigned to numbers by linear interpolation. Equivalently, one point can be assigned to a specific real number, for instance an origin point corresponding to zero, and an oriented length along the line can be chosen as a unit, with the orientation indicating the correspondence between directions along the line and positive or negative numbers. Each point corresponds to its signed distance from the origin (a number with an absolute value equal to the distance and a or sign chosen based on direction). A geometric transformation of the line can be represented by a function of a real variable, for example translation of the line corresponds to addition, and scaling the line corresponds to multiplication. Any two Cartesian coordinate systems on the line can be related to each-other by a linear function (function of the form taking a specific point's coordinate in one system to its coordinate in the other system. Choosing a coordinate system for each of two different lines establishes an affine map from one line to the other taking each point on one line to the point on the other line with the same coordinate. Two dimensions A Cartesian coordinate system in two dimensions (also called a rectangular coordinate system or an orthogonal coordinate system) is defined by an ordered pair of perpendicular lines (axes), a single unit of length for both axes, and an orientation for each axis. The point where the axes meet is taken as the origin for both, thus turning each axis into a number line. For any point P, a line is drawn through P perpendicular to each axis, and the position where it meets the axis is interpreted as a number. The two numbers, in that chosen order, are the Cartesian coordinates of P. The reverse construction allows one to determine the point P given its coordinates. The first and second coordinates are called the abscissa and the ordinate of P, respectively; and the point where the axes meet is called the origin of the coordinate system. The coordinates are usually written as two numbers in parentheses, in that order, separated by a comma, as in . Thus the origin has coordinates , and the points on the positive half-axes, one unit away from the origin, have coordinates and . In mathematics, physics, and engineering, the first axis is usually defined or depicted as horizontal and oriented to the right, and the second axis is vertical and oriented upwards. (However, in some computer graphics contexts, the ordinate axis may be oriented downwards.) The origin is often labeled O, and the two coordinates are often denoted by the letters X and Y, or x and y. The axes may then be referred to as the X-axis and Y-axis. The choices of letters come from the original convention, which is to use the latter part of the alphabet to indicate unknown values. The first part of the alphabet was used to designate known values. 
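As a small illustration of the statement above that any two Cartesian coordinate systems on a line are related by a linear function, the following sketch (mine, with hypothetical names) builds that function from the points assigned to the coordinates 0 and 1.

```python
def coordinate_change(p0, p1):
    """Given the points of the line assigned to 0 and to 1 in a second system
    (expressed in the first system's coordinates), return the linear function
    t -> a*t + b relating coordinates in the second system to the first."""
    a = p1 - p0          # length and orientation of the new unit
    b = p0               # position of the new origin
    return lambda t: a * t + b

f = coordinate_change(10.0, 12.0)   # new origin at 10, new unit point at 12
print(f(0.0), f(1.0), f(-2.5))      # 10.0 12.0 5.0
```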
A Euclidean plane with a chosen Cartesian coordinate system is called a . In a Cartesian plane, one can define canonical representatives of certain geometric figures, such as the unit circle (with radius equal to the length unit, and center at the origin), the unit square (whose diagonal has endpoints at and ), the unit hyperbola, and so on. The two axes divide the plane into four right angles, called quadrants. The quadrants may be named or numbered in various ways, but the quadrant where all coordinates are positive is usually called the first quadrant. If the coordinates of a point are , then its distances from the X-axis and from the Y-axis are and , respectively; where denotes the absolute value of a number. Three dimensions A Cartesian coordinate system for a three-dimensional space consists of an ordered triplet of lines (the axes) that go through a common point (the origin), and are pair-wise perpendicular; an orientation for each axis; and a single unit of length for all three axes. As in the two-dimensional case, each axis becomes a number line. For any point P of space, one considers a plane through P perpendicular to each coordinate axis, and interprets the point where that plane cuts the axis as a number. The Cartesian coordinates of P are those three numbers, in the chosen order. The reverse construction determines the point P given its three coordinates. Alternatively, each coordinate of a point P can be taken as the distance from P to the plane defined by the other two axes, with the sign determined by the orientation of the corresponding axis. Each pair of axes defines a coordinate plane. These planes divide space into eight octants. The octants are: The coordinates are usually written as three numbers (or algebraic formulas) surrounded by parentheses and separated by commas, as in or . Thus, the origin has coordinates , and the unit points on the three axes are , , and . Standard names for the coordinates in the three axes are abscissa, ordinate and applicate. The coordinates are often denoted by the letters x, y, and z. The axes may then be referred to as the x-axis, y-axis, and z-axis, respectively. Then the coordinate planes can be referred to as the xy-plane, yz-plane, and xz-plane. In mathematics, physics, and engineering contexts, the first two axes are often defined or depicted as horizontal, with the third axis pointing up. In that case the third coordinate may be called height or altitude. The orientation is usually chosen so that the 90-degree angle from the first axis to the second axis looks counter-clockwise when seen from the point ; a convention that is commonly called the right-hand rule. Higher dimensions Since Cartesian coordinates are unique and non-ambiguous, the points of a Cartesian plane can be identified with pairs of real numbers; that is, with the Cartesian product , where is the set of all real numbers. In the same way, the points in any Euclidean space of dimension n be identified with the tuples (lists) of n real numbers; that is, with the Cartesian product . Generalizations The concept of Cartesian coordinates generalizes to allow axes that are not perpendicular to each other, and/or different units along each axis. In that case, each coordinate is obtained by projecting the point onto one axis along a direction that is parallel to the other axis (or, in general, to the hyperplane defined by all the other axes). 
In such an oblique coordinate system the computations of distances and angles must be modified from that in standard Cartesian systems, and many standard formulas (such as the Pythagorean formula for the distance) do not hold (see affine plane). Notations and conventions The Cartesian coordinates of a point are usually written in parentheses and separated by commas, as in or . The origin is often labelled with the capital letter O. In analytic geometry, unknown or generic coordinates are often denoted by the letters (x, y) in the plane, and (x, y, z) in three-dimensional space. This custom comes from a convention of algebra, which uses letters near the end of the alphabet for unknown values (such as the coordinates of points in many geometric problems), and letters near the beginning for given quantities. These conventional names are often used in other domains, such as physics and engineering, although other letters may be used. For example, in a graph showing how a pressure varies with time, the graph coordinates may be denoted p and t. Each axis is usually named after the coordinate which is measured along it; so one says the x-axis, the y-axis, the t-axis, etc. Another common convention for coordinate naming is to use subscripts, as (x1, x2, ..., xn) for the n coordinates in an n-dimensional space, especially when n is greater than 3 or unspecified. Some authors prefer the numbering (x0, x1, ..., xn−1). These notations are especially advantageous in computer programming: by storing the coordinates of a point as an array, instead of a record, the subscript can serve to index the coordinates. In mathematical illustrations of two-dimensional Cartesian systems, the first coordinate (traditionally called the abscissa) is measured along a horizontal axis, oriented from left to right. The second coordinate (the ordinate) is then measured along a vertical axis, usually oriented from bottom to top. Young children learning the Cartesian system, commonly learn the order to read the values before cementing the x-, y-, and z-axis concepts, by starting with 2D mnemonics (for example, 'Walk along the hall then up the stairs' akin to straight across the x-axis then up vertically along the y-axis). Computer graphics and image processing, however, often use a coordinate system with the y-axis oriented downwards on the computer display. This convention developed in the 1960s (or earlier) from the way that images were originally stored in display buffers. For three-dimensional systems, a convention is to portray the xy-plane horizontally, with the z-axis added to represent height (positive up). Furthermore, there is a convention to orient the x-axis toward the viewer, biased either to the right or left. If a diagram (3D projection or 2D perspective drawing) shows the x- and y-axis horizontally and vertically, respectively, then the z-axis should be shown pointing "out of the page" towards the viewer or camera. In such a 2D diagram of a 3D coordinate system, the z-axis would appear as a line or ray pointing down and to the left or down and to the right, depending on the presumed viewer or camera perspective. In any diagram or display, the orientation of the three axes, as a whole, is arbitrary. However, the orientation of the axes relative to each other should always comply with the right-hand rule, unless specifically stated otherwise. All laws of physics and math assume this right-handedness, which ensures consistency. 
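The downward-pointing ordinate axis used in computer graphics amounts to a simple change of coordinates. A minimal sketch, assuming a hypothetical drawing area of a fixed pixel height (the function names are illustrative, not from the article):

```python
def math_to_screen(x, y, height):
    """Convert from the mathematical convention (y increases upward) to the
    raster convention mentioned above (y increases downward from the top edge).
    'height' is a hypothetical drawing-area height in pixels."""
    return x, height - y

def screen_to_math(x, y, height):
    """Inverse conversion; applying both in turn is the identity."""
    return x, height - y

print(math_to_screen(10, 30, height=480))                  # (10, 450)
print(screen_to_math(*math_to_screen(10, 30, 480), 480))   # (10, 30)
```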
For 3D diagrams, the names "abscissa" and "ordinate" are rarely used for x and y, respectively. When they are, the z-coordinate is sometimes called the applicate. The words abscissa, ordinate and applicate are sometimes used to refer to coordinate axes rather than the coordinate values. Quadrants and octants The axes of a two-dimensional Cartesian system divide the plane into four infinite regions, called quadrants, each bounded by two half-axes. These are often numbered from 1st to 4th and denoted by Roman numerals: I (where the coordinates both have positive signs), II (where the abscissa is negative − and the ordinate is positive +), III (where both the abscissa and the ordinate are −), and IV (abscissa +, ordinate −). When the axes are drawn according to the mathematical custom, the numbering goes counter-clockwise starting from the upper right ("north-east") quadrant. Similarly, a three-dimensional Cartesian system defines a division of space into eight regions or octants, according to the signs of the coordinates of the points. The convention used for naming a specific octant is to list its signs; for example, or . The generalization of the quadrant and octant to an arbitrary number of dimensions is the orthant, and a similar naming system applies. Cartesian formulae for the plane Distance between two points The Euclidean distance between two points of the plane with Cartesian coordinates and is This is the Cartesian version of Pythagoras's theorem. In three-dimensional space, the distance between points and is which can be obtained by two consecutive applications of Pythagoras' theorem. Euclidean transformations The Euclidean transformations or Euclidean motions are the (bijective) mappings of points of the Euclidean plane to themselves which preserve distances between points. There are four types of these mappings (also called isometries): translations, rotations, reflections and glide reflections. Translation Translating a set of points of the plane, preserving the distances and directions between them, is equivalent to adding a fixed pair of numbers to the Cartesian coordinates of every point in the set. That is, if the original coordinates of a point are , after the translation they will be Rotation To rotate a figure counterclockwise around the origin by some angle is equivalent to replacing every point with coordinates (x,y) by the point with coordinates (x',y'), where Thus: Reflection If are the Cartesian coordinates of a point, then are the coordinates of its reflection across the second coordinate axis (the y-axis), as if that line were a mirror. Likewise, are the coordinates of its reflection across the first coordinate axis (the x-axis). In more generality, reflection across a line through the origin making an angle with the x-axis, is equivalent to replacing every point with coordinates by the point with coordinates , where Thus: Glide reflection A glide reflection is the composition of a reflection across a line followed by a translation in the direction of that line. It can be seen that the order of these operations does not matter (the translation can come first, followed by the reflection). General matrix form of the transformations All affine transformations of the plane can be described in a uniform way by using matrices. For this purpose, the coordinates of a point are commonly represented as the column matrix The result of applying an affine transformation to a point is given by the formula where is a 2×2 matrix and is a column matrix. 
That is, Among the affine transformations, the Euclidean transformations are characterized by the fact that the matrix is orthogonal; that is, its columns are orthogonal vectors of Euclidean norm one, or, explicitly, and This is equivalent to saying that times its transpose is the identity matrix. If these conditions do not hold, the formula describes a more general affine transformation. The transformation is a translation if and only if is the identity matrix. The transformation is a rotation around some point if and only if is a rotation matrix, meaning that it is orthogonal and A reflection or glide reflection is obtained when, Assuming that translations are not used (that is, ) transformations can be composed by simply multiplying the associated transformation matrices. In the general case, it is useful to use the augmented matrix of the transformation; that is, to rewrite the transformation formula where With this trick, the composition of affine transformations is obtained by multiplying the augmented matrices. Affine transformation Affine transformations of the Euclidean plane are transformations that map lines to lines, but may change distances and angles. As said in the preceding section, they can be represented with augmented matrices: The Euclidean transformations are the affine transformations such that the 2×2 matrix of the is orthogonal. The augmented matrix that represents the composition of two affine transformations is obtained by multiplying their augmented matrices. Some affine transformations that are not Euclidean transformations have received specific names. Scaling An example of an affine transformation which is not Euclidean is given by scaling. To make a figure larger or smaller is equivalent to multiplying the Cartesian coordinates of every point by the same positive number m. If are the coordinates of a point on the original figure, the corresponding point on the scaled figure has coordinates If m is greater than 1, the figure becomes larger; if m is between 0 and 1, it becomes smaller. Shearing A shearing transformation will push the top of a square sideways to form a parallelogram. Horizontal shearing is defined by: Shearing can also be applied vertically: Orientation and handedness In two dimensions Fixing or choosing the x-axis determines the y-axis up to direction. Namely, the y-axis is necessarily the perpendicular to the x-axis through the point marked 0 on the x-axis. But there is a choice of which of the two half lines on the perpendicular to designate as positive and which as negative. Each of these two choices determines a different orientation (also called handedness) of the Cartesian plane. The usual way of orienting the plane, with the positive x-axis pointing right and the positive y-axis pointing up (and the x-axis being the "first" and the y-axis the "second" axis), is considered the positive or standard orientation, also called the right-handed orientation. A commonly used mnemonic for defining the positive orientation is the right-hand rule. Placing a somewhat closed right hand on the plane with the thumb pointing up, the fingers point from the x-axis to the y-axis, in a positively oriented coordinate system. The other way of orienting the plane is following the left-hand rule, placing the left hand on the plane with the thumb pointing up. When pointing the thumb away from the origin along an axis towards positive, the curvature of the fingers indicates a positive rotation along that axis. 
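Returning to the matrix form of plane transformations described above, the following Python sketch (not from the article; helper names such as mat_mul and apply are invented) composes a rotation with a translation through augmented 3x3 matrices and applies a horizontal shear.

```python
import math

def mat_mul(A, B):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def apply(M, x, y):
    """Apply an augmented 3x3 affine matrix to the point (x, y)."""
    v = (x, y, 1.0)
    return (sum(M[0][k] * v[k] for k in range(3)),
            sum(M[1][k] * v[k] for k in range(3)))

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def shear_x(k):
    """Horizontal shear: (x, y) -> (x + k*y, y)."""
    return [[1, k, 0], [0, 1, 0], [0, 0, 1]]

# Compose "rotate by 90 degrees, then translate by (5, 0)" into a single matrix.
M = mat_mul(translation(5, 0), rotation(math.pi / 2))
print(apply(M, 1, 0))           # approximately (5.0, 1.0)
print(apply(shear_x(2), 0, 1))  # (2.0, 1.0)
```

Composition is done by multiplying the augmented matrices, as the text describes, so a long chain of transformations can be collapsed into one matrix before any point is touched.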
Regardless of the rule used to orient the plane, rotating the coordinate system will preserve the orientation. Switching any one axis will reverse the orientation, but switching both will leave the orientation unchanged. In three dimensions Once the x- and y-axes are specified, they determine the line along which the z-axis should lie, but there are two possible orientations for this line. The two possible coordinate systems, which result are called 'right-handed' and 'left-handed'. The standard orientation, where the xy-plane is horizontal and the z-axis points up (and the x- and the y-axis form a positively oriented two-dimensional coordinate system in the xy-plane if observed from above the xy-plane) is called right-handed or positive. The name derives from the right-hand rule. If the index finger of the right hand is pointed forward, the middle finger bent inward at a right angle to it, and the thumb placed at a right angle to both, the three fingers indicate the relative orientation of the x-, y-, and z-axes in a right-handed system. The thumb indicates the x-axis, the index finger the y-axis and the middle finger the z-axis. Conversely, if the same is done with the left hand, a left-handed system results. Figure 7 depicts a left and a right-handed coordinate system. Because a three-dimensional object is represented on the two-dimensional screen, distortion and ambiguity result. The axis pointing downward (and to the right) is also meant to point towards the observer, whereas the "middle"-axis is meant to point away from the observer. The red circle is parallel to the horizontal xy-plane and indicates rotation from the x-axis to the y-axis (in both cases). Hence the red arrow passes in front of the z-axis. Figure 8 is another attempt at depicting a right-handed coordinate system. Again, there is an ambiguity caused by projecting the three-dimensional coordinate system into the plane. Many observers see Figure 8 as "flipping in and out" between a convex cube and a concave "corner". This corresponds to the two possible orientations of the space. Seeing the figure as convex gives a left-handed coordinate system. Thus the "correct" way to view Figure 8 is to imagine the x-axis as pointing towards the observer and thus seeing a concave corner. Representing a vector in the standard basis A point in space in a Cartesian coordinate system may also be represented by a position vector, which can be thought of as an arrow pointing from the origin of the coordinate system to the point. If the coordinates represent spatial positions (displacements), it is common to represent the vector from the origin to the point of interest as . In two dimensions, the vector from the origin to the point with Cartesian coordinates (x, y) can be written as: where and are unit vectors in the direction of the x-axis and y-axis respectively, generally referred to as the standard basis (in some application areas these may also be referred to as versors). Similarly, in three dimensions, the vector from the origin to the point with Cartesian coordinates can be written as: where and There is no natural interpretation of multiplying vectors to obtain another vector that works in all dimensions, however there is a way to use complex numbers to provide such a multiplication. In a two-dimensional cartesian plane, identify the point with coordinates with the complex number . Here, i is the imaginary unit and is identified with the point with coordinates , so it is not the unit vector in the direction of the x-axis. 
Since complex numbers can be multiplied, giving another complex number, this identification provides a means to "multiply" vectors. In a three-dimensional Cartesian space a similar identification can be made with a subset of the quaternions.
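A short Python illustration (not from the article) of this identification: multiplying by the imaginary unit rotates a plane vector by a quarter turn, and a general complex product combines rotation with scaling.

```python
# Identify the point (x, y) with the complex number x + y*i, then use complex
# multiplication as a "product of plane vectors".
p = complex(1, 0)        # the point (1, 0)
rot90 = complex(0, 1)    # the imaginary unit i, i.e. the point (0, 1)

q = rot90 * p            # rotate (1, 0) by 90 degrees -> (0, 1)
print(q.real, q.imag)    # 0.0 1.0

r = (2 + 1j) * (1 + 1j)  # a general product: (2 + i)(1 + i) = 1 + 3i
print(r)                 # (1+3j)
```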
Mathematics
Geometry
null
7713
https://en.wikipedia.org/wiki/Chinese%20remainder%20theorem
Chinese remainder theorem
In mathematics, the Chinese remainder theorem states that if one knows the remainders of the Euclidean division of an integer n by several integers, then one can determine uniquely the remainder of the division of n by the product of these integers, under the condition that the divisors are pairwise coprime (no two divisors share a common factor other than 1). The theorem is sometimes called Sunzi's theorem. Both names of the theorem refer to its earliest known statement, which appeared in Sunzi Suanjing, a Chinese manuscript written during the 3rd to 5th century CE. This first statement was restricted to the following example: If one knows that the remainder of n divided by 3 is 2, the remainder of n divided by 5 is 3, and the remainder of n divided by 7 is 2, then with no other information, one can determine the remainder of n divided by 105 (the product of 3, 5, and 7) without knowing the value of n. In this example, the remainder is 23. Moreover, this remainder is the only possible positive value of n that is less than 105. The Chinese remainder theorem is widely used for computing with large integers, as it allows replacing a computation for which one knows a bound on the size of the result by several similar computations on small integers. The Chinese remainder theorem (expressed in terms of congruences) is true over every principal ideal domain. It has been generalized to any ring, with a formulation involving two-sided ideals. History The earliest known statement of the problem appears in the 5th-century book Sunzi Suanjing by the Chinese mathematician Sunzi. Sunzi's work would not be considered a theorem by modern standards; it only gives one particular problem, without showing how to solve it, much less any proof about the general case or a general algorithm for solving it. What amounts to an algorithm for solving this problem was described by Aryabhata (6th century). Special cases of the Chinese remainder theorem were also known to Brahmagupta (7th century) and appear in Fibonacci's Liber Abaci (1202). The result was later generalized with a complete solution called Da-yan-shu in Qin Jiushao's 1247 Mathematical Treatise in Nine Sections, which was translated into English in the early 19th century by the British missionary Alexander Wylie. The notion of congruences was first introduced and used by Carl Friedrich Gauss in his Disquisitiones Arithmeticae of 1801. Gauss illustrates the Chinese remainder theorem on a problem involving calendars, namely, "to find the years that have a certain period number with respect to the solar and lunar cycle and the Roman indiction." Gauss introduces a procedure for solving the problem that had already been used by Leonhard Euler but was in fact an ancient method that had appeared several times. Statement Let n1, ..., nk be integers greater than 1, which are often called moduli or divisors. Let us denote by N the product of the ni. The Chinese remainder theorem asserts that if the ni are pairwise coprime, and if a1, ..., ak are integers such that 0 ≤ ai < ni for every i, then there is one and only one integer x, such that 0 ≤ x < N and the remainder of the Euclidean division of x by ni is ai for every i. This may be restated as follows in terms of congruences: If the ni are pairwise coprime, and if a1, ..., ak are any integers, then the system x ≡ ai (mod ni) for i = 1, ..., k has a solution, and any two solutions, say x1 and x2, are congruent modulo N, that is, x1 ≡ x2 (mod N). 
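The classical example can be verified by brute force over one full period of the moduli; a minimal Python sketch (not part of the source):

```python
# Find every n with 0 <= n < 3*5*7 whose remainders modulo 3, 5 and 7 are 2, 3 and 2.
solutions = [n for n in range(3 * 5 * 7) if n % 3 == 2 and n % 5 == 3 and n % 7 == 2]
print(solutions)   # [23] -- the unique value below 105, as the theorem predicts
```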
In abstract algebra, the theorem is often restated as: if the ni are pairwise coprime, the map defines a ring isomorphism between the ring of integers modulo N and the direct product of the rings of integers modulo the ni. This means that for doing a sequence of arithmetic operations in one may do the same computation independently in each and then get the result by applying the isomorphism (from the right to the left). This may be much faster than the direct computation if N and the number of operations are large. This is widely used, under the name multi-modular computation, for linear algebra over the integers or the rational numbers. The theorem can also be restated in the language of combinatorics as the fact that the infinite arithmetic progressions of integers form a Helly family. Proof The existence and the uniqueness of the solution may be proven independently. However, the first proof of existence, given below, uses this uniqueness. Uniqueness Suppose that and are both solutions to all the congruences. As and give the same remainder, when divided by , their difference is a multiple of each . As the are pairwise coprime, their product also divides , and thus and are congruent modulo . If and are supposed to be non-negative and less than (as in the first statement of the theorem), then their difference may be a multiple of only if . Existence (first proof) The map maps congruence classes modulo to sequences of congruence classes modulo . The proof of uniqueness shows that this map is injective. As the domain and the codomain of this map have the same number of elements, the map is also surjective, which proves the existence of the solution. This proof is very simple but does not provide any direct way for computing a solution. Moreover, it cannot be generalized to other situations where the following proof can. Existence (constructive proof) Existence may be established by an explicit construction of . This construction may be split into two steps, first solving the problem in the case of two moduli, and then extending this solution to the general case by induction on the number of moduli. Case of two moduli We want to solve the system: where and are coprime. Bézout's identity asserts the existence of two integers and such that The integers and may be computed by the extended Euclidean algorithm. A solution is given by Indeed, implying that The second congruence is proved similarly, by exchanging the subscripts 1 and 2. General case Consider a sequence of congruence equations: where the are pairwise coprime. The two first equations have a solution provided by the method of the previous section. The set of the solutions of these two first equations is the set of all solutions of the equation As the other are coprime with this reduces solving the initial problem of equations to a similar problem with equations. Iterating the process, one gets eventually the solutions of the initial problem. Existence (direct construction) For constructing a solution, it is not necessary to make an induction on the number of moduli. However, such a direct construction involves more computation with large numbers, which makes it less efficient and less used. Nevertheless, Lagrange interpolation is a special case of this construction, applied to polynomials instead of integers. Let be the product of all moduli but one. As the are pairwise coprime, and are coprime. 
Thus Bézout's identity applies, and there exist integers and such that A solution of the system of congruences is In fact, as is a multiple of for we have for every Computation Consider a system of congruences: where the are pairwise coprime, and let In this section several methods are described for computing the unique solution for , such that and these methods are applied on the example Several methods of computation are presented. The two first ones are useful for small examples, but become very inefficient when the product is large. The third one uses the existence proof given in . It is the most convenient when the product is large, or for computer computation. Systematic search It is easy to check whether a value of is a solution: it suffices to compute the remainder of the Euclidean division of by each . Thus, to find the solution, it suffices to check successively the integers from to until finding the solution. Although very simple, this method is very inefficient. For the simple example considered here, integers (including ) have to be checked for finding the solution, which is . This is an exponential time algorithm, as the size of the input is, up to a constant factor, the number of digits of , and the average number of operations is of the order of . Therefore, this method is rarely used, neither for hand-written computation nor on computers. Search by sieving The search of the solution may be made dramatically faster by sieving. For this method, we suppose, without loss of generality, that (if it were not the case, it would suffice to replace each by the remainder of its division by ). This implies that the solution belongs to the arithmetic progression By testing the values of these numbers modulo one eventually finds a solution of the two first congruences. Then the solution belongs to the arithmetic progression Testing the values of these numbers modulo and continuing until every modulus has been tested eventually yields the solution. This method is faster if the moduli have been ordered by decreasing value, that is if For the example, this gives the following computation. We consider first the numbers that are congruent to 4 modulo 5 (the largest modulus), which are 4, , , ... For each of them, compute the remainder by 4 (the second largest modulus) until getting a number congruent to 3 modulo 4. Then one can proceed by adding at each step, and computing only the remainders by 3. This gives 4 mod 4 → 0. Continue 4 + 5 = 9 mod 4 →1. Continue 9 + 5 = 14 mod 4 → 2. Continue 14 + 5 = 19 mod 4 → 3. OK, continue by considering remainders modulo 3 and adding 5 × 4 = 20 each time 19 mod 3 → 1. Continue 19 + 20 = 39 mod 3 → 0. OK, this is the result. This method works well for hand-written computation with a product of moduli that is not too big. However, it is much slower than other methods, for very large products of moduli. Although dramatically faster than the systematic search, this method also has an exponential time complexity and is therefore not used on computers. Using the existence construction The constructive existence proof shows that, in the case of two moduli, the solution may be obtained by the computation of the Bézout coefficients of the moduli, followed by a few multiplications, additions and reductions modulo (for getting a result in the interval ). 
As the Bézout's coefficients may be computed with the extended Euclidean algorithm, the whole computation, at most, has a quadratic time complexity of where denotes the number of digits of For more than two moduli, the method for two moduli allows the replacement of any two congruences by a single congruence modulo the product of the moduli. Iterating this process provides eventually the solution with a complexity, which is quadratic in the number of digits of the product of all moduli. This quadratic time complexity does not depend on the order in which the moduli are regrouped. One may regroup the two first moduli, then regroup the resulting modulus with the next one, and so on. This strategy is the easiest to implement, but it also requires more computation involving large numbers. Another strategy consists in partitioning the moduli in pairs whose product have comparable sizes (as much as possible), applying, in parallel, the method of two moduli to each pair, and iterating with a number of moduli approximatively divided by two. This method allows an easy parallelization of the algorithm. Also, if fast algorithms (that is, algorithms working in quasilinear time) are used for the basic operations, this method provides an algorithm for the whole computation that works in quasilinear time. On the current example (which has only three moduli), both strategies are identical and work as follows. Bézout's identity for 3 and 4 is Putting this in the formula given for proving the existence gives for a solution of the two first congruences, the other solutions being obtained by adding to −9 any multiple of . One may continue with any of these solutions, but the solution is smaller (in absolute value) and thus leads probably to an easier computation Bézout identity for 5 and 3 × 4 = 12 is Applying the same formula again, we get a solution of the problem: The other solutions are obtained by adding any multiple of , and the smallest positive solution is . As a linear Diophantine system The system of congruences solved by the Chinese remainder theorem may be rewritten as a system of linear Diophantine equations: where the unknown integers are and the Therefore, every general method for solving such systems may be used for finding the solution of Chinese remainder theorem, such as the reduction of the matrix of the system to Smith normal form or Hermite normal form. However, as usual when using a general algorithm for a more specific problem, this approach is less efficient than the method of the preceding section, based on a direct use of Bézout's identity. Over principal ideal domains In , the Chinese remainder theorem has been stated in three different ways: in terms of remainders, of congruences, and of a ring isomorphism. The statement in terms of remainders does not apply, in general, to principal ideal domains, as remainders are not defined in such rings. However, the two other versions make sense over a principal ideal domain : it suffices to replace "integer" by "element of the domain" and by . These two versions of the theorem are true in this context, because the proofs (except for the first existence proof), are based on Euclid's lemma and Bézout's identity, which are true over every principal domain. However, in general, the theorem is only an existence theorem and does not provide any way for computing the solution, unless one has an algorithm for computing the coefficients of Bézout's identity. 
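The two computation strategies worked through above, search by sieving and the Bézout-based pairwise combination, can be sketched as follows. This is illustrative Python with invented helper names (crt_sieve, extended_gcd, crt), not a reference implementation.

```python
from functools import reduce

def crt_sieve(residues, moduli):
    """Search by sieving: start from the residue of the largest modulus and keep
    only candidates that satisfy each further congruence, enlarging the step as
    each modulus is absorbed. Assumes pairwise-coprime moduli."""
    pairs = sorted(zip(moduli, residues), reverse=True)   # largest modulus first
    step, x = pairs[0]
    for m, r in pairs[1:]:
        while x % m != r % m:
            x += step
        step *= m
    return x

def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and s*a + t*b == g (Bezout coefficients)."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

def crt(residues, moduli):
    """Combine the congruences two at a time: each step replaces
    x = a1 (mod n1), x = a2 (mod n2) by one congruence modulo n1*n2, using
    Bezout coefficients m1*n1 + m2*n2 = 1 and x = a1*m2*n2 + a2*m1*n1."""
    def combine(acc, nxt):
        a1, n1 = acc
        a2, n2 = nxt
        g, m1, m2 = extended_gcd(n1, n2)
        assert g == 1, "moduli must be pairwise coprime"
        return (a1 * m2 * n2 + a2 * m1 * n1) % (n1 * n2), n1 * n2
    return reduce(combine, zip(residues, moduli))

# The example worked through above: x = 0 (mod 3), x = 3 (mod 4), x = 4 (mod 5).
print(crt_sieve([0, 3, 4], [3, 4, 5]))   # 39
print(crt([0, 3, 4], [3, 4, 5]))         # (39, 60)
# The classical Sunzi example gives 23 modulo 105.
print(crt([2, 3, 2], [3, 5, 7]))         # (23, 105)
```

Note that the pairwise method reduces the intermediate solution of the first two congruences modulo 12, giving 3 rather than -9; both represent the same congruence class, as remarked in the text.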
Over univariate polynomial rings and Euclidean domains The statement in terms of remainders given in cannot be generalized to any principal ideal domain, but its generalization to Euclidean domains is straightforward. The univariate polynomials over a field is the typical example of a Euclidean domain which is not the integers. Therefore, we state the theorem for the case of the ring for a field For getting the theorem for a general Euclidean domain, it suffices to replace the degree by the Euclidean function of the Euclidean domain. The Chinese remainder theorem for polynomials is thus: Let (the moduli) be, for , pairwise coprime polynomials in . Let be the degree of , and be the sum of the If are polynomials such that or for every , then, there is one and only one polynomial , such that and the remainder of the Euclidean division of by is for every . The construction of the solution may be done as in or . However, the latter construction may be simplified by using, as follows, partial fraction decomposition instead of the extended Euclidean algorithm. Thus, we want to find a polynomial , which satisfies the congruences for Consider the polynomials The partial fraction decomposition of gives polynomials with degrees such that and thus Then a solution of the simultaneous congruence system is given by the polynomial In fact, we have for This solution may have a degree larger than The unique solution of degree less than may be deduced by considering the remainder of the Euclidean division of by This solution is Lagrange interpolation A special case of Chinese remainder theorem for polynomials is Lagrange interpolation. For this, consider monic polynomials of degree one: They are pairwise coprime if the are all different. The remainder of the division by of a polynomial is , by the polynomial remainder theorem. Now, let be constants (polynomials of degree 0) in Both Lagrange interpolation and Chinese remainder theorem assert the existence of a unique polynomial of degree less than such that for every Lagrange interpolation formula is exactly the result, in this case, of the above construction of the solution. More precisely, let The partial fraction decomposition of is In fact, reducing the right-hand side to a common denominator one gets and the numerator is equal to one, as being a polynomial of degree less than which takes the value one for different values of Using the above general formula, we get the Lagrange interpolation formula: Hermite interpolation Hermite interpolation is an application of the Chinese remainder theorem for univariate polynomials, which may involve moduli of arbitrary degrees (Lagrange interpolation involves only moduli of degree one). The problem consists of finding a polynomial of the least possible degree, such that the polynomial and its first derivatives take given values at some fixed points. More precisely, let be elements of the ground field and, for let be the values of the first derivatives of the sought polynomial at (including the 0th derivative, which is the value of the polynomial itself). The problem is to find a polynomial such that its j&hairsp;th derivative takes the value at for and Consider the polynomial This is the Taylor polynomial of order at , of the unknown polynomial Therefore, we must have Conversely, any polynomial that satisfies these congruences, in particular verifies, for any therefore is its Taylor polynomial of order at , that is, solves the initial Hermite interpolation problem. 
The Chinese remainder theorem asserts that there exists exactly one polynomial of degree less than the sum of the which satisfies these congruences. There are several ways for computing the solution One may use the method described at the beginning of . One may also use the constructions given in or . Generalization to non-coprime moduli The Chinese remainder theorem can be generalized to non-coprime moduli. Let be any integers, let ; , and consider the system of congruences: If , then this system has a unique solution modulo . Otherwise, it has no solutions. If one uses Bézout's identity to write , then the solution is given by This defines an integer, as divides both and . Otherwise, the proof is very similar to that for coprime moduli. Generalization to arbitrary rings The Chinese remainder theorem can be generalized to any ring, by using coprime ideals (also called comaximal ideals). Two ideals and are coprime if there are elements and such that This relation plays the role of Bézout's identity in the proofs related to this generalization, which otherwise are very similar. The generalization may be stated as follows. Let be two-sided ideals of a ring and let be their intersection. If the ideals are pairwise coprime, we have the isomorphism: between the quotient ring and the direct product of the where "" denotes the image of the element in the quotient ring defined by the ideal Moreover, if is commutative, then the ideal intersection of pairwise coprime ideals is equal to their product; that is if and are coprime for all . Interpretation in terms of idempotents Let be pairwise coprime two-sided ideals with and be the isomorphism defined above. Let be the element of whose components are all except the &hairsp;th which is , and The are central idempotents that are pairwise orthogonal; this means, in particular, that and for every and . Moreover, one has and In summary, this generalized Chinese remainder theorem is the equivalence between giving pairwise coprime two-sided ideals with a zero intersection, and giving central and pairwise orthogonal idempotents that sum to . Applications Sequence numbering The Chinese remainder theorem has been used to construct a Gödel numbering for sequences, which is involved in the proof of Gödel's incompleteness theorems. Fast Fourier transform The prime-factor FFT algorithm (also called Good-Thomas algorithm) uses the Chinese remainder theorem for reducing the computation of a fast Fourier transform of size to the computation of two fast Fourier transforms of smaller sizes and (providing that and are coprime). Encryption Most implementations of RSA use the Chinese remainder theorem during signing of HTTPS certificates and during decryption. The Chinese remainder theorem can also be used in secret sharing, which consists of distributing a set of shares among a group of people who, all together (but no one alone), can recover a certain secret from the given set of shares. Each of the shares is represented in a congruence, and the solution of the system of congruences using the Chinese remainder theorem is the secret to be recovered. Secret sharing using the Chinese remainder theorem uses, along with the Chinese remainder theorem, special sequences of integers that guarantee the impossibility of recovering the secret from a set of shares with less than a certain cardinality. Range ambiguity resolution The range ambiguity resolution techniques used with medium pulse repetition frequency radar can be seen as a special case of the Chinese remainder theorem. 
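The generalization to non-coprime moduli stated above can also be sketched in code. The following is a minimal Python illustration (helper names are mine, and the construction is one standard choice built from the Bézout coefficients, not necessarily the exact formula the article intends); it returns a solution modulo the least common multiple when the residues agree modulo the gcd, and None otherwise.

```python
def extended_gcd(a, b):
    """Return (g, u, v) with g = gcd(a, b) and u*a + v*b == g."""
    if b == 0:
        return a, 1, 0
    g, u, v = extended_gcd(b, a % b)
    return g, v, u - (a // b) * v

def crt_non_coprime(a1, n1, a2, n2):
    """Solve x = a1 (mod n1), x = a2 (mod n2) without assuming coprime moduli.
    A solution exists iff a1 = a2 (mod g) with g = gcd(n1, n2); it is then
    unique modulo lcm(n1, n2)."""
    g, u, v = extended_gcd(n1, n2)
    if (a1 - a2) % g != 0:
        return None                      # no solution
    lcm = n1 // g * n2
    x = (a1 - (a1 - a2) // g * u * n1) % lcm
    return x, lcm

print(crt_non_coprime(2, 6, 8, 9))   # (8, 18): 8 = 2 (mod 6) and 8 = 8 (mod 9)
print(crt_non_coprime(2, 6, 1, 4))   # None, since 2 and 1 differ modulo gcd(6, 4) = 2
```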
Decomposition of surjections of finite abelian groups Given a surjection of finite abelian groups, we can use the Chinese remainder theorem to give a complete description of any such map. First of all, the theorem gives isomorphisms where . In addition, for any induced map from the original surjection, we have and since for a pair of primes , the only non-zero surjections can be defined if and . These observations are pivotal for constructing the ring of profinite integers, which is given as an inverse limit of all such maps. Dedekind's theorem Dedekind's theorem on the linear independence of characters. Let be a monoid and an integral domain, viewed as a monoid by considering the multiplication on . Then any finite family of distinct monoid homomorphisms is linearly independent. In other words, every family of elements satisfying must be equal to the family . Proof. First assume that is a field, otherwise, replace the integral domain by its quotient field, and nothing will change. We can linearly extend the monoid homomorphisms to -algebra homomorphisms , where is the monoid ring of over . Then, by linearity, the condition yields Next, for the two -linear maps and are not proportional to each other. Otherwise and would also be proportional, and thus equal since as monoid homomorphisms they satisfy: , which contradicts the assumption that they are distinct. Therefore, the kernels and are distinct. Since is a field, is a maximal ideal of for every in . Because they are distinct and maximal the ideals and are coprime whenever . The Chinese Remainder Theorem (for general rings) yields an isomorphism: where Consequently, the map is surjective. Under the isomorphisms the map corresponds to: Now, yields for every vector in the image of the map . Since is surjective, this means that for every vector Consequently, . QED.
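As a small numerical companion to the isomorphisms used in this decomposition (and to the ring isomorphism stated earlier), the following sketch, not part of the article, checks that reduction modulo 4 and modulo 3 identifies the integers modulo 12 with pairs of residues.

```python
# The map n -> (n mod 4, n mod 3) on Z/12Z, where 12 = 4 * 3 with 4 and 3 coprime.
images = {}
for n in range(12):
    images[(n % 4, n % 3)] = n

print(len(images))     # 12: every pair of residues is hit exactly once, so the map is a bijection
print(images[(1, 2)])  # 5, the unique n < 12 with n = 1 (mod 4) and n = 2 (mod 3)
```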
Mathematics
Modular arithmetic
null