Dataset columns: id (int64, 39 to 79M); url (string, length 32 to 168); text (string, length 7 to 145k); source (string, length 2 to 105); categories (list, length 1 to 6); token_count (int64, 3 to 32.2k); subcategories (list, length 0 to 27).
1,550,677
https://en.wikipedia.org/wiki/Cylinder%20stress
In mechanics, a cylinder stress is a stress distribution with rotational symmetry; that is, which remains unchanged if the stressed object is rotated about some fixed axis. Cylinder stress patterns include: circumferential stress, or hoop stress, a normal stress in the tangential (azimuth) direction. axial stress, a normal stress parallel to the axis of cylindrical symmetry. radial stress, a normal stress in directions coplanar with but perpendicular to the symmetry axis. These three principal stresses- hoop, longitudinal, and radial can be calculated analytically using a mutually perpendicular tri-axial stress system. The classical example (and namesake) of hoop stress is the tension applied to the iron bands, or hoops, of a wooden barrel. In a straight, closed pipe, any force applied to the cylindrical pipe wall by a pressure differential will ultimately give rise to hoop stresses. Similarly, if this pipe has flat end caps, any force applied to them by static pressure will induce a perpendicular axial stress on the same pipe wall. Thin sections often have negligibly small radial stress, but accurate models of thicker-walled cylindrical shells require such stresses to be considered. In thick-walled pressure vessels, construction techniques allowing for favorable initial stress patterns can be utilized. These compressive stresses at the inner surface reduce the overall hoop stress in pressurized cylinders. Cylindrical vessels of this nature are generally constructed from concentric cylinders shrunk over (or expanded into) one another, i.e., built-up shrink-fit cylinders, but can also be performed to singular cylinders though autofrettage of thick cylinders. Definitions Hoop stress The hoop stress is the force over area exerted circumferentially (perpendicular to the axis and the radius of the object) in both directions on every particle in the cylinder wall. It can be described as: where: F is the force exerted circumferentially on an area of the cylinder wall that has the following two lengths as sides: t is the radial thickness of the cylinder l is the axial length of the cylinder. An alternative to hoop stress in describing circumferential stress is wall stress or wall tension (T), which usually is defined as the total circumferential force exerted along the entire radial thickness: Along with axial stress and radial stress, circumferential stress is a component of the stress tensor in cylindrical coordinates. It is usually useful to decompose any force applied to an object with rotational symmetry into components parallel to the cylindrical coordinates r, z, and θ. These components of force induce corresponding stresses: radial stress, axial stress, and hoop stress, respectively. Relation to internal pressure Thin-walled assumption For the thin-walled assumption to be valid, the vessel must have a wall thickness of no more than about one-tenth (often cited as Diameter / t > 20) of its radius. This allows for treating the wall as a surface, and subsequently using the Young–Laplace equation for estimating the hoop stress created by an internal pressure on a thin-walled cylindrical pressure vessel: (for a cylinder) (for a sphere) where P is the internal pressure t is the wall thickness r is the mean radius of the cylinder is the hoop stress. The hoop stress equation for thin shells is also approximately valid for spherical vessels, including plant cells and bacteria in which the internal turgor pressure may reach several atmospheres. 
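The hoop-stress equations this passage refers to did not survive the plain-text extraction. As a hedged reconstruction, the standard forms consistent with the symbols defined above (F, t, l, P, r) are

\[ \sigma_\theta = \frac{F}{t\,l} \]

for the hoop stress as circumferential force per wall cross-sectional area, and, from the Young–Laplace relation for a thin wall under internal pressure,

\[ \sigma_\theta \approx \frac{P\,r}{t} \ \text{(cylinder)}, \qquad \sigma \approx \frac{P\,r}{2t} \ \text{(sphere)}. \]

These are the conventional textbook expressions, not a verbatim restoration of the article's own markup.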
In practical engineering applications for cylinders (pipes and tubes), hoop stress is often re-arranged for pressure, and is called Barlow's formula. Inch-pound-second system (IPS) units for P are pounds-force per square inch (psi). Units for t, and d are inches (in). SI units for P are pascals (Pa), while t and d=2r are in meters (m). When the vessel has closed ends, the internal pressure acts on them to develop a force along the axis of the cylinder. This is known as the axial stress and is usually less than the hoop stress. Though this may be approximated to There is also a radial stress that is developed perpendicular to the surface and may be estimated in thin walled cylinders as: In the thin-walled assumption the ratio is large, so in most cases this component is considered negligible compared to the hoop and axial stresses. Thick-walled vessels When the cylinder to be studied has a ratio of less than 10 (often cited as ) the thin-walled cylinder equations no longer hold since stresses vary significantly between inside and outside surfaces and shear stress through the cross section can no longer be neglected. These stresses and strains can be calculated using the Lamé equations, a set of equations developed by French mathematician Gabriel Lamé. where: and are constants of integration, which may be found from the boundary conditions, is the radius at the point of interest (e.g., at the inside or outside walls). For cylinder with boundary conditions: (i.e. internal pressure at inner surface), (i.e. external pressure at outer surface), the following constants are obtained: , . Using these constants, the following equation for radial stress and hoop stress are obtained, respectively: , . Note that when the results of these stresses are positive, it indicates tension, and negative values, compression. For a solid cylinder: then and a solid cylinder cannot have an internal pressure so . Being that for thick-walled cylinders, the ratio is less than 10, the radial stress, in proportion to the other stresses, becomes non-negligible (i.e. P is no longer much, much less than Pr/t and Pr/2t), and so the thickness of the wall becomes a major consideration for design (Harvey, 1974, pp. 57). In pressure vessel theory, any given element of the wall is evaluated in a tri-axial stress system, with the three principal stresses being hoop, longitudinal, and radial. Therefore, by definition, there exist no shear stresses on the transverse, tangential, or radial planes. In thick-walled cylinders, the maximum shear stress at any point is given by half of the algebraic difference between the maximum and minimum stresses, which is, therefore, equal to half the difference between the hoop and radial stresses. The shearing stress reaches a maximum at the inner surface, which is significant because it serves as a criterion for failure since it correlates well with actual rupture tests of thick cylinders (Harvey, 1974, p. 57). Practical effects Engineering Fracture is governed by the hoop stress in the absence of other external loads since it is the largest principal stress. Note that a hoop experiences the greatest stress at its inside (the outside and inside experience the same total strain, which is distributed over different circumferences); hence cracks in pipes should theoretically start from inside the pipe. This is why pipe inspections after earthquakes usually involve sending a camera inside a pipe to inspect for cracks. 
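The Lamé constants and stress equations mentioned in the thick-walled discussion above were likewise stripped during extraction. In their standard form, for inner radius a, outer radius b, internal pressure p_i and external pressure p_o, the radial and hoop stresses are sigma_r = A - B/r^2 and sigma_theta = A + B/r^2, with A = (p_i a^2 - p_o b^2)/(b^2 - a^2) and B = (p_i - p_o) a^2 b^2 / (b^2 - a^2). The short Python sketch below evaluates these and compares the bore hoop stress with the thin-walled estimate; the function name, dimensions and pressure are illustrative assumptions, not values from the article.

def lame_stresses(r, a, b, p_i, p_o=0.0):
    """Radial and hoop stress at radius r in a thick-walled cylinder (standard Lame solution)."""
    A = (p_i * a**2 - p_o * b**2) / (b**2 - a**2)      # constant of integration
    B = (p_i - p_o) * a**2 * b**2 / (b**2 - a**2)      # constant of integration
    sigma_r = A - B / r**2        # radial stress (negative means compression)
    sigma_theta = A + B / r**2    # hoop stress (positive means tension)
    return sigma_r, sigma_theta

# Illustrative case: 100 mm inner radius, 150 mm outer radius, 20 MPa internal pressure
a, b, p_i = 0.100, 0.150, 20e6
sr, st = lame_stresses(a, a, b, p_i)
print(f"inner surface: sigma_r = {sr/1e6:.1f} MPa, sigma_theta = {st/1e6:.1f} MPa")
print(f"thin-wall estimate P*r_mean/t = {p_i * (a + b) / 2 / (b - a) / 1e6:.1f} MPa")

At the inner surface the radial stress recovers -p_i, as the boundary condition in the text requires, and the hoop stress (about 52 MPa here) sits close to, but above, the thin-wall estimate of 50 MPa, which is why the radial term stops being negligible as the wall thickens.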
Yielding is governed by an equivalent stress that includes hoop stress and the longitudinal or radial stress when absent. Medicine In the pathology of vascular or gastrointestinal walls, the wall tension represents the muscular tension on the wall of the vessel. As a result of the Law of Laplace, if an aneurysm forms in a blood vessel wall, the radius of the vessel has increased. This means that the inward force on the vessel decreases, and therefore the aneurysm will continue to expand until it ruptures. A similar logic applies to the formation of diverticuli in the gut. Theory development The first theoretical analysis of the stress in cylinders was developed by the mid-19th century engineer William Fairbairn, assisted by his mathematical analyst Eaton Hodgkinson. Their first interest was in studying the design and failures of steam boilers. Fairbairn realized that the hoop stress was twice the longitudinal stress, an important factor in the assembly of boiler shells from rolled sheets joined by riveting. Later work was applied to bridge-building and the invention of the box girder. In the Chepstow Railway Bridge, the cast iron pillars are strengthened by external bands of wrought iron. The vertical, longitudinal force is a compressive force, which cast iron is well able to resist. The hoop stress is tensile, and so wrought iron, a material with better tensile strength than cast iron, is added. See also Can be caused by cylinder stress: Boston Molasses Disaster Boiler explosion Boiling liquid expanding vapor explosion Related engineering topics: Stress concentration Hydrostatic test Buckling Blood pressure#Relation_to_wall_tension Piping#Stress_analysis Designs very affected by this stress: Pressure vessel Rocket engine Flywheel The dome of Florence Cathedral References Mechanics
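The Medicine paragraph above leans on the Law of Laplace without stating it; for a thin-walled cylindrical vessel the standard relation is

\[ T = P\,r, \]

so at a given blood pressure P the wall tension T needed to contain that pressure grows in proportion to the radius r. A dilated (aneurysmal) segment therefore demands ever more wall tension as it widens, which is the self-reinforcing mechanism the text describes. This is the conventional form of the law, added here for clarity rather than quoted from the article.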
Cylinder stress
[ "Physics", "Engineering" ]
1,774
[ "Mechanics", "Mechanical engineering" ]
1,551,135
https://en.wikipedia.org/wiki/Absorption%20band
In spectroscopy, an absorption band is a range of wavelengths, frequencies or energies in the electromagnetic spectrum that are characteristic of a particular transition from initial to final state in a substance. According to quantum mechanics, atoms and molecules can only hold certain defined quantities of energy, or exist in specific states. When such quanta of electromagnetic radiation are emitted or absorbed by an atom or molecule, energy of the radiation changes the state of the atom or molecule from an initial state to a final state. Overview When electromagnetic radiation is absorbed by an atom or molecule, the energy of the radiation changes the state of the atom or molecule from an initial state to a final state. The number of states in a specific energy range is discrete for gaseous or diluted systems, with discrete energy levels. Condensed systems, like liquids or solids, have a continuous density of states distribution and often possess continuous energy bands. In order for a substance to change its energy it must do so in a series of "steps" by the absorption of a photon. This absorption process can move a particle, like an electron, from an occupied state to an empty or unoccupied state. It can also move a whole vibrating or rotating system, like a molecule, from one vibrational or rotational state to another or it can create a quasiparticle like a phonon or a plasmon in a solid. Electromagnetic transitions When a photon is absorbed, the electromagnetic field of the photon disappears as it initiates a change in the state of the system that absorbs the photon. Energy, momentum, angular momentum, magnetic dipole moment and electric dipole moment are transported from the photon to the system. Because there are conservation laws, that have to be satisfied, the transition has to meet a series of constraints. This results in a series of selection rules. It is not possible to make any transition that lies within the energy or frequency range that is observed. The strength of an electromagnetic absorption process is mainly determined by two factors. First, transitions that only change the magnetic dipole moment of the system are much weaker than transitions that change the electric dipole moment and that transitions to higher order moments, like quadrupole transitions, are weaker than dipole transitions. Second, not all transitions have the same transition matrix element, absorption coefficient or oscillator strength. For some types of bands or spectroscopic disciplines temperature and statistical mechanics plays an important role. For (far) infrared, microwave and radio frequency ranges the temperature dependent occupation numbers of states and the difference between Bose-Einstein statistics and Fermi-Dirac statistics determines the intensity of observed absorptions. For other energy ranges thermal motion effects, like Doppler broadening may determine the linewidth. Band and line shape A wide variety of absorption band and line shapes exist, and the analysis of the band or line shape can be used to determine information about the system that causes it. In many cases it is convenient to assume that a narrow spectral line is a Lorentzian or Gaussian, depending respectively on the decay mechanism or temperature effects like Doppler broadening. Analysis of the spectral density and the intensities, width and shape of spectral lines sometimes can yield a lot of information about the observed system like it is done with Mössbauer spectra. 
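To make the Lorentzian-versus-Gaussian distinction above concrete (lifetime or decay broadening versus temperature-driven Doppler broadening), here is a minimal Python sketch of the two unit-area profiles; the widths and grid are arbitrary illustrative values.

import numpy as np

def lorentzian(nu, nu0, gamma):
    """Unit-area Lorentzian profile (homogeneous, lifetime-limited broadening)."""
    return (gamma / np.pi) / ((nu - nu0)**2 + gamma**2)

def gaussian(nu, nu0, sigma):
    """Unit-area Gaussian profile (e.g. Doppler broadening)."""
    return np.exp(-(nu - nu0)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

nu = np.linspace(-10.0, 10.0, 2001)   # detuning from line centre, arbitrary units
L = lorentzian(nu, 0.0, 1.0)
G = gaussian(nu, 0.0, 1.0)
print(L[-1] / G[-1])                  # the Lorentzian's wings are many orders of magnitude heavier

The far heavier wings of the Lorentzian are one of the shape differences that, as the text notes, let the broadening mechanism be inferred from a measured line.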
In systems with a very large number of states like macromolecules and large conjugated systems the separate energy levels can't always be distinguished in an absorption spectrum. If the line broadening mechanism is known and the shape of then spectral density is clearly visible in the spectrum, it is possible to get the desired data. Sometimes it is enough to know the lower or upper limits of the band or its position for an analysis. For condensed matter and solids the shape of absorption bands are often determined by transitions between states in their continuous density of states distributions. For crystals, the electronic band structure determines the density of states. In fluids, glasses and amorphous solids, there is no long range correlation and the dispersion relations are isotropic. For charge-transfer complexes and conjugated systems, the band width is complicated by a variety of factors, compared to condensed matter. Types Electronic transitions Electromagnetic transitions in atoms, molecules and condensed matter mainly take place at energies corresponding to the UV and visible part of the spectrum. Core electrons in atoms, and many other phenomena, are observed with different brands of XAS in the X-ray energy range. Electromagnetic transitions in atomic nuclei, as observed in Mössbauer spectroscopy, take place in the gamma ray part of the spectrum. The main factors that cause broadening of the spectral line into an absorption band of a molecular solid are the distributions of vibrational and rotational energies of the molecules in the sample (and also those of their excited states). In solid crystals the shape of absorption bands are determined by the density of states of initial and final states of electronic states or lattice vibrations, called phonons, in the crystal structure. In gas phase spectroscopy, the fine structure afforded by these factors can be discerned, but in solution-state spectroscopy, the differences in molecular micro environments further broaden the structure to give smooth bands. Electronic transition bands of molecules may be from tens to several hundred nanometers in breadth. Vibrational transitions Vibrational transitions and optical phonon transitions take place in the infrared part of the spectrum, at wavelengths of around 1-30 micrometres. Rotational transitions Rotational transitions take place in the far infrared and microwave regions. Other transitions Absorption bands in the radio frequency range are found in NMR spectroscopy. The frequency ranges and intensities are determined by the magnetic moment of the nuclei that are observed, the applied magnetic field and temperature occupation number differences of the magnetic states. Applications Materials with broad absorption bands are being applied in pigments, dyes and optical filters. Titanium dioxide, zinc oxide and chromophores are applied as UV absorbers and reflectors in sunscreen. Absorption bands of interest to the atmospheric physicist In oxygen: the Hopfield bands, very strong, between about 67 and 100 nanometres in the ultraviolet (named after John J. 
Hopfield); a diffuse system between 101.9 and 130 nanometres; the Schumann–Runge continuum, very strong, between 135 and 176 nanometres; the Schumann–Runge bands between 176 and 192.6 nanometres (named for Victor Schumann and Carl Runge); the Herzberg bands between 240 and 260 nanometres (named after Gerhard Herzberg); the atmospheric bands between 538 and 771 nanometres in the visible spectrum; including the oxygen δ (~580 nm), γ (~629 nm), B (~688 nm), and A-band (~759-771 nm) a system in the infrared at about 1000 nanometres. In ozone: the Hartley bands between 200 and 300 nanometres in the ultraviolet, with a very intense maximum absorption at 255 nanometres (named after Walter Noel Hartley); the Huggins bands, weak absorption between 320 and 360 nanometres (named after Sir William Huggins); the Chappuis bands (sometimes misspelled "Chappius"), a weak diffuse system between 375 and 650 nanometres in the visible spectrum (named after J. Chappuis); and the Wulf bands in the infrared beyond 700 nm, centered at 4,700, 9,600 and 14,100 nanometres, the latter being the most intense (named after Oliver R. Wulf). In nitrogen: The Lyman–Birge–Hopfield bands, sometimes known as the Birge–Hopfield bands, in the far ultraviolet: 140– 170 nm (named after Theodore Lyman, Raymond T. Birge, and John J. Hopfield) See also Franck–Condon principle Spectroscopy Spectral line References Spectroscopy
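As a small numerical companion to the band positions listed above, the corresponding photon energies follow from E = hc/lambda. The wavelengths below are taken from the text; the script itself is illustrative.

h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

bands_nm = {
    "O2 Schumann-Runge band edge (176 nm)": 176,
    "O3 Hartley band maximum (255 nm)": 255,
    "O2 atmospheric A-band (~760 nm)": 760,
    "O3 Wulf band (14,100 nm)": 14100,
}
for name, lam_nm in bands_nm.items():
    print(f"{name}: {h * c / (lam_nm * 1e-9) / eV:.3f} eV")

The ultraviolet systems sit at several electronvolts, the visible atmospheric bands near 1.6 eV, and the infrared Wulf bands below 0.1 eV, matching the ordering of the spectral regions discussed in the article.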
Absorption band
[ "Physics", "Chemistry" ]
1,601
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
1,551,195
https://en.wikipedia.org/wiki/Lead%20chamber%20process
The lead chamber process was an industrial method used to produce sulfuric acid in large quantities. It has been largely supplanted by the contact process. In 1746 in Birmingham, England, John Roebuck began producing sulfuric acid in lead-lined chambers, which were stronger and less expensive and could be made much larger than the glass containers that had been used previously. This allowed the effective industrialization of sulfuric acid production, and with several refinements, this process remained the standard method of production for almost two centuries. The process was so robust that as late as 1946, the chamber process still accounted for 25% of sulfuric acid manufactured. History Sulfur dioxide is introduced with steam and nitrogen dioxide into large chambers lined with sheet lead where the gases are sprayed down with water and chamber acid (62–70% sulfuric acid). The sulfur dioxide and nitrogen dioxide dissolve, and over a period of approximately 30 minutes the sulfur dioxide is oxidized to sulfuric acid. The presence of nitrogen dioxide is necessary for the reaction to proceed at a reasonable rate. The process is highly exothermic, and a major consideration of the design of the chambers was to provide a way to dissipate the heat formed in the reactions. Early plants used very large lead-lined wooden rectangular chambers (Faulding box chambers) that were cooled by ambient air. The internal lead sheathing served to contain the corrosive sulfuric acid and to render the wooden chambers waterproof. In the 1820s-1830s, French chemist Joseph Louis Gay-Lussac (simultaneously and likely in collaboration with William Gossage) realized that it is not the bulk of liquid determining the speed of reaction but the internal area of the chamber, so he redesigned the chambers as stoneware packed masonry cylinders, which was an early example of the packed bed. In the 20th century, plants using Mills-Packard chambers supplanted the earlier designs. These chambers were tall tapered cylinders that were externally cooled by water flowing down the outside surface of the chamber. Sulfur dioxide for the process was provided by burning elemental sulfur or by the roasting of sulfur-containing metal ores in a stream of air in a furnace. During the early period of manufacture, nitrogen oxides were produced by the decomposition of niter at high temperature in the presence of acid, but this process was gradually supplanted by the air oxidation of ammonia to nitric oxide in the presence of a catalyst. The recovery and reuse of oxides of nitrogen was an important economic consideration in the operation of a chamber process plant. In the reaction chambers, nitric oxide reacts with oxygen to produce nitrogen dioxide. Liquid from the bottom of the chambers is diluted and pumped to the top of the chamber, and sprayed downward in a fine mist. Sulfur dioxide and nitrogen dioxide are absorbed in the liquid, and react to form sulfuric acid and nitric oxide. The liberated nitric oxide is sparingly soluble in water, and returns to the gas in the chamber where it reacts with oxygen in the air to reform nitrogen dioxide. Some percentage of the nitrogen oxides is sequestered in the reaction liquor as nitrosylsulfuric acid and as nitric acid, so fresh nitric oxide must be added as the process proceeds. Later versions of chamber plants included a high-temperature Glover tower to recover the nitrogen oxides from the chamber liquor, while concentrating the chamber acid to as much as 78% H2SO4. 
Exhaust gases from the chambers are scrubbed by passing them into a tower, through which some of the Glover acid flows over broken tile. Nitrogen oxides are absorbed to form nitrosylsulfuric acid, which is then returned to the Glover tower to reclaim the oxides of nitrogen. Sulfuric acid produced in the reaction chambers is limited to about 35% concentration. At higher concentrations, nitrosylsulfuric acid precipitates upon the lead walls in the form of 'chamber crystals', and is no longer able to catalyze the oxidation reactions. Chemistry Sulfur dioxide is generated by burning elemental sulfur or by roasting pyritic ore in a current of air: S8 + 8 O2 → 8 SO2 4 FeS2 + 11 O2 → 2 Fe2O3 + 8 SO2 Nitrogen oxides are produced by decomposition of niter in the presence of sulfuric acid, or by hydrolysis of nitrosylsulfuric acid: 2 NaNO3 + H2SO4 → Na2SO4 + H2O + NO + NO2 + O2 2 NOHSO4 + H2O → 2 H2SO4 + NO + NO2 In the reaction chambers, sulfur dioxide and nitrogen dioxide dissolve in the reaction liquor. Nitrogen dioxide is hydrated to produce nitrous acid, which then oxidizes the sulfur dioxide to sulfuric acid and nitric oxide. The reactions are not well characterized, but it is known that nitrosylsulfuric acid is an intermediate in at least one pathway. The major overall reactions are: 2 NO2 + H2O → HNO2 + HNO3 SO2 (aq) + HNO3 → NOHSO4 NOHSO4 + HNO2 → H2SO4 + NO2 + NO SO2 (aq) + 2 HNO2 → H2SO4 + 2 NO Nitric oxide escapes from the reaction liquor and is subsequently reoxidized by molecular oxygen to nitrogen dioxide. This is the overall rate determining step in the process: 2 NO + O2 → 2 NO2 Nitrogen oxides are absorbed and regenerated in the process, and thus serve as a catalyst for the overall reaction: 2 SO2 + 2 H2O + O2 → 2 H2SO4 References Further reading External links Process flow sheet of sulphuric acid manufacturing by lead chamber process Industrial processes Lead Sulfur Catalysis
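A quick way to check the overall catalytic equation given above (2 SO2 + 2 H2O + O2 -> 2 H2SO4) is an element balance; the small Python sketch below is illustrative and not part of the source.

from collections import Counter

def atom_count(species):
    """Sum element counts over (coefficient, composition) pairs."""
    total = Counter()
    for coeff, composition in species:
        for element, n in composition.items():
            total[element] += coeff * n
    return total

SO2   = {"S": 1, "O": 2}
H2O   = {"H": 2, "O": 1}
O2    = {"O": 2}
H2SO4 = {"H": 2, "S": 1, "O": 4}

lhs = atom_count([(2, SO2), (2, H2O), (1, O2)])
rhs = atom_count([(2, H2SO4)])
print(lhs == rhs, dict(lhs), dict(rhs))   # True: 2 S, 4 H, 8 O on each side

The balance also makes the text's point visible: the nitrogen oxides appear nowhere in the net equation, which is what qualifies them as a catalyst for the overall reaction.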
Lead chamber process
[ "Chemistry" ]
1,203
[ "Catalysis", "Chemical kinetics" ]
4,734
https://en.wikipedia.org/wiki/Bernoulli%27s%20inequality
In mathematics, Bernoulli's inequality (named after Jacob Bernoulli) is an inequality that approximates exponentiations of . It is often employed in real analysis. It has several useful variants: Integer exponent Case 1: for every integer and real number . The inequality is strict if and . Case 2: for every integer and every real number . Case 3: for every even integer and every real number . Real exponent for every real number and . The inequality is strict if and . for every real number and . History Jacob Bernoulli first published the inequality in his treatise "Positiones Arithmeticae de Seriebus Infinitis" (Basel, 1689), where he used the inequality often. According to Joseph E. Hofmann, Über die Exercitatio Geometrica des M. A. Ricci (1963), p. 177, the inequality is actually due to Sluse in his Mesolabum (1668 edition), Chapter IV "De maximis & minimis". Proof for integer exponent The first case has a simple inductive proof: Suppose the statement is true for : Then it follows that Bernoulli's inequality can be proved for case 2, in which is a non-negative integer and , using mathematical induction in the following form: we prove the inequality for , from validity for some r we deduce validity for . For , is equivalent to which is true. Similarly, for we have Now suppose the statement is true for : Then it follows that since as well as . By the modified induction we conclude the statement is true for every non-negative integer . By noting that if , then is negative gives case 3. Generalizations Generalization of exponent The exponent can be generalized to an arbitrary real number as follows: if , then for or , and for . This generalization can be proved by comparing derivatives. The strict versions of these inequalities require and . Generalization of base Instead of the inequality holds also in the form where are real numbers, all greater than , all with the same sign. Bernoulli's inequality is a special case when . This generalized inequality can be proved by mathematical induction. In the first step we take . In this case the inequality is obviously true. In the second step we assume validity of the inequality for numbers and deduce validity for numbers. We assume thatis valid. After multiplying both sides with a positive number we get: As all have the same sign, the products are all positive numbers. So the quantity on the right-hand side can be bounded as follows:what was to be shown. Strengthened version The following theorem presents a strengthened version of the Bernoulli inequality, incorporating additional terms to refine the estimate under specific conditions. Let the expoent be a nonnegative integer and let be a real number with if is odd and greater than 1. Then with equality if and only if or . Related inequalities The following inequality estimates the -th power of from the other side. For any real numbers and with , one has where 2.718.... This may be proved using the inequality Alternative form An alternative form of Bernoulli's inequality for and is: This can be proved (for any integer ) by using the formula for geometric series: (using ) or equivalently Alternative proofs Arithmetic and geometric means An elementary proof for and can be given using weighted AM-GM. Let be two non-negative real constants. By weighted AM-GM on with weights respectively, we get Note that and so our inequality is equivalent to After substituting (bearing in mind that this implies ) our inequality turns into which is Bernoulli's inequality. 
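The formulas were stripped from this passage in extraction, so the statements read incompletely. As a reconstruction using the standard forms consistent with the conditions described above, Bernoulli's inequality is

\[ (1+x)^r \ge 1 + rx. \]

Integer exponent: Case 1 holds for every integer r >= 1 and every real x >= -1, and is strict if x != 0 and r >= 2; Case 2 holds for every integer r >= 0 and every real x >= -2; Case 3, for even r >= 0, holds for every real x. Real exponent: the inequality holds for every real r >= 1 and x >= -1 (strict if x != 0 and r != 1), while for 0 <= r <= 1 and x >= -1 it reverses to

\[ (1+x)^r \le 1 + rx. \]

These are the standard statements of the result, supplied here because the article's own formulas did not survive the conversion to plain text.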
Geometric series Bernoulli's inequality is equivalent to and by the formula for geometric series (using y = 1 + x) we get which leads to Now if then by monotony of the powers each summand , and therefore their sum is greater and hence the product on the LHS of (). If then by the same arguments and thus all addends are non-positive and hence so is their sum. Since the product of two non-positive numbers is non-negative, we get again (). Binomial theorem One can prove Bernoulli's inequality for x ≥ 0 using the binomial theorem. It is true trivially for r = 0, so suppose r is a positive integer. Then Clearly and hence as required. Using convexity For the function is strictly convex. Therefore, for holds and the reversed inequality is valid for and . Another way of using convexity is to re-cast the desired inequality to for real and real . This inequality can be proved using the fact that the function is concave, and then using Jensen's inequality in the form to give: which is the desired inequality. Notes References External links Bernoulli Inequality by Chris Boucher, Wolfram Demonstrations Project. Inequalities
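The geometric-series step sketched above can be restated explicitly (a reconstruction of the standard argument, since the displayed formulas are missing). With y = 1 + x,

\[ (1+x)^r - 1 = x \sum_{k=0}^{r-1} (1+x)^k, \qquad\text{hence}\qquad (1+x)^r - 1 - rx = x\left(\sum_{k=0}^{r-1} (1+x)^k - r\right). \]

For x >= 0 every power (1+x)^k is at least 1, so the bracketed term is non-negative and the right-hand side is non-negative; for -2 <= x <= 0 every power is at most 1, so the bracketed term is non-positive and, multiplied by the non-positive x, the right-hand side is again non-negative. Either way (1+x)^r >= 1 + rx, which is the case split the proof above describes.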
Bernoulli's inequality
[ "Mathematics" ]
1,015
[ "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
4,746
https://en.wikipedia.org/wiki/Plague%20%28disease%29
Plague is an infectious disease caused by the bacterium Yersinia pestis. Symptoms include fever, weakness and headache. Usually this begins one to seven days after exposure. There are three forms of plague, each affecting a different part of the body and causing associated symptoms. Pneumonic plague infects the lungs, causing shortness of breath, coughing and chest pain; bubonic plague affects the lymph nodes, making them swell; and septicemic plague infects the blood and can cause tissues to turn black and die. The bubonic and septicemic forms are generally spread by flea bites or handling an infected animal, whereas pneumonic plague is generally spread between people through the air via infectious droplets. Diagnosis is typically by finding the bacterium in fluid from a lymph node, blood or sputum. Those at high risk may be vaccinated. Those exposed to a case of pneumonic plague may be treated with preventive medication. If infected, treatment is with antibiotics and supportive care. Typically antibiotics include a combination of gentamicin and a fluoroquinolone. The risk of death with treatment is about 10% while without it is about 70%. Globally, about 600 cases are reported a year. In 2017, the countries with the most cases include the Democratic Republic of the Congo, Madagascar and Peru. In the United States, infections occasionally occur in rural areas, where the bacteria are believed to circulate among rodents. It has historically occurred in large outbreaks, with the best known being the Black Death in the 14th century, which resulted in more than 50 million deaths in Europe. Signs and symptoms There are several different clinical manifestations of plague. The most common form is bubonic plague, followed by septicemic and pneumonic plague. Other clinical manifestations include plague meningitis, plague pharyngitis, and ocular plague. General symptoms of plague include fever, chills, headaches, and nausea. Many people experience swelling in their lymph nodes if they have bubonic plague. For those with pneumonic plague, symptoms may (or may not) include a cough, pain in the chest, and haemoptysis. Bubonic plague When a flea bites a human and contaminates the wound with regurgitated blood, the plague-causing bacteria are passed into the tissue. Y. pestis can reproduce inside cells, so even if phagocytosed, they can still survive. Once in the body, the bacteria can enter the lymphatic system, which drains interstitial fluid. Plague bacteria secrete several toxins, one of which is known to cause beta-adrenergic blockade. Y. pestis spreads through the lymphatic vessels of the infected human until it reaches a lymph node, where it causes acute lymphadenitis. The swollen lymph nodes form the characteristic buboes associated with the disease, and autopsies of these buboes have revealed them to be mostly hemorrhagic or necrotic. If the lymph node is overwhelmed, the infection can pass into the bloodstream, causing secondary septicemic plague and if the lungs are seeded, it can cause secondary pneumonic plague. Septicemic plague Lymphatics ultimately drain into the bloodstream, so the plague bacteria may enter the blood and travel to almost any part of the body. In septicemic plague, bacterial endotoxins cause disseminated intravascular coagulation (DIC), causing tiny clots throughout the body and possibly ischemic necrosis (tissue death due to lack of circulation/perfusion to that tissue) from the clots. 
DIC results in depletion of the body's clotting resources so that it can no longer control bleeding. Consequently, there is bleeding into the skin and other organs, which can cause red and/or black patchy rash and hemoptysis/hematemesis (coughing up/ vomiting of blood). There are bumps on the skin that look somewhat like insect bites; these are usually red, and sometimes white in the centre. Untreated, the septicemic plague is usually fatal. Early treatment with antibiotics reduces the mortality rate to between 4 and 15 per cent. Pneumonic plague The pneumonic form of plague arises from infection of the lungs. It causes coughing and thereby produces airborne droplets that contain bacterial cells and are likely to infect anyone inhaling them. The incubation period for pneumonic plague is short, usually two to four days, but sometimes just a few hours. The initial signs are indistinguishable from several other respiratory illnesses; they include headache, weakness, and spitting or vomiting of blood. The course of the disease is rapid; unless diagnosed and treated soon enough, typically within a few hours, death may follow in one to six days; in untreated cases, mortality is nearly 100%. Cause Transmission of Y. pestis to an uninfected individual is possible by any of the following means: droplet contact – coughing or sneezing on another person direct physical contact – touching an infected person, including sexual contact indirect contact – usually by touching soil contamination or a contaminated surface airborne transmission – if the microorganism can remain in the air for long periods fecal-oral transmission – usually from contaminated food or water sources vector borne transmission – carried by insects or other animals. Yersinia pestis circulates in animal reservoirs, particularly in rodents, in the natural foci of infection found on all continents except Australia. The natural foci of plague are situated in a broad belt in the tropical and sub-tropical latitudes and the warmer parts of the temperate latitudes around the globe, between the parallels 55° N and 40° S. Contrary to popular belief, rats did not directly start the spread of the bubonic plague. It is mainly a disease in the fleas (Xenopsylla cheopis) that infested the rats, making the rats themselves the first victims of the plague. Rodent-borne infection in a human occurs when a person is bitten by a flea that has been infected by biting a rodent that itself has been infected by the bite of a flea carrying the disease. The bacteria multiply inside the flea, sticking together to form a plug that blocks its stomach and causes it to starve. The flea then bites a host and continues to feed, even though it cannot quell its hunger, and consequently, the flea vomits blood tainted with the bacteria back into the bite wound. The bubonic plague bacterium then infects a new person and the flea eventually dies from starvation. Serious outbreaks of plague are usually started by other disease outbreaks in rodents or a rise in the rodent population. A 21st-century study of a 1665 outbreak of plague in the village of Eyam in England's Derbyshire Dales – which isolated itself during the outbreak, facilitating modern study – found that three-quarters of cases are likely to have been due to human-to-human transmission, especially within families, a much larger proportion than previously thought. Diagnosis Symptoms of plague are usually non-specific and to definitively diagnose plague, laboratory testing is required. Y. 
pestis can be identified through both a microscope and by culturing a sample and this is used as a reference standard to confirm that a person has a case of plague. The sample can be obtained from the blood, mucus (sputum), or aspirate extracted from inflamed lymph nodes (buboes). If a person is administered antibiotics before a sample is taken or if there is a delay in transporting the person's sample to a laboratory and/or a poorly stored sample, there is a possibility for false negative results. Polymerase chain reaction (PCR) may also be used to diagnose plague, by detecting the presence of bacterial genes such as the pla gene (plasmogen activator) and caf1 gene, (F1 capsule antigen). PCR testing requires a very small sample and is effective for both alive and dead bacteria. For this reason, if a person receives antibiotics before a sample is collected for laboratory testing, they may have a false negative culture and a positive PCR result. Blood tests to detect antibodies against Y. pestis can also be used to diagnose plague, however, this requires taking blood samples at different periods to detect differences between the acute and convalescent phases of F1 antibody titres. In 2020, a study about rapid diagnostic tests that detect the F1 capsule antigen (F1RDT) by sampling sputum or bubo aspirate was released. Results show rapid diagnostic F1RDT test can be used for people who have suspected pneumonic and bubonic plague but cannot be used in asymptomatic people. F1RDT may be useful in providing a fast result for prompt treatment and fast public health response as studies suggest that F1RDT is highly sensitive for both pneumonic and bubonic plague. However, when using the rapid test, both positive and negative results need to be confirmed to establish or reject the diagnosis of a confirmed case of plague and the test result needs to be interpreted within the epidemiological context as study findings indicate that although 40 out of 40 people who had the plague in a population of 1000 were correctly diagnosed, 317 people were diagnosed falsely as positive. Prevention Vaccination Bacteriologist Waldemar Haffkine developed the first plague vaccine in 1897. He conducted a massive inoculation program in British India, and it is estimated that 26 million doses of Haffkine's anti-plague vaccine were sent out from Bombay between 1897 and 1925, reducing the plague mortality by 50–85%. Since human plague is rare in most parts of the world as of 2023, routine vaccination is not needed other than for those at particularly high risk of exposure, nor for people living in areas with enzootic plague, meaning it occurs at regular, predictable rates in populations and specific areas, such as the western United States. It is not even indicated for most travellers to countries with known recent reported cases, particularly if their travel is limited to urban areas with modern hotels. The United States CDC thus only recommends vaccination for (1) all laboratory and field personnel who are working with Y. pestis organisms resistant to antimicrobials: (2) people engaged in aerosol experiments with Y. pestis; and (3) people engaged in field operations in areas with enzootic plague where preventing exposure is not possible (such as some disaster areas). A systematic review by the Cochrane Collaboration found no studies of sufficient quality to make any statement on the efficacy of the vaccine. Early diagnosis Diagnosing plague early leads to a decrease in transmission or spread of the disease. 
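The rapid-test figures quoted earlier in this passage (all 40 true plague cases detected in a population of 1,000, alongside 317 false positives) translate directly into a positive predictive value, which is why the text stresses confirming every result. A small worked calculation, taking the study numbers as given:

true_positives = 40      # all actual plague cases were flagged
false_positives = 317    # people without plague also flagged positive

ppv = true_positives / (true_positives + false_positives)
print(f"positive predictive value: {ppv:.1%}")   # about 11%: most positive results are false alarms
print(f"sensitivity in this sample: {true_positives / 40:.0%}")   # 100% of the 40 cases detected

So even a highly sensitive F1RDT result identifies a group in which roughly nine out of ten positives are not plague, consistent with the recommendation to interpret it within the epidemiological context.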
Prophylaxis Pre-exposure prophylaxis for first responders and health care providers who will care for patients with pneumonic plague is not considered necessary as long as standard and droplet precautions can be maintained. In cases of surgical mask shortages, patient overcrowding, poor ventilation in hospital wards, or other crises, pre-exposure prophylaxis might be warranted if sufficient supplies of antimicrobials are available. Postexposure prophylaxis should be considered for people who had close (<6 feet), sustained contact with a patient with pneumonic plague and were not wearing adequate personal protective equipment. Antimicrobial postexposure prophylaxis also can be considered for laboratory workers accidentally exposed to infectious materials and people who had close (<6 feet) or direct contact with infected animals, such as veterinary staff, pet owners, and hunters. Specific recommendations on pre- and post-exposure prophylaxis are available in the clinical guidelines on treatment and prophylaxis of plague published in 2021. Treatments If diagnosed in time, the various forms of plague are usually highly responsive to antibiotic therapy. The antibiotics often used are streptomycin, chloramphenicol and tetracycline. Amongst the newer generation of antibiotics, gentamicin and doxycycline have proven effective in monotherapeutic treatment of plague. Guidelines on treatment and prophylaxis of plague were published by the Centers for Disease Control and Prevention in 2021. The plague bacterium could develop drug resistance and again become a major health threat. One case of a drug-resistant form of the bacterium was found in Madagascar in 1995. Further outbreaks in Madagascar were reported in November 2014 and October 2017. Epidemiology Globally about 600 cases are reported a year. In 2017, the countries with the most cases include the Democratic Republic of the Congo, Madagascar and Peru. It has historically occurred in large outbreaks, with the best known being the Black Death in the 14th century which resulted in more than 50 million dead. In recent years, cases have been distributed between small seasonal outbreaks which occur primarily in Madagascar, and sporadic outbreaks or isolated cases in endemic areas. In 2022 the possible origin of all modern strands of Yersinia pestis DNA was found in human remains in three graves located in Kyrgyzstan, dated to 1338 and 1339. The siege of Caffa in Crimea in 1346, is known to have been the first plague outbreak with following strands, later to spread over Europe. Sequencing DNA compared to other ancient and modern strands paints a family tree of the bacteria. Bacteria today affecting marmots in Kyrgyzstan, are closest to the strand found in the graves, suggesting this is also the location where plague transferred from animals to humans. Biological weapon The plague has a long history as a biological weapon. Historical accounts from ancient China and medieval Europe details the use of infected animal carcasses, such as cows or horses, and human carcasses, by the Xiongnu/Huns, Mongols, Turks and other groups, to contaminate enemy water supplies. Han dynasty general Huo Qubing is recorded to have died of such contamination while engaging in warfare against the Xiongnu. Plague victims were also reported to have been tossed by catapult into cities under siege. 
In 1347, the Genoese possession of Caffa, a great trade emporium on the Crimean peninsula, came under siege by an army of Mongol warriors of the Golden Horde under the command of Jani Beg. After a protracted siege during which the Mongol army was reportedly withering from the disease, they decided to use the infected corpses as a biological weapon. The corpses were catapulted over the city walls, infecting the inhabitants. This event might have led to the transfer of the Black Death via their ships into the south of Europe, possibly explaining its rapid spread. During World War II, the Japanese Army developed weaponized plague, based on the breeding and release of large numbers of fleas. During the Japanese occupation of Manchuria, Unit 731 deliberately infected Chinese, Korean and Manchurian civilians and prisoners of war with the plague bacterium. These subjects, termed "maruta" or "logs", were then studied by dissection, others by vivisection while still conscious. Members of the unit such as Shiro Ishii were exonerated from the Tokyo tribunal by Douglas MacArthur but 12 of them were prosecuted in the Khabarovsk War Crime Trials in 1949 during which some admitted having spread bubonic plague within a radius around the city of Changde. Ishii innovated bombs containing live mice and fleas, with very small explosive loads, to deliver the weaponized microbes, overcoming the problem of the explosive killing the infected animal and insect by the use of a ceramic, rather than metal, casing for the warhead. While no records survive of the actual usage of the ceramic shells, prototypes exist and are believed to have been used in experiments during WWII. After World War II, both the United States and the Soviet Union developed means of weaponising pneumonic plague. Experiments included various delivery methods, vacuum drying, sizing the bacterium, developing strains resistant to antibiotics, combining the bacterium with other diseases (such as diphtheria), and genetic engineering. Scientists who worked in USSR bio-weapons programs have stated that the Soviet effort was formidable and that large stocks of weaponised plague bacteria were produced. Information on many of the Soviet and US projects is largely unavailable. Aerosolized pneumonic plague remains the most significant threat. The plague can be easily treated with antibiotics. Some countries, such as the United States, have large supplies on hand if such an attack should occur, making the threat less severe. See also Timeline of plague References Further reading External links WHO Health topic CDC Plague map world distribution, publications, information on bioterrorism preparedness and response regarding plague Symptoms, causes, pictures of bubonic plague Airborne diseases Bacterium-related cutaneous conditions Biological agents Epidemics Insect-borne diseases Rodent-carried diseases Zoonoses Zoonotic bacterial diseases Cat diseases Wikipedia medicine articles ready to translate
Plague (disease)
[ "Biology", "Environmental_science" ]
3,518
[ "Biological agents", "Toxicology", "Biological warfare" ]
4,775
https://en.wikipedia.org/wiki/British%20Standards
British Standards (BS) are the standards produced by the BSI Group which is incorporated under a royal charter and which is formally designated as the national standards body (NSB) for the UK. The BSI Group produces British Standards under the authority of the charter, which lays down as one of the BSI's objectives to: Formally, as stated in a 2002 memorandum of understanding between the BSI and the United Kingdom Government, British Standards are defined as: Products and services which BSI certifies as having met the requirements of specific standards within designated schemes are awarded the Kitemark. History BSI Group began in 1901 as the Engineering Standards Committee, led by James Mansergh, to standardize the number and type of steel sections, in order to make British manufacturers more efficient and competitive. Over time the standards developed to cover many aspects of tangible engineering, and then engineering methodologies including quality systems, safety and security. Creation The BSI Group as a whole does not produce British Standards, as standards work within the BSI is decentralized. The governing board of BSI establishes a Standards Board. The Standards Board does little apart from setting up sector boards (a sector in BSI parlance being a field of standardization such as ICT, quality, agriculture, manufacturing, or fire). Each sector board, in turn, constitutes several technical committees. It is the technical committees that, formally, approve a British Standard, which is then presented to the secretary of the supervisory sector board for endorsement of the fact that the technical committee has indeed completed a task for which it was constituted. Standards The standards produced are titled British Standard XXXX[-P]:YYYY where XXXX is the number of the standard, P is the number of the part of the standard (where the standard is split into multiple parts) and YYYY is the year in which the standard came into effect. BSI Group currently has over 27,000 active standards. Products are commonly specified as meeting a particular British Standard, and in general, this can be done without any certification or independent testing. The standard simply provides a shorthand way of claiming that certain specifications are met, while encouraging manufacturers to adhere to a common method for such a specification. The Kitemark can be used to indicate certification by BSI, but only where a Kitemark scheme has been set up around a particular standard. It is mainly applicable to safety and quality management standards. There is a common misunderstanding that Kitemarks are necessary to prove compliance with any BS standard, but in general, it is neither desirable nor possible that every standard be 'policed' in this way. Following the move on harmonization of the standard in Europe, some British Standards are gradually being superseded or replaced by the relevant European Standards (EN). Status of standards Standards are continuously reviewed and developed and are periodically allocated one or more of the following status keywords. Confirmed - the standard has been reviewed and confirmed as being current. Current - the document is the current, most recently published one available. Draft for public comment/DPC - a national stage in the development of a standard, where wider consultation is sought within the UK. 
Obsolescent - indicating by amendment that the standard is not recommended for use for new equipment, but needs to be retained to provide for the servicing of equipment that is expected to have a long working life, or due to legislative issues. Partially replaced - the standard has been partially replaced by one or more other standards. Proposed for confirmation - the standard is being reviewed and it has been proposed that it is confirmed as the current standard. Proposed for obsolescence - the standard is being reviewed and it has been proposed that it is made obsolescent. Proposed for withdrawal - the standard is being reviewed and it has been proposed that it is withdrawn. Revised - the standard has been revised. Superseded - the standard has been replaced by one or more other standards. Under review - the standard is under review. Withdrawn - the document is no longer current and has been withdrawn. Work in hand - there is work being undertaken on the standard and there may be a related draft for public comment available. Examples BS 0 A standard for standards specifies development, structure and drafting of standards. BS 1 Lists of rolled sections for structural purposes BS 2 Specification and sections of tramway rails and fishplates BS 3 Report on influence of gauge length and section of test bar on the percentage of elongation BS 4 Specification for structural steel sections BS 5 Report on locomotives for Indian railways BS 7 Dimensions of copper conductors insulated annealled, for electric power and light BS 9 Specifications for bullhead railway rails BS 11 Specifications and sections of Flat Bottom railway rails BS 12 Specification for Portland Cement BS 15 Specification for structural steel for bridges, etc., and general building construction BS 16 Specification for telegraph material (insulators, pole fittings, et cetera) BS 17 Interim report on electrical machinery BS 22 Report on effect of temperature on insulating materials BS 24 Specifications for material used in the construction of standards for railway rolling stock BS 26 Second report on locomotives for Indian Railways (Superseding No 5) BS 27 Report on standard systems of limit gauges for running fits BS 28 Report on nuts, bolt heads and spanners BS 31 Specification for steel conduits for electrical wiring BS 32 Specification for steel bars for use in automatic machines BS 33 Carbon filament electric lamps BS 34 Tables of BS Whitworth, BS Fine and BS Pipe Threads BS 35 Specification for Copper Alloy Bars for use in Automatic Machines BS 36 Report on British Standards for Electrical Machinery BS 37 Specification for Electricity Meters BS 38 Report on British Standards Systems for Limit Gauges for Screw Threads BS 42 Report on reciprocating steam engines for electrical purposes BS 43 Specification for charcoal iron lip-welded boiler tubes BS 45 Report on Dimensions for Sparking Plugs (for Internal Combustion Engines) BS 47 Steel Fishplates for Bullhead and Flat Bottom Railway Rails, Specification and Sections of BS 49 Specification for Ammetres and Voltmetres BS 50 Third Report on Locomotives for Indian Railways (Superseding No. 
5 and 26) BS 53 Specification for Cold Drawn Weldless Steel Boiler Tubes for Locomotive Boilers BS 54 Report on Screw Threads, Nuts and Bolt Heads for use in Automobile Construction BS 56 Definitions of Yield Point and Elastic Limit BS 57 Report on heads for Small Screws BS 70 Report on Pneumatic Tyre Rims for automobiles, motorcycles and bicycles BS 72 British Standardisation Rules for Electrical Machinery, BS 73 Specification for Two-Pin Wall Plugs and Sockets (Five-, Fifteen- and Thirty-Ampere) BS 76 Report of and Specifications for Tar and Pitch for Road Purposes BS 77 Specification. Voltages for a.c. transmission and distribution systems BS 80 Magnetos for automobile purposes BS 81 Specification for Instrument Transformers BS 82 Specification for Starters for Electric Motors BS 84 Report on Screw Threads (British Standard Fine), and their Tolerances (Superseding parts of Reports Nos. 20 and 33) BS 86 Report on Dimensions of Magnetos for Aircraft Purposes BS 153 Specification for Steel Girder Bridges BS 308 a now deleted standard for engineering drawing conventions, having been absorbed into BS 8888. BS 317 for Hand-Shield and Side Entry Pattern Three-Pin Wall Plugs and Sockets (Two Pin and Earth Type) BS 336 for fire hose couplings and ancillary equipment BS 372 for Side-entry wall plugs and sockets for domestic purposes (Part 1 superseded BS 73 and Part 2 superseded BS 317) BS 381 for colours used in identification, coding and other special purposes BS 476 for fire resistance of building materials/elements BS 499 Welding terms and symbols. BS 546 for Two-pole and earthing-pin plugs, socket-outlets and socket-outlet adaptors for AC (50–60 Hz) circuits up to 250V BS 857 for safety glass for land transport BS 970 Specification for wrought steels for mechanical and allied engineering purposes BS 987C Camouflage Colours BS 1011 Recommendation for welding of metallic materials BS 1088 for marine plywood BS 1192 for Construction Drawing Practice. Part 5 (BS1192-5:1998) concerns Guide for structuring and exchange of CAD data. BS 1361 for cartridge fuses for a.c. circuits in domestic and similar premises BS 1362 for cartridge fuses for BS 1363 power plugs BS 1363 for mains power plugs and sockets BS 1377 Methods of test for soils for civil engineering. BS 1380 Speed and Exposure Index of Photographic Negative Materials. BS 1572 Colours for Flat Finishes for Wall Decoration BS 1881 Testing Concrete BS 1852 Specification for marking codes for resistors and capacitors BS 2979 Transliteration of Cyrillic and Greek characters BS 3621 Thief resistant lock assembly. Key egress. BS 3943 Specification for plastics waste traps BS 4142 Methods for rating and assessing industrial and commercial sound BS 4293 for residual current-operated circuit-breakers BS 4343 for industrial electrical power connectors BS 4573 Specification for 2-pin reversible plugs and shaver socket-outlets BS 4960 for weighing instruments for domestic cookery BS 5252 for colour-coordination in building construction BS 5400 for steel, concrete and composite bridges. 
BS 5499 for graphical symbols and signs in building construction; including shape, colour and layout BS 5544 for anti-bandit glazing (glazing resistant to manual attack) BS 5750 for quality management, the ancestor of ISO 9000 BS 5837 for protection of trees during construction work BS 5839 for fire detection and alarm systems for buildings BS 5930 for site investigations BS 5950 for structural steel BS 5993 for Cricket balls BS 6008 for preparation of a liquor of tea for use in sensory tests BS 6312 for telephone plugs and sockets BS 6651 code of practice for protection of structures against lightning; replaced by BS EN 62305 (IEC 62305) series. BS 6879 for British geocodes, a superset of ISO 3166-2:GB BS 7430 code of practice for earthing BS 7671 Requirements for Electrical Installations, The IEE Wiring Regulations, produced by the IET. BS 7799 for information security, the ancestor of the ISO/IEC 27000 family of standards, including 27002 (formerly 17799) BS 7901 for recovery vehicles and vehicle recovery equipment BS 7909 Code of practice for temporary electrical systems for entertainment and related purposes BS 7919 Electric cables. Flexible cables rated up to 450/750 V, for use with appliances and equipment intended for industrial and similar environments BS 7910 guide to methods for assessing the acceptability of flaws in metallic structures BS 7925 Software testing BS 7971 Protective clothing and equipment for use in violent situations and in training BS 8110 for structural concrete BS 8233 Guidance on sound insulation and noise reduction in buildings BS 8484 for the provision of lone worker device services BS 8485 for the characterization and remediation from ground gas in affected developments BS 8494 for detecting and measuring carbon dioxide in ambient air or extraction systems BS 8546 Travel adaptors compatible with UK plug and socket system. BS 8888 for engineering drawing and technical product specification BS 9251 for safety guidelines on fire sprinkler systems in residential buildings BS 15000 for IT Service Management, (ITIL), now ISO/IEC 20000 BS 3G 101 for general requirements for mechanical and electromechanical aircraft indicators BS EN 12195 Load restraining on road vehicles. BS EN 60204 Safety of machinery BS EN ISO 4210 - Cycles. Safety Requirements for Bicycles PAS documents BSI also publishes a series of Publicly Available Specification (PAS) documents. PAS documents are a flexible and rapid standards development model open to all organizations. A PAS is a sponsored piece of work allowing organizations flexibility in the rapid creation of a standard while also allowing for a greater degree of control over the document's development. A typical development time frame for a PAS is around six to nine months. Once published by BSI, a PAS has all the functionality of a British Standard for the purposes of creating schemes such as management systems and product benchmarks as well as codes of practice. A PAS is a living document and after two years the document will be reviewed and a decision made with the client as to whether or not this should be taken forward to become a formal standard. The term PAS was originally an abbreviation for "product approval specification", a name which was subsequently changed to "publicly available specification". However, according to BSI, not all PAS documents are structured as specifications and the term is now sufficiently well established not to require any further amplification. 
Examples PAS 78: Guide to good practice in commissioning accessible websites PAS 440: Responsible Innovation – Guide PAS 9017: Plastics – Biodegradation of polyolefins in an open-air terrestrial environment – Specification PAS 1881: Assuring safety for automated vehicle trials and testing – Specification PAS 1201: Guide for describing graphene material PAS 4444: Hydrogen fired gas appliances – Guide Availability Copies of British Standards are sold at the BSI Online Shop or can be accessed via subscription to British Standards Online (BSOL). They can also be ordered via the publishing units of many other national standards bodies (ANSI, DIN, etc.) and from several specialized suppliers of technical specifications. British Standards, including European and international adoptions, are available in many university and public libraries that subscribe to the BSOL platform. Librarians and lecturers at UK-based subscribing universities have full access rights to the collection while students can copy/paste and print but not download a standard. Up to 10% of the content of a standard can be copy/pasted for personal or internal use and up to 5% of the collection made available as a paper or electronic reference collection at the subscribing university. Because of their reference material status standards are not available for interlibrary loan. Public library users in the UK may have access to BSOL on a view-only basis if their library service subscribes to the BSOL platform. Users may also be able to access the collection remotely if they have a valid library card and the library offers secure access to its resources. The BSI Knowledge Centre in Chiswick, London can be contacted directly about viewing standards in their Members' Reading Room. See also Institute for Reference Materials and Measurements (EU) References External links 1901 establishments in the United Kingdom International Electrotechnical Commission Certification marks Organizations established in 1901
British Standards
[ "Mathematics", "Engineering" ]
2,948
[ "Electrical engineering organizations", "Symbols", "International Electrotechnical Commission", "Certification marks" ]
4,816
https://en.wikipedia.org/wiki/Biosphere
The biosphere, also called the ecosphere, is the worldwide sum of all ecosystems. It can also be termed the zone of life on the Earth. The biosphere (which is technically a spherical shell) is virtually a closed system with regard to matter, with minimal inputs and outputs. Regarding energy, it is an open system, with photosynthesis capturing solar energy at a rate of around 100 terawatts. By the most general biophysiological definition, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere, cryosphere, hydrosphere, and atmosphere. The biosphere is postulated to have evolved, beginning with a process of biopoiesis (life created naturally from matter, such as simple organic compounds) or biogenesis (life created from living matter), at least some 3.5 billion years ago. In a general sense, biospheres are any closed, self-regulating systems containing ecosystems. This includes artificial biospheres such as Biosphere 2 and BIOS-3, and potentially ones on other planets or moons. Origin and use of the term The term "biosphere" was coined in 1875 by geologist Eduard Suess, who defined it as the place on Earth's surface where life dwells. While the concept has a geological origin, it is an indication of the effect of both Charles Darwin and Matthew F. Maury on the Earth sciences. The biosphere's ecological context comes from the 1920s (see Vladimir I. Vernadsky), preceding the 1935 introduction of the term "ecosystem" by Sir Arthur Tansley (see ecology history). Vernadsky defined ecology as the science of the biosphere. It is an interdisciplinary concept for integrating astronomy, geophysics, meteorology, biogeography, evolution, geology, geochemistry, hydrology and, generally speaking, all life and Earth sciences. Narrow definition Geochemists define the biosphere as being the total sum of living organisms (the "biomass" or "biota" as referred to by biologists and ecologists). In this sense, the biosphere is but one of four separate components of the geochemical model, the other three being geosphere, hydrosphere, and atmosphere. When these four component spheres are combined into one system, it is known as the ecosphere. This term was coined during the 1960s and encompasses both biological and physical components of the planet. The Second International Conference on Closed Life Systems defined biospherics as the science and technology of analogs and models of Earth's biosphere; i.e., artificial Earth-like biospheres. Others may include the creation of artificial non-Earth biospheres—for example, human-centered biospheres or a native Martian biosphere—as part of the topic of biospherics. Earth's biosphere Overview Currently, the total number of living cells on the Earth is estimated to be 10^30; the total number since the beginning of Earth, as 10^40, and the total number for the entire time of a habitable planet Earth as 10^41. This is much larger than the total number of estimated stars (and Earth-like planets) in the observable universe as 10^24, a number which is more than all the grains of beach sand on planet Earth; but less than the total number of atoms estimated in the observable universe as 10^82; and the estimated total number of stars in an inflationary universe (observed and unobserved), as 10^100. 
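The estimates above span dozens of orders of magnitude. A minimal sketch comparing them directly, using only the powers of ten quoted in the overview (the variable names are illustrative), might look like:

```python
# Order-of-magnitude comparison of the estimates quoted above.
# All values restate the powers of ten cited in the text; nothing new is added.
living_cells_today        = 1e30   # living cells on Earth at present
cells_over_earth_history  = 1e40   # cells since the beginning of Earth
stars_observable_universe = 1e24   # stars (and Earth-like planets)
atoms_observable_universe = 1e82

print(f"living cells per observable star:  {living_cells_today / stars_observable_universe:.0e}")
print(f"atoms per cell ever to have lived: {atoms_observable_universe / cells_over_earth_history:.0e}")
```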
Age The earliest evidence for life on Earth includes biogenic graphite found in 3.7 billion-year-old metasedimentary rocks from Western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone from Western Australia. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. In 2017, putative fossilized microorganisms (or microfossils) were announced to have been discovered in hydrothermal vent precipitates in the Nuvvuagittuq Belt of Quebec, Canada that were as old as 4.28 billion years, the oldest record of life on Earth, suggesting "an almost instantaneous emergence of life" after ocean formation 4.4 billion years ago, and not long after the formation of the Earth 4.54 billion years ago. According to biologist Stephen Blair Hedges, "If life arose relatively quickly on Earth ... then it could be common in the universe." Extent Every part of the planet, from the polar ice caps to the equator, features life of some kind. Recent advances in microbiology have demonstrated that microbes live deep beneath the Earth's terrestrial surface and that the total mass of microbial life in so-called "uninhabitable zones" may, in biomass, exceed all animal and plant life on the surface. The actual thickness of the biosphere on Earth is difficult to measure. Birds typically fly at altitudes as high as and fish live as much as underwater in the Puerto Rico Trench. There are more extreme examples for life on the planet: Rüppell's vulture has been found at altitudes of ; bar-headed geese migrate at altitudes of at least ; yaks live at elevations as high as above sea level; mountain goats live up to . Herbivorous animals at these elevations depend on lichens, grasses, and herbs. Life forms live in every part of the Earth's biosphere, including soil, hot springs, inside rocks at least deep underground, and at least high in the atmosphere. Marine life under many forms has been found in the deepest reaches of the world ocean while much of the deep sea remains to be explored. Under certain test conditions, microorganisms have been observed to survive the vacuum of outer space. The total amount of soil and subsurface bacterial carbon is estimated as 5 × 10^17 g. The mass of prokaryote microorganisms—which includes bacteria and archaea, but not the nucleated eukaryote microorganisms—may be as much as 0.8 trillion tons of carbon (of the total biosphere mass, estimated at between 1 and 4 trillion tons). Barophilic marine microbes have been found at more than a depth of in the Mariana Trench, the deepest spot in the Earth's oceans. In fact, single-celled life forms have been found in the deepest part of the Mariana Trench, by the Challenger Deep, at depths of . Other researchers reported related studies that microorganisms thrive inside rocks up to below the sea floor under of ocean off the coast of the northwestern United States, as well as beneath the seabed off Japan. Culturable thermophilic microbes have been extracted from cores drilled more than into the Earth's crust in Sweden, from rocks between . Temperature increases with increasing depth into the Earth's crust. The rate at which the temperature increases depends on many factors, including the type of crust (continental vs. oceanic), rock type, geographic location, etc. The greatest known temperature at which microbial life can exist is (Methanopyrus kandleri Strain 116). It is likely that the limit of life in the "deep biosphere" is defined by temperature rather than absolute depth. 
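The passage above argues that temperature, which rises with depth, is what ultimately bounds the deep biosphere. A minimal sketch of that reasoning, assuming an average continental geothermal gradient of about 25 °C per kilometre and an upper temperature limit for microbial life of roughly 120 °C (both figures are illustrative assumptions, not values from the article), is:

```python
# Rough depth limit for the deep biosphere set by temperature alone.
# The gradient and temperature limit are assumed typical values for illustration;
# real gradients vary widely with crust type, rock type, and location.
def max_habitable_depth_km(t_limit_c=120.0, t_surface_c=15.0, gradient_c_per_km=25.0):
    """Depth (km) at which the crust reaches the assumed microbial temperature limit."""
    return (t_limit_c - t_surface_c) / gradient_c_per_km

print(f"approximate depth limit: {max_habitable_depth_km():.1f} km")  # ~4.2 km
```

With a steeper gradient the same calculation gives a shallower limit, consistent with the text's point that the boundary is thermal rather than a fixed depth.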
On 20 August 2014, scientists confirmed the existence of microorganisms living below the ice of Antarctica. Earth's biosphere is divided into several biomes, inhabited by fairly similar flora and fauna. On land, biomes are separated primarily by latitude. Terrestrial biomes lying within the Arctic and Antarctic Circles are relatively barren of plant and animal life. In contrast, most of the more populous biomes lie near the equator. Annual variation Artificial biospheres Experimental biospheres, also called closed ecological systems, have been created to study ecosystems and the potential for supporting life outside the Earth. These include spacecraft and the following terrestrial laboratories: Biosphere 2 in Arizona, United States, 3.15 acres (13,000 m2). BIOS-1, BIOS-2 and BIOS-3 at the Institute of Biophysics in Krasnoyarsk, Siberia, in what was then the Soviet Union. Biosphere J (CEEF, Closed Ecology Experiment Facilities), an experiment in Japan. Micro-Ecological Life Support System Alternative (MELiSSA) at Autonomous University of Barcelona Extraterrestrial biospheres No biospheres have been detected beyond the Earth; therefore, the existence of extraterrestrial biospheres remains hypothetical. The rare Earth hypothesis suggests they should be very rare, save ones composed of microbial life only. On the other hand, Earth analogs may be quite numerous, at least in the Milky Way galaxy, given the large number of planets. Three of the planets discovered orbiting TRAPPIST-1 could possibly contain biospheres. Given limited understanding of abiogenesis, it is currently unknown what percentage of these planets actually develop biospheres. Based on observations by the Kepler Space Telescope team, it has been calculated that provided the probability of abiogenesis is higher than 1 to 1000, the closest alien biosphere should be within 100 light-years from the Earth. It is also possible that artificial biospheres will be created in the future, for example with the terraforming of Mars. See also Biosphere model Climate system Cryosphere Habitable zone Homeostasis Life-support system Man and the Biosphere Programme Montreal Biosphere Noosphere Rare biosphere Shadow biosphere Soil biomantle Thomas Gold Wardian case Winogradsky column References Further reading The Biosphere (A Scientific American Book), San Francisco, W.H. Freeman and Co., 1970, . This book, originally the December 1970 Scientific American issue, covers virtually every major concern and concept since debated regarding materials and energy resources (including solar energy), population trends, and environmental degradation (including global warming). External links Article on the Biosphere at Encyclopedia of Earth GLOBIO.info, an ongoing programme to map the past, current and future impacts of human activities on the biosphere Paul Crutzen Interview, freeview video of Paul Crutzen Nobel Laureate for his work on decomposition of ozone talking to Harry Kroto Nobel Laureate by the Vega Science Trust. Atlas of the Biosphere Oceanography Superorganisms Biological systems
Biosphere
[ "Physics", "Biology", "Environmental_science" ]
2,131
[ "Superorganisms", "Hydrology", "Symbiosis", "Applied and interdisciplinary physics", "Oceanography", "nan" ]
4,817
https://en.wikipedia.org/wiki/Biological%20membrane
A biological membrane, biomembrane or cell membrane is a selectively permeable membrane that separates the interior of a cell from the external environment or creates intracellular compartments by serving as a boundary between one part of the cell and another. Biological membranes, in the form of eukaryotic cell membranes, consist of a phospholipid bilayer with embedded, integral and peripheral proteins used in communication and transportation of chemicals and ions. The bulk of lipids in a cell membrane provides a fluid matrix for proteins to rotate and laterally diffuse for physiological functioning. Proteins are adapted to high membrane fluidity environment of the lipid bilayer with the presence of an annular lipid shell, consisting of lipid molecules bound tightly to the surface of integral membrane proteins. The cell membranes are different from the isolating tissues formed by layers of cells, such as mucous membranes, basement membranes, and serous membranes. Composition Asymmetry The lipid bilayer consists of two layers- an outer leaflet and an inner leaflet. The components of bilayers are distributed unequally between the two surfaces to create asymmetry between the outer and inner surfaces. This asymmetric organization is important for cell functions such as cell signaling. The asymmetry of the biological membrane reflects the different functions of the two leaflets of the membrane. As seen in the fluid membrane model of the phospholipid bilayer, the outer leaflet and inner leaflet of the membrane are asymmetrical in their composition. Certain proteins and lipids rest only on one surface of the membrane and not the other. Both the plasma membrane and internal membranes have cytosolic and exoplasmic faces. This orientation is maintained during membrane trafficking – proteins, lipids, glycoconjugates facing the lumen of the ER and Golgi get expressed on the extracellular side of the plasma membrane. In eukaryotic cells, new phospholipids are manufactured by enzymes bound to the part of the endoplasmic reticulum membrane that faces the cytosol. These enzymes, which use free fatty acids as substrates, deposit all newly made phospholipids into the cytosolic half of the bilayer. To enable the membrane as a whole to grow evenly, half of the new phospholipid molecules then have to be transferred to the opposite monolayer. This transfer is catalyzed by enzymes called flippases. In the plasma membrane, flippases transfer specific phospholipids selectively, so that different types become concentrated in each monolayer. Using selective flippases is not the only way to produce asymmetry in lipid bilayers, however. In particular, a different mechanism operates for glycolipids—the lipids that show the most striking and consistent asymmetric distribution in animal cells. Lipids The biological membrane is made up of lipids with hydrophobic tails and hydrophilic heads. The hydrophobic tails are hydrocarbon tails whose length and saturation is important in characterizing the cell. Lipid rafts occur when lipid species and proteins aggregate in domains in the membrane. These help organize membrane components into localized areas that are involved in specific processes, such as signal transduction. Red blood cells, or erythrocytes, have a unique lipid composition. The bilayer of red blood cells is composed of cholesterol and phospholipids in equal proportions by weight. Erythrocyte membrane plays a crucial role in blood clotting. In the bilayer of red blood cells is phosphatidylserine. 
This is usually in the cytoplasmic side of the membrane. However, it is flipped to the outer membrane to be used during blood clotting. Proteins Phospholipid bilayers contain different proteins. These membrane proteins have various functions and characteristics and catalyze different chemical reactions. Integral proteins span the membranes with different domains on either side. Integral proteins hold strong association with the lipid bilayer and cannot easily become detached. They will dissociate only with chemical treatment that breaks the membrane. Peripheral proteins are unlike integral proteins in that they hold weak interactions with the surface of the bilayer and can easily become dissociated from the membrane. Peripheral proteins are located on only one face of a membrane and create membrane asymmetry. Oligosaccharides Oligosaccharides are sugar containing polymers. In the membrane, they can be covalently bound to lipids to form glycolipids or covalently bound to proteins to form glycoproteins. Membranes contain sugar-containing lipid molecules known as glycolipids. In the bilayer, the sugar groups of glycolipids are exposed at the cell surface, where they can form hydrogen bonds. Glycolipids provide the most extreme example of asymmetry in the lipid bilayer. Glycolipids perform a vast number of functions in the biological membrane that are mainly communicative, including cell recognition and cell-cell adhesion. Glycoproteins are integral proteins. They play an important role in the immune response and protection. Formation The phospholipid bilayer is formed due to the aggregation of membrane lipids in aqueous solutions. Aggregation is caused by the hydrophobic effect, where hydrophobic ends come into contact with each other and are sequestered away from water. This arrangement maximises hydrogen bonding between hydrophilic heads and water while minimising unfavorable contact between hydrophobic tails and water. The increase in available hydrogen bonding increases the entropy of the system, creating a spontaneous process. Function Biological molecules are amphiphilic or amphipathic, i.e. are simultaneously hydrophobic and hydrophilic. The phospholipid bilayer contains charged hydrophilic headgroups, which interact with polar water. The layers also contain hydrophobic tails, which meet with the hydrophobic tails of the complementary layer. The hydrophobic tails are usually fatty acids that differ in lengths. The interactions of lipids, especially the hydrophobic tails, determine the lipid bilayer physical properties such as fluidity. Membranes in cells typically define enclosed spaces or compartments in which cells may maintain a chemical or biochemical environment that differs from the outside. For example, the membrane around peroxisomes shields the rest of the cell from peroxides, chemicals that can be toxic to the cell, and the cell membrane separates a cell from its surrounding medium. Peroxisomes are one form of vacuole found in the cell that contain by-products of chemical reactions within the cell. Most organelles are defined by such membranes, and are called membrane-bound organelles. Selective permeability Probably the most important feature of a biomembrane is that it is a selectively permeable structure. This means that the size, charge, and other chemical properties of the atoms and molecules attempting to cross it will determine whether they succeed in doing so. Selective permeability is essential for effective separation of a cell or organelle from its surroundings. 
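The Formation paragraph above attributes bilayer self-assembly to the entropy-driven hydrophobic effect, which makes aggregation spontaneous. A minimal Gibbs free-energy check of that argument, with purely illustrative values for the enthalpy and entropy changes (neither number comes from the article), is:

```python
# Spontaneity check for an entropy-driven process: dG = dH - T*dS.
# The values below are illustrative assumptions for lipid aggregation,
# chosen only to show how a positive dS can dominate a small dH.
T_kelvin = 310.0   # physiological temperature
dH = +2_000.0      # assumed small, slightly unfavourable enthalpy change, J/mol
dS = +140.0        # assumed entropy gain from released water, J/(mol*K)

dG = dH - T_kelvin * dS
print(f"dG = {dG / 1000:.1f} kJ/mol -> {'spontaneous' if dG < 0 else 'not spontaneous'}")
```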
Biological membranes also have certain mechanical or elastic properties that allow them to change shape and move as required. Generally, small hydrophobic molecules can readily cross phospholipid bilayers by simple diffusion. Particles that are required for cellular function but are unable to diffuse freely across a membrane enter through a membrane transport protein or are taken in by means of endocytosis, where the membrane allows for a vacuole to join onto it and push its contents into the cell. Many types of specialized plasma membranes can separate cell from external environment: apical, basolateral, presynaptic and postsynaptic ones, membranes of flagella, cilia, microvillus, filopodia and lamellipodia, the sarcolemma of muscle cells, as well as specialized myelin and dendritic spine membranes of neurons. Plasma membranes can also form different types of "supramembrane" structures such as caveolae, postsynaptic density, podosome, invadopodium, desmosome, hemidesmosome, focal adhesion, and cell junctions. These types of membranes differ in lipid and protein composition. Distinct types of membranes also create intracellular organelles: endosome; smooth and rough endoplasmic reticulum; sarcoplasmic reticulum; Golgi apparatus; lysosome; mitochondrion (inner and outer membranes); nucleus (inner and outer membranes); peroxisome; vacuole; cytoplasmic granules; cell vesicles (phagosome, autophagosome, clathrin-coated vesicles, COPI-coated and COPII-coated vesicles) and secretory vesicles (including synaptosome, acrosomes, melanosomes, and chromaffin granules). Different types of biological membranes have diverse lipid and protein compositions. The content of membranes defines their physical and biological properties. Some components of membranes play a key role in medicine, such as the efflux pumps that pump drugs out of a cell. Fluidity The hydrophobic core of the phospholipid bilayer is constantly in motion because of rotations around the bonds of lipid tails. Hydrophobic tails of a bilayer bend and lock together. However, because of hydrogen bonding with water, the hydrophilic head groups exhibit less movement as their rotation and mobility are constrained. This results in increasing viscosity of the lipid bilayer closer to the hydrophilic heads. Below a transition temperature, a lipid bilayer loses fluidity when the highly mobile lipids exhibits less movement becoming a gel-like solid. The transition temperature depends on such components of the lipid bilayer as the hydrocarbon chain length and the saturation of its fatty acids. Temperature-dependence fluidity constitutes an important physiological attribute for bacteria and cold-blooded organisms. These organisms maintain a constant fluidity by modifying membrane lipid fatty acid composition in accordance with differing temperatures. In animal cells, membrane fluidity is modulated by the inclusion of the sterol cholesterol. This molecule is present in especially large amounts in the plasma membrane, where it constitutes approximately 20% of the lipids in the membrane by weight. Because cholesterol molecules are short and rigid, they fill the spaces between neighboring phospholipid molecules left by the kinks in their unsaturated hydrocarbon tails. In this way, cholesterol tends to stiffen the bilayer, making it more rigid and less permeable. For all cells, membrane fluidity is important for many reasons. 
It enables membrane proteins to diffuse rapidly in the plane of the bilayer and to interact with one another, as is crucial, for example, in cell signaling. It permits membrane lipids and proteins to diffuse from sites where they are inserted into the bilayer after their synthesis to other regions of the cell. It allows membranes to fuse with one another and mix their molecules, and it ensures that membrane molecules are distributed evenly between daughter cells when a cell divides. If biological membranes were not fluid, it is hard to imagine how cells could live, grow, and reproduce. The fluidity property is at the center of the Helfrich model which allows for calculating the energy cost of an elastic deformation to the membrane. See also Collodion bag Fluid mosaic model Osmosis Membrane biology Soft matter References External links membrane Soft matter
Biological membrane
[ "Physics", "Chemistry", "Materials_science" ]
2,382
[ "Membrane biology", "Soft matter", "Condensed matter physics", "Molecular biology" ]
4,827
https://en.wikipedia.org/wiki/Biomedical%20engineering
Biomedical engineering (BME) or medical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare applications (e.g., diagnostic or therapeutic purposes). BME seeks to close the gap between engineering and medicine, combining the design and problem-solving skills of engineering with medical and biological sciences to advance health care treatment, including diagnosis, monitoring, and therapy. Also included under the scope of a biomedical engineer is the management of current medical equipment in hospitals while adhering to relevant industry standards. This involves procurement, routine testing, preventive maintenance, and making equipment recommendations, a role also known as a Biomedical Equipment Technician (BMET) or as a clinical engineer. Biomedical engineering has recently emerged as its own field of study, as compared to many other engineering fields. Such an evolution is common as a new field transitions from being an interdisciplinary specialization among already-established fields to being considered a field in itself. Much of the work in biomedical engineering consists of research and development, spanning a broad array of subfields (see below). Prominent biomedical engineering applications include the development of biocompatible prostheses, various diagnostic and therapeutic medical devices ranging from clinical equipment to micro-implants, imaging technologies such as MRI and EKG/ECG, regenerative tissue growth, and the development of pharmaceutical drugs including biopharmaceuticals. Subfields and related fields Bioinformatics Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data. As an interdisciplinary field of science, bioinformatics combines computer science, statistics, mathematics, and engineering to analyze and interpret biological data. Bioinformatics is considered both an umbrella term for the body of biological studies that use computer programming as part of their methodology, as well as a reference to specific analysis "pipelines" that are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidate genes and single-nucleotide polymorphisms (SNPs). Often, such identification is made with the aim of better understanding the genetic basis of disease, unique adaptations, desirable properties (esp. in agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organizational principles within nucleic acid and protein sequences. Biomechanics Biomechanics is the study of the structure and function of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics. Biomaterials A biomaterial is any matter, surface, or construct that interacts with living systems. As a science, biomaterials is about fifty years old. The study of biomaterials is called biomaterials science or biomaterials engineering. It has experienced steady and strong growth over its history, with many companies investing large amounts of money into the development of new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering and materials science. Biomedical optics Biomedical optics combines the principles of physics, engineering, and biology to study the interaction of biological tissue and light, and how this can be exploited for sensing, imaging, and treatment. 
It has a wide range of applications, including optical imaging, microscopy, ophthalmoscopy, spectroscopy, and therapy. Examples of biomedical optics techniques and technologies include optical coherence tomography (OCT), fluorescence microscopy, confocal microscopy, and photodynamic therapy (PDT). OCT, for example, uses light to create high-resolution, three-dimensional images of internal structures, such as the retina in the eye or the coronary arteries in the heart. Fluorescence microscopy involves labeling specific molecules with fluorescent dyes and visualizing them using light, providing insights into biological processes and disease mechanisms. More recently, adaptive optics is helping imaging by correcting aberrations in biological tissue, enabling higher resolution imaging and improved accuracy in procedures such as laser surgery and retinal imaging. Tissue engineering Tissue engineering, like genetic engineering (see below), is a major segment of biotechnology – which overlaps significantly with BME. One of the goals of tissue engineering is to create artificial organs (via biological material) for patients that need organ transplants. Biomedical engineers are currently researching methods of creating such organs. Researchers have grown solid jawbones and tracheas from human stem cells towards this end. Several artificial urinary bladders have been grown in laboratories and transplanted successfully into human patients. Bioartificial organs, which use both synthetic and biological component, are also a focus area in research, such as with hepatic assist devices that use liver cells within an artificial bioreactor construct. Genetic engineering Genetic engineering, recombinant DNA technology, genetic modification/manipulation (GM) and gene splicing are terms that apply to the direct manipulation of an organism's genes. Unlike traditional breeding, an indirect method of genetic manipulation, genetic engineering utilizes modern tools such as molecular cloning and transformation to directly alter the structure and characteristics of target genes. Genetic engineering techniques have found success in numerous applications. Some examples include the improvement of crop technology (not a medical application, but see biological systems engineering), the manufacture of synthetic human insulin through the use of modified bacteria, the manufacture of erythropoietin in hamster ovary cells, and the production of new types of experimental mice such as the oncomouse (cancer mouse) for research. Neural engineering Neural engineering (also known as neuroengineering) is a discipline that uses engineering techniques to understand, repair, replace, or enhance neural systems. Neural engineers are uniquely qualified to solve design problems at the interface of living neural tissue and non-living constructs. Neural engineering can assist with numerous things, including the future development of prosthetics. For example, cognitive neural prosthetics (CNP) are being heavily researched and would allow for a chip implant to assist people who have prosthetics by providing signals to operate assistive devices. Pharmaceutical engineering Pharmaceutical engineering is an interdisciplinary science that includes drug engineering, novel drug delivery and targeting, pharmaceutical technology, unit operations of chemical engineering, and pharmaceutical analysis. It may be deemed as a part of pharmacy due to its focus on the use of technology on chemical agents in providing better medicinal treatment. 
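The pharmaceutical engineering subsection above mentions novel drug delivery and targeting; a common quantitative building block in that area is the one-compartment model with first-order elimination. A minimal sketch, with an assumed dose, volume of distribution, and elimination rate chosen purely for illustration, is:

```python
# One-compartment pharmacokinetics with first-order elimination:
# C(t) = (dose / Vd) * exp(-k * t); half-life = ln(2) / k.
# All parameter values are illustrative assumptions, not data from the article.
import math

dose_mg = 500.0    # assumed IV bolus dose
vd_l    = 40.0     # assumed volume of distribution, litres
k_per_h = 0.173    # assumed elimination rate constant, 1/h (half-life ~ 4 h)

def concentration(t_h: float) -> float:
    """Plasma concentration (mg/L) t_h hours after the dose."""
    return (dose_mg / vd_l) * math.exp(-k_per_h * t_h)

print(f"half-life: {math.log(2) / k_per_h:.1f} h")
for t in (0, 4, 8, 12):
    print(f"C({t:2d} h) = {concentration(t):.2f} mg/L")
```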
Hospital and medical devices This is an extremely broad category—essentially covering all health care products that do not achieve their intended results through predominantly chemical (e.g., pharmaceuticals) or biological (e.g., vaccines) means, and do not involve metabolism. A medical device is intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease. Some examples include pacemakers, infusion pumps, the heart-lung machine, dialysis machines, artificial organs, implants, artificial limbs, corrective lenses, cochlear implants, ocular prosthetics, facial prosthetics, somato prosthetics, and dental implants. Stereolithography is a practical example of medical modeling being used to create physical objects. Beyond modeling organs and the human body, emerging engineering techniques are also currently used in the research and development of new devices for innovative therapies, treatments, and patient monitoring of complex diseases. Medical devices are regulated and classified (in the US) as follows (see also Regulation): Class I devices present minimal potential for harm to the user and are often simpler in design than Class II or Class III devices. Devices in this category include tongue depressors, bedpans, elastic bandages, examination gloves, and hand-held surgical instruments, and other similar types of common equipment. Class II devices are subject to special controls in addition to the general controls of Class I devices. Special controls may include special labeling requirements, mandatory performance standards, and postmarket surveillance. Devices in this class are typically non-invasive and include X-ray machines, PACS, powered wheelchairs, infusion pumps, and surgical drapes. Class III devices generally require premarket approval (PMA) or premarket notification (510k), a scientific review to ensure the device's safety and effectiveness, in addition to the general controls of Class I. Examples include replacement heart valves, hip and knee joint implants, silicone gel-filled breast implants, implanted cerebellar stimulators, implantable pacemaker pulse generators and endosseous (intra-bone) implants. Medical imaging Medical/biomedical imaging is a major segment of medical devices. This area deals with enabling clinicians to directly or indirectly "view" things not visible in plain sight (such as due to their size, and/or location). This can involve utilizing ultrasound, magnetism, UV, radiology, and other means. Alternatively, navigation-guided equipment utilizes electromagnetic tracking technology, such as catheter placement into the brain or feeding tube placement systems. An example is ENvizion Medical's ENvue, an electromagnetic navigation system for enteral feeding tube placement. The system uses an external field generator and several EM passive sensors enabling scaling of the display to the patient's body contour, and a real-time view of the feeding tube tip location and direction, which helps the medical staff ensure the correct placement in the GI tract. Imaging technologies are often essential to medical diagnosis, and are typically the most complex equipment found in a hospital, including: fluoroscopy, magnetic resonance imaging (MRI), nuclear medicine, positron emission tomography (PET), PET-CT scans, projection radiography such as X-rays and CT scans, tomography, ultrasound, optical microscopy, and electron microscopy. 
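The US device classification described above pairs each class with a level of regulatory control and typical examples. A minimal sketch encoding that mapping as a lookup table (the structure and field names are illustrative; the example devices and controls restate the list above) is:

```python
# US medical device classes as a simple lookup table.
# Field names are illustrative; the content restates the classification above.
DEVICE_CLASSES = {
    "I":   {"controls": "general controls",
            "examples": ["tongue depressor", "elastic bandage", "examination gloves"]},
    "II":  {"controls": "general controls plus special controls "
                        "(labeling requirements, performance standards, postmarket surveillance)",
            "examples": ["X-ray machine", "powered wheelchair", "infusion pump"]},
    "III": {"controls": "general controls plus premarket approval (PMA) "
                        "or premarket notification (510k)",
            "examples": ["replacement heart valve", "implantable pacemaker pulse generator"]},
}

def summarize(device_class: str) -> str:
    info = DEVICE_CLASSES[device_class]
    return f"Class {device_class}: {info['controls']}; e.g. {', '.join(info['examples'])}"

print(summarize("II"))
```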
Medical implants An implant is a kind of medical device made to replace and act as a missing biological structure (as compared with a transplant, which indicates transplanted biomedical tissue). The surface of implants that contact the body might be made of a biomedical material such as titanium, silicone or apatite depending on what is the most functional. In some cases, implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills or drug-eluting stents. Bionics Artificial body part replacements are one of the many applications of bionics. Concerned with the intricate and thorough study of the properties and function of human body systems, bionics may be applied to solve some engineering problems. Careful study of the different functions and processes of the eyes, ears, and other organs paved the way for improved cameras, television, radio transmitters and receivers, and many other tools. Biomedical sensors In recent years biomedical sensors based on microwave technology have gained more attention. Different sensors can be manufactured for specific uses in both diagnosing and monitoring disease conditions; for example, microwave sensors can be used as a complementary technique to X-ray to monitor lower extremity trauma. Such a sensor monitors the dielectric properties and can thus detect changes in tissue (bone, muscle, fat, etc.) under the skin, so that when measurements are taken at different times during the healing process, the sensor response changes as the trauma heals. Clinical engineering Clinical engineering is the branch of biomedical engineering dealing with the actual implementation of medical equipment and technologies in hospitals or other clinical settings. Major roles of clinical engineers include training and supervising biomedical equipment technicians (BMETs), selecting technological products/services and logistically managing their implementation, working with governmental regulators on inspections/audits, and serving as technological consultants for other hospital staff (e.g. physicians, administrators, I.T., etc.). Clinical engineers also advise and collaborate with medical device producers regarding prospective design improvements based on clinical experiences, as well as monitor the progression of the state of the art so as to redirect procurement patterns accordingly. Their inherent focus on practical implementation of technology has tended to keep them oriented more towards incremental-level redesigns and reconfigurations, as opposed to revolutionary research & development or ideas that would be many years from clinical adoption; however, there is a growing effort to expand this time-horizon over which clinical engineers can influence the trajectory of biomedical innovation. In their various roles, they form a "bridge" between the primary designers and the end-users, by combining the perspectives of being both close to the point-of-use, while also trained in product and process engineering. Clinical engineering departments will sometimes hire not just biomedical engineers, but also industrial/systems engineers to help address operations research/optimization, human factors, cost analysis, etc. Also, see safety engineering for a discussion of the procedures used to design safe systems. A clinical engineering department is typically structured with a manager, supervisor, engineers, and technicians, with a common ratio of one engineer per eighty hospital beds. 
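The staffing ratio quoted above (one engineer per eighty beds) translates directly into a sizing rule; a trivial sketch, where rounding up for partial coverage is an added assumption, is:

```python
# Clinical engineering staffing estimate from the one-engineer-per-eighty-beds
# ratio quoted above; rounding up is an assumption, not part of the source.
import math

def engineers_needed(beds: int, beds_per_engineer: int = 80) -> int:
    return math.ceil(beds / beds_per_engineer)

print(engineers_needed(500))  # a 500-bed hospital -> 7 engineers
```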
Clinical engineers are also authorized to audit pharmaceutical and associated stores to monitor FDA recalls of invasive items. Rehabilitation engineering Rehabilitation engineering is the systematic application of engineering sciences to design, develop, adapt, test, evaluate, apply, and distribute technological solutions to problems confronted by individuals with disabilities. Functional areas addressed through rehabilitation engineering may include mobility, communications, hearing, vision, and cognition, and activities associated with employment, independent living, education, and integration into the community. While some rehabilitation engineers have master's degrees in rehabilitation engineering, usually a subspecialty of biomedical engineering, most rehabilitation engineers have undergraduate or graduate degrees in biomedical engineering, mechanical engineering, or electrical engineering. A Portuguese university provides an undergraduate degree and a master's degree in Rehabilitation Engineering and Accessibility. Qualification to become a rehabilitation engineer in the UK is possible via a university BSc Honours degree course such as that offered by the Health Design & Technology Institute, Coventry University. The rehabilitation process for people with disabilities often entails the design of assistive devices such as walking aids intended to promote the inclusion of their users into the mainstream of society, commerce, and recreation. Regulatory issues Regulatory requirements have steadily increased in recent decades in response to the many incidents caused by devices to patients. For example, from 2008 to 2011, in the US, there were 119 FDA recalls of medical devices classified as Class I. According to the U.S. Food and Drug Administration (FDA), a Class I recall is associated with "a situation in which there is a reasonable probability that the use of, or exposure to, a product will cause serious adverse health consequences or death". Regardless of the country-specific legislation, the main regulatory objectives coincide worldwide. For example, in the medical device regulations, a product must be: 1) safe, 2) effective, and 3) consistently so for all the manufactured devices. A product is safe if patients, users, and third parties do not run unacceptable risks of physical hazards (death, injuries, ...) in its intended use. Protective measures have to be introduced on the devices to reduce residual risks to a level that is acceptable when compared with the benefit derived from its use. A product is effective if it performs as specified by the manufacturer in the intended use. Effectiveness is achieved through clinical evaluation, compliance with performance standards or demonstrations of substantial equivalence with an already marketed device. The previous features have to be ensured for all the manufactured items of the medical device. This requires that a quality system shall be in place for all the relevant entities and processes that may impact safety and effectiveness over the whole medical device lifecycle. The medical device engineering area is among the most heavily regulated fields of engineering, and practicing biomedical engineers must routinely consult and cooperate with regulatory law attorneys and other experts. The Food and Drug Administration (FDA) is the principal healthcare regulatory authority in the United States, having jurisdiction over medical devices, drugs, biologics, and combination products. 
The paramount objectives driving policy decisions by the FDA are the safety and effectiveness of healthcare products, which have to be assured through a quality system in place as specified under the 21 CFR 820 regulation. In addition, because biomedical engineers often develop devices and technologies for "consumer" use, such as physical therapy devices (which are also "medical" devices), these may also be governed in some respects by the Consumer Product Safety Commission. The greatest hurdles tend to be 510K "clearance" (typically for Class 2 devices) or pre-market "approval" (typically for drugs and class 3 devices). In the European context, safety, effectiveness and quality are ensured through the "Conformity Assessment", which is defined as "the method by which a manufacturer demonstrates that its device complies with the requirements of the European Medical Device Directive". The directive specifies different procedures according to the class of the device, ranging from the simple Declaration of Conformity (Annex VII) for Class I devices to EC verification (Annex IV), Production quality assurance (Annex V), Product quality assurance (Annex VI) and Full quality assurance (Annex II). The Medical Device Directive specifies detailed procedures for certification. In general terms, these procedures include tests and verifications that are to be contained in specific deliverables such as the risk management file, the technical file, and the quality system deliverables. The risk management file is the first deliverable that conditions the following design and manufacturing steps. The risk management stage shall drive the product so that product risks are reduced to an acceptable level with respect to the benefits expected for the patients from the use of the device. The technical file contains all the documentation data and records supporting medical device certification. The FDA technical file has similar content, although organized in a different structure. The quality system deliverables usually include procedures that ensure quality throughout all product life cycles. The same standard (EN ISO 13485) is usually applied for quality management systems in the US and worldwide. In the European Union, there are certifying entities named "Notified Bodies", accredited by the European Member States. The Notified Bodies must ensure the effectiveness of the certification process for all medical devices apart from the Class I devices, where a declaration of conformity produced by the manufacturer is sufficient for marketing. Once a product has passed all the steps required by the Medical Device Directive, the device is entitled to bear a CE marking, indicating that the device is believed to be safe and effective when used as intended, and, therefore, it can be marketed within the European Union area. The different regulatory arrangements sometimes result in particular technologies being developed first for either the U.S. or Europe, depending on the more favorable form of regulation. While nations often strive for substantive harmony to facilitate cross-national distribution, philosophical differences about the optimal extent of regulation can be a hindrance; more restrictive regulations seem appealing on an intuitive level, but critics decry the tradeoff cost in terms of slowing access to life-saving developments. RoHS II Directive 2011/65/EU, better known as RoHS 2, is a recast of legislation originally introduced in 2002. 
The original EU legislation "Restrictions of Certain Hazardous Substances in Electrical and Electronics Devices" (RoHS Directive 2002/95/EC) was replaced and superseded by 2011/65/EU, published in July 2011 and commonly known as RoHS 2. RoHS seeks to limit the dangerous substances in circulation in electronics products, in particular toxins and heavy metals, which are subsequently released into the environment when such devices are recycled. The scope of RoHS 2 is widened to include products previously excluded, such as medical devices and industrial equipment. In addition, manufacturers are now obliged to provide conformity risk assessments and test reports – or explain why they are lacking. For the first time, not only manufacturers but also importers and distributors share a responsibility to ensure that Electrical and Electronic Equipment within the scope of RoHS complies with the hazardous substances limits and bears a CE mark. IEC 60601 The International Standard IEC 60601 for home healthcare electro-medical devices defines the requirements for devices used in the home healthcare environment. IEC 60601-1-11 (2010) must now be incorporated into the design and verification of a wide range of home use and point of care medical devices along with other applicable standards in the IEC 60601 3rd edition series. The mandatory date for implementation of the EN European version of the standard is June 1, 2013. The US FDA requires the use of the standard on June 30, 2013, while Health Canada recently extended the required date from June 2012 to April 2013. The North American agencies will only require these standards for new device submissions, while the EU will take the more severe approach of requiring all applicable devices being placed on the market to consider the home healthcare standard. AS/NZS 3551:2012 AS/NZS 3551:2012 is the Australian and New Zealand standard for the management of medical devices. The standard specifies the procedures required to maintain a wide range of medical assets in a clinical setting (e.g., a hospital). The standard is based on the IEC 60601 standards. The standard covers a wide range of medical equipment management elements including procurement, acceptance testing, maintenance (electrical safety and preventive maintenance testing) and decommissioning. 
In the U.S., an increasing number of undergraduate programs are also becoming recognized by ABET as accredited bioengineering/biomedical engineering programs. As of 2023, 155 programs are currently accredited by ABET. In Canada and Australia, accredited graduate programs in biomedical engineering are common. For example, McMaster University offers an M.A.Sc, an MD/PhD, and a PhD in Biomedical engineering. The first Canadian undergraduate BME program was offered at University of Guelph as a four-year B.Eng. program. The Polytechnique in Montreal is also offering a bachelors's degree in biomedical engineering as is Flinders University. As with many degrees, the reputation and ranking of a program may factor into the desirability of a degree holder for either employment or graduate admission. The reputation of many undergraduate degrees is also linked to the institution's graduate or research programs, which have some tangible factors for rating, such as research funding and volume, publications and citations. With BME specifically, the ranking of a university's hospital and medical school can also be a significant factor in the perceived prestige of its BME department/program. Graduate education is a particularly important aspect in BME. While many engineering fields (such as mechanical or electrical engineering) do not need graduate-level training to obtain an entry-level job in their field, the majority of BME positions do prefer or even require them. Since most BME-related professions involve scientific research, such as in pharmaceutical and medical device development, graduate education is almost a requirement (as undergraduate degrees typically do not involve sufficient research training and experience). This can be either a Masters or Doctoral level degree; while in certain specialties a Ph.D. is notably more common than in others, it is hardly ever the majority (except in academia). In fact, the perceived need for some kind of graduate credential is so strong that some undergraduate BME programs will actively discourage students from majoring in BME without an expressed intention to also obtain a master's degree or apply to medical school afterwards. Graduate programs in BME, like in other scientific fields, are highly varied, and particular programs may emphasize certain aspects within the field. They may also feature extensive collaborative efforts with programs in other fields (such as the university's Medical School or other engineering divisions), owing again to the interdisciplinary nature of BME. M.S. and Ph.D. programs will typically require applicants to have an undergraduate degree in BME, or another engineering discipline (plus certain life science coursework), or life science (plus certain engineering coursework). Education in BME also varies greatly around the world. By virtue of its extensive biotechnology sector, its numerous major universities, and relatively few internal barriers, the U.S. has progressed a great deal in its development of BME education and training opportunities. Europe, which also has a large biotechnology sector and an impressive education system, has encountered trouble in creating uniform standards as the European community attempts to supplant some of the national jurisdictional barriers that still exist. Recently, initiatives such as BIOMEDEA have sprung up to develop BME-related education and professional standards. Other countries, such as Australia, are recognizing and moving to correct deficiencies in their BME education. 
Also, as high technology endeavors are usually marks of developed nations, some areas of the world are prone to slower development in education, including in BME. Licensure/certification As with other learned professions, each state has certain (fairly similar) requirements for becoming licensed as a registered Professional Engineer (PE), but, in the US, such a license is not required to work as an engineer in industry in the majority of situations (due to an exception known as the industrial exemption, which effectively applies to the vast majority of American engineers). The US model has generally been to require only those practicing engineers offering engineering services that impact the public welfare, safety, safeguarding of life, health, or property to be licensed, while engineers working in private industry without a direct offering of engineering services to the public or other businesses, education, and government need not be licensed. This is notably not the case in many other countries, where a license is as legally necessary to practice engineering as it is for law or medicine. Biomedical engineering is regulated in some countries, such as Australia, but registration is typically only recommended and not required. In the UK, mechanical engineers working in the areas of medical engineering, bioengineering or biomedical engineering can gain Chartered Engineer status through the Institution of Mechanical Engineers. The Institution also runs the Engineering in Medicine and Health Division. The Institute of Physics and Engineering in Medicine (IPEM) has a panel for the accreditation of MSc courses in Biomedical Engineering, and Chartered Engineer status can also be sought through IPEM. The Fundamentals of Engineering exam – the first (and more general) of two licensure examinations for most U.S. jurisdictions – now covers biology (although technically not BME). For the second exam, called the Principles and Practices, Part 2, or the Professional Engineering exam, candidates may select a particular engineering discipline's content to be tested on; there is currently not an option for BME with this, meaning that any biomedical engineers seeking a license must prepare to take this examination in another category (which does not affect the actual license, since most jurisdictions do not recognize discipline specialties anyway). However, the Biomedical Engineering Society (BMES) is, as of 2009, exploring the possibility of seeking to implement a BME-specific version of this exam to facilitate biomedical engineers pursuing licensure. Beyond governmental registration, certain private-sector professional/industrial organizations also offer certifications with varying degrees of prominence. One such example is the Certified Clinical Engineer (CCE) certification for clinical engineers. Career prospects In 2012 there were about 19,400 biomedical engineers employed in the US, and the field was predicted to grow by 5% (faster than average) from 2012 to 2022. Biomedical engineering has the highest percentage of female engineers compared to other common engineering professions. As of 2023, there are about 19,700 biomedical engineering jobs in the US; the average pay in the field is around $100,730 a year, or about $48.43 an hour, and employment is expected to grow by 7% from 2023 to 2033, even faster than the earlier projection. 
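The 2023 pay figures quoted above are mutually consistent under a standard 2,080-hour working year (52 weeks of 40 hours), an assumption used here only as a sanity check:

```python
# Sanity check: annual pay versus hourly rate quoted above, assuming a
# standard 2,080-hour work year (52 weeks * 40 hours).
annual_pay_usd = 100_730.00
hours_per_year = 52 * 40
print(f"implied hourly rate: ${annual_pay_usd / hours_per_year:.2f}")  # ~ $48.43
```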
Notable figures Julia Tutelman Apter (deceased) – One of the first specialists in neurophysiological research and a founding member of the Biomedical Engineering Society Earl Bakken (deceased) – Invented the first transistorised pacemaker, co-founder of Medtronic. Forrest Bird (deceased) – aviator and pioneer in the invention of mechanical ventilators Y.C. Fung (deceased) – professor emeritus at the University of California, San Diego, considered by many to be the founder of modern biomechanics Leslie Geddes (deceased) – professor emeritus at Purdue University, electrical engineer, inventor, and educator of over 2000 biomedical engineers, received a National Medal of Technology in 2006 from President George Bush for his more than 50 years of contributions that have spawned innovations ranging from burn treatments to miniature defibrillators, ligament repair to tiny blood pressure monitors for premature infants, as well as a new method for performing cardiopulmonary resuscitation (CPR). Willem Johan Kolff (deceased) – pioneer of hemodialysis as well as in the field of artificial organs Robert Langer – Institute Professor at MIT, runs the largest BME laboratory in the world, pioneer in drug delivery and tissue engineering John Macleod (deceased) – one of the co-discoverers of insulin at Case Western Reserve University. Alfred E. Mann – Physicist, entrepreneur and philanthropist. A pioneer in the field of Biomedical Engineering. J. Thomas Mortimer – Emeritus professor of biomedical engineering at Case Western Reserve University. Pioneer in Functional Electrical Stimulation (FES) Robert M. Nerem – professor emeritus at Georgia Institute of Technology. Pioneer in regenerative tissue, biomechanics, and author of over 300 published works. His works have been cited more than 20,000 times cumulatively. P. Hunter Peckham – Donnell Professor of Biomedical Engineering and Orthopaedics at Case Western Reserve University. Pioneer in Functional Electrical Stimulation (FES) Nicholas A. Peppas – Chaired Professor in Engineering, University of Texas at Austin, pioneer in drug delivery, biomaterials, hydrogels and nanobiotechnology. Robert Plonsey – professor emeritus at Duke University, pioneer of electrophysiology Otto Schmitt (deceased) – biophysicist with significant contributions to BME, working with biomimetics Ascher Shapiro (deceased) – Institute Professor at MIT, contributed to the development of the BME field, medical devices (e.g. intra-aortic balloons) Gordana Vunjak-Novakovic – University Professor at Columbia University, pioneer in tissue engineering and bioreactor design John G. Webster – professor emeritus at the University of Wisconsin–Madison, a pioneer in the field of instrumentation amplifiers for the recording of electrophysiological signals Fred Weibell, coauthor of Biomedical Instrumentation and Measurements U.A. Whitaker (deceased) – provider of the Whitaker Foundation, which supported research and education in BME by providing over $700 million to various universities, helping to create 30 BME programs and helping finance the construction of 13 buildings See also Biomedical Engineering and Instrumentation Program (BEIP) References 45. ^Bureau of Labor Statistics, U.S. Department of Labor, Occupational Outlook Handbook, "Bioengineers and Biomedical Engineers", retrieved October 27, 2024. Further reading External links
Biomedical engineering
[ "Engineering", "Biology" ]
6,530
[ "Biological engineering", "Medical technology", "Biomedical engineering" ]
4,831
https://en.wikipedia.org/wiki/Bohr%20model
In atomic physics, the Bohr model or Rutherford–Bohr model was the first successful model of the atom. Developed from 1911 to 1918 by Niels Bohr and building on Ernest Rutherford's nuclear model, it supplanted the plum pudding model of J. J. Thomson only to be replaced by the quantum atomic model in the 1920s. It consists of a small, dense nucleus surrounded by orbiting electrons. It is analogous to the structure of the Solar System, but with attraction provided by electrostatic force rather than gravity, and with the electron energies quantized (assuming only discrete values). In the history of atomic physics, it followed, and ultimately replaced, several earlier models, including Joseph Larmor's Solar System model (1897), Jean Perrin's model (1901), the cubical model (1902), Hantaro Nagaoka's Saturnian model (1904), the plum pudding model (1904), Arthur Haas's quantum model (1910), the Rutherford model (1911), and John William Nicholson's nuclear quantum model (1912). The improvement over the 1911 Rutherford model mainly concerned the new quantum mechanical interpretation introduced by Haas and Nicholson, but forsaking any attempt to explain radiation according to classical physics. The model's key success lies in explaining the Rydberg formula for hydrogen's spectral emission lines. While the Rydberg formula had been known experimentally, it did not gain a theoretical basis until the Bohr model was introduced. Not only did the Bohr model explain the reasons for the structure of the Rydberg formula, it also provided a justification for the fundamental physical constants that make up the formula's empirical results. The Bohr model is a relatively primitive model of the hydrogen atom, compared to the valence shell model. As a theory, it can be derived as a first-order approximation of the hydrogen atom using the broader and much more accurate quantum mechanics and thus may be considered to be an obsolete scientific theory. However, because of its simplicity, and its correct results for selected systems (see below for application), the Bohr model is still commonly taught to introduce students to quantum mechanics or energy level diagrams before moving on to the more accurate, but more complex, valence shell atom. A related quantum model was proposed by Arthur Erich Haas in 1910 but was largely rejected until the 1911 Solvay Congress, where it was thoroughly discussed. The quantum theory of the period between Planck's discovery of the quantum (1900) and the advent of a mature quantum mechanics (1925) is often referred to as the old quantum theory. Background Until the second decade of the 20th century, atomic models were generally speculative. Even the concept of atoms, let alone atoms with internal structure, faced opposition from some scientists. Planetary models In the late 1800s, speculations on the possible structure of the atom included planetary models with orbiting charged electrons. These models faced a significant constraint. In 1897, Joseph Larmor showed that an accelerating charge would radiate power according to classical electrodynamics, a result known as the Larmor formula. Since electrons forced to remain in orbit are continuously accelerating, they would be mechanically unstable. Larmor noted that the electromagnetic effects of multiple electrons, suitably arranged, would cancel each other. Thus subsequent atomic models based on classical electrodynamics needed to adopt such special multi-electron arrangements. 
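Larmor's instability can be made quantitative with a short calculation. The sketch below is illustrative only: it assumes a classical point electron on a slowly shrinking circular orbit around a proton, uses standard modern values for the constants, and takes a typical atomic radius as the assumed starting radius, none of which are figures given in the text.

    # Classical infall time of an electron spiralling into a proton under Larmor radiation,
    # starting from a circular orbit of atomic size (non-relativistic, quasi-circular orbit assumed).
    k  = 8.9875517923e9      # Coulomb constant, N m^2 C^-2
    e  = 1.602176634e-19     # elementary charge, C
    m  = 9.1093837015e-31    # electron mass, kg
    c  = 2.99792458e8        # speed of light, m s^-1
    r0 = 5.29e-11            # assumed starting radius, m (roughly one atomic radius)

    # Orbit energy E(r) = -k e^2 / (2 r); Larmor power P = 2 k e^2 a^2 / (3 c^3)
    # with centripetal acceleration a = k e^2 / (m r^2).  Setting dE/dt = -P and
    # integrating r from r0 down to zero gives the collapse time below.
    t_collapse = r0**3 * m**2 * c**3 / (4.0 * (k * e**2)**2)
    print(f"classical collapse time ~ {t_collapse:.1e} s")   # about 1.6e-11 s

On this classical estimate an orbiting electron would spiral into the nucleus in tens of picoseconds, which is why planetary models needed either the special cancelling arrangements Larmor described or, later, Bohr's postulate of non-radiating stationary orbits.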
Thomson's atom model When Bohr began his work on a new atomic theory in the summer of 1912 the atomic model proposed by J. J. Thomson, now known as the plum pudding model, was the best available. Thomson proposed a model with electrons rotating in coplanar rings within an atomic-sized, positively-charged, spherical volume. Thomson showed that this model was mechanically stable by lengthy calculations and was electrodynamically stable under his original assumption of thousands of electrons per atom. Moreover, he suggested that the particularly stable configurations of electrons in rings were connected to the chemical properties of the atoms. He developed a formula for the scattering of beta particles that seemed to match experimental results. However, Thomson himself later showed that the atom had a factor of a thousand fewer electrons, challenging the stability argument and forcing the poorly understood positive sphere to have most of the atom's mass. Thomson was also unable to explain the many lines in atomic spectra. Rutherford nuclear model In 1908, Hans Geiger and Ernest Marsden demonstrated that alpha particles occasionally scatter at large angles, a result inconsistent with Thomson's model. In 1911 Ernest Rutherford developed a new scattering model, showing that the observed large angle scattering could be explained by a compact, highly charged mass at the center of the atom. Rutherford scattering did not involve the electrons and thus his model of the atom was incomplete. Bohr begins his first paper on his atomic model by describing Rutherford's atom as consisting of a small, dense, positively charged nucleus attracting negatively charged electrons. Atomic spectra By the early twentieth century, it was expected that the atom would account for the many atomic spectral lines. These lines were summarized in empirical formulas by Johann Balmer and Johannes Rydberg. In 1897, Lord Rayleigh showed that vibrations of electrical systems predicted spectral lines that depend on the square of the vibrational frequency, contradicting the empirical formulas which depended directly on the frequency. In 1907 Arthur W. Conway showed that, rather than the entire atom vibrating, vibrations of only one of the electrons in the system described by Thomson might be sufficient to account for spectral series. Although Bohr's model would also rely on just the electron to explain the spectrum, he did not assume an electrodynamical model for the atom. The other important advance in the understanding of atomic spectra was the Rydberg–Ritz combination principle which related atomic spectral line frequencies to differences between 'terms', special frequencies characteristic of each element. Bohr would recognize the terms as energy levels of the atom divided by the Planck constant, leading to the modern view that the spectral lines result from energy differences. Haas atomic model In 1910, Arthur Erich Haas proposed a model of the hydrogen atom with an electron circulating on the surface of a sphere of positive charge. The model resembled Thomson's plum pudding model, but Haas added a radical new twist: he constrained the electron's potential energy, e²/a, on a sphere of radius a to equal the frequency, ν, of the electron's orbit on the sphere times the Planck constant: e²/a = hν, where e represents the charge on the electron and the sphere. Haas combined this constraint with the balance-of-forces equation. The attractive force between the electron and the sphere balances the centrifugal force: e²/a² = m(2πν)²a, where m is the mass of the electron. 
This combination relates the radius of the sphere to the Planck constant: Haas solved for the Planck constant using the then-current value for the radius of the hydrogen atom. Three years later, Bohr would use similar equations with different interpretation. Bohr took the Planck constant as a given value and used the equations to predict a0, the radius of the electron orbiting in the ground state of the hydrogen atom. This value is now called the Bohr radius. Influence of the Solvay Conference The first Solvay Conference, in 1911, was one of the first international physics conferences. Nine Nobel or future Nobel laureates attended, including Ernest Rutherford, Bohr's mentor. Bohr did not attend but he read the Solvay reports and discussed them with Rutherford. The subject of the conference was the theory of radiation and the energy quanta of Max Planck's oscillators. Planck's lecture at the conference ended with comments about atoms and the discussion that followed it concerned atomic models. Hendrik Lorentz raised the question of the composition of the atom based on Haas's model, a form of Thomson's plum pudding model with a quantum modification. Lorentz explained that the size of atoms could be taken to determine the Planck constant as Haas had done, or the Planck constant could be taken as determining the size of atoms. Bohr would adopt the second path. The discussions outlined the need for the quantum theory to be included in the atom. Planck explicitly mentions the failings of classical mechanics. While Bohr had already expressed a similar opinion in his PhD thesis, at Solvay the leading scientists of the day discussed a break with classical theories. Bohr's first paper on his atomic model cites the Solvay proceedings saying: "Whatever the alteration in the laws of motion of the electrons may be, it seems necessary to introduce in the laws in question a quantity foreign to the classical electrodynamics, i.e. Planck's constant, or as it often is called the elementary quantum of action." Encouraged by the Solvay discussions, Bohr would assume the atom was stable and abandon the efforts to stabilize classical models of the atom. Nicholson atom theory In 1911 John William Nicholson published a model of the atom which would influence Bohr's model. Nicholson developed his model based on the analysis of astrophysical spectroscopy. He connected the observed spectral line frequencies with the orbits of electrons in his atoms. The connection he adopted associated the atomic electron orbital angular momentum with the Planck constant. Whereas Planck focused on a quantum of energy, Nicholson's angular momentum quantum relates to orbital frequency. This new concept gave the Planck constant an atomic meaning for the first time. In his 1913 paper Bohr cites Nicholson as finding quantized angular momentum important for the atom. The other critical influence of Nicholson's work was his detailed analysis of spectra. Before Nicholson's work Bohr thought the spectral data was not useful for understanding atoms. In comparing his work to Nicholson's, Bohr came to understand the spectral data and their value. When he then learned from a friend about Balmer's compact formula for the spectral line data, Bohr quickly realized his model would match it in detail. Nicholson's model was based on classical electrodynamics along the lines of J. J. Thomson's plum pudding model, but with his negative electrons orbiting a positive nucleus rather than circulating in a sphere. 
To avoid immediate collapse of this system, he required that electrons come in pairs so that the rotational acceleration of each electron was matched across the orbit. By 1913 Bohr had already shown, from the analysis of alpha particle energy loss, that hydrogen had only a single electron, not a matched pair. Bohr's atomic model would abandon classical electrodynamics. Nicholson's model of radiation was quantum but was attached to the orbits of the electrons. Bohr quantization would associate it with differences in energy levels of his model of hydrogen rather than the orbital frequency. Bohr's previous work Bohr completed his PhD in 1911 with a thesis 'Studies on the Electron Theory of Metals', an application of the classical electron theory of Hendrik Lorentz. Bohr noted two deficits of the classical model. The first concerned the specific heat of metals, which James Clerk Maxwell noted in 1875: every additional degree of freedom in a theory of metals, like subatomic electrons, causes more disagreement with experiment. The second was that the classical theory could not explain magnetism. After his PhD, Bohr worked briefly in the lab of J. J. Thomson before moving to Rutherford's lab in Manchester to study radioactivity. He arrived just after Rutherford completed his proposal of a compact nuclear core for atoms. Charles Galton Darwin, also at Manchester, had just completed an analysis of alpha particle energy loss in metals, concluding that electron collisions were the dominant cause of loss. Bohr showed in a subsequent paper that Darwin's results would improve by accounting for electron binding energy. Importantly this allowed Bohr to conclude that hydrogen atoms have a single electron. Development Next, Bohr was told by his friend, Hans Hansen, that the Balmer series is calculated using the Balmer formula, an empirical equation discovered by Johann Balmer in 1885 that described wavelengths of some spectral lines of hydrogen. This was further generalized by Johannes Rydberg in 1888, resulting in what is now known as the Rydberg formula. After this, Bohr declared, "everything became clear". In 1913 Niels Bohr put forth three postulates to provide an electron model consistent with Rutherford's nuclear model: The electron is able to revolve in certain stable orbits around the nucleus without radiating any energy, contrary to what classical electromagnetism suggests. These stable orbits are called stationary orbits and are attained at certain discrete distances from the nucleus. The electron cannot have any other orbit in between the discrete ones. The stationary orbits are attained at distances for which the angular momentum of the revolving electron is an integer multiple of the reduced Planck constant: L = nħ, where n is called the principal quantum number, and ħ = h/2π. The lowest value of n is 1; this gives the smallest possible orbital radius, known as the Bohr radius, of 0.0529 nm for hydrogen. Once an electron is in this lowest orbit, it can get no closer to the nucleus. Starting from the angular momentum quantum rule, which, as Bohr admits, was previously given by Nicholson in his 1912 paper, Bohr was able to calculate the energies of the allowed orbits of the hydrogen atom and other hydrogen-like atoms and ions. These orbits are associated with definite energies and are also called energy shells or energy levels. In these orbits, the electron's acceleration does not result in radiation and energy loss. The Bohr model of an atom was based upon Planck's quantum theory of radiation. 
Electrons can only gain and lose energy by jumping from one allowed orbit to another, absorbing or emitting electromagnetic radiation with a frequency ν determined by the energy difference of the levels according to the Planck relation: ΔE = hν, where h is the Planck constant. Other points are: Like Einstein's theory of the photoelectric effect, Bohr's formula assumes that during a quantum jump a discrete amount of energy is radiated. However, unlike Einstein, Bohr stuck to the classical Maxwell theory of the electromagnetic field. Quantization of the electromagnetic field was explained by the discreteness of the atomic energy levels; Bohr did not believe in the existence of photons. According to the Maxwell theory the frequency of classical radiation is equal to the rotation frequency ν_rot of the electron in its orbit, with harmonics at integer multiples of this frequency. This result is obtained from the Bohr model for jumps between energy levels E_n and E_(n−k) when k is much smaller than n. These jumps reproduce the frequency of the k-th harmonic of orbit n. For sufficiently large values of n (so-called Rydberg states), the two orbits involved in the emission process have nearly the same rotation frequency, so that the classical orbital frequency is not ambiguous. But for small n (or large k), the radiation frequency has no unambiguous classical interpretation. This marks the birth of the correspondence principle, requiring quantum theory to agree with the classical theory only in the limit of large quantum numbers. The Bohr–Kramers–Slater theory (BKS theory) is a failed attempt to extend the Bohr model, which violates the conservation of energy and momentum in quantum jumps, with the conservation laws only holding on average. Bohr's condition, that the angular momentum be an integer multiple of ħ, was later reinterpreted in 1924 by de Broglie as a standing wave condition: the electron is described by a wave and a whole number of wavelengths must fit along the circumference of the electron's orbit: nλ = 2πr. According to de Broglie's hypothesis, matter particles such as the electron behave as waves. The de Broglie wavelength of an electron is λ = h/(m_e v), which implies that nh/(m_e v) = 2πr, or m_e v r = nh/(2π), where m_e v r is the angular momentum of the orbiting electron. Writing L for this angular momentum, the previous equation becomes L = nh/(2π), which is Bohr's second postulate. Bohr described the angular momentum of the electron orbit as L = nh/(2π), while de Broglie's wavelength λ = h/p described h divided by the electron momentum. In 1913, however, Bohr justified his rule by appealing to the correspondence principle, without providing any sort of wave interpretation. In 1913, the wave behavior of matter particles such as the electron was not suspected. In 1925, a new kind of mechanics was proposed, quantum mechanics, in which Bohr's model of electrons traveling in quantized orbits was extended into a more accurate model of electron motion. The new theory was proposed by Werner Heisenberg. Another form of the same theory, wave mechanics, was discovered by the Austrian physicist Erwin Schrödinger independently, and by different reasoning. Schrödinger employed de Broglie's matter waves, but sought wave solutions of a three-dimensional wave equation describing electrons that were constrained to move about the nucleus of a hydrogen-like atom, by being trapped by the potential of the positive nuclear charge. Electron energy levels The Bohr model gives almost exact results only for a system where two charged points orbit each other at speeds much less than that of light. 
This not only involves one-electron systems such as the hydrogen atom, singly ionized helium, and doubly ionized lithium, but it includes positronium and Rydberg states of any atom where one electron is far away from everything else. It can be used for K-line X-ray transition calculations if other assumptions are added (see Moseley's law below). In high energy physics, it can be used to calculate the masses of heavy quark mesons. Calculation of the orbits requires two assumptions. Classical mechanics The electron is held in a circular orbit by electrostatic attraction. The centripetal force is equal to the Coulomb force. where me is the electron's mass, e is the elementary charge, ke is the Coulomb constant and Z is the atom's atomic number. It is assumed here that the mass of the nucleus is much larger than the electron mass (which is a good assumption). This equation determines the electron's speed at any radius: It also determines the electron's total energy at any radius: The total energy is negative and inversely proportional to r. This means that it takes energy to pull the orbiting electron away from the proton. For infinite values of r, the energy is zero, corresponding to a motionless electron infinitely far from the proton. The total energy is half the potential energy, the difference being the kinetic energy of the electron. This is also true for noncircular orbits by the virial theorem. A quantum rule The angular momentum is an integer multiple of ħ: Derivation In classical mechanics, if an electron is orbiting around an atom with period T, and if its coupling to the electromagnetic field is weak, so that the orbit doesn't decay very much in one cycle, it will emit electromagnetic radiation in a pattern repeating at every period, so that the Fourier transform of the pattern will only have frequencies which are multiples of 1/T. However, in quantum mechanics, the quantization of angular momentum leads to discrete energy levels of the orbits, and the emitted frequencies are quantized according to the energy differences between these levels. This discrete nature of energy levels introduces a fundamental departure from the classical radiation law, giving rise to distinct spectral lines in the emitted radiation. Bohr assumes that the electron is circling the nucleus in an elliptical orbit obeying the rules of classical mechanics, but with no loss of radiation due to the Larmor formula. Denoting the total energy as E, the negative electron charge as e, the positive nucleus charge as K=Z|e|, the electron mass as me, half the major axis of the ellipse as a, he starts with these equations: E is assumed to be negative, because a positive energy is required to unbind the electron from the nucleus and put it at rest at an infinite distance. Eq. (1a) is obtained from equating the centripetal force to the Coulombian force acting between the nucleus and the electron, considering that (where T is the average kinetic energy and U the average electrostatic potential), and that for Kepler's second law, the average separation between the electron and the nucleus is a. Eq. (1b) is obtained from the same premises of eq. (1a) plus the virial theorem, stating that, for an elliptical orbit, Then Bohr assumes that is an integer multiple of the energy of a quantum of light with half the frequency of the electron's revolution frequency, i.e.: From eq. (1a,1b,2), it descends: He further assumes that the orbit is circular, i.e. 
, and, denoting the angular momentum of the electron as L, introduces the equation: Eq. (4) stems from the virial theorem, and from the classical mechanics relationships between the angular momentum, the kinetic energy and the frequency of revolution. From eqs. (1c,2,4), it follows: where: that is: This result states that the angular momentum of the electron is an integer multiple of the reduced Planck constant. Substituting the expression for the velocity gives an equation for r in terms of n: so that the allowed orbit radius at any n is The smallest possible value of r in the hydrogen atom (n = 1) is called the Bohr radius and is equal to: The energy of the n-th level for any atom is determined by the radius and quantum number: An electron in the lowest energy level of hydrogen (n = 1) therefore has about 13.6 eV less energy than a motionless electron infinitely far from the nucleus. The next energy level (n = 2) is −3.4 eV. The third (n = 3) is −1.51 eV, and so on. For larger values of n, these are also the binding energies of a highly excited atom with one electron in a large circular orbit around the rest of the atom. The hydrogen formula also coincides with the Wallis product. The combination of natural constants in the energy formula is called the Rydberg energy (RE): This expression is clarified by interpreting it in combinations that form more natural units: m_e c² is the rest mass energy of the electron (511 keV), and α ≈ 1/137 is the fine-structure constant. Since this derivation is with the assumption that the nucleus is orbited by one electron, we can generalize this result by letting the nucleus have a charge Ze, where Z is the atomic number. This will now give us energy levels for hydrogenic (hydrogen-like) atoms, which can serve as a rough order-of-magnitude approximation of the actual energy levels. So for nuclei with Z protons, the energy levels are (to a rough approximation): The actual energy levels cannot be solved analytically for more than one electron (see n-body problem) because the electrons are not only affected by the nucleus but also interact with each other via the Coulomb force. When Z = 1/α (Z ≈ 137), the motion becomes highly relativistic, and Z² cancels the α² in R; the orbit energy begins to be comparable to rest energy. Sufficiently large nuclei, if they were stable, would reduce their charge by creating a bound electron from the vacuum, ejecting the positron to infinity. This is the theoretical phenomenon of electromagnetic charge screening which predicts a maximum nuclear charge. Emission of such positrons has been observed in the collisions of heavy ions to create temporary super-heavy nuclei. The Bohr formula properly uses the reduced mass of electron and proton in all situations, instead of the mass of the electron. However, these numbers are very nearly the same, due to the much larger mass of the proton, about 1836.1 times the mass of the electron, so that the reduced mass in the system is the mass of the electron multiplied by the constant 1836.1/(1+1836.1) = 0.99946. This fact was historically important in convincing Rutherford of the importance of Bohr's model, for it explained the fact that the frequencies of lines in the spectra for singly ionized helium do not differ from those of hydrogen by a factor of exactly 4, but rather by 4 times the ratio of the reduced mass for the hydrogen vs. the helium systems, which was much closer to the experimental ratio than exactly 4. For positronium, the formula uses the reduced mass also, but in this case, it is exactly the electron mass divided by 2. 
For any value of the radius, the electron and the positron are each moving at half the speed around their common center of mass, and each has only one fourth the kinetic energy. The total kinetic energy is half what it would be for a single electron moving around a heavy nucleus. Rydberg formula Beginning in the late 1860s, Johann Balmer and later Johannes Rydberg and Walther Ritz developed increasingly accurate empirical formulas matching measured atomic spectral lines. Critical for Bohr's later work, Rydberg expressed his formula in terms of wave-number, equivalent to frequency. These formulas contained a constant, now known as the Rydberg constant, and a pair of integers indexing the lines: Despite many attempts, no theory of the atom could reproduce these relatively simple formulas. Bohr's theory, describing the energies of transitions or quantum jumps between orbital energy levels, is able to explain these formulas. For the hydrogen atom Bohr starts with his derived formula for the energy released as a free electron moves into a stable circular orbit indexed by n: The energy difference between two such levels is then: Therefore, Bohr's theory gives the Rydberg formula and moreover the numerical value of the Rydberg constant for hydrogen in terms of more fundamental constants of nature, including the electron's charge, the electron's mass, and the Planck constant: Since the energy of a photon is E = hc/λ, these results can be expressed in terms of the wavelength of the photon given off: Bohr's derivation of the Rydberg constant, as well as the concomitant agreement of Bohr's formula with experimentally observed spectral lines of the Lyman (n = 1), Balmer (n = 2), and Paschen (n = 3) series, and successful theoretical prediction of other lines not yet observed, was one reason that his model was immediately accepted. To apply to atoms with more than one electron, the Rydberg formula can be modified by replacing the atomic number Z with an effective value Z − b, where b is a constant representing a screening effect due to the inner-shell and other electrons (see Electron shell and the later discussion of the "Shell Model of the Atom" below). This was established empirically before Bohr presented his model. Shell model (heavier atoms) Bohr's original three papers in 1913 described mainly the electron configuration in lighter elements. Bohr called his electron shells "rings" in 1913. Atomic orbitals within shells did not exist at the time of his planetary model. Bohr explains in Part 3 of his famous 1913 paper that the maximum number of electrons in a shell is eight, writing: "We see, further, that a ring of n electrons cannot rotate in a single ring round a nucleus of charge ne unless n < 8." For smaller atoms, the electron shells would be filled as follows: "rings of electrons will only join together if they contain equal numbers of electrons; and that accordingly the numbers of electrons on inner rings will only be 2, 4, 8". However, in larger atoms the innermost shell would contain eight electrons, "on the other hand, the periodic system of the elements strongly suggests that already in neon N = 10 an inner ring of eight electrons will occur". Bohr wrote "From the above we are led to the following possible scheme for the arrangement of the electrons in light atoms:" In Bohr's third 1913 paper Part III called "Systems Containing Several Nuclei", he says that two atoms form molecules on a symmetrical plane and he reverts to describing hydrogen. 
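Before the discussion of heavier atoms continues, the quantitative content of the Rydberg formula section above can be checked numerically. The following short Python sketch is illustrative only: it evaluates the standard Bohr expressions with modern SI constant values, which are of course not the values available in 1913. It reproduces the Bohr radius, the hydrogen ground-state energy of about −13.6 eV, and the wavelength of the Balmer H-alpha line, the n = 3 to n = 2 transition near 656 nm.

    # Numerical check of the Bohr-model expressions for hydrogen (Z = 1).
    import math

    h    = 6.62607015e-34    # Planck constant, J s
    hbar = h / (2 * math.pi) # reduced Planck constant
    m_e  = 9.1093837015e-31  # electron mass, kg
    e    = 1.602176634e-19   # elementary charge, C
    eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
    c    = 2.99792458e8      # speed of light, m/s

    # Bohr radius: a0 = 4*pi*eps0*hbar^2 / (m_e * e^2)
    a0 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)

    def energy(n, Z=1):
        """Energy of level n in joules: E_n = -m_e e^4 Z^2 / (8 eps0^2 h^2 n^2)."""
        return -m_e * e**4 * Z**2 / (8 * eps0**2 * h**2 * n**2)

    def wavelength(n_upper, n_lower, Z=1):
        """Wavelength of the photon emitted in the n_upper -> n_lower transition."""
        return h * c / (energy(n_upper, Z) - energy(n_lower, Z))

    print(f"Bohr radius          : {a0 * 1e9:.4f} nm")                # ~0.0529 nm
    print(f"Ground-state energy  : {energy(1) / e:.2f} eV")           # ~-13.6 eV
    print(f"Balmer H-alpha (3->2): {wavelength(3, 2) * 1e9:.1f} nm")  # ~656 nm

Running the fragment reproduces the values quoted in the text, which is the sense in which the Bohr model "explains" the Rydberg formula: the empirical constant is expressed entirely in terms of e, m_e, ε0, h and c.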
The 1913 Bohr model did not discuss higher elements in detail and John William Nicholson was one of the first to prove in 1914 that it could not work for lithium, but was an attractive theory for hydrogen and ionized helium. In 1921, following the work of chemists and others involved in work on the periodic table, Bohr extended the model of hydrogen to give an approximate model for heavier atoms. This gave a physical picture that reproduced many known atomic properties for the first time, although these properties were proposed contemporarily with the identical work of chemist Charles Rugeley Bury. Bohr's partner in research during 1914 to 1916 was Walther Kossel, who corrected Bohr's work to show that electrons interacted through the outer rings, and Kossel called the rings "shells". Irving Langmuir is credited with the first viable arrangement of electrons in shells with only two in the first shell and going up to eight in the next according to the octet rule of 1904, although Kossel had already predicted a maximum of eight per shell in 1916. Heavier atoms have more protons in the nucleus, and more electrons to cancel the charge. Bohr took from these chemists the idea that each discrete orbit could only hold a certain number of electrons. Per Kossel, after the orbit is full, the next level would have to be used. This gives the atom a shell structure designed by Kossel, Langmuir, and Bury, in which each shell corresponds to a Bohr orbit. This model is even more approximate than the model of hydrogen, because it treats the electrons in each shell as non-interacting. But the repulsions of electrons are taken into account somewhat by the phenomenon of screening. The electrons in outer orbits do not only orbit the nucleus, but they also move around the inner electrons, so the effective charge Z that they feel is reduced by the number of the electrons in the inner orbit. For example, the lithium atom has two electrons in the lowest 1s orbit, and these orbit at Z = 2. Each one sees the nuclear charge of Z = 3 minus the screening effect of the other, which crudely reduces the nuclear charge by 1 unit. This means that the innermost electrons orbit at approximately 1/2 the Bohr radius. The outermost electron in lithium orbits at roughly the Bohr radius, since the two inner electrons reduce the nuclear charge by 2. This outer electron should be at nearly one Bohr radius from the nucleus. Because the electrons strongly repel each other, the effective charge description is very approximate; the effective charge Z doesn't usually come out to be an integer. The shell model was able to qualitatively explain many of the mysterious properties of atoms which became codified in the late 19th century in the periodic table of the elements. One property was the size of atoms, which could be determined approximately by measuring the viscosity of gases and density of pure crystalline solids. Atoms tend to get smaller toward the right in the periodic table, and become much larger at the next line of the table. Atoms to the right of the table tend to gain electrons, while atoms to the left tend to lose them. Every element on the last column of the table is chemically inert (noble gas). In the shell model, this phenomenon is explained by shell-filling. Successive atoms become smaller because they are filling orbits of the same size, until the orbit is full, at which point the next atom in the table has a loosely bound outer electron, causing it to expand. 
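The screening estimate described above for lithium can be put into numbers, and doing so also illustrates the caution that the effective charge rarely comes out as an integer. The short sketch below is illustrative only: it uses the screened hydrogen-like binding-energy formula E = 13.6 eV · Z_eff²/n², and the measured first ionization energy of lithium (about 5.39 eV) is a standard reference value rather than a figure from this article.

    # Screened hydrogen-like estimate for the outer (n = 2) electron of lithium (Z = 3).
    import math

    RY = 13.6   # hydrogen ground-state binding energy, eV

    def binding_energy(Z_eff, n):
        return RY * Z_eff**2 / n**2

    # Naive integer screening: the two 1s electrons screen two units of charge,
    # so the outer electron sees Z_eff = 3 - 2 = 1.
    print(f"predicted binding energy: {binding_energy(1.0, 2):.1f} eV")   # 3.4 eV

    # The measured first ionization energy of lithium is about 5.39 eV.
    # Inverting the formula gives the effective charge that would reproduce it.
    E_measured = 5.39
    Z_eff = math.sqrt(E_measured * 2**2 / RY)
    print(f"implied effective charge: {Z_eff:.2f}")                       # ~1.26, not an integer

The naive integer value underestimates the binding energy by roughly a third, in line with the statement above that the effective charge description is very approximate.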
The first Bohr orbit is filled when it has two electrons, which explains why helium is inert. The second orbit allows eight electrons, and when it is full the atom is neon, again inert. The third orbital contains eight again, except that in the more correct Sommerfeld treatment (reproduced in modern quantum mechanics) there are extra "d" electrons. The third orbit may hold an extra 10 d electrons, but these positions are not filled until a few more orbitals from the next level are filled (filling the n=3 d orbitals produces the 10 transition elements). The irregular filling pattern is an effect of interactions between electrons, which are not taken into account in either the Bohr or Sommerfeld models and which are difficult to calculate even in the modern treatment. Moseley's law and calculation (K-alpha X-ray emission lines) Niels Bohr said in 1962: "You see actually the Rutherford work was not taken seriously. We cannot understand today, but it was not taken seriously at all. There was no mention of it any place. The great change came from Moseley." In 1913, Henry Moseley found an empirical relationship between the strongest X-ray line emitted by atoms under electron bombardment (then known as the K-alpha line), and their atomic number Z. Moseley's empirical formula was found to be derivable from Rydberg's formula and later Bohr's formula (Moseley actually mentions only Ernest Rutherford and Antonius Van den Broek in terms of models as these had been published before Moseley's work and Moseley's 1913 paper was published the same month as the first Bohr model paper). The two additional assumptions are that [1] this X-ray line came from a transition between energy levels with quantum numbers 1 and 2, and [2] that the atomic number Z, when used in the formula for atoms heavier than hydrogen, should be diminished by 1, to Z − 1. Moseley wrote to Bohr, puzzled about his results, but Bohr was not able to help. At that time, he thought that the postulated innermost "K" shell of electrons should have at least four electrons, not the two which would have neatly explained the result. So Moseley published his results without a theoretical explanation. It was Walther Kossel in 1914 and in 1916 who explained that in the periodic table new elements would be created as electrons were added to the outer shell. In Kossel's paper, he writes: "This leads to the conclusion that the electrons, which are added further, should be put into concentric rings or shells, on each of which ... only a certain number of electrons—namely, eight in our case—should be arranged. As soon as one ring or shell is completed, a new one has to be started for the next element; the number of electrons, which are most easily accessible, and lie at the outermost periphery, increases again from element to element and, therefore, in the formation of each new shell the chemical periodicity is repeated." Later, chemist Langmuir realized that the effect was caused by charge screening, with an inner shell containing only 2 electrons. In his 1919 paper, Irving Langmuir postulated the existence of "cells" which could each contain only two electrons, and these were arranged in "equidistant layers". In the Moseley experiment, one of the innermost electrons in the atom is knocked out, leaving a vacancy in the lowest Bohr orbit, which contains a single remaining electron. This vacancy is then filled by an electron from the next orbit, which has n=2. 
But the n=2 electrons see an effective charge of Z − 1, which is the value appropriate for the charge of the nucleus, when a single electron remains in the lowest Bohr orbit to screen the nuclear charge +Z and lower it by 1 (due to the electron's negative charge screening the nuclear positive charge). The energy gained by an electron dropping from the second shell to the first gives Moseley's law for K-alpha lines: Here, Rv = RE/h is the Rydberg constant expressed as a frequency, equal to 3.28 × 10^15 Hz. For values of Z between 11 and 31 this latter relationship had been empirically derived by Moseley, in a simple (linear) plot of the square root of X-ray frequency against atomic number (however, for silver, Z = 47, the experimentally obtained screening term should be replaced by 0.4). Notwithstanding its restricted validity, Moseley's law not only established the objective meaning of atomic number, but as Bohr noted, it also did more than the Rydberg derivation to establish the validity of the Rutherford/Van den Broek/Bohr nuclear model of the atom, with atomic number (place on the periodic table) standing for whole units of nuclear charge. Van den Broek had published his model in January 1913 showing the periodic table was arranged according to charge while Bohr's atomic model was not published until July 1913. The K-alpha line of Moseley's time is now known to be a pair of close lines, written as (Kα1 and Kα2) in Siegbahn notation. Shortcomings The Bohr model gives an incorrect value for the ground state orbital angular momentum: the angular momentum in the true ground state is known to be zero from experiment. Although mental pictures fail somewhat at these levels of scale, an electron in the lowest modern "orbital" with no orbital momentum may be thought of as not revolving "around" the nucleus at all, but merely going tightly around it in an ellipse with zero area (this may be pictured as "back and forth", without striking or interacting with the nucleus). This is only reproduced in a more sophisticated semiclassical treatment like Sommerfeld's. Still, even the most sophisticated semiclassical model fails to explain the fact that the lowest energy state is spherically symmetric – it doesn't point in any particular direction. In modern quantum mechanics, the electron in hydrogen is a spherical cloud of probability that grows denser near the nucleus. The rate-constant of probability-decay in hydrogen is equal to the inverse of the Bohr radius, but since Bohr worked with circular orbits, not zero area ellipses, the fact that these two numbers exactly agree is considered a "coincidence". (However, many such coincidental agreements are found between the semiclassical vs. full quantum mechanical treatment of the atom; these include identical energy levels in the hydrogen atom and the derivation of a fine-structure constant, which arises from the relativistic Bohr–Sommerfeld model (see below) and which happens to be equal to an entirely different concept, in full modern quantum mechanics). The Bohr model also failed to explain: Much of the spectra of larger atoms. At best, it can make predictions about the K-alpha and some L-alpha X-ray emission spectra for larger atoms, if two additional ad hoc assumptions are made. Emission spectra for atoms with a single outer-shell electron (atoms in the lithium group) can also be approximately predicted. 
Also, if the empiric electron–nuclear screening factors for many atoms are known, many other spectral lines can be deduced from the information, in similar atoms of differing elements, via the Ritz–Rydberg combination principles (see Rydberg formula). All these techniques essentially make use of Bohr's Newtonian energy-potential picture of the atom. The relative intensities of spectral lines; although in some simple cases, Bohr's formula or modifications of it, was able to provide reasonable estimates (for example, calculations by Kramers for the Stark effect). The existence of fine structure and hyperfine structure in spectral lines, which are known to be due to a variety of relativistic and subtle effects, as well as complications from electron spin. The Zeeman effect – changes in spectral lines due to external magnetic fields; these are also due to more complicated quantum principles interacting with electron spin and orbital magnetic fields. Doublets and triplets appear in the spectra of some atoms as very close pairs of lines. Bohr's model cannot say why some energy levels should be very close together. Multi-electron atoms do not have energy levels predicted by the model. It does not work for (neutral) helium. Refinements Several enhancements to the Bohr model were proposed, most notably the Sommerfeld or Bohr–Sommerfeld models, which suggested that electrons travel in elliptical orbits around a nucleus instead of the Bohr model's circular orbits. This model supplemented the quantized angular momentum condition of the Bohr model with an additional radial quantization condition, the Wilson–Sommerfeld quantization condition where pr is the radial momentum canonically conjugate to the coordinate qr, which is the radial position, and T is one full orbital period. The integral is the action of action-angle coordinates. This condition, suggested by the correspondence principle, is the only one possible, since the quantum numbers are adiabatic invariants. The Bohr–Sommerfeld model was fundamentally inconsistent and led to many paradoxes. The magnetic quantum number measured the tilt of the orbital plane relative to the xy plane, and it could only take a few discrete values. This contradicted the obvious fact that an atom could have any orientation relative to the coordinates, without restriction. The Sommerfeld quantization can be performed in different canonical coordinates and sometimes gives different answers. The incorporation of radiation corrections was difficult, because it required finding action-angle coordinates for a combined radiation/atom system, which is difficult when the radiation is allowed to escape. The whole theory did not extend to non-integrable motions, which meant that many systems could not be treated even in principle. In the end, the model was replaced by the modern quantum-mechanical treatment of the hydrogen atom, which was first given by Wolfgang Pauli in 1925, using Heisenberg's matrix mechanics. The current picture of the hydrogen atom is based on the atomic orbitals of wave mechanics, which Erwin Schrödinger developed in 1926. However, this is not to say that the Bohr–Sommerfeld model was without its successes. Calculations based on the Bohr–Sommerfeld model were able to accurately explain a number of more complex atomic spectral effects. For example, up to first-order perturbations, the Bohr model and quantum mechanics make the same predictions for the spectral line splitting in the Stark effect. 
At higher-order perturbations, however, the Bohr model and quantum mechanics differ, and measurements of the Stark effect under high field strengths helped confirm the correctness of quantum mechanics over the Bohr model. The prevailing theory behind this difference lies in the shapes of the orbitals of the electrons, which vary according to the energy state of the electron. The Bohr–Sommerfeld quantization conditions lead to questions in modern mathematics. Consistent semiclassical quantization condition requires a certain type of structure on the phase space, which places topological limitations on the types of symplectic manifolds which can be quantized. In particular, the symplectic form should be the curvature form of a connection of a Hermitian line bundle, which is called a prequantization. Bohr also updated his model in 1922, assuming that certain numbers of electrons (for example, 2, 8, and 18) correspond to stable "closed shells". Model of the chemical bond Niels Bohr proposed a model of the atom and a model of the chemical bond. According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei. The dynamic equilibrium of the molecular system is achieved through the balance of forces between the forces of attraction of nuclei to the plane of the ring of electrons and the forces of mutual repulsion of the nuclei. The Bohr model of the chemical bond took into account the Coulomb repulsion – the electrons in the ring are at the maximum distance from each other. Symbolism of planetary atomic models Although Bohr's atomic model was superseded by quantum models in the 1920s, the visual image of electrons orbiting a nucleus has remained the popular concept of atoms. The concept of an atom as a tiny planetary system has been widely used as a symbol for atoms and even for "atomic" energy (even though this is more properly considered nuclear energy). Examples of its use over the past century include but are not limited to: The logo of the United States Atomic Energy Commission, which was in part responsible for its later usage in relation to nuclear fission technology in particular. The flag of the International Atomic Energy Agency is a "crest-and-spinning-atom emblem", enclosed in olive branches. The US minor league baseball Albuquerque Isotopes' logo shows baseballs as electrons orbiting a large letter "A". A similar symbol, the atomic whirl, was chosen as the symbol for the American Atheists, and has come to be used as a symbol of atheism in general. The Unicode Miscellaneous Symbols code point U+269B (⚛) for an atom looks like a planetary atom model. The television show The Big Bang Theory uses a planetary-like image in its print logo. The JavaScript library React uses planetary-like image as its logo. On maps, it is generally used to indicate a nuclear power installation. See also 1913 in science Balmer's Constant Bohr–Sommerfeld model The Franck–Hertz experiment provided early support for the Bohr model. The inert-pair effect is adequately explained by means of the Bohr model. Introduction to quantum mechanics References Footnotes Primary sources Reprinted in The Collected Papers of Albert Einstein, A. Engel translator, (1997) Princeton University Press, Princeton. 6 p. 434. 
(provides an elegant reformulation of the Bohr–Sommerfeld quantization conditions, as well as an important insight into the quantization of non-integrable (chaotic) dynamical systems.) Further reading Reprint: Klaus Hentschel: Elektronenbahnen, Quantensprünge und Spektren, in: Charlotte Bigg & Jochen Hennig (eds.) Atombilder. Ikonografien des Atoms in Wissenschaft und Öffentlichkeit des 20. Jahrhunderts, Göttingen: Wallstein-Verlag 2009, pp. 51–61 External links Standing waves in Bohr's atomic model—An interactive simulation to intuitively explain the quantization condition of standing waves in Bohr's atomic model 1913 in science Atomic physics Foundational quantum physics Hydrogen physics Niels Bohr Old quantum theory
Bohr model
[ "Physics", "Chemistry" ]
9,410
[ "Foundational quantum physics", "Quantum mechanics", "Old quantum theory", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
4,864
https://en.wikipedia.org/wiki/Bucket%20argument
Isaac Newton's rotating bucket argument (also known as Newton's bucket) is a thought experiment that was designed to demonstrate that true rotational motion cannot be defined as the relative rotation of the body with respect to the immediately surrounding bodies. It is one of five arguments from the "properties, causes, and effects" of "true motion and rest" that support his contention that, in general, true motion and rest cannot be defined as special instances of motion or rest relative to other bodies, but instead can be defined only by reference to absolute space. Alternatively, these experiments provide an operational definition of what is meant by "absolute rotation", and do not pretend to address the question of "rotation relative to what?" General relativity dispenses with absolute space and with physics whose cause is external to the system, with the concept of geodesics of spacetime. Background These arguments, and a discussion of the distinctions between absolute and relative time, space, place and motion, appear in a scholium at the end of Definitions sections in Book I of Newton's work, The Mathematical Principles of Natural Philosophy (1687) (not to be confused with General Scholium at the end of Book III), which established the foundations of classical mechanics and introduced his law of universal gravitation, which yielded the first quantitatively adequate dynamical explanation of planetary motion. Despite their embrace of the principle of rectilinear inertia and the recognition of the kinematical relativity of apparent motion (which underlies whether the Ptolemaic or the Copernican system is correct), natural philosophers of the seventeenth century continued to consider true motion and rest as physically separate descriptors of an individual body. The dominant view Newton opposed was devised by René Descartes, and was supported (in part) by Gottfried Leibniz. It held that empty space is a metaphysical impossibility because space is nothing other than the extension of matter, or, in other words, that when one speaks of the space between things one is actually making reference to the relationship that exists between those things and not to some entity that stands between them. Concordant with the above understanding, any assertion about the motion of a body boils down to a description over time in which the body under consideration is at t1 found in the vicinity of one group of "landmark" bodies and at some t2 is found in the vicinity of some other "landmark" body or bodies. Descartes recognized that there would be a real difference, however, between a situation in which a body with movable parts and originally at rest with respect to a surrounding ring was itself accelerated to a certain angular velocity with respect to the ring, and another situation in which the surrounding ring were given a contrary acceleration with respect to the central object. With sole regard to the central object and the surrounding ring, the motions would be indistinguishable from each other assuming that both the central object and the surrounding ring were absolutely rigid objects. However, if neither the central object nor the surrounding ring were absolutely rigid then the parts of one or both of them would tend to fly out from the axis of rotation. For contingent reasons having to do with the Inquisition, Descartes spoke of motion as both absolute and relative. By the late 19th century, the contention that all motion is relative was re-introduced, notably by Ernst Mach (1883). 
The argument Newton discusses a bucket () filled with water hung by a cord. If the cord is twisted up tightly on itself and then the bucket is released, it begins to spin rapidly, not only with respect to the experimenter, but also in relation to the water it contains. (This situation would correspond to diagram B above.) Although the relative motion at this stage is the greatest, the surface of the water remains flat, indicating that the parts of the water have no tendency to recede from the axis of relative motion, despite proximity to the pail. Eventually, as the cord continues to unwind, the surface of the water assumes a concave shape as it acquires the motion of the bucket spinning relative to the experimenter. This concave shape shows that the water is rotating, despite the fact that the water is at rest relative to the pail. In other words, it is not the relative motion of the pail and water that causes concavity of the water, contrary to the idea that motions can only be relative, and that there is no absolute motion. (This situation would correspond to diagram D.) Possibly the concavity of the water shows rotation relative to something else: say absolute space? Newton says: "One can find out and measure the true and absolute circular motion of the water". In the 1846 Andrew Motte translation of Newton's words: The argument that the motion is absolute, not relative, is incomplete, as it limits the participants relevant to the experiment to only the pail and the water, a limitation that has not been established. In fact, the concavity of the water clearly involves gravitational attraction, and by implication the Earth also is a participant. Here is a critique due to Mach arguing that only relative motion is established: The degree in which Mach's hypothesis is integrated in general relativity is discussed in the article Mach's principle; it is generally held that general relativity is not entirely Machian. All observers agree that the surface of rotating water is curved. However, the explanation of this curvature involves centrifugal force for all observers with the exception of a truly stationary observer, who finds the curvature is consistent with the rate of rotation of the water as they observe it, with no need for an additional centrifugal force. Thus, a stationary frame can be identified, and it is not necessary to ask "Stationary with respect to what?": A supplementary thought experiment with the same objective of determining the occurrence of absolute rotation also was proposed by Newton: the example of observing two identical spheres in rotation about their center of gravity and tied together by a string. Occurrence of tension in the string is indicative of absolute rotation; see Rotating spheres. Detailed analysis The historic interest of the rotating bucket experiment is its usefulness in suggesting one can detect absolute rotation by observation of the shape of the surface of the water. However, one might question just how rotation brings about this change. Below are two approaches to understanding the concavity of the surface of rotating water in a bucket. Newton's laws of motion The shape of the surface of a rotating liquid in a bucket can be determined using Newton's laws for the various forces on an element of the surface. For example, see Knudsen and Hjorth. The analysis begins with the free body diagram in the co-rotating frame where the water appears stationary. 
The height of the water h = h(r) is a function of the radial distance r from the axis of rotation Ω, and the aim is to determine this function. An element of water volume on the surface is shown to be subject to three forces: the vertical force due to gravity Fg, the horizontal, radially outward centrifugal force FCfgl, and the force normal to the surface of the water Fn due to the rest of the water surrounding the selected element of surface. The force due to surrounding water is known to be normal to the surface of the water because a liquid in equilibrium cannot support shear stresses. To quote Anthony and Brackett: Moreover, because the element of water does not move, the sum of all three forces must be zero. To sum to zero, the force of the water must point oppositely to the sum of the centrifugal and gravity forces, which means the surface of the water must adjust so its normal points in this direction. (A very similar problem is the design of a banked turn, where the slope of the turn is set so a car will not slide off the road. The analogy in the case of rotating bucket is that the element of water surface will "slide" up or down the surface unless the normal to the surface aligns with the vector resultant formed by the vector addition Fg + FCfgl.) As r increases, the centrifugal force increases according to the relation (the equations are written per unit mass): where Ω is the constant rate of rotation of the water. The gravitational force is unchanged at where g is the acceleration due to gravity. These two forces add to make a resultant at an angle φ from the vertical given by which clearly becomes larger as r increases. To ensure that this resultant is normal to the surface of the water, and therefore can be effectively nulled by the force of the water beneath, the normal to the surface must have the same angle, that is, leading to the ordinary differential equation for the shape of the surface: or, integrating: where h(0) is the height of the water at r = 0. In other words, the surface of the water is parabolic in its dependence upon the radius. Potential energy The shape of the water's surface can be found in a different, very intuitive way using the interesting idea of the potential energy associated with the centrifugal force in the co-rotating frame. In a reference frame uniformly rotating at angular rate Ω, the fictitious centrifugal force is conservative and has a potential energy of the form: where r is the radius from the axis of rotation. This result can be verified by taking the gradient of the potential to obtain the radially outward force:   The meaning of the potential energy (stored work) is that movement of a test body from a larger radius to a smaller radius involves doing work against the centrifugal force and thus gaining potential energy. But this test body at the smaller radius where its elevation is lower has now lost equivalent gravitational potential energy. Potential energy therefore explains the concavity of the water surface in a rotating bucket. Notice that at equilibrium the surface adopts a shape such that an element of volume at any location on its surface has the same potential energy as at any other. That being so, no element of water on the surface has any incentive to move position, because all positions are equivalent in energy. That is, equilibrium is attained. 
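The force-balance analysis above leads to the standard result that the equilibrium surface is the paraboloid h(r) = h(0) + Ω²r²/(2g). The short Python sketch below simply evaluates that formula; the rotation rate and bucket radius are arbitrarily chosen example values, not figures taken from the article.

    # Shape of the free surface of water rotating rigidly with the bucket.
    # Force balance per unit mass gives dh/dr = Omega^2 * r / g, hence
    # h(r) = h(0) + Omega^2 * r^2 / (2 * g): a paraboloid of revolution.
    import math

    g     = 9.81          # gravitational acceleration, m/s^2
    Omega = 2 * math.pi   # rotation rate, rad/s (one revolution per second, example value)
    R     = 0.15          # bucket radius, m (example value)

    def surface_height(r, h0=0.0):
        return h0 + Omega**2 * r**2 / (2 * g)

    rim_rise = surface_height(R) - surface_height(0.0)
    print(f"rise of the surface at the rim: {rim_rise * 100:.1f} cm")   # about 4.5 cm here

The faster the bucket spins, the larger Ω and hence the deeper the paraboloid, matching the observation that more rapid rotation produces a more concave surface.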
On the other hand, were surface regions with lower energy available, the water occupying surface locations of higher potential energy would move to occupy these positions of lower energy, inasmuch as there is no barrier to lateral movement in an ideal liquid. We might imagine deliberately upsetting this equilibrium situation by somehow momentarily altering the surface shape of the water to make it different from an equal-energy surface. This change in shape would not be stable, and the water would not stay in our artificially contrived shape, but engage in a transient exploration of many shapes until non-ideal frictional forces introduced by sloshing, either against the sides of the bucket or by the non-ideal nature of the liquid, killed the oscillations and the water settled down to the equilibrium shape. To see the principle of an equal-energy surface at work, imagine gradually increasing the rate of rotation of the bucket from zero. The water surface is flat at first, and clearly a surface of equal potential energy because all points on the surface are at the same height in the gravitational field acting upon the water. At some small angular rate of rotation, however, an element of surface water can achieve lower potential energy by moving outward under the influence of the centrifugal force; think of an object moving with the force of gravity closer to the Earth's center: the object lowers its potential energy by complying with a force. Because water is incompressible and must remain within the confines of the bucket, this outward movement increases the depth of water at the larger radius, increasing the height of the surface at larger radius, and lowering it at smaller radius. The surface of the water becomes slightly concave, with the consequence that the potential energy of the water at the greater radius is increased by the work done against gravity to achieve the greater height. As the height of water increases, movement toward the periphery becomes no longer advantageous, because the reduction in potential energy from working with the centrifugal force is balanced against the increase in energy working against gravity. Thus, at a given angular rate of rotation, a concave surface represents the stable situation, and the more rapid the rotation, the more concave this surface. If rotation is arrested, the energy stored in fashioning the concave surface must be dissipated, for example through friction, before an equilibrium flat surface is restored. To implement a surface of constant potential energy quantitatively, let the height of the water be : then the potential energy per unit mass contributed by gravity is and the total potential energy per unit mass on the surface is with the background energy level independent of r. In a static situation (no motion of the fluid in the rotating frame), this energy is constant independent of position r. Requiring the energy to be constant, we obtain the parabolic form: where h(0) is the height at r = 0 (the axis). See Figures 1 and 2. The principle of operation of the centrifuge also can be simply understood in terms of this expression for the potential energy, which shows that it is favorable energetically when the volume far from the axis of rotation is occupied by the heavier substance. See also Centrifugal force Inertial frame of reference Mach's principle Philosophy of space and time: Absolutism vs. 
relationalism Rotating reference frame Rotating spheres Rotational gravity Sagnac effect References Further reading The isotropy of the cosmic microwave background radiation is another indicator that the universe does not rotate. See: External links Newton's Views on Space, Time, and Motion from Stanford Encyclopedia of Philosophy, article by Robert Rynasiewicz. At the end of this article, loss of fine distinctions in the translations as compared to the original Latin text is discussed. Life and Philosophy of Leibniz see section on Space, Time and Indiscernibles for Leibniz arguing against the idea of space acting as a causal agent. Newton's Bucket An interactive applet illustrating the water shape, and an attached PDF file with a mathematical derivation of a more complete water-shape model than is given in this article. Classical mechanics Isaac Newton Thought experiments in physics Rotation
Bucket argument
[ "Physics" ]
2,917
[ "Physical phenomena", "Classical mechanics", "Rotation", "Motion (physics)", "Mechanics" ]
4,882
https://en.wikipedia.org/wiki/Background%20radiation
Background radiation is a measure of the level of ionizing radiation present in the environment at a particular location which is not due to the deliberate introduction of radiation sources. Background radiation originates from a variety of sources, both natural and artificial. These include both cosmic radiation and environmental radioactivity from naturally occurring radioactive materials (such as radon and radium), as well as man-made medical X-rays, fallout from nuclear weapons testing and nuclear accidents. Definition Background radiation is defined by the International Atomic Energy Agency as "Dose or the dose rate (or an observed measure related to the dose or dose rate) attributable to all sources other than the one(s) specified." A distinction is thus made between the dose which is already in a location, which is defined here as being "background", and the dose due to a deliberately introduced and specified source. This is important where radiation measurements are taken of a specified radiation source, where the existing background may affect this measurement. An example would be measurement of radioactive contamination in a gamma radiation background, which could increase the total reading above that expected from the contamination alone. However, if no radiation source is specified as being of concern, then the total radiation dose measurement at a location is generally called the background radiation, and this is usually the case where an ambient dose rate is measured for environmental purposes. Background dose rate examples Background radiation varies with location and time. Natural background radiation Radioactive material is found throughout nature. Detectable amounts occur naturally in soil, rocks, water, air, and vegetation, from which it is inhaled and ingested into the body. In addition to this internal exposure, humans also receive external exposure from radioactive materials that remain outside the body and from cosmic radiation from space. The worldwide average natural dose to humans is about 2.4 mSv per year. This is four times the worldwide average artificial radiation exposure, which in 2008 amounted to about 0.6 mSv per year. In some developed countries, like the US and Japan, artificial exposure is, on average, greater than the natural exposure, due to greater access to medical imaging. In Europe, average natural background exposure by country ranges from comparatively low annual values in the United Kingdom to several times higher values for some groups of people in Finland. The International Atomic Energy Agency states: "Exposure to radiation from natural sources is an inescapable feature of everyday life in both working and public environments. This exposure is in most cases of little or no concern to society, but in certain situations the introduction of health protection measures needs to be considered, for example when working with uranium and thorium ores and other Naturally Occurring Radioactive Material (NORM). These situations have become the focus of greater attention by the Agency in recent years." Terrestrial sources Terrestrial background radiation, for the purposes of this discussion, only includes sources that remain external to the body. The major radionuclides of concern are potassium, uranium and thorium and their decay products, some of which, like radium and radon, are intensely radioactive but occur in low concentrations. 
Most of these sources have been decreasing, due to radioactive decay since the formation of the Earth, because there is no significant amount currently transported to the Earth. Thus, the present activity on Earth from uranium-238 is only half as much as it originally was because of its 4.5 billion year half-life, and potassium-40 (half-life 1.25 billion years) is only at about 8% of original activity. But during the time that humans have existed the amount of radiation has decreased very little. Many shorter half-life (and thus more intensely radioactive) isotopes have not decayed out of the terrestrial environment because of their on-going natural production. Examples of these are radium-226 (decay product of thorium-230 in decay chain of uranium-238) and radon-222 (a decay product of radium-226 in said chain). Thorium and uranium (and their daughters) primarily undergo alpha and beta decay, and are not easily detectable. However, many of their daughter products are strong gamma emitters. Thorium-232 is detectable via a 239 keV peak from lead-212, 511, 583 and 2614 keV from thallium-208, and 911 and 969 keV from actinium-228. Uranium-238 manifests as 609, 1120, and 1764 keV peaks of bismuth-214 (cf. the same peak for atmospheric radon). Potassium-40 is detectable directly via its 1461 keV gamma peak. The level over the sea and other large bodies of water tends to be about a tenth of the terrestrial background. Conversely, coastal areas (and areas by the side of fresh water) may have an additional contribution from dispersed sediment. Airborne sources The biggest source of natural background radiation is airborne radon, a radioactive gas that emanates from the ground. Radon and its isotopes, parent radionuclides, and decay products all contribute to an average inhaled dose of 1.26 mSv/a (millisievert per year). Radon is unevenly distributed and varies with weather, such that much higher doses apply to many areas of the world, where it represents a significant health hazard. Concentrations over 500 times the world average have been found inside buildings in Scandinavia, the United States, Iran, and the Czech Republic. Radon is a decay product of uranium, which is relatively common in the Earth's crust, but more concentrated in ore-bearing rocks scattered around the world. Radon seeps out of these ores into the atmosphere or into ground water or infiltrates into buildings. It can be inhaled into the lungs, along with its decay products, where they will reside for a period of time after exposure. Although radon is naturally occurring, exposure can be enhanced or diminished by human activity, notably house construction. A poorly sealed dwelling floor, or poor basement ventilation, in an otherwise well insulated house can result in the accumulation of radon within the dwelling, exposing its residents to high concentrations. The widespread construction of well insulated and sealed homes in the northern industrialized world has led to radon becoming the primary source of background radiation in some localities in northern North America and Europe. Basement sealing and suction ventilation reduce exposure. Some building materials, for example lightweight concrete with alum shale, phosphogypsum and Italian tuff, may emanate radon if they contain radium and are porous to gas. Radiation exposure from radon is indirect. Radon has a short half-life (4 days) and decays into other solid particulate radium-series radioactive nuclides. 
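The decay figures quoted above follow from the exponential decay law N/N₀ = 2^(−t/T½). A minimal check in Python, assuming an Earth age of about 4.5 billion years as stated in the text (the function name is ours):

```python
# Fraction of a primordial radionuclide remaining after time t: N/N0 = 2**(-t / half_life).
EARTH_AGE = 4.5e9  # years, assumption taken from the text above

def fraction_remaining(half_life_years: float, elapsed_years: float = EARTH_AGE) -> float:
    return 0.5 ** (elapsed_years / half_life_years)

print(f"U-238 (T1/2 = 4.5e9 y):  {fraction_remaining(4.5e9):.2f}")   # ~0.50, i.e. about half remains
print(f"K-40  (T1/2 = 1.25e9 y): {fraction_remaining(1.25e9):.3f}")  # ~0.08, i.e. about 8% remains
```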
These radioactive particles are inhaled and remain lodged in the lungs, causing continued exposure. Radon is thus assumed to be the second leading cause of lung cancer after smoking, and accounts for 15,000 to 22,000 cancer deaths per year in the US alone. However, the discussion about the opposite experimental results is still going on. About 100,000 Bq/m3 of radon was found in Stanley Watras's basement in 1984. He and his neighbours in Boyertown, Pennsylvania, United States may hold the record for the most radioactive dwellings in the world. International radiation protection organizations estimate that a committed dose may be calculated by multiplying the equilibrium equivalent concentration (EEC) of radon by a factor of 8 to 9 and the EEC of thoron by a factor of 40 . Most of the atmospheric background is caused by radon and its decay products. The gamma spectrum shows prominent peaks at 609, 1120, and 1764 keV, belonging to bismuth-214, a radon decay product. The atmospheric background varies greatly with wind direction and meteorological conditions. Radon also can be released from the ground in bursts and then form "radon clouds" capable of traveling tens of kilometers. Cosmic radiation The Earth and all living things on it are constantly bombarded by radiation from outer space. This radiation primarily consists of positively charged ions from protons to iron and larger nuclei derived from outside the Solar System. This radiation interacts with atoms in the atmosphere to create an air shower of secondary radiation, including X-rays, muons, protons, alpha particles, pions, electrons, and neutrons. The immediate dose from cosmic radiation is largely from muons, neutrons, and electrons, and this dose varies in different parts of the world based largely on the geomagnetic field and altitude. For example, the city of Denver in the United States (at 1650 meters elevation) receives a cosmic ray dose roughly twice that of a location at sea level. This radiation is much more intense in the upper troposphere, around 10 km altitude, and is thus of particular concern for airline crews and frequent passengers, who spend many hours per year in this environment. During their flights airline crews typically get an additional occupational dose between per year and 2.19 mSv/year, according to various studies. Similarly, cosmic rays cause higher background exposure in astronauts than in humans on the surface of Earth. Astronauts in low orbits, such as in the International Space Station or the Space Shuttle, are partially shielded by the magnetic field of the Earth, but also suffer from the Van Allen radiation belt which accumulates cosmic rays and results from the Earth's magnetic field. Outside low Earth orbit, as experienced by the Apollo astronauts who traveled to the Moon, this background radiation is much more intense, and represents a considerable obstacle to potential future long term human exploration of the Moon or Mars. Cosmic rays also cause elemental transmutation in the atmosphere, in which secondary radiation generated by the cosmic rays combines with atomic nuclei in the atmosphere to generate different nuclides. Many so-called cosmogenic nuclides can be produced, but probably the most notable is carbon-14, which is produced by interactions with nitrogen atoms. These cosmogenic nuclides eventually reach the Earth's surface and can be incorporated into living organisms. 
The production of these nuclides varies slightly with short-term variations in solar cosmic ray flux, but is considered practically constant over long scales of thousands to millions of years. The constant production, incorporation into organisms and relatively short half-life of carbon-14 are the principles used in radiocarbon dating of ancient biological materials, such as wooden artifacts or human remains. The cosmic radiation at sea level usually manifests as 511 keV gamma rays from annihilation of positrons created by nuclear reactions of high energy particles and gamma rays. At higher altitudes there is also the contribution of continuous bremsstrahlung spectrum. Food and water Two of the essential elements that make up the human body, namely potassium and carbon, have radioactive isotopes that add significantly to our background radiation dose. An average human contains about 17 milligrams of potassium-40 (40K) and about 24 nanograms (10−9 g) of carbon-14 (14C), (half-life 5,730 years). Excluding internal contamination by external radioactive material, these two are the largest components of internal radiation exposure from biologically functional components of the human body. About 4,000 nuclei of 40K decay per second, and a similar number of 14C. The energy of beta particles produced by 40K is about 10 times that from the beta particles from 14C decay. 14C is present in the human body at a level of about 3700 Bq (0.1 μCi) with a biological half-life of 40 days. This means there are about 3700 beta particles per second produced by the decay of 14C. However, a 14C atom is in the genetic information of about half the cells, while potassium is not a component of DNA. The decay of a 14C atom inside DNA in one person happens about 50 times per second, changing a carbon atom to one of nitrogen. The global average internal dose from radionuclides other than radon and its decay products is 0.29 mSv/a, of which 0.17 mSv/a comes from 40K, 0.12 mSv/a comes from the uranium and thorium series, and 12 μSv/a comes from 14C. Areas with high natural background radiation Some areas have greater dosage than the country-wide averages. In the world in general, exceptionally high natural background locales include Ramsar in Iran, Guarapari in Brazil, Karunagappalli in India, Arkaroola in Australia and Yangjiang in China. The highest level of purely natural radiation ever recorded on the Earth's surface was 90 μGy/h on a Brazilian black beach (areia preta in Portuguese) composed of monazite. This rate would convert to 0.8 Gy/a for year-round continuous exposure, but in fact the levels vary seasonally and are much lower in the nearest residences. The record measurement has not been duplicated and is omitted from UNSCEAR's latest reports. Nearby tourist beaches in Guarapari and Cumuruxatiba were later evaluated at 14 and 15 μGy/h. Note that the values quoted here are in Grays. To convert to Sieverts (Sv) a radiation weighting factor is required; these weighting factors vary from 1 (beta & gamma) to 20 (alpha particles). The highest background radiation in an inhabited area is found in Ramsar, primarily due to the use of local naturally radioactive limestone as a building material. The 1000 most exposed residents receive an average external effective radiation dose of per year, six times the ICRP recommended limit for exposure to the public from artificial sources. They additionally receive a substantial internal dose from radon. 
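The conversion used above for the Brazilian beach, from a measured dose rate in μGy/h to an annual absorbed dose in Gy, is straightforward arithmetic. A small sketch (the function name and the continuous-exposure assumption are ours; as noted above, converting to sieverts would additionally require a radiation weighting factor):

```python
HOURS_PER_YEAR = 24 * 365.25  # about 8766 hours

def annual_dose_gray(rate_micro_gray_per_hour: float) -> float:
    """Convert a dose rate in uGy/h to an annual absorbed dose in Gy, assuming continuous exposure."""
    return rate_micro_gray_per_hour * 1e-6 * HOURS_PER_YEAR

# Peak value measured on the monazite beach quoted above
print(f"90 uGy/h -> {annual_dose_gray(90):.2f} Gy per year")   # ~0.79, i.e. roughly 0.8 Gy/a
# Later measurements at the nearby tourist beaches
print(f"15 uGy/h -> {annual_dose_gray(15):.3f} Gy per year")   # ~0.13 Gy/a
```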
Record radiation levels were found in a house where the effective dose due to ambient radiation fields was per year, and the internal committed dose from radon was per year. This unique case is over 80 times higher than the world average natural human exposure to radiation. Epidemiological studies are underway to identify health effects associated with the high radiation levels in Ramsar. It is much too early to draw unambiguous statistically significant conclusions. While so far support for beneficial effects of chronic radiation (like longer lifespan) has been observed in few places only, a protective and adaptive effect is suggested by at least one study whose authors nonetheless caution that data from Ramsar are not yet sufficiently strong to relax existing regulatory dose limits. However, the recent statistical analyses discussed that there is no correlation between the risk of negative health effects and elevated level of natural background radiation. Photoelectric Background radiation doses in the immediate vicinity of particles of high atomic number materials, within the human body, have a small enhancement due to the photoelectric effect. Neutron background Most of the natural neutron background is a product of cosmic rays interacting with the atmosphere. The neutron energy peaks at around 1 MeV and rapidly drops above. At sea level, the production of neutrons is about 20 neutrons per second per kilogram of material interacting with the cosmic rays (or, about 100–300 neutrons per square meter per second). The flux is dependent on geomagnetic latitude, with a maximum near the magnetic poles. At solar minimums, due to lower solar magnetic field shielding, the flux is about twice as high vs the solar maximum. It also dramatically increases during solar flares. In the vicinity of larger heavier objects, e.g. buildings or ships, the neutron flux measures higher; this is known as "cosmic ray induced neutron signature", or "ship effect" as it was first detected with ships at sea. Artificial background radiation Atmospheric nuclear testing Frequent above-ground nuclear explosions between the 1940s and 1960s scattered a substantial amount of radioactive contamination. Some of this contamination is local, rendering the immediate surroundings highly radioactive, while some of it is carried longer distances as nuclear fallout; some of this material is dispersed worldwide. The increase in background radiation due to these tests peaked in 1963 at about 0.15 mSv per year worldwide, or about 7% of average background dose from all sources. The Limited Test Ban Treaty of 1963 prohibited above-ground tests, thus by the year 2000 the worldwide dose from these tests has decreased to only 0.005 mSv per year. This global fallout has caused up to 2.4 million deaths by 2020. Occupational exposure The International Commission on Radiological Protection recommends limiting occupational radiation exposure to 50 mSv (5 rem) per year, and 100 mSv (10 rem) in 5 years. However, background radiation for occupational doses includes radiation that is not measured by radiation dose instruments in potential occupational exposure conditions. This includes both offsite "natural background radiation" and any medical radiation doses. This value is not typically measured or known from surveys, such that variations in the total dose to individual workers is not known. 
This can be a significant confounding factor in assessing radiation exposure effects in a population of workers who may have significantly different natural background and medical radiation doses. This is most significant when the occupational doses are very low. At an IAEA conference in 2002, it was recommended that occupational doses below 1–2 mSv per year do not warrant regulatory scrutiny. Nuclear accidents Under normal circumstances, nuclear reactors release small amounts of radioactive gases, which cause small radiation exposures to the public. Events classified on the International Nuclear Event Scale as incidents typically do not release any additional radioactive substances into the environment. Large releases of radioactivity from nuclear reactors are extremely rare. To the present day, there were two major civilian accidents – the Chernobyl accident and the Fukushima I nuclear accidents – which caused substantial contamination. The Chernobyl accident was the only one to cause immediate deaths. Total doses from the Chernobyl accident ranged from 10 to 50 mSv over 20 years for the inhabitants of the affected areas, with most of the dose received in the first years after the disaster, and over 100 mSv for liquidators. There were 28 deaths from acute radiation syndrome. Total doses from the Fukushima I accidents were between 1 and 15 mSv for the inhabitants of the affected areas. Thyroid doses for children were below 50 mSv. 167 cleanup workers received doses above 100 mSv, with 6 of them receiving more than 250 mSv (the Japanese exposure limit for emergency response workers). The average dose from the Three Mile Island accident was 0.01 mSv. Non-civilian: In addition to the civilian accidents described above, several accidents at early nuclear weapons facilities – such as the Windscale fire, the contamination of the Techa River by the nuclear waste from the Mayak compound, and the Kyshtym disaster at the same compound – released substantial radioactivity into the environment. The Windscale fire resulted in thyroid doses of 5–20 mSv for adults and 10–60 mSv for children. The doses from the accidents at Mayak are unknown. Nuclear fuel cycle The Nuclear Regulatory Commission, the United States Environmental Protection Agency, and other U.S. and international agencies, require that licensees limit radiation exposure to individual members of the public to 1 mSv (100 mrem) per year. Energy sources Per UNECE life-cycle assessment, nearly all sources of energy result in some level of occupational and public exposure to radionuclides as result of their manufacturing or operations. The following table uses man·Sievert/GW-annum: Coal burning Coal plants emit radiation in the form of radioactive fly ash which is inhaled and ingested by neighbours, and incorporated into crops. A 1978 paper from Oak Ridge National Laboratory estimated that coal-fired power plants of that time may contribute a whole-body committed dose of 19 μSv/a to their immediate neighbours in a radius of 500 m. The United Nations Scientific Committee on the Effects of Atomic Radiation's 1988 report estimated the committed dose 1 km away to be 20 μSv/a for older plants or 1 μSv/a for newer plants with improved fly ash capture, but was unable to confirm these numbers by test. When coal is burned, uranium, thorium and all the uranium daughters accumulated by disintegration – radium, radon, polonium – are released. 
Radioactive materials previously buried underground in coal deposits are released as fly ash or, if fly ash is captured, may be incorporated into concrete manufactured with fly ash. Other sources of dose uptake Medical The global average human exposure to artificial radiation is 0.6 mSv/a, primarily from medical imaging. This medical component can range much higher, with an average of 3 mSv per year across the USA population. Other human contributors include smoking, air travel, radioactive building materials, historical nuclear weapons testing, nuclear power accidents and nuclear industry operation. A typical chest x-ray delivers 20 μSv (2 mrem) of effective dose. A dental x-ray delivers a dose of 5 to 10 μSv. A CT scan delivers an effective dose to the whole body ranging from 1 to 20 mSv (100 to 2000 mrem). The average American receives about 3 mSv of diagnostic medical dose per year; countries with the lowest levels of health care receive almost none. Radiation treatment for various diseases also accounts for some dose, both in individuals and in those around them. Consumer items Cigarettes contain polonium-210, originating from the decay products of radon, which stick to tobacco leaves. Heavy smoking results in a radiation dose of 160 mSv/year to localized spots at the bifurcations of segmental bronchi in the lungs from the decay of polonium-210. This dose is not readily comparable to the radiation protection limits, since the latter deal with whole body doses, while the dose from smoking is delivered to a very small portion of the body. Radiation metrology In a radiation metrology laboratory, background radiation refers to the measured value from any incidental sources that affect an instrument when a specific radiation source sample is being measured. This background contribution, which is established as a stable value by multiple measurements, usually before and after sample measurement, is subtracted from the rate measured when the sample is being measured. This is in accordance with the International Atomic Energy Agency definition of background as being "Dose or dose rate (or an observed measure related to the dose or dose rate) attributable to all sources other than the one(s) specified. The same issue occurs with radiation protection instruments, where a reading from an instrument may be affected by the background radiation. An example of this is a scintillation detector used for surface contamination monitoring. In an elevated gamma background the scintillator material will be affected by the background gamma, which will add to the reading obtained from any contamination which is being monitored. In extreme cases it will make the instrument unusable as the background swamps the lower level of radiation from the contamination. In such instruments the background can be continually monitored in the "Ready" state, and subtracted from any reading obtained when being used in "Measuring" mode. Regular Radiation measurement is carried out at multiple levels. Government agencies compile radiation readings as part of environmental monitoring mandates, often making the readings available to the public and sometimes in near-real-time. Collaborative groups and private individuals may also make real-time readings available to the public. Instruments used for radiation measurement include the Geiger–Müller tube and the Scintillation detector. 
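As a rough illustration of the background-subtraction procedure described above, the following sketch computes a net count rate from a sample measurement and a separate background measurement, with the usual Poisson counting-statistics uncertainty. The function name and the count values are illustrative assumptions, not data from any real instrument.

```python
import math

def net_count_rate(gross_counts: int, gross_time_s: float,
                   bkg_counts: int, bkg_time_s: float):
    """Background-subtracted count rate and its counting-statistics uncertainty.

    Counts are treated as Poisson distributed, so the variance of each rate is
    counts / time**2 and the two uncertainties add in quadrature.
    """
    gross_rate = gross_counts / gross_time_s
    bkg_rate = bkg_counts / bkg_time_s
    sigma = math.sqrt(gross_counts / gross_time_s**2 + bkg_counts / bkg_time_s**2)
    return gross_rate - bkg_rate, sigma

# Illustrative numbers only: a 600 s sample count and a 600 s background count.
net, sigma = net_count_rate(gross_counts=5400, gross_time_s=600,
                            bkg_counts=3600, bkg_time_s=600)
print(f"net rate = {net:.2f} +/- {sigma:.2f} counts per second")  # 3.00 +/- 0.16 cps
```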
The former is usually more compact and affordable and reacts to several radiation types, while the latter is more complex and can detect specific radiation energies and types. Readings indicate radiation levels from all sources including background, and real-time readings are in general unvalidated, but correlation between independent detectors increases confidence in measured levels. List of near-real-time government radiation measurement sites, employing multiple instrument types: Europe and Canada: European Radiological Data Exchange Platform (EURDEP) Simple map of Gamma Dose Rates USA: EPA Radnet near-real-time and laboratory data by state List of international near-real-time collaborative/private measurement sites, employing primarily Geiger-Muller detectors: GMC map: http://www.gmcmap.com/ (mix of old-data detector stations and some near-real-time ones) Netc: http://www.netc.com/ Radmon: http://www.radmon.org/ Radiation Network: http://radiationnetwork.com/ Radioactive@Home: http://radioactiveathome.org/map/ Safecast: http://safecast.org/tilemap (the green circles are real-time detectors) uRad Monitor: http://www.uradmonitor.com/ See also Background radiation equivalent time (BRET) Banana equivalent dose Environmental radioactivity Flight-time equivalent dose Noise (electronics) Low-background steel References External links Background radiation description from the Radiation Effects Research Foundation Environmental and Background Radiation FAQ from the Health Physics Society Radiation Dose Chart from the American Nuclear Society Radiation Dose Calculator from the United States Environmental Protection Agency Cosmic rays Ionizing radiation Radioactivity
Background radiation
[ "Physics", "Chemistry" ]
5,107
[ "Ionizing radiation", "Physical phenomena", "Cosmic rays", "Astrophysics", "Radiation", "Nuclear physics", "Radioactivity" ]
4,944
https://en.wikipedia.org/wiki/Naive%20set%20theory
Naive set theory is any of several theories of sets used in the discussion of the foundations of mathematics. Unlike axiomatic set theories, which are defined using formal logic, naive set theory is defined informally, in natural language. It describes the aspects of mathematical sets familiar in discrete mathematics (for example Venn diagrams and symbolic reasoning about their Boolean algebra), and suffices for the everyday use of set theory concepts in contemporary mathematics. Sets are of great importance in mathematics; in modern formal treatments, most mathematical objects (numbers, relations, functions, etc.) are defined in terms of sets. Naive set theory suffices for many purposes, while also serving as a stepping stone towards more formal treatments. Method A naive theory in the sense of "naive set theory" is a non-formalized theory, that is, a theory that uses natural language to describe sets and operations on sets. Such a theory treats sets as platonic absolute objects. The words and, or, if ... then, not, for some, for every are treated as in ordinary mathematics. As a matter of convenience, use of naive set theory and its formalism prevails even in higher mathematics – including in more formal settings of set theory itself. The first development of set theory was a naive set theory. It was created at the end of the 19th century by Georg Cantor as part of his study of infinite sets and developed by Gottlob Frege in his Grundgesetze der Arithmetik. Naive set theory may refer to several very distinct notions. It may refer to: an informal presentation of an axiomatic set theory, e.g. as in Naive Set Theory by Paul Halmos; early or later versions of Georg Cantor's theory and other informal systems; or decidedly inconsistent theories (whether axiomatic or not), such as a theory of Gottlob Frege that yielded Russell's paradox, and theories of Giuseppe Peano and Richard Dedekind. Paradoxes The assumption that any property may be used to form a set, without restriction, leads to paradoxes. One common example is Russell's paradox: there is no set consisting of "all sets that do not contain themselves". Thus consistent systems of naive set theory must include some limitations on the principles which can be used to form sets. Cantor's theory Some believe that Georg Cantor's set theory was not actually implicated in the set-theoretic paradoxes (see Frápolli 1991). One difficulty in determining this with certainty is that Cantor did not provide an axiomatization of his system. By 1899, Cantor was aware of some of the paradoxes following from unrestricted interpretation of his theory, for instance Cantor's paradox and the Burali-Forti paradox, and did not believe that they discredited his theory. Cantor's paradox can actually be derived from the above (false) assumption – that any property may be used to form a set – using "is a cardinal number" as the defining property. Frege explicitly axiomatized a theory in which a formalized version of naive set theory can be interpreted, and it is this formal theory which Bertrand Russell actually addressed when he presented his paradox, not necessarily a theory that Cantor – who, as mentioned, was aware of several paradoxes – presumably had in mind. Axiomatic theories Axiomatic set theory was developed in response to these early attempts to understand sets, with the goal of determining precisely what operations were allowed and when. Consistency A naive set theory is not necessarily inconsistent, if it correctly specifies the sets allowed to be considered. 
This can be done by the means of definitions, which are implicit axioms. It is possible to state all the axioms explicitly, as in the case of Halmos' Naive Set Theory, which is actually an informal presentation of the usual axiomatic Zermelo–Fraenkel set theory. It is "naive" in that the language and notations are those of ordinary informal mathematics, and in that it does not deal with consistency or completeness of the axiom system. Likewise, an axiomatic set theory is not necessarily consistent: not necessarily free of paradoxes. It follows from Gödel's incompleteness theorems that a sufficiently complicated first order logic system (which includes most common axiomatic set theories) cannot be proved consistent from within the theory itself – even if it actually is consistent. However, the common axiomatic systems are generally believed to be consistent; by their axioms they do exclude some paradoxes, like Russell's paradox. Based on Gödel's theorem, it is just not known – and never can be – if there are no paradoxes at all in these theories or in any first-order set theory. The term naive set theory is still today also used in some literature to refer to the set theories studied by Frege and Cantor, rather than to the informal counterparts of modern axiomatic set theory. Utility The choice between an axiomatic approach and other approaches is largely a matter of convenience. In everyday mathematics the best choice may be informal use of axiomatic set theory. References to particular axioms typically then occur only when demanded by tradition, e.g. the axiom of choice is often mentioned when used. Likewise, formal proofs occur only when warranted by exceptional circumstances. This informal usage of axiomatic set theory can have (depending on notation) precisely the appearance of naive set theory as outlined below. It is considerably easier to read and write (in the formulation of most statements, proofs, and lines of discussion) and is less error-prone than a strictly formal approach. Sets, membership and equality In naive set theory, a set is described as a well-defined collection of objects. These objects are called the elements or members of the set. Objects can be anything: numbers, people, other sets, etc. For instance, 4 is a member of the set of all even integers. Clearly, the set of even numbers is infinitely large; there is no requirement that a set be finite. The definition of sets goes back to Georg Cantor. He wrote in his 1915 article Beiträge zur Begründung der transfiniten Mengenlehre: Note on consistency It does not follow from this definition how sets can be formed, and what operations on sets again will produce a set. The term "well-defined" in "well-defined collection of objects" cannot, by itself, guarantee the consistency and unambiguity of what exactly constitutes and what does not constitute a set. Attempting to achieve this would be the realm of axiomatic set theory or of axiomatic class theory. The problem, in this context, with informally formulated set theories, not derived from (and implying) any particular axiomatic theory, is that there may be several widely differing formalized versions, that have both different sets and different rules for how new sets may be formed, that all conform to the original informal definition. For example, Cantor's verbatim definition allows for considerable freedom in what constitutes a set. 
On the other hand, it is unlikely that Cantor was particularly interested in sets containing cats and dogs, but rather only in sets containing purely mathematical objects. An example of such a class of sets could be the von Neumann universe. But even when fixing the class of sets under consideration, it is not always clear which rules for set formation are allowed without introducing paradoxes. For the purpose of fixing the discussion below, the term "well-defined" should instead be interpreted as an intention, with either implicit or explicit rules (axioms or definitions), to rule out inconsistencies. The purpose is to keep the often deep and difficult issues of consistency away from the, usually simpler, context at hand. An explicit ruling out of all conceivable inconsistencies (paradoxes) cannot be achieved for an axiomatic set theory anyway, due to Gödel's second incompleteness theorem, so this does not at all hamper the utility of naive set theory as compared to axiomatic set theory in the simple contexts considered below. It merely simplifies the discussion. Consistency is henceforth taken for granted unless explicitly mentioned. Membership If x is a member of a set A, then it is also said that x belongs to A, or that x is in A. This is denoted by x ∈ A. The symbol ∈ is a derivation from the lowercase Greek letter epsilon, "ε", introduced by Giuseppe Peano in 1889 and is the first letter of the word ἐστί (meaning "is"). The symbol ∉ is often used to write x ∉ A, meaning "x is not in A". Equality Two sets A and B are defined to be equal when they have precisely the same elements, that is, if every element of A is an element of B and every element of B is an element of A. (See axiom of extensionality.) Thus a set is completely determined by its elements; the description is immaterial. For example, the set with elements 2, 3, and 5 is equal to the set of all prime numbers less than 6. If the sets A and B are equal, this is denoted symbolically as A = B (as usual). Empty set The empty set, denoted as {} and sometimes ∅, is a set with no members at all. Because a set is determined completely by its elements, there can be only one empty set. (See axiom of empty set.) Although the empty set has no members, it can be a member of other sets. Thus {} ≠ {{}}, because the former has no members and the latter has one member. Specifying sets The simplest way to describe a set is to list its elements between curly braces (known as defining a set extensionally). Thus {a, b} denotes the set whose only elements are a and b. (See axiom of pairing.) Note the following points: The order of elements is immaterial; for example, {a, b} = {b, a}. Repetition (multiplicity) of elements is irrelevant; for example, {a, a, b} = {a, b}. (These are consequences of the definition of equality in the previous section.) This notation can be informally abused by saying something like {dogs} to indicate the set of all dogs, but this example would usually be read by mathematicians as "the set containing the single element dogs". An extreme (but correct) example of this notation is {}, which denotes the empty set. The notation {x : P(x)}, or sometimes {x | P(x)}, is used to denote the set containing all objects for which the condition P(x) holds (known as defining a set intensionally). For example, {x : x is a real number} denotes the set of real numbers, and {x : x has blonde hair} denotes the set of everything with blonde hair. This notation is called set-builder notation (or "set comprehension", particularly in the context of Functional programming). 
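Several of the points above have direct analogues in Python's built-in set type, which models finite sets extensionally: equality ignores order and repetition, membership corresponds to x ∈ A, and comprehensions play the role of a (restricted) set-builder notation. The specific elements below are illustrative only.

```python
A = {2, 3, 5}
B = {5, 3, 2, 2}            # order and repetition do not matter
print(A == B)               # True: sets are equal exactly when they have the same elements

# "the set with elements 2, 3 and 5" equals "the set of all primes less than 6"
primes_below_6 = {n for n in range(2, 6) if all(n % d for d in range(2, n))}
print(A == primes_below_6)  # True

print(3 in A, 4 in A)       # True False  (membership, the analogue of x ∈ A)

empty = set()               # the empty set; note that {} builds an empty dict in Python
print(empty == {0} - {0})   # True: there is only one empty set
```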
Some variants of set builder notation are: {x ∈ A : P(x)} denotes the set of all x that are already members of A such that the condition P(x) holds for x. For example, if Z is the set of integers, then {x ∈ Z : x is even} is the set of all even integers. (See axiom of specification.) {F(x) : x ∈ A} denotes the set of all objects obtained by putting members of the set A into the formula F(x). For example, {2x : x ∈ Z} is again the set of all even integers. (See axiom of replacement.) {F(x) : P(x)} is the most general form of set builder notation. For example, {x's owner : x is a dog} is the set of all dog owners. Subsets Given two sets A and B, A is a subset of B if every element of A is also an element of B. In particular, each set B is a subset of itself; a subset of B that is not equal to B is called a proper subset. If A is a subset of B, then one can also say that B is a superset of A, that A is contained in B, or that B contains A. In symbols, A ⊆ B means that A is a subset of B, and B ⊇ A means that B is a superset of A. Some authors use the symbols ⊂ and ⊃ for subsets, and others use these symbols only for proper subsets. For clarity, one can explicitly use the symbols ⊊ and ⊋ to indicate non-equality. As an illustration, let R be the set of real numbers, let Z be the set of integers, let O be the set of odd integers, and let P be the set of current or former U.S. Presidents. Then O is a subset of Z, Z is a subset of R, and (hence) O is a subset of R, where in all cases subset may even be read as proper subset. Not all sets are comparable in this way. For example, it is not the case either that R is a subset of P nor that P is a subset of R. It follows immediately from the definition of equality of sets above that, given two sets A and B, A = B if and only if A ⊆ B and B ⊆ A. In fact this is often given as the definition of equality. Usually when trying to prove that two sets are equal, one aims to show these two inclusions. The empty set is a subset of every set (the statement that all elements of the empty set are also members of any set A is vacuously true). The set of all subsets of a given set A is called the power set of A and is denoted by P(A) or 2^A; the "P" is sometimes written in a script font. If the set A has n elements, then P(A) will have 2^n elements. Universal sets and absolute complements In certain contexts, one may consider all sets under consideration as being subsets of some given universal set. For instance, when investigating properties of the real numbers R (and subsets of R), R may be taken as the universal set. A true universal set is not included in standard set theory (see Paradoxes below), but is included in some non-standard set theories. Given a universal set U and a subset A of U, the complement of A (in U) is defined as A^C = {x ∈ U : x ∉ A}. In other words, A^C ("A-complement"; sometimes simply A′, "A-prime") is the set of all members of U which are not members of A. Thus with R, Z and O defined as in the section on subsets, if Z is the universal set, then O^C is the set of even integers, while if R is the universal set, then O^C is the set of all real numbers that are either even integers or not integers at all. Unions, intersections, and relative complements Given two sets A and B, their union is the set consisting of all objects which are elements of A or of B or of both (see axiom of union). It is denoted by A ∪ B. The intersection of A and B is the set of all objects which are both in A and in B. It is denoted by A ∩ B. Finally, the relative complement of B relative to A, also known as the set theoretic difference of A and B, is the set of all objects that belong to A but not to B. It is written as A \ B or A − B. 
Symbolically, these are respectively ; ; . The set B doesn't have to be a subset of A for to make sense; this is the difference between the relative complement and the absolute complement () from the previous section. To illustrate these ideas, let A be the set of left-handed people, and let B be the set of people with blond hair. Then is the set of all left-handed blond-haired people, while is the set of all people who are left-handed or blond-haired or both. , on the other hand, is the set of all people that are left-handed but not blond-haired, while is the set of all people who have blond hair but aren't left-handed. Now let E be the set of all human beings, and let F be the set of all living things over 1000 years old. What is in this case? No living human being is over 1000 years old, so must be the empty set {}. For any set A, the power set is a Boolean algebra under the operations of union and intersection. Ordered pairs and Cartesian products Intuitively, an ordered pair is simply a collection of two objects such that one can be distinguished as the first element and the other as the second element, and having the fundamental property that, two ordered pairs are equal if and only if their first elements are equal and their second elements are equal. Formally, an ordered pair with first coordinate a, and second coordinate b, usually denoted by (a, b), can be defined as the set It follows that, two ordered pairs (a,b) and (c,d) are equal if and only if and . Alternatively, an ordered pair can be formally thought of as a set {a,b} with a total order. (The notation (a, b) is also used to denote an open interval on the real number line, but the context should make it clear which meaning is intended. Otherwise, the notation ]a, b[ may be used to denote the open interval whereas (a, b) is used for the ordered pair). If A and B are sets, then the Cartesian product (or simply product) is defined to be: That is, is the set of all ordered pairs whose first coordinate is an element of A and whose second coordinate is an element of B. This definition may be extended to a set of ordered triples, and more generally to sets of ordered n-tuples for any positive integer n. It is even possible to define infinite Cartesian products, but this requires a more recondite definition of the product. Cartesian products were first developed by René Descartes in the context of analytic geometry. If R denotes the set of all real numbers, then represents the Euclidean plane and represents three-dimensional Euclidean space. Some important sets There are some ubiquitous sets for which the notation is almost universal. Some of these are listed below. In the list, a, b, and c refer to natural numbers, and r and s are real numbers. Natural numbers are used for counting. A blackboard bold capital N () often represents this set. Integers appear as solutions for x in equations like x + a = b. A blackboard bold capital Z () often represents this set (from the German Zahlen, meaning numbers). Rational numbers appear as solutions to equations like a + bx = c. A blackboard bold capital Q () often represents this set (for quotient, because R is used for the set of real numbers). Algebraic numbers appear as solutions to polynomial equations (with integer coefficients) and may involve radicals (including ) and certain other irrational numbers. A Q with an overline () often represents this set. The overline denotes the operation of algebraic closure. 
Real numbers represent the "real line" and include all numbers that can be approximated by rationals. These numbers may be rational or algebraic but may also be transcendental numbers, which cannot appear as solutions to polynomial equations with rational coefficients. A blackboard bold capital R () often represents this set. Complex numbers are sums of a real and an imaginary number: . Here either or (or both) can be zero; thus, the set of real numbers and the set of strictly imaginary numbers are subsets of the set of complex numbers, which form an algebraic closure for the set of real numbers, meaning that every polynomial with coefficients in has at least one root in this set. A blackboard bold capital C () often represents this set. Note that since a number can be identified with a point in the plane, is basically "the same" as the Cartesian product ("the same" meaning that any point in one determines a unique point in the other and for the result of calculations, it doesn't matter which one is used for the calculation, as long as multiplication rule is appropriate for ). Paradoxes in early set theory The unrestricted formation principle of sets referred to as the axiom schema of unrestricted comprehension, is the source of several early appearing paradoxes: led, in the year 1897, to the Burali-Forti paradox, the first published antinomy. produced Cantor's paradox in 1897. yielded Cantor's second antinomy in the year 1899. Here the property is true for all , whatever may be, so would be a universal set, containing everything. , i.e. the set of all sets that do not contain themselves as elements, gave Russell's paradox in 1902. If the axiom schema of unrestricted comprehension is weakened to the axiom schema of specification or axiom schema of separation', then all the above paradoxes disappear. There is a corollary. With the axiom schema of separation as an axiom of the theory, it follows, as a theorem of the theory: Or, more spectacularly (Halmos' phrasing): There is no universe. Proof: Suppose that it exists and call it . Now apply the axiom schema of separation with and for use . This leads to Russell's paradox again. Hence cannot exist in this theory. Related to the above constructions is formation of the set , where the statement following the implication certainly is false. It follows, from the definition of , using the usual inference rules (and some afterthought when reading the proof in the linked article below) both that and holds, hence . This is Curry's paradox. It is (perhaps surprisingly) not the possibility of that is problematic. It is again the axiom schema of unrestricted comprehension allowing for . With the axiom schema of specification instead of unrestricted comprehension, the conclusion does not hold and hence is not a logical consequence. Nonetheless, the possibility of is often removed explicitly or, e.g. in ZFC, implicitly, by demanding the axiom of regularity to hold. One consequence of it is or, in other words, no set is an element of itself. The axiom schema of separation is simply too weak (while unrestricted comprehension is a very strong axiom—too strong for set theory) to develop set theory with its usual operations and constructions outlined above. The axiom of regularity is of a restrictive nature as well. Therefore, one is led to the formulation of other axioms to guarantee the existence of enough sets to form a set theory. Some of these have been described informally above and many others are possible. 
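The operations defined in the preceding sections (union, intersection, relative complement, subset, power set and Cartesian product) can likewise be tried out on small finite sets. This is only an illustrative sketch: the helper function power_set and the example sets are ours, not standard library names.

```python
from itertools import product, chain, combinations

A = {1, 2, 3}
B = {3, 4}

print(A | B)               # union:               {1, 2, 3, 4}
print(A & B)               # intersection:        {3}
print(A - B)               # relative complement: {1, 2}
print(A <= {1, 2, 3, 4})   # subset test:         True

# Cartesian product A x B as a set of ordered pairs
print(set(product(A, B)))

# Power set of A: all 2**len(A) = 8 subsets, built from combinations of every size
def power_set(s):
    s = list(s)
    return [set(c) for c in chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))]

print(power_set(A))
print(len(power_set(A)) == 2 ** len(A))   # True
```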
Not all conceivable axioms can be combined freely into consistent theories. For example, the axiom of choice of ZFC is incompatible with the conceivable "every set of reals is Lebesgue measurable". The former implies the latter is false. See also Algebra of sets Axiomatic set theory Internal set theory List of set identities and relations Set theory Set (mathematics) Partially ordered set Notes References Bourbaki, N., Elements of the History of Mathematics, John Meldrum (trans.), Springer-Verlag, Berlin, Germany, 1994. ; see also pdf version Devlin, K.J., The Joy of Sets: Fundamentals of Contemporary Set Theory, 2nd edition, Springer-Verlag, New York, NY, 1993. María J. Frápolli|Frápolli, María J., 1991, "Is Cantorian set theory an iterative conception of set?". Modern Logic, v. 1 n. 4, 1991, 302–318. Kelley, J.L., General Topology, Van Nostrand Reinhold, New York, NY, 1955. van Heijenoort, J., From Frege to Gödel, A Source Book in Mathematical Logic, 1879-1931'', Harvard University Press, Cambridge, MA, 1967. Reprinted with corrections, 1977. . External links Beginnings of set theory page at St. Andrews Earliest Known Uses of Some of the Words of Mathematics (S) Set theory Systems of set theory
Naive set theory
[ "Mathematics" ]
4,900
[ "Mathematical logic", "Set theory" ]
4,964
https://en.wikipedia.org/wiki/Bernoulli%20number
In mathematics, the Bernoulli numbers are a sequence of rational numbers which occur frequently in analysis. The Bernoulli numbers appear in (and can be defined by) the Taylor series expansions of the tangent and hyperbolic tangent functions, in Faulhaber's formula for the sum of m-th powers of the first n positive integers, in the Euler–Maclaurin formula, and in expressions for certain values of the Riemann zeta function. The values of the first 20 Bernoulli numbers are given in the adjacent table. Two conventions are used in the literature, denoted here by and ; they differ only for , where and . For every odd , . For every even , is negative if is divisible by 4 and positive otherwise. The Bernoulli numbers are special values of the Bernoulli polynomials , with and . The Bernoulli numbers were discovered around the same time by the Swiss mathematician Jacob Bernoulli, after whom they are named, and independently by Japanese mathematician Seki Takakazu. Seki's discovery was posthumously published in 1712 in his work Katsuyō Sanpō; Bernoulli's, also posthumously, in his Ars Conjectandi of 1713. Ada Lovelace's note G on the Analytical Engine from 1842 describes an algorithm for generating Bernoulli numbers with Babbage's machine; it is disputed whether Lovelace or Babbage developed the algorithm. As a result, the Bernoulli numbers have the distinction of being the subject of the first published complex computer program. Notation The superscript used in this article distinguishes the two sign conventions for Bernoulli numbers. Only the term is affected: with ( / ) is the sign convention prescribed by NIST and most modern textbooks. with ( / ) was used in the older literature, and (since 2022) by Donald Knuth following Peter Luschny's "Bernoulli Manifesto". In the formulas below, one can switch from one sign convention to the other with the relation , or for integer = 2 or greater, simply ignore it. Since for all odd , and many formulas only involve even-index Bernoulli numbers, a few authors write "" instead of . This article does not follow that notation. History Early history The Bernoulli numbers are rooted in the early history of the computation of sums of integer powers, which have been of interest to mathematicians since antiquity. Methods to calculate the sum of the first positive integers, the sum of the squares and of the cubes of the first positive integers were known, but there were no real 'formulas', only descriptions given entirely in words. Among the great mathematicians of antiquity to consider this problem were Pythagoras (c. 572–497 BCE, Greece), Archimedes (287–212 BCE, Italy), Aryabhata (b. 476, India), Abu Bakr al-Karaji (d. 1019, Persia) and Abu Ali al-Hasan ibn al-Hasan ibn al-Haytham (965–1039, Iraq). During the late sixteenth and early seventeenth centuries mathematicians made significant progress. In the West Thomas Harriot (1560–1621) of England, Johann Faulhaber (1580–1635) of Germany, Pierre de Fermat (1601–1665) and fellow French mathematician Blaise Pascal (1623–1662) all played important roles. Thomas Harriot seems to have been the first to derive and write formulas for sums of powers using symbolic notation, but even he calculated only up to the sum of the fourth powers. Johann Faulhaber gave formulas for sums of powers up to the 17th power in his 1631 Academia Algebrae, far higher than anyone before him, but he did not give a general formula. 
Blaise Pascal in 1654 proved Pascal's identity relating to the sums of the th powers of the first positive integers for . The Swiss mathematician Jakob Bernoulli (1654–1705) was the first to realize the existence of a single sequence of constants which provide a uniform formula for all sums of powers. The joy Bernoulli experienced when he hit upon the pattern needed to compute quickly and easily the coefficients of his formula for the sum of the th powers for any positive integer can be seen from his comment. He wrote: "With the help of this table, it took me less than half of a quarter of an hour to find that the tenth powers of the first 1000 numbers being added together will yield the sum 91,409,924,241,424,243,424,241,924,242,500." Bernoulli's result was published posthumously in Ars Conjectandi in 1713. Seki Takakazu independently discovered the Bernoulli numbers and his result was published a year earlier, also posthumously, in 1712. However, Seki did not present his method as a formula based on a sequence of constants. Bernoulli's formula for sums of powers is the most useful and generalizable formulation to date. The coefficients in Bernoulli's formula are now called Bernoulli numbers, following a suggestion of Abraham de Moivre. Bernoulli's formula is sometimes called Faulhaber's formula after Johann Faulhaber who found remarkable ways to calculate sum of powers but never stated Bernoulli's formula. According to Knuth a rigorous proof of Faulhaber's formula was first published by Carl Jacobi in 1834. Knuth's in-depth study of Faulhaber's formula concludes (the nonstandard notation on the LHS is explained further on): "Faulhaber never discovered the Bernoulli numbers; i.e., he never realized that a single sequence of constants ... would provide a uniform for all sums of powers. He never mentioned, for example, the fact that almost half of the coefficients turned out to be zero after he had converted his formulas for from polynomials in to polynomials in ." In the above Knuth meant ; instead using the formula avoids subtraction: Reconstruction of "Summae Potestatum" The Bernoulli numbers (n)/(n) were introduced by Jakob Bernoulli in the book Ars Conjectandi published posthumously in 1713 page 97. The main formula can be seen in the second half of the corresponding facsimile. The constant coefficients denoted , , and by Bernoulli are mapped to the notation which is now prevalent as , , , . The expression means – the small dots are used as grouping symbols. Using today's terminology these expressions are falling factorial powers . The factorial notation as a shortcut for was not introduced until 100 years later. The integral symbol on the left hand side goes back to Gottfried Wilhelm Leibniz in 1675 who used it as a long letter for "summa" (sum). The letter on the left hand side is not an index of summation but gives the upper limit of the range of summation which is to be understood as . Putting things together, for positive , today a mathematician is likely to write Bernoulli's formula as: This formula suggests setting when switching from the so-called 'archaic' enumeration which uses only the even indices 2, 4, 6... to the modern form (more on different conventions in the next paragraph). Most striking in this context is the fact that the falling factorial has for the value . Thus Bernoulli's formula can be written if , recapturing the value Bernoulli gave to the coefficient at that position. 
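Bernoulli's boast about summing the tenth powers of the first 1000 integers is easy to verify today. The sketch below is a hedged illustration, not Bernoulli's own procedure: the function names are ours, and the B₁ = +1/2 convention is used so that the formula sums k = 1 to n. It computes the Bernoulli numbers from the standard recursion, applies Bernoulli's formula, and compares the result with a brute-force sum.

```python
from fractions import Fraction
from math import comb

def bernoulli_plus(m):
    """First m+1 Bernoulli numbers in the B1 = +1/2 convention, via the usual recursion."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        # recursion in the B1 = -1/2 convention ...
        B[n] = -Fraction(1, n + 1) * sum(comb(n + 1, k) * B[k] for k in range(n))
    if m >= 1:
        B[1] = Fraction(1, 2)   # ... then flip B1 to switch conventions
    return B

def sum_of_powers(n, m):
    """Sum of k**m for k = 1..n, by Bernoulli's (Faulhaber's) formula."""
    B = bernoulli_plus(m)
    s = sum(comb(m + 1, j) * B[j] * Fraction(n) ** (m + 1 - j) for j in range(m + 1))
    return s / (m + 1)

direct = sum(k ** 10 for k in range(1, 1001))
via_formula = sum_of_powers(1000, 10)
assert via_formula == direct
print(direct)   # should reproduce the 32-digit total Bernoulli quotes above
```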
The formula for in the first half of the quotation by Bernoulli above contains an error at the last term; it should be instead of . Definitions Many characterizations of the Bernoulli numbers have been found in the last 300 years, and each could be used to introduce these numbers. Here only four of the most useful ones are mentioned: a recursive equation, an explicit formula, a generating function, an integral expression. For the proof of the equivalence of the four approaches. Recursive definition The Bernoulli numbers obey the sum formulas where and denotes the Kronecker delta. The first of these is sometimes written as the formula (for m > 1) where the power is expanded formally using the binomial theorem and is replaced by . Solving for gives the recursive formulas Explicit definition In 1893 Louis Saalschütz listed a total of 38 explicit formulas for the Bernoulli numbers, usually giving some reference in the older literature. One of them is (for ): Generating function The exponential generating functions are where the substitution is . The two generating functions only differ by t. If we let and then Then and for the m term in the series for is: If then we find that showing that the values of obey the recursive formula for the Bernoulli numbers . The (ordinary) generating function is an asymptotic series. It contains the trigamma function . Integral Expression From the generating functions above, one can obtain the following integral formula for the even Bernoulli numbers: Bernoulli numbers and the Riemann zeta function The Bernoulli numbers can be expressed in terms of the Riemann zeta function:           for  . Here the argument of the zeta function is 0 or negative. As is zero for negative even integers (the trivial zeroes), if n>1 is odd, is zero. By means of the zeta functional equation and the gamma reflection formula the following relation can be obtained: for  . Now the argument of the zeta function is positive. It then follows from () and Stirling's formula that for  . Efficient computation of Bernoulli numbers In some applications it is useful to be able to compute the Bernoulli numbers through modulo , where is a prime; for example to test whether Vandiver's conjecture holds for , or even just to determine whether is an irregular prime. It is not feasible to carry out such a computation using the above recursive formulae, since at least (a constant multiple of) arithmetic operations would be required. Fortunately, faster methods have been developed which require only operations (see big notation). David Harvey describes an algorithm for computing Bernoulli numbers by computing modulo for many small primes , and then reconstructing via the Chinese remainder theorem. Harvey writes that the asymptotic time complexity of this algorithm is and claims that this implementation is significantly faster than implementations based on other methods. Using this implementation Harvey computed for . Harvey's implementation has been included in SageMath since version 3.1. Prior to that, Bernd Kellner computed to full precision for in December 2002 and Oleksandr Pavlyk for with Mathematica in April 2008. {| class="wikitable defaultright col1left" ! Computer !! Year !! n !! Digits* |- | J. Bernoulli || ~1689 || 10 || 1 |- | L. Euler || 1748 || 30 || 8 |- | J. C. Adams || 1878 || 62 || 36 |- | D. E. Knuth, T. J. Buckholtz || 1967 || || |- | G. Fee, S. Plouffe || 1996 || || |- | G. Fee, S. Plouffe || 1996 || || |- | B. C. Kellner || 2002 || || |- | O. Pavlyk || 2008 || || |- | D. 
Harvey || 2008 || || |} * Digits is to be understood as the exponent of 10 when is written as a real number in normalized scientific notation. Applications of the Bernoulli numbers Asymptotic analysis Arguably the most important application of the Bernoulli numbers in mathematics is their use in the Euler–Maclaurin formula. Assuming that is a sufficiently often differentiable function the Euler–Maclaurin formula can be written as This formulation assumes the convention . Using the convention the formula becomes Here (i.e. the zeroth-order derivative of is just ). Moreover, let denote an antiderivative of . By the fundamental theorem of calculus, Thus the last formula can be further simplified to the following succinct form of the Euler–Maclaurin formula This form is for example the source for the important Euler–Maclaurin expansion of the zeta function Here denotes the rising factorial power. Bernoulli numbers are also frequently used in other kinds of asymptotic expansions. The following example is the classical Poincaré-type asymptotic expansion of the digamma function . Sum of powers Bernoulli numbers feature prominently in the closed form expression of the sum of the th powers of the first positive integers. For define This expression can always be rewritten as a polynomial in of degree . The coefficients of these polynomials are related to the Bernoulli numbers by Bernoulli's formula: where denotes the binomial coefficient. For example, taking to be 1 gives the triangular numbers . Taking to be 2 gives the square pyramidal numbers . Some authors use the alternate convention for Bernoulli numbers and state Bernoulli's formula in this way: Bernoulli's formula is sometimes called Faulhaber's formula after Johann Faulhaber who also found remarkable ways to calculate sums of powers. Faulhaber's formula was generalized by V. Guo and J. Zeng to a -analog. Taylor series The Bernoulli numbers appear in the Taylor series expansion of many trigonometric functions and hyperbolic functions. Laurent series The Bernoulli numbers appear in the following Laurent series: Digamma function: Use in topology The Kervaire–Milnor formula for the order of the cyclic group of diffeomorphism classes of exotic -spheres which bound parallelizable manifolds involves Bernoulli numbers. Let be the number of such exotic spheres for , then The Hirzebruch signature theorem for the genus of a smooth oriented closed manifold of dimension 4n also involves Bernoulli numbers. Connections with combinatorial numbers The connection of the Bernoulli number to various kinds of combinatorial numbers is based on the classical theory of finite differences and on the combinatorial interpretation of the Bernoulli numbers as an instance of a fundamental combinatorial principle, the inclusion–exclusion principle. Connection with Worpitzky numbers The definition to proceed with was developed by Julius Worpitzky in 1883. Besides elementary arithmetic only the factorial function and the power function is employed. The signless Worpitzky numbers are defined as They can also be expressed through the Stirling numbers of the second kind A Bernoulli number is then introduced as an inclusion–exclusion sum of Worpitzky numbers weighted by the harmonic sequence 1, , , ... This representation has . Consider the sequence , . From Worpitzky's numbers , applied to is identical to the Akiyama–Tanigawa transform applied to (see Connection with Stirling numbers of the first kind). 
This can be seen via the table: {| style="text-align:center" |+ Identity ofWorpitzky's representation and Akiyama–Tanigawa transform |- |1|| || || || || ||0||1|| || || || ||0||0||1|| || || ||0||0||0||1|| || ||0||0||0||0||1|| |- |1||−1|| || || || ||0||2||−2|| || || ||0||0||3||−3|| || ||0||0||0||4||−4|| || || || || || || |- |1||−3||2|| || || ||0||4||−10||6|| || ||0||0||9||−21||12|| || || || || || || || || || || || || |- |1||−7||12||−6|| || ||0||8||−38||54||−24|| || || || || || || || || || || || || || || || || || || |- |1||−15||50||−60||24|| || || || || || || || || || || || || || || || || || || || || || || || || |- |} The first row represents . Hence for the second fractional Euler numbers () / (): A second formula representing the Bernoulli numbers by the Worpitzky numbers is for The simplified second Worpitzky's representation of the second Bernoulli numbers is: () / () = × () / () which links the second Bernoulli numbers to the second fractional Euler numbers. The beginning is: The numerators of the first parentheses are (see Connection with Stirling numbers of the first kind). Connection with Stirling numbers of the second kind If one defines the Bernoulli polynomials as: where for are the Bernoulli numbers, and is a Stirling number of the second kind. One also has the following for Bernoulli polynomials, The coefficient of in is . Comparing the coefficient of in the two expressions of Bernoulli polynomials, one has: (resulting in ) which is an explicit formula for Bernoulli numbers and can be used to prove Von-Staudt Clausen theorem. Connection with Stirling numbers of the first kind The two main formulas relating the unsigned Stirling numbers of the first kind to the Bernoulli numbers (with ) are and the inversion of this sum (for , ) Here the number are the rational Akiyama–Tanigawa numbers, the first few of which are displayed in the following table. {| class="wikitable" style="text-align:center" |+ Akiyama–Tanigawa number ! !!0!!1!!2!!3!!4 |- ! 0 | 1 || || || || |- ! 1 | || || || || ... |- ! 2 | || || || ... || ... |- ! 3 | 0 || || ... || ... || ... |- ! 4 | − || ... || ... || ... || ... |} The Akiyama–Tanigawa numbers satisfy a simple recurrence relation which can be exploited to iteratively compute the Bernoulli numbers. This leads to the algorithm shown in the section 'algorithmic description' above. See /. An autosequence is a sequence which has its inverse binomial transform equal to the signed sequence. If the main diagonal is zeroes = , the autosequence is of the first kind. Example: , the Fibonacci numbers. If the main diagonal is the first upper diagonal multiplied by 2, it is of the second kind. Example: /, the second Bernoulli numbers (see ). The Akiyama–Tanigawa transform applied to = 1/ leads to (n) / (n + 1). Hence: {| class="wikitable" style="text-align:center" |+ Akiyama–Tanigawa transform for the second Euler numbers |- ! !! 0 !! 1 !! 2 !! 3 !! 4 |- ! 0 | 1 || || || || |- ! 1 | || || || || ... |- ! 2 | 0 || || || ... || ... |- ! 3 | − || − || ... || ... || ... |- ! 4 | 0 || ... || ... || ... || ... |} See and . () / () are the second (fractional) Euler numbers and an autosequence of the second kind. ( = ) × ( = ) = = . Also valuable for / (see Connection with Worpitzky numbers). 
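The Akiyama–Tanigawa recurrence mentioned above translates directly into a short exact-arithmetic routine. The following minimal sketch (naming is ours) starts each step from 1/(m + 1) and sweeps the transform from right to left; after the m-th sweep the leftmost entry is B_m in the B_1 = +1/2 convention.

```python
from fractions import Fraction

def bernoulli_akiyama_tanigawa(n):
    """Yield B_0, ..., B_n (convention B_1 = +1/2) via the Akiyama–Tanigawa
    transform of the sequence 1, 1/2, 1/3, ..."""
    A = [Fraction(0)] * (n + 1)
    for m in range(n + 1):
        A[m] = Fraction(1, m + 1)
        for j in range(m, 0, -1):      # sweep the transform from right to left
            A[j - 1] = j * (A[j - 1] - A[j])
        yield A[0]                      # A[0] now holds B_m

for m, b in enumerate(bernoulli_akiyama_tanigawa(8)):
    print(m, b)    # 1, 1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30
```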
Connection with Pascal's triangle There are formulas connecting Pascal's triangle to Bernoulli numbers where is the determinant of a n-by-n Hessenberg matrix part of Pascal's triangle whose elements are: Example: Connection with Eulerian numbers There are formulas connecting Eulerian numbers to Bernoulli numbers: Both formulae are valid for if is set to . If is set to − they are valid only for and respectively. A binary tree representation The Stirling polynomials are related to the Bernoulli numbers by . S. C. Woon described an algorithm to compute as a binary tree: Woon's recursive algorithm (for ) starts by assigning to the root node . Given a node of the tree, the left child of the node is and the right child . A node is written as in the initial part of the tree represented above with ± denoting the sign of . Given a node the factorial of is defined as Restricted to the nodes of a fixed tree-level the sum of is , thus For example: Integral representation and continuation The integral has as special values for . For example, and . Here, is the Riemann zeta function, and is the imaginary unit. Leonhard Euler (Opera Omnia, Ser. 1, Vol. 10, p. 351) considered these numbers and calculated Another similar integral representation is The relation to the Euler numbers and The Euler numbers are a sequence of integers intimately connected with the Bernoulli numbers. Comparing the asymptotic expansions of the Bernoulli and the Euler numbers shows that the Euler numbers are in magnitude approximately times larger than the Bernoulli numbers . In consequence: This asymptotic equation reveals that lies in the common root of both the Bernoulli and the Euler numbers. In fact could be computed from these rational approximations. Bernoulli numbers can be expressed through the Euler numbers and vice versa. Since, for odd , (with the exception ), it suffices to consider the case when is even. These conversion formulas express a connection between the Bernoulli and the Euler numbers. But more important, there is a deep arithmetic root common to both kinds of numbers, which can be expressed through a more fundamental sequence of numbers, also closely tied to . These numbers are defined for as The magic of these numbers lies in the fact that they turn out to be rational numbers. This was first proved by Leonhard Euler in a landmark paper De summis serierum reciprocarum (On the sums of series of reciprocals) and has fascinated mathematicians ever since. The first few of these numbers are ( / ) These are the coefficients in the expansion of . The Bernoulli numbers and Euler numbers can be understood as special views of these numbers, selected from the sequence and scaled for use in special applications. The expression [ even] has the value 1 if is even and 0 otherwise (Iverson bracket). These identities show that the quotient of Bernoulli and Euler numbers at the beginning of this section is just the special case of when is even. The are rational approximations to and two successive terms always enclose the true value of . Beginning with the sequence starts ( / ): These rational numbers also appear in the last paragraph of Euler's paper cited above. Consider the Akiyama–Tanigawa transform for the sequence () / (): {| class="wikitable" style="text-align:right;" ! 0 |1||||0||−||−||−||0 |- ! 1 | || 1|| || 0|| −|| −|| |- ! 2 | −|| || || || || || |- ! 3 | −1|| −|| −|| || || || |- ! 4 | || −|| −|| || || || |- ! 5 | 8|| || || || || || |- ! 
6 | −|| || || || || || |} From the second, the numerators of the first column are the denominators of Euler's formula. The first column is − × . An algorithmic view: the Seidel triangle The sequence Sn has another unexpected yet important property: The denominators of Sn+1 divide the factorial . In other words: the numbers , sometimes called Euler zigzag numbers, are integers. (). See (). Their exponential generating function is the sum of the secant and tangent functions. . Thus the above representations of the Bernoulli and Euler numbers can be rewritten in terms of this sequence as These identities make it easy to compute the Bernoulli and Euler numbers: the Euler numbers are given immediately by and the Bernoulli numbers are fractions obtained from by some easy shifting, avoiding rational arithmetic. What remains is to find a convenient way to compute the numbers . However, already in 1877 Philipp Ludwig von Seidel published an ingenious algorithm, which makes it simple to calculate . Start by putting 1 in row 0 and let denote the number of the row currently being filled If is odd, then put the number on the left end of the row in the first position of the row , and fill the row from the left to the right, with every entry being the sum of the number to the left and the number to the upper At the end of the row duplicate the last number. If is even, proceed similar in the other direction. Seidel's algorithm is in fact much more general (see the exposition of Dominique Dumont ) and was rediscovered several times thereafter. Similar to Seidel's approach D. E. Knuth and T. J. Buckholtz gave a recurrence equation for the numbers and recommended this method for computing and 'on electronic computers using only simple operations on integers'. V. I. Arnold rediscovered Seidel's algorithm and later Millar, Sloane and Young popularized Seidel's algorithm under the name boustrophedon transform. Triangular form: {| style="text-align:right" | || || || || || || 1|| || || || || || |- | || || || || || 1|| || 1|| || || || || |- | || || || || 2|| || 2|| || 1|| || || || |- | || || || 2|| || 4|| || 5|| || 5|| || || |- | || || 16|| || 16|| || 14|| || 10|| || 5|| || |- | || 16|| || 32|| || 46|| || 56|| || 61|| || 61|| |- |272|| ||272|| ||256|| ||224|| ||178|| ||122|| || 61 |} Only , with one 1, and , with two 1s, are in the OEIS. Distribution with a supplementary 1 and one 0 in the following rows: {| style="text-align:right" | || || || || || || 1|| || || || || || |- | || || || || || 0|| || 1|| || || || || |- | || || || || −1|| || −1|| || 0|| || || || |- | || || || 0|| || −1|| || −2|| || −2|| || || |- | || || 5|| || 5|| || 4|| || 2|| || 0|| || |- | || 0|| || 5|| || 10|| || 14|| || 16|| || 16|| |- |−61|| ||−61|| ||−56|| ||−46|| ||−32|| ||−16|| || 0 |} This is , a signed version of . The main andiagonal is . The main diagonal is . The central column is . Row sums: 1, 1, −2, −5, 16, 61.... See . See the array beginning with 1, 1, 0, −2, 0, 16, 0 below. The Akiyama–Tanigawa algorithm applied to () / () yields: {| style="text-align:right" | 1|| 1|| || 0|| −|| −|| − |- | 0|| 1|| || 1|| 0|| − |- | −1|| −1|| || 4|| |- | 0|| −5|| −|| 1 |- | 5|| 5|| − |- | 0|| 61 |- | −61 |} 1. The first column is . Its binomial transform leads to: {| style="text-align:right" |- | 1|| 1|| 0|| −2|| 0|| 16|| 0 |- |0||−1||−2||2||16||−16 |- |−1||−1||4||14||−32 |- |0||5||10||−46 |- |5||5||−56 |- |0||−61 |- |−61 |} The first row of this array is . The absolute values of the increasing antidiagonals are . The sum of the antidiagonals is 2. 
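Seidel's boustrophedon rule described earlier in this section is equally short in code. The illustrative routine below (names are ours) regenerates the rows of the triangular form shown above and the zigzag numbers 1, 1, 1, 2, 5, 16, 61, 272, ...; the odd-index entries are the tangent numbers, from which the even-index Bernoulli numbers follow by one standard rescaling, a form of the "easy shifting" mentioned above.

```python
from fractions import Fraction

def zigzag_numbers(n):
    """First n+1 Euler zigzag numbers 1, 1, 1, 2, 5, 16, 61, 272, ...
    via Seidel's boustrophedon rule: each new row is the running sum of the
    previous row read in the opposite direction."""
    result, row = [1], [1]
    for _ in range(n):
        prev = row[::-1]            # read the previous row backwards
        row = [0]
        for x in prev:
            row.append(row[-1] + x)
        result.append(row[-1])
    return result

zz = zigzag_numbers(9)
print(zz)   # [1, 1, 1, 2, 5, 16, 61, 272, 1385, 7936]

# Tangent numbers are the odd-index zigzag numbers; one standard rescaling
# recovers the even-index Bernoulli numbers, with no rational arithmetic
# needed until the final division:
for n in range(1, 5):
    T = zz[2 * n - 1]
    B = Fraction((-1) ** (n - 1) * 2 * n * T, 2 ** (2 * n) * (2 ** (2 * n) - 1))
    print(2 * n, B)   # 2 1/6, 4 -1/30, 6 1/42, 8 -1/30
```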
The second column is . Its binomial transform yields: {| style="text-align:right" |- | 1|| 2|| 2|| −4|| −16|| 32|| 272 |- |1||0||−6||−12||48||240 |- |−1||−6||−6||60||192 |- |−5||0||66||32 |- |5||66||66 |- |61||0 |- |−61 |} The first row of this array is . The absolute values of the second bisection are the double of the absolute values of the first bisection. Consider the Akiyama-Tanigawa algorithm applied to () / ( () = abs( ()) + 1 = . {| style="text-align:right" |1||2||2||||1|||| |- |−1||0||||2||||0 |- |−1||−3||−||3|| |- |2||−3||−||−13 |- |5||21||− |- |−16||45 |- |−61 |} The first column whose the absolute values are could be the numerator of a trigonometric function. is an autosequence of the first kind (the main diagonal is ). The corresponding array is: {| style="text-align:right" |0||−1||−1||2||5||−16||−61 |- |−1||0||3||3||−21||−45 |- |1||3||0||−24||−24 |- |2||−3||−24||0 |- |−5||−21||24 |- |−16||45 |- |−61 |} The first two upper diagonals are =  × . The sum of the antidiagonals is = 2 × (n + 1). − is an autosequence of the second kind, like for instance / . Hence the array: {| style="text-align:right" |- |2||1||−1||−2||5||16||−61 |- |−1||−2||−1||7||11||−77 |- |−1||1||8||4||−88 |- |2||7||−4||−92 |- |5||−11||−88 |- |−16||−77 |- |−61 |} The main diagonal, here , is the double of the first upper one, here . The sum of the antidiagonals is = 2 × (1).  −  = 2 × . A combinatorial view: alternating permutations Around 1880, three years after the publication of Seidel's algorithm, Désiré André proved a now classic result of combinatorial analysis. Looking at the first terms of the Taylor expansion of the trigonometric functions and André made a startling discovery. The coefficients are the Euler numbers of odd and even index, respectively. In consequence the ordinary expansion of has as coefficients the rational numbers . André then succeeded by means of a recurrence argument to show that the alternating permutations of odd size are enumerated by the Euler numbers of odd index (also called tangent numbers) and the alternating permutations of even size by the Euler numbers of even index (also called secant numbers). Related sequences The arithmetic mean of the first and the second Bernoulli numbers are the associate Bernoulli numbers: , , , , , / . Via the second row of its inverse Akiyama–Tanigawa transform , they lead to Balmer series / . The Akiyama–Tanigawa algorithm applied to () / () leads to the Bernoulli numbers / , / , or without , named intrinsic Bernoulli numbers . {| style="text-align:center; padding-left; padding-right: 2em;" |- |1|||||||| |- ||||||||| |- |0|||||||| |- |−||−||−||−||0 |- |0||−||−||−||− |} Hence another link between the intrinsic Bernoulli numbers and the Balmer series via (). () = 0, 2, 1, 6,... is a permutation of the non-negative numbers. The terms of the first row are f(n) = . 2, f(n) is an autosequence of the second kind. 3/2, f(n) leads by its inverse binomial transform to 3/2 −1/2 1/3 −1/4 1/5 ... = 1/2 + log 2. Consider g(n) = 1/2 – 1 / (n+2) = 0, 1/6, 1/4, 3/10, 1/3. The Akiyama-Tanagiwa transforms gives: {| style="text-align:center; padding-left; padding-right:2em;" |- |0||||||||||||... |- |−||−||−||−||−||−||... |- |0||−||−||−||−||−||... |- |||||||||0||−||... |} 0, g(n), is an autosequence of the second kind. 
Euler () / () without the second term () are the fractional intrinsic Euler numbers The corresponding Akiyama transform is: {| style="text-align:center; padding-left; padding-right: 2em;" |- |1||1|||||| |- |0|||||||| |- |−||−||0|||| |- |0||−||−||−||− |- |||||−||−||− |} The first line is . preceded by a zero is an autosequence of the first kind. It is linked to the Oresme numbers. The numerators of the second line are preceded by 0. The difference table is: {| style="text-align:center; padding-left; padding-right: 2em;" |- |0||1||1|||||||| |- |1||0||−||−||−||−||− |- |−1||−||0|||||||| |} Arithmetical properties of the Bernoulli numbers The Bernoulli numbers can be expressed in terms of the Riemann zeta function as for integers provided for the expression is understood as the limiting value and the convention is used. This intimately relates them to the values of the zeta function at negative integers. As such, they could be expected to have and do have deep arithmetical properties. For example, the Agoh–Giuga conjecture postulates that is a prime number if and only if is congruent to −1 modulo . Divisibility properties of the Bernoulli numbers are related to the ideal class groups of cyclotomic fields by a theorem of Kummer and its strengthening in the Herbrand-Ribet theorem, and to class numbers of real quadratic fields by Ankeny–Artin–Chowla. The Kummer theorems The Bernoulli numbers are related to Fermat's Last Theorem (FLT) by Kummer's theorem, which says: If the odd prime does not divide any of the numerators of the Bernoulli numbers then has no solutions in nonzero integers. Prime numbers with this property are called regular primes. Another classical result of Kummer are the following congruences. Let be an odd prime and an even number such that does not divide . Then for any non-negative integer A generalization of these congruences goes by the name of -adic continuity. -adic continuity If , and are positive integers such that and are not divisible by and , then Since , this can also be written where and , so that and are nonpositive and not congruent to 1 modulo . This tells us that the Riemann zeta function, with taken out of the Euler product formula, is continuous in the -adic numbers on odd negative integers congruent modulo to a particular , and so can be extended to a continuous function for all -adic integers the -adic zeta function. Ramanujan's congruences The following relations, due to Ramanujan, provide a method for calculating Bernoulli numbers that is more efficient than the one given by their original recursive definition: Von Staudt–Clausen theorem The von Staudt–Clausen theorem was given by Karl Georg Christian von Staudt and Thomas Clausen independently in 1840. The theorem states that for every , is an integer. The sum extends over all primes for which divides . A consequence of this is that the denominator of is given by the product of all primes for which divides . In particular, these denominators are square-free and divisible by 6. Why do the odd Bernoulli numbers vanish? The sum can be evaluated for negative values of the index . Doing so will show that it is an odd function for even values of , which implies that the sum has only terms of odd index. This and the formula for the Bernoulli sum imply that is 0 for even and ; and that the term for is cancelled by the subtraction. The von Staudt–Clausen theorem combined with Worpitzky's representation also gives a combinatorial answer to this question (valid for n > 1). 
From the von Staudt–Clausen theorem it is known that for odd the number is an integer. This seems trivial if one knows beforehand that the integer in question is zero. However, by applying Worpitzky's representation one gets as a sum of integers, which is not trivial. Here a combinatorial fact comes to the surface which explains the vanishing of the Bernoulli numbers at odd index. Let be the number of surjective maps from to , then . The last equation can only hold if This equation can be proved by induction. The first two examples of this equation are , . Thus the Bernoulli numbers vanish at odd index because some non-obvious combinatorial identities are embodied in the Bernoulli numbers. A restatement of the Riemann hypothesis The connection between the Bernoulli numbers and the Riemann zeta function is strong enough to provide an alternate formulation of the Riemann hypothesis (RH) which uses only the Bernoulli numbers. In fact Marcel Riesz proved that the RH is equivalent to the following assertion: For every there exists a constant (depending on ) such that as . Here is the Riesz function; denotes the rising factorial power in the notation of D. E. Knuth. The numbers occur frequently in the study of the zeta function and are significant because is a -integer for primes where does not divide . The are called divided Bernoulli numbers. Generalized Bernoulli numbers The generalized Bernoulli numbers are certain algebraic numbers, defined similarly to the Bernoulli numbers, that are related to special values of Dirichlet -functions in the same way that Bernoulli numbers are related to special values of the Riemann zeta function. Let be a Dirichlet character modulo . The generalized Bernoulli numbers attached to are defined by Apart from the exceptional , we have, for any Dirichlet character , that if . Generalizing the relation between Bernoulli numbers and values of the Riemann zeta function at non-positive integers, one has the following for all integers : where is the Dirichlet -function of . Eisenstein–Kronecker number Eisenstein–Kronecker numbers are an analogue of the generalized Bernoulli numbers for imaginary quadratic fields. They are related to critical L-values of Hecke characters. Appendix Assorted identities See also Bernoulli polynomial Bernoulli polynomials of the second kind Bernoulli umbra Bell number Euler number Genocchi number Kummer's congruences Poly-Bernoulli number Hurwitz zeta function Euler summation Stirling polynomial Sums of powers Notes References External links The first 498 Bernoulli Numbers from Project Gutenberg A multimodular algorithm for computing Bernoulli numbers The Bernoulli Number Page Bernoulli number programs at LiteratePrograms Number theory Topology Integer sequences Eponymous numbers in mathematics
Bernoulli number
[ "Physics", "Mathematics" ]
9,313
[ "Sequences and series", "Discrete mathematics", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Topology", "Space", "Geometry", "Spacetime", "Numbers", "Number theory" ]
5,346
https://en.wikipedia.org/wiki/Colloid
A colloid is a mixture in which one substance consisting of microscopically dispersed insoluble particles is suspended throughout another substance. Some definitions specify that the particles must be dispersed in a liquid, while others extend the definition to include substances like aerosols and gels. The term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word suspension is distinguished from colloids by larger particle size). A colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension). The dispersed phase particles have a diameter of approximately 1 nanometre to 1 micrometre. Some colloids are translucent because of the Tyndall effect, which is the scattering of light by particles in the colloid. Other colloids may be opaque or have a slight color. Colloidal suspensions are the subject of interface and colloid science. This field of study began in 1845 by Francesco Selmi, who called them pseudosolutions, and expanded by Michael Faraday and Thomas Graham, who coined the term colloid in 1861. Classification Colloids can be classified as follows: Homogeneous mixtures with a dispersed phase in this size range may be called colloidal aerosols, colloidal emulsions, colloidal suspensions, colloidal foams, colloidal dispersions, or hydrosols. Hydrocolloids Hydrocolloids describe certain chemicals (mostly polysaccharides and proteins) that are colloidally dispersible in water. Thus becoming effectively "soluble" they change the rheology of water by raising the viscosity and/or inducing gelation. They may provide other interactive effects with other chemicals, in some cases synergistic, in others antagonistic. Using these attributes hydrocolloids are very useful chemicals since in many areas of technology from foods through pharmaceuticals, personal care and industrial applications, they can provide stabilization, destabilization and separation, gelation, flow control, crystallization control and numerous other effects. Apart from uses of the soluble forms some of the hydrocolloids have additional useful functionality in a dry form if after solubilization they have the water removed - as in the formation of films for breath strips or sausage casings or indeed, wound dressing fibers, some being more compatible with skin than others. There are many different types of hydrocolloids each with differences in structure function and utility that generally are best suited to particular application areas in the control of rheology and the physical modification of form and texture. Some hydrocolloids like starch and casein are useful foods as well as rheology modifiers, others have limited nutritive value, usually providing a source of fiber. The term hydrocolloids also refers to a type of dressing designed to lock moisture in the skin and help the natural healing process of skin to reduce scarring, itching and soreness. Components Hydrocolloids contain some type of gel-forming agent, such as sodium carboxymethylcellulose (NaCMC) and gelatin. They are normally combined with some type of sealant, i.e. polyurethane to 'stick' to the skin. Compared with solution A colloid has a dispersed phase and a continuous phase, whereas in a solution, the solute and solvent constitute only one phase. A solute in a solution are individual molecules or ions, whereas colloidal particles are bigger. 
For example, in a solution of salt in water, the sodium chloride (NaCl) crystal dissolves, and the Na+ and Cl− ions are surrounded by water molecules.  However, in a colloid such as milk, the colloidal particles are globules of fat, rather than individual fat molecules. Because colloid is multiple phases, it has very different properties compared to fully mixed, continuous solution. Interaction between particles The following forces play an important role in the interaction of colloid particles: Excluded volume repulsion: This refers to the impossibility of any overlap between hard particles. Electrostatic interaction: Colloidal particles often carry an electrical charge and therefore attract or repel each other. The charge of both the continuous and the dispersed phase, as well as the mobility of the phases are factors affecting this interaction. van der Waals forces: This is due to interaction between two dipoles that are either permanent or induced. Even if the particles do not have a permanent dipole, fluctuations of the electron density gives rise to a temporary dipole in a particle. This temporary dipole induces a dipole in particles nearby. The temporary dipole and the induced dipoles are then attracted to each other. This is known as van der Waals force, and is always present (unless the refractive indexes of the dispersed and continuous phases are matched), is short-range, and is attractive. Steric forces: A repulsive steric force typically occurring due to adsorbed polymers coating a colloid's surface. Depletion forces: An attractive entropic force arising from an osmotic pressure imbalance when colloids are suspended in a medium of much smaller particles or polymers called depletants. Sedimentation velocity The Earth’s gravitational field acts upon colloidal particles. Therefore, if the colloidal particles are denser than the medium of suspension, they will sediment (fall to the bottom), or if they are less dense, they will cream (float to the top). Larger particles also have a greater tendency to sediment because they have smaller Brownian motion to counteract this movement. The sedimentation or creaming velocity is found by equating the Stokes drag force with the gravitational force: where is the Archimedean weight of the colloidal particles, is the viscosity of the suspension medium, is the radius of the colloidal particle, and is the sedimentation or creaming velocity. The mass of the colloidal particle is found using: where is the volume of the colloidal particle, calculated using the volume of a sphere , and is the difference in mass density between the colloidal particle and the suspension medium. By rearranging, the sedimentation or creaming velocity is: There is an upper size-limit for the diameter of colloidal particles because particles larger than 1 μm tend to sediment, and thus the substance would no longer be considered a colloidal suspension. The colloidal particles are said to be in sedimentation equilibrium if the rate of sedimentation is equal to the rate of movement from Brownian motion. Preparation There are two principal ways to prepare colloids: Dispersion of large particles or droplets to the colloidal dimensions by milling, spraying, or application of shear (e.g., shaking, mixing, or high shear mixing). Condensation of small dissolved molecules into larger colloidal particles by precipitation, condensation, or redox reactions. Such processes are used in the preparation of colloidal silica or gold. 
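As a rough numerical illustration of the sedimentation-velocity formula given earlier in this section, the sketch below evaluates v = 2 r² Δρ g / (9 η) for assumed values (a 1 μm-radius silica-like particle in water); the numbers are placeholders chosen only to show the order of magnitude, not data for any particular system.

```python
def sedimentation_velocity(radius, delta_rho, viscosity, g=9.81):
    """Stokes settling/creaming velocity  v = 2 r^2 (rho_particle - rho_medium) g / (9 eta).
    SI units; a positive result means settling, a negative one creaming."""
    return 2.0 * radius ** 2 * delta_rho * g / (9.0 * viscosity)

# Illustrative (assumed) values: 1 um radius silica-like particle in water at 20 degC
v = sedimentation_velocity(radius=1e-6, delta_rho=2200.0 - 998.0, viscosity=1.0e-3)
print(f"{v:.2e} m/s  (about {v * 3600 * 1e3:.1f} mm per hour)")
```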
Stabilization The stability of a colloidal system is defined by particles remaining suspended in solution and depends on the interaction forces between the particles. These include electrostatic interactions and van der Waals forces, because they both contribute to the overall free energy of the system. A colloid is stable if the interaction energy due to attractive forces between the colloidal particles is less than kT, where k is the Boltzmann constant and T is the absolute temperature. If this is the case, then the colloidal particles will repel or only weakly attract each other, and the substance will remain a suspension. If the interaction energy is greater than kT, the attractive forces will prevail, and the colloidal particles will begin to clump together. This process is referred to generally as aggregation, but is also referred to as flocculation, coagulation or precipitation. While these terms are often used interchangeably, for some definitions they have slightly different meanings. For example, coagulation can be used to describe irreversible, permanent aggregation where the forces holding the particles together are stronger than any external forces caused by stirring or mixing. Flocculation can be used to describe reversible aggregation involving weaker attractive forces, and the aggregate is usually called a floc. The term precipitation is normally reserved for describing a phase change from a colloid dispersion to a solid (precipitate) when it is subjected to a perturbation. Aggregation causes sedimentation or creaming, therefore the colloid is unstable: if either of these processes occur the colloid will no longer be a suspension. Electrostatic stabilization and steric stabilization are the two main mechanisms for stabilization against aggregation. Electrostatic stabilization is based on the mutual repulsion of like electrical charges. The charge of colloidal particles is structured in an electrical double layer, where the particles are charged on the surface, but then attract counterions (ions of opposite charge) which surround the particle. The electrostatic repulsion between suspended colloidal particles is most readily quantified in terms of the zeta potential. The combined effect of van der Waals attraction and electrostatic repulsion on aggregation is described quantitatively by the DLVO theory. A common method of stabilising a colloid (converting it from a precipitate) is peptization, a process where it is shaken with an electrolyte. Steric stabilization consists absorbing a layer of a polymer or surfactant on the particles to prevent them from getting close in the range of attractive forces. The polymer consists of chains that are attached to the particle surface, and the part of the chain that extends out is soluble in the suspension medium. This technique is used to stabilize colloidal particles in all types of solvents, including organic solvents. A combination of the two mechanisms is also possible (electrosteric stabilization). A method called gel network stabilization represents the principal way to produce colloids stable to both aggregation and sedimentation. The method consists in adding to the colloidal suspension a polymer able to form a gel network. Particle settling is hindered by the stiffness of the polymeric matrix where particles are trapped, and the long polymeric chains can provide a steric or electrosteric stabilization to dispersed particles. Examples of such substances are xanthan and guar gum. 
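The "less than kT" stability criterion described above can be illustrated with a back-of-the-envelope estimate. The sketch below uses the standard nonretarded van der Waals expression for two equal spheres at small surface separation, W ≈ −A r / (12 D); the Hamaker constant, particle radius and gap are assumed, illustrative values rather than properties of any particular colloid.

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K

def vdw_sphere_sphere(hamaker, radius, gap):
    """Nonretarded van der Waals attraction between two equal spheres of radius r
    whose surfaces are a small distance D apart:  W = -A * r / (12 * D)."""
    return -hamaker * radius / (12.0 * gap)

# Illustrative (assumed) values: Hamaker constant 1e-20 J, 100 nm particles, 5 nm gap
W = vdw_sphere_sphere(hamaker=1e-20, radius=100e-9, gap=5e-9)
ratio = abs(W) / (K_B * 298.0)
print(f"|W| = {abs(W):.2e} J, about {ratio:.1f} kT")
# Well above kT: without an electrostatic or steric barrier, aggregation is likely.
```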
Destabilization Destabilization can be accomplished by different methods: Removal of the electrostatic barrier that prevents aggregation of the particles. This can be accomplished by the addition of salt to a suspension to reduce the Debye screening length (the width of the electrical double layer) of the particles. It is also accomplished by changing the pH of a suspension to effectively neutralise the surface charge of the particles in suspension. This removes the repulsive forces that keep colloidal particles separate and allows for aggregation due to van der Waals forces. Minor changes in pH can manifest in significant alteration to the zeta potential. When the magnitude of the zeta potential lies below a certain threshold, typically around ± 5mV, rapid coagulation or aggregation tends to occur. Addition of a charged polymer flocculant. Polymer flocculants can bridge individual colloidal particles by attractive electrostatic interactions. For example, negatively charged colloidal silica or clay particles can be flocculated by the addition of a positively charged polymer. Addition of non-adsorbed polymers called depletants that cause aggregation due to entropic effects. Unstable colloidal suspensions of low-volume fraction form clustered liquid suspensions, wherein individual clusters of particles sediment if they are more dense than the suspension medium, or cream if they are less dense. However, colloidal suspensions of higher-volume fraction form colloidal gels with viscoelastic properties. Viscoelastic colloidal gels, such as bentonite and toothpaste, flow like liquids under shear, but maintain their shape when shear is removed. It is for this reason that toothpaste can be squeezed from a toothpaste tube, but stays on the toothbrush after it is applied. Monitoring stability The most widely used technique to monitor the dispersion state of a product, and to identify and quantify destabilization phenomena, is multiple light scattering coupled with vertical scanning. This method, known as turbidimetry, is based on measuring the fraction of light that, after being sent through the sample, it backscattered by the colloidal particles. The backscattering intensity is directly proportional to the average particle size and volume fraction of the dispersed phase. Therefore, local changes in concentration caused by sedimentation or creaming, and clumping together of particles caused by aggregation, are detected and monitored. These phenomena are associated with unstable colloids. Dynamic light scattering can be used to detect the size of a colloidal particle by measuring how fast they diffuse. This method involves directing laser light towards a colloid. The scattered light will form an interference pattern, and the fluctuation in light intensity in this pattern is caused by the Brownian motion of the particles. If the apparent size of the particles increases due to them clumping together via aggregation, it will result in slower Brownian motion. This technique can confirm that aggregation has occurred if the apparent particle size is determined to be beyond the typical size range for colloidal particles. Accelerating methods for shelf life prediction The kinetic process of destabilisation can be rather long (up to several months or years for some products). Thus, it is often required for the formulator to use further accelerating methods to reach reasonable development time for new product design. 
Thermal methods are the most commonly used and consist of increasing temperature to accelerate destabilisation (below critical temperatures of phase inversion or chemical degradation). Temperature affects not only viscosity, but also interfacial tension in the case of non-ionic surfactants or more generally interactions forces inside the system. Storing a dispersion at high temperatures enables to simulate real life conditions for a product (e.g. tube of sunscreen cream in a car in the summer), but also to accelerate destabilisation processes up to 200 times. Mechanical acceleration including vibration, centrifugation and agitation are sometimes used. They subject the product to different forces that pushes the particles / droplets against one another, hence helping in the film drainage. Some emulsions would never coalesce in normal gravity, while they do under artificial gravity. Segregation of different populations of particles have been highlighted when using centrifugation and vibration. As a model system for atoms In physics, colloids are an interesting model system for atoms. Micrometre-scale colloidal particles are large enough to be observed by optical techniques such as confocal microscopy. Many of the forces that govern the structure and behavior of matter, such as excluded volume interactions or electrostatic forces, govern the structure and behavior of colloidal suspensions. For example, the same techniques used to model ideal gases can be applied to model the behavior of a hard sphere colloidal suspension. Phase transitions in colloidal suspensions can be studied in real time using optical techniques, and are analogous to phase transitions in liquids. In many interesting cases optical fluidity is used to control colloid suspensions. Crystals A colloidal crystal is a highly ordered array of particles that can be formed over a very long range (typically on the order of a few millimeters to one centimeter) and that appear analogous to their atomic or molecular counterparts. One of the finest natural examples of this ordering phenomenon can be found in precious opal, in which brilliant regions of pure spectral color result from close-packed domains of amorphous colloidal spheres of silicon dioxide (or silica, SiO2). These spherical particles precipitate in highly siliceous pools in Australia and elsewhere, and form these highly ordered arrays after years of sedimentation and compression under hydrostatic and gravitational forces. The periodic arrays of submicrometre spherical particles provide similar arrays of interstitial voids, which act as a natural diffraction grating for visible light waves, particularly when the interstitial spacing is of the same order of magnitude as the incident lightwave. Thus, it has been known for many years that, due to repulsive Coulombic interactions, electrically charged macromolecules in an aqueous environment can exhibit long-range crystal-like correlations with interparticle separation distances, often being considerably greater than the individual particle diameter. In all of these cases in nature, the same brilliant iridescence (or play of colors) can be attributed to the diffraction and constructive interference of visible lightwaves that satisfy Bragg’s law, in a matter analogous to the scattering of X-rays in crystalline solids. 
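A rough estimate of the colour diffracted by such an ordered array follows from Bragg's law. In the sketch below, the plane spacing and effective refractive index are assumed, illustrative values, not measurements of any particular opal.

```python
import math

def bragg_wavelength(d_spacing, n_eff=1.0, theta_deg=90.0, order=1):
    """Bragg condition  m * lambda = 2 * d * n_eff * sin(theta)."""
    return 2.0 * d_spacing * n_eff * math.sin(math.radians(theta_deg)) / order

# Illustrative (assumed) values: 230 nm plane spacing, effective refractive index 1.35
lam = bragg_wavelength(230e-9, n_eff=1.35)
print(f"{lam * 1e9:.0f} nm")   # roughly 620 nm, an orange-red play of colour
```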
The large number of experiments exploring the physics and chemistry of these so-called "colloidal crystals" has emerged as a result of the relatively simple methods that have evolved in the last 20 years for preparing synthetic monodisperse colloids (both polymer and mineral) and, through various mechanisms, implementing and preserving their long-range order formation. In biology Colloidal phase separation is an important organising principle for compartmentalisation of both the cytoplasm and nucleus of cells into biomolecular condensates—similar in importance to compartmentalisation via lipid bilayer membranes, a type of liquid crystal. The term biomolecular condensate has been used to refer to clusters of macromolecules that arise via liquid-liquid or liquid-solid phase separation within cells. Macromolecular crowding strongly enhances colloidal phase separation and formation of biomolecular condensates. In the environment Colloidal particles can also serve as transport vector of diverse contaminants in the surface water (sea water, lakes, rivers, fresh water bodies) and in underground water circulating in fissured rocks (e.g. limestone, sandstone, granite). Radionuclides and heavy metals easily sorb onto colloids suspended in water. Various types of colloids are recognised: inorganic colloids (e.g. clay particles, silicates, iron oxy-hydroxides), organic colloids (humic and fulvic substances). When heavy metals or radionuclides form their own pure colloids, the term "eigencolloid" is used to designate pure phases, i.e., pure Tc(OH)4, U(OH)4, or Am(OH)3. Colloids have been suspected for the long-range transport of plutonium on the Nevada Nuclear Test Site. They have been the subject of detailed studies for many years. However, the mobility of inorganic colloids is very low in compacted bentonites and in deep clay formations because of the process of ultrafiltration occurring in dense clay membrane. The question is less clear for small organic colloids often mixed in porewater with truly dissolved organic molecules. In soil science, the colloidal fraction in soils consists of tiny clay and humus particles that are less than 1μm in diameter and carry either positive and/or negative electrostatic charges that vary depending on the chemical conditions of the soil sample, i.e. soil pH. Intravenous therapy Colloid solutions used in intravenous therapy belong to a major group of volume expanders, and can be used for intravenous fluid replacement. Colloids preserve a high colloid osmotic pressure in the blood, and therefore, they should theoretically preferentially increase the intravascular volume, whereas other types of volume expanders called crystalloids also increase the interstitial volume and intracellular volume. However, there is still controversy to the actual difference in efficacy by this difference, and much of the research related to this use of colloids is based on fraudulent research by Joachim Boldt. Another difference is that crystalloids generally are much cheaper than colloids. References Chemical mixtures Colloidal chemistry Condensed matter physics Soft matter Dosage forms
Colloid
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,133
[ "Colloidal chemistry", "Soft matter", "Phases of matter", "Materials science", "Colloids", "Surface science", "Chemical mixtures", "Condensed matter physics", "nan", "Matter" ]
5,371
https://en.wikipedia.org/wiki/Concrete
Concrete is a composite material composed of aggregate bonded together with a fluid cement that cures to a solid over time. Concrete is the second-most-used substance in the world after water, and is the most widely used building material. Its usage worldwide, ton for ton, is twice that of steel, wood, plastics, and aluminium combined. When aggregate is mixed with dry Portland cement and water, the mixture forms a fluid slurry that is easily poured and molded into shape. The cement reacts with the water through a process called concrete hydration that hardens it over several hours to form a hard matrix that binds the materials together into a durable stone-like material that has many uses. This time allows concrete to not only be cast in forms, but also to have a variety of tooled processes performed. The hydration process is exothermic, which means ambient temperature plays a significant role in how long it takes concrete to set. Often, additives (such as pozzolans or superplasticizers) are included in the mixture to improve the physical properties of the wet mix, delay or accelerate the curing time, or otherwise change the finished material. Most concrete is poured with reinforcing materials (such as steel rebar) embedded to provide tensile strength, yielding reinforced concrete. In the past, lime-based cement binders, such as lime putty, were often used but sometimes with other hydraulic cements, (water resistant) such as a calcium aluminate cement or with Portland cement to form Portland cement concrete (named for its visual resemblance to Portland stone). Many other non-cementitious types of concrete exist with other methods of binding aggregate together, including asphalt concrete with a bitumen binder, which is frequently used for road surfaces, and polymer concretes that use polymers as a binder. Concrete is distinct from mortar. Whereas concrete is itself a building material, mortar is a bonding agent that typically holds bricks, tiles and other masonry units together. Grout is another material associated with concrete and cement. It does not contain coarse aggregates and is usually either pourable or thixotropic, and is used to fill gaps between masonry components or coarse aggregate which has already been put in place. Some methods of concrete manufacture and repair involve pumping grout into the gaps to make up a solid mass in situ. Etymology The word concrete comes from the Latin word "" (meaning compact or condensed), the perfect passive participle of "", from "-" (together) and "" (to grow). History Ancient times Concrete floors were found in the royal palace of Tiryns, Greece, which dates roughly to 1400 to 1200 BC. Lime mortars were used in Greece, such as in Crete and Cyprus, in 800 BC. The Assyrian Jerwan Aqueduct (688 BC) made use of waterproof concrete. Concrete was used for construction in many ancient structures. Mayan concrete at the ruins of Uxmal (AD 850–925) is referenced in Incidents of Travel in the Yucatán by John L. Stephens. "The roof is flat and had been covered with cement". "The floors were cement, in some places hard, but, by long exposure, broken, and now crumbling under the feet." "But throughout the wall was solid, and consisting of large stones imbedded in mortar, almost as hard as rock." Small-scale production of concrete-like materials was pioneered by the Nabatean traders who occupied and controlled a series of oases and developed a small empire in the regions of southern Syria and northern Jordan from the 4th century BC. 
They discovered the advantages of hydraulic lime, with some self-cementing properties, by 700 BC. They built kilns to supply mortar for the construction of rubble masonry houses, concrete floors, and underground waterproof cisterns. They kept the cisterns secret as these enabled the Nabataeans to thrive in the desert. Some of these structures survive to this day. In the Ancient Egyptian and later Roman eras, builders discovered that adding volcanic ash to lime allowed the mix to set underwater. They discovered the pozzolanic reaction. Classical era The Romans used concrete extensively from 300 BC to AD 476. During the Roman Empire, Roman concrete (or opus caementicium) was made from quicklime, pozzolana and an aggregate of pumice. Its widespread use in many Roman structures, a key event in the history of architecture termed the Roman architectural revolution, freed Roman construction from the restrictions of stone and brick materials. It enabled revolutionary new designs in terms of both structural complexity and dimension. The Colosseum in Rome was built largely of concrete, and the Pantheon has the world's largest unreinforced concrete dome. Concrete, as the Romans knew it, was a new and revolutionary material. Laid in the shape of arches, vaults and domes, it quickly hardened into a rigid mass, free from many of the internal thrusts and strains that troubled the builders of similar structures in stone or brick. Modern tests show that opus caementicium had as much compressive strength as modern Portland-cement concrete (c. ). However, due to the absence of reinforcement, its tensile strength was far lower than modern reinforced concrete, and its mode of application also differed: Modern structural concrete differs from Roman concrete in two important details. First, its mix consistency is fluid and homogeneous, allowing it to be poured into forms rather than requiring hand-layering together with the placement of aggregate, which, in Roman practice, often consisted of rubble. Second, integral reinforcing steel gives modern concrete assemblies great strength in tension, whereas Roman concrete could depend only upon the strength of the concrete bonding to resist tension. The long-term durability of Roman concrete structures has been found to be due to its use of pyroclastic (volcanic) rock and ash, whereby the crystallization of strätlingite (a specific and complex calcium aluminosilicate hydrate) and the coalescence of this and similar calcium–aluminium-silicate–hydrate cementing binders helped give the concrete a greater degree of fracture resistance even in seismically active environments. Roman concrete is significantly more resistant to erosion by seawater than modern concrete; it used pyroclastic materials which react with seawater to form Al-tobermorite crystals over time. The use of hot mixing and the presence of lime clasts are thought to give the concrete a self-healing ability, where cracks that form become filled with calcite that prevents the crack from spreading. The widespread use of concrete in many Roman structures ensured that many survive to the present day. The Baths of Caracalla in Rome are just one example. Many Roman aqueducts and bridges, such as the magnificent Pont du Gard in southern France, have masonry cladding on a concrete core, as does the dome of the Pantheon. Middle Ages After the Roman Empire, the use of burned lime and pozzolana was greatly reduced. 
Low kiln temperatures in the burning of lime, lack of pozzolana, and poor mixing all contributed to a decline in the quality of concrete and mortar. From the 11th century, the increased use of stone in church and castle construction led to an increased demand for mortar. Quality began to improve in the 12th century through better grinding and sieving. Medieval lime mortars and concretes were non-hydraulic and were used for binding masonry, "hearting" (binding rubble masonry cores) and foundations. Bartholomaeus Anglicus in his De proprietatibus rerum (1240) describes the making of mortar. In an English translation from 1397, it reads "lyme ... is a stone brent; by medlynge thereof with sonde and water sement is made". From the 14th century, the quality of mortar was again excellent, but only from the 17th century was pozzolana commonly added. The Canal du Midi was built using concrete in 1670. Industrial era Perhaps the greatest step forward in the modern use of concrete was Smeaton's Tower, built by British engineer John Smeaton in Devon, England, between 1756 and 1759. This third Eddystone Lighthouse pioneered the use of hydraulic lime in concrete, using pebbles and powdered brick as aggregate. A method for producing Portland cement was developed in England and patented by Joseph Aspdin in 1824. Aspdin chose the name for its similarity to Portland stone, which was quarried on the Isle of Portland in Dorset, England. His son William continued developments into the 1840s, earning him recognition for the development of "modern" Portland cement. Reinforced concrete was invented in 1849 by Joseph Monier. and the first reinforced concrete house was built by François Coignet in 1853. The first concrete reinforced bridge was designed and built by Joseph Monier in 1875. Prestressed concrete and post-tensioned concrete were pioneered by Eugène Freyssinet, a French structural and civil engineer. Concrete components or structures are compressed by tendon cables during, or after, their fabrication in order to strengthen them against tensile forces developing when put in service. Freyssinet patented the technique on 2 October 1928. Composition Concrete is an artificial composite material, comprising a matrix of cementitious binder (typically Portland cement paste or asphalt) and a dispersed phase or "filler" of aggregate (typically a rocky material, loose stones, and sand). The binder "glues" the filler together to form a synthetic conglomerate. Many types of concrete are available, determined by the formulations of binders and the types of aggregate used to suit the application of the engineered material. These variables determine strength and density, as well as chemical and thermal resistance of the finished product. Construction aggregates consist of large chunks of material in a concrete mix, generally a coarse gravel or crushed rocks such as limestone, or granite, along with finer materials such as sand. Cement paste, most commonly made of Portland cement, is the most prevalent kind of concrete binder. For cementitious binders, water is mixed with the dry cement powder and aggregate, which produces a semi-liquid slurry (paste) that can be shaped, typically by pouring it into a form. The concrete solidifies and hardens through a chemical process called hydration. The water reacts with the cement, which bonds the other components together, creating a robust, stone-like material. 
Other cementitious materials, such as fly ash and slag cement, are sometimes added—either pre-blended with the cement or directly as a concrete component—and become a part of the binder for the aggregate. Fly ash and slag can enhance some properties of concrete such as fresh properties and durability. Alternatively, other materials can also be used as a concrete binder: the most prevalent substitute is asphalt, which is used as the binder in asphalt concrete. Admixtures are added to modify the cure rate or properties of the material. Mineral admixtures use recycled materials as concrete ingredients. Conspicuous materials include fly ash, a by-product of coal-fired power plants; ground granulated blast furnace slag, a by-product of steelmaking; and silica fume, a by-product of industrial electric arc furnaces. Structures employing Portland cement concrete usually include steel reinforcement because this type of concrete can be formulated with high compressive strength, but always has lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension, typically steel rebar. The mix design depends on the type of structure being built, how the concrete is mixed and delivered, and how it is placed to form the structure. Cement Portland cement is the most common type of cement in general usage. It is a basic ingredient of concrete, mortar, and many plasters. It consists of a mixture of calcium silicates (alite, belite), aluminates and ferrites—compounds, which will react with water. Portland cement and similar materials are made by heating limestone (a source of calcium) with clay or shale (a source of silicon, aluminium and iron) and grinding this product (called clinker) with a source of sulfate (most commonly gypsum). Cement kilns are extremely large, complex, and inherently dusty industrial installations. Of the various ingredients used to produce a given quantity of concrete, the cement is the most energetically expensive. Even complex and efficient kilns require 3.3 to 3.6 gigajoules of energy to produce a ton of clinker and then grind it into cement. Many kilns can be fueled with difficult-to-dispose-of wastes, the most common being used tires. The extremely high temperatures and long periods of time at those temperatures allows cement kilns to efficiently and completely burn even difficult-to-use fuels. The five major compounds of calcium silicates and aluminates comprising Portland cement range from 5 to 50% in weight. Curing Combining water with a cementitious material forms a cement paste by the process of hydration. The cement paste glues the aggregate together, fills voids within it, and makes it flow more freely. As stated by Abrams' law, a lower water-to-cement ratio yields a stronger, more durable concrete, whereas more water gives a freer-flowing concrete with a higher slump. The hydration of cement involves many concurrent reactions. The process involves polymerization, the interlinking of the silicates and aluminate components as well as their bonding to sand and gravel particles to form a solid mass. One illustrative conversion is the hydration of tricalcium silicate: Cement chemist notation: C3S + H → C-S-H + CH + heat Standard notation: Ca3SiO5 + H2O → CaO・SiO2・H2O (gel) + Ca(OH)2 + heat Balanced: 2 Ca3SiO5 + 7 H2O → 3 CaO・2 SiO2・4 H2O (gel) + 3 Ca(OH)2 + heat (approximately as the exact ratios of CaO, SiO2 and H2O in C-S-H can vary) The hydration (curing) of cement is irreversible. 
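Abrams' law is usually written in the empirical form S = A / B^(w/c). The sketch below evaluates it for a few water-to-cement ratios; the constants A and B are illustrative placeholders (real values are fitted for a specific cement, age and curing regime), so the output only shows the qualitative trend that strength falls as the water-to-cement ratio rises.

```python
def abrams_strength(w_c, a=96.5, b=8.2):
    """Abrams' law: compressive strength S = A / B**(w/c).
    a (MPa) and b are illustrative fitting constants, not standardized values;
    they must be calibrated for a particular cement and curing regime."""
    return a / (b ** w_c)

for wc in (0.40, 0.50, 0.60):
    print(f"w/c = {wc:.2f}  ->  roughly {abrams_strength(wc):.0f} MPa at 28 days")
```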
Aggregates Fine and coarse aggregates make up the bulk of a concrete mixture. Sand, natural gravel, and crushed stone are used mainly for this purpose. Recycled aggregates (from construction, demolition, and excavation waste) are increasingly used as partial replacements for natural aggregates, while a number of manufactured aggregates, including air-cooled blast furnace slag and bottom ash, are also permitted. The size distribution of the aggregate determines how much binder is required. Aggregate consisting of a single, uniform particle size leaves the largest gaps, whereas adding aggregate with smaller particles tends to fill these gaps. The binder must fill the gaps between the aggregate as well as paste the surfaces of the aggregate together, and is typically the most expensive component. Thus, variation in sizes of the aggregate reduces the cost of concrete. The aggregate is nearly always stronger than the binder, so its use does not negatively affect the strength of the concrete. Redistribution of aggregates after compaction often creates non-homogeneity due to the influence of vibration. This can lead to strength gradients. Decorative stones such as quartzite, small river stones or crushed glass are sometimes added to the surface of concrete for a decorative "exposed aggregate" finish, popular among landscape designers. Admixtures Admixtures are materials in the form of powder or fluids that are added to the concrete to give it certain characteristics not obtainable with plain concrete mixes. Admixtures are defined as additions "made as the concrete mix is being prepared". The most common admixtures are retarders and accelerators. In normal use, admixture dosages are less than 5% by mass of cement and are added to the concrete at the time of batching/mixing. (See below.) The common types of admixtures are as follows: Accelerators speed up the hydration (hardening) of the concrete. Typical materials used are calcium chloride, calcium nitrate and sodium nitrate. However, use of chlorides may cause corrosion in steel reinforcing and is prohibited in some countries, so that nitrates may be favored, even though they are less effective than the chloride salt. Accelerating admixtures are especially useful for modifying the properties of concrete in cold weather. Air entraining agents add and entrain tiny air bubbles in the concrete, which reduces damage during freeze-thaw cycles, increasing durability. However, entrained air entails a tradeoff with strength, as each 1% of air may decrease compressive strength by 5%. If too much air becomes trapped in the concrete as a result of the mixing process, defoamers can be used to encourage the air bubbles to agglomerate, rise to the surface of the wet concrete and then disperse. Bonding agents are used to create a bond between old and new concrete (typically a type of polymer) with wide temperature tolerance and corrosion resistance. Corrosion inhibitors are used to minimize the corrosion of steel and steel bars in concrete. Crystalline admixtures are typically added during batching of the concrete to lower permeability. The reaction takes place when exposed to water and un-hydrated cement particles to form insoluble needle-shaped crystals, which fill capillary pores and micro-cracks in the concrete to block pathways for water and waterborne contaminants. Concrete with a crystalline admixture can be expected to self-seal, as constant exposure to water will continuously initiate crystallization to ensure permanent waterproof protection. 
Pigments can be used to change the color of concrete, for aesthetics. Plasticizers increase the workability of plastic, or "fresh", concrete, allowing it to be placed more easily, with less consolidating effort. A typical plasticizer is lignosulfonate. Plasticizers can be used to reduce the water content of a concrete while maintaining workability and are sometimes called water-reducers due to this use. Such treatment improves its strength and durability characteristics. Superplasticizers (also called high-range water-reducers) are a class of plasticizers that have fewer deleterious effects and can be used to increase workability more than is practical with traditional plasticizers. Superplasticizers are used to increase compressive strength; they increase the workability of the concrete and lower the required water content by 15–30%. Pumping aids improve pumpability, thicken the paste and reduce separation and bleeding. Retarders slow the hydration of concrete and are used in large or difficult pours where partial setting is undesirable before completion of the pour. Typical retarders include sugar, sodium gluconate, citric acid, and tartaric acid. Mineral admixtures and blended cements These are inorganic materials that have pozzolanic or latent hydraulic properties. Such very fine-grained materials are added to the concrete mix to improve the properties of concrete (mineral admixtures), or as a replacement for Portland cement (blended cements). Products which incorporate limestone, fly ash, blast furnace slag, and other useful materials with pozzolanic properties into the mix are being tested and used. These developments are ever growing in relevance to minimize the impacts caused by cement use, notorious for being one of the largest producers (at about 5 to 10%) of global greenhouse gas emissions. The use of alternative materials is also capable of lowering costs, improving concrete properties, and recycling wastes, the latter being relevant for circular economy aspects of the construction industry, whose demand is ever growing with greater impacts on raw material extraction, waste generation and landfill practices. Fly ash: A by-product of coal-fired electric generating plants, it is used to partially replace Portland cement (by up to 60% by mass). The properties of fly ash depend on the type of coal burnt. In general, siliceous fly ash is pozzolanic, while calcareous fly ash has latent hydraulic properties. Ground granulated blast furnace slag (GGBFS or GGBS): A by-product of steel production, it is used to partially replace Portland cement (by up to 80% by mass). It has latent hydraulic properties. Silica fume: A by-product of the production of silicon and ferrosilicon alloys. Silica fume is similar to fly ash, but has a particle size 100 times smaller. This results in a higher surface-to-volume ratio and a much faster pozzolanic reaction. Silica fume is used to increase strength and durability of concrete, but generally requires the use of superplasticizers for workability. High reactivity metakaolin (HRM): Metakaolin produces concrete with strength and durability similar to concrete made with silica fume. While silica fume is usually dark gray or black in color, high-reactivity metakaolin is usually bright white in color, making it the preferred choice for architectural concrete where appearance is important. 
Carbon nanofibers can be added to concrete to enhance compressive strength and gain a higher Young's modulus, and also to improve the electrical properties required for strain monitoring, damage evaluation and self-health monitoring of concrete. Carbon fiber has many advantages in terms of mechanical and electrical properties (e.g., higher strength) and self-monitoring behavior due to the high tensile strength and high electrical conductivity. Carbon products have been added to make concrete electrically conductive, for deicing purposes. New research from Japan's University of Kitakyushu shows that a washed and dried recycled mix of used diapers can be an environmental solution to producing less landfill and using less sand in concrete production. A model home was built in Indonesia to test the strength and durability of the new diaper-cement composite. Production Concrete production is the process of mixing together the various ingredients—water, aggregate, cement, and any additives—to produce concrete. Concrete production is time-sensitive. Once the ingredients are mixed, workers must put the concrete in place before it hardens. In modern usage, most concrete production takes place in a large type of industrial facility called a concrete plant, or often a batch plant. The usual method of placement is casting in formwork, which holds the mix in shape until it has set enough to hold its shape unaided. Concrete plants come in two main types, ready-mix plants and central mix plants. A ready-mix plant blends all of the solid ingredients, while a central mix does the same but adds water. A central-mix plant offers more precise control of the concrete quality. Central mix plants must be close to the work site where the concrete will be used, since hydration begins at the plant. A concrete plant consists of large hoppers for storage of various ingredients like cement, storage for bulk ingredients like aggregate and water, mechanisms for the addition of various additives and amendments, machinery to accurately weigh, move, and mix some or all of those ingredients, and facilities to dispense the mixed concrete, often to a concrete mixer truck. Modern concrete is usually prepared as a viscous fluid, so that it may be poured into forms. The forms are containers that define the desired shape. Concrete formwork can be prepared in several ways, such as slip forming and steel plate construction. Alternatively, concrete can be mixed into dryer, non-fluid forms and used in factory settings to manufacture precast concrete products. Interruption in pouring the concrete can cause the initially placed material to begin to set before the next batch is added on top. This creates a horizontal plane of weakness called a cold joint between the two batches. Once the mix is where it should be, the curing process must be controlled to ensure that the concrete attains the desired attributes. During concrete preparation, various technical details may affect the quality and nature of the product. Design mix Design mix ratios are decided by an engineer after analyzing the properties of the specific ingredients being used. Instead of using a 'nominal mix' of 1 part cement, 2 parts sand, and 4 parts aggregate, a civil engineer will custom-design a concrete mix to exactly meet the requirements of the site and conditions, setting material ratios and often designing an admixture package to fine-tune the properties or increase the performance envelope of the mix. 
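As an illustration of the arithmetic behind the nominal mix mentioned above, the following minimal sketch estimates batch quantities for a 1:2:4 cement:sand:aggregate mix by volume; the dry-volume factor, cement bulk density, and water-to-cement ratio used here are illustrative assumptions for the sake of the example, not values taken from any particular standard.

def nominal_mix_quantities(wet_volume_m3, ratio=(1, 2, 4),
                           dry_volume_factor=1.54,    # assumed allowance for voids lost on mixing
                           cement_bulk_density=1440,  # kg/m3, assumed bulk density of cement
                           w_c_ratio=0.5):            # assumed water-to-cement ratio by mass
    """Rough batch quantities for a nominal mix given by volume proportions."""
    total_parts = sum(ratio)
    dry_volume = wet_volume_m3 * dry_volume_factor
    cement_volume = dry_volume * ratio[0] / total_parts
    sand_volume = dry_volume * ratio[1] / total_parts
    aggregate_volume = dry_volume * ratio[2] / total_parts
    cement_mass = cement_volume * cement_bulk_density   # kg
    water_mass = cement_mass * w_c_ratio                # kg (roughly litres)
    return {"cement_m3": cement_volume, "sand_m3": sand_volume,
            "aggregate_m3": aggregate_volume,
            "cement_kg": cement_mass, "water_l": water_mass}

print(nominal_mix_quantities(1.0))  # quantities for one cubic metre of placed concrete

Such a calculation gives only a first estimate; a design mix is instead proportioned from measured material properties and trial batches, as described here.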
Design-mix concrete can have very broad specifications that cannot be met with more basic nominal mixes, but the involvement of the engineer often increases the cost of the concrete mix. Concrete mixes are primarily divided into nominal mix, standard mix and design mix. Nominal mix ratios are given in volume parts of cement, sand, and aggregate. Nominal mixes are a simple, fast way of getting a basic idea of the properties of the finished concrete without having to perform testing in advance. Various governing bodies (such as British Standards) classify nominal mix ratios into a number of grades, usually ranging from lower compressive strength to higher compressive strength. The grades usually indicate the 28-day cure strength. Mixing Thorough mixing is essential to produce uniform, high-quality concrete. Research has shown that mixing the cement and water into a paste before combining these materials with aggregates can increase the compressive strength of the resulting concrete. The paste is generally mixed in a shear-type mixer at a w/c (water to cement ratio) of 0.30 to 0.45 by mass. The cement paste premix may include admixtures such as accelerators or retarders, superplasticizers, pigments, or silica fume. The premixed paste is then blended with aggregates and any remaining batch water, and final mixing is completed in conventional concrete mixing equipment. Sample analysis—workability Workability is the ability of a fresh (plastic) concrete mix to fill the form/mold properly with the desired work (pouring, pumping, spreading, tamping, vibration) and without reducing the concrete's quality. Workability depends on water content, aggregate (shape and size distribution), cementitious content and age (level of hydration) and can be modified by adding chemical admixtures, like superplasticizer. Raising the water content or adding chemical admixtures increases concrete workability. Excessive water leads to increased bleeding or segregation of aggregates (when the cement and aggregates start to separate), with the resulting concrete having reduced quality. Changes in gradation can also affect workability of the concrete, although a wide range of gradation can be used for various applications. An undesirable gradation can mean using a large aggregate that is too large for the size of the formwork, or which has too few smaller aggregate grades to serve to fill the gaps between the larger grades, or using too little or too much sand for the same reason, or using too little water, or too much cement, or even using jagged crushed stone instead of smoother round aggregate such as pebbles. Any combination of these factors and others may result in a mix which is too harsh, i.e., which does not flow or spread out smoothly, is difficult to get into the formwork, and which is difficult to surface finish. Workability can be measured by the concrete slump test, a simple measure of the plasticity of a fresh batch of concrete following the ASTM C 143 or EN 12350-2 test standards. Slump is normally measured by filling an "Abrams cone" with a sample from a fresh batch of concrete. The cone is placed with the wide end down onto a level, non-absorptive surface. It is then filled in three layers of equal volume, with each layer being tamped with a steel rod to consolidate the layer. When the cone is carefully lifted off, the enclosed material slumps a certain amount, owing to gravity. A relatively dry sample slumps very little, having a slump value of one or two inches (25 or 50 mm) out of . 
A relatively wet concrete sample may slump as much as eight inches. Workability can also be measured by the flow table test. Slump can be increased by addition of chemical admixtures such as plasticizer or superplasticizer without changing the water-cement ratio. Some other admixtures, especially air-entraining admixture, can increase the slump of a mix. High-flow concrete, like self-consolidating concrete, is tested by other flow-measuring methods. One of these methods includes placing the cone on the narrow end and observing how the mix flows through the cone while it is gradually lifted. After mixing, concrete is a fluid and can be pumped to the location where needed. Curing Maintaining optimal conditions for cement hydration Concrete must be kept moist during curing in order to achieve optimal strength and durability. During curing hydration occurs, allowing calcium-silicate hydrate (C-S-H) to form. Over 90% of a mix's final strength is typically reached within four weeks, with the remaining 10% achieved over years or even decades. The conversion of calcium hydroxide in the concrete into calcium carbonate from absorption of CO2 over several decades further strengthens the concrete and makes it more resistant to damage. This carbonation reaction, however, lowers the pH of the cement pore solution and can corrode the reinforcement bars. Hydration and hardening of concrete during the first three days is critical. Abnormally fast drying and shrinkage due to factors such as evaporation from wind during placement may lead to increased tensile stresses at a time when it has not yet gained sufficient strength, resulting in greater shrinkage cracking. The early strength of the concrete can be increased if it is kept damp during the curing process. Minimizing stress prior to curing minimizes cracking. High-early-strength concrete is designed to hydrate faster, often by increased use of cement that increases shrinkage and cracking. The strength of concrete changes (increases) for up to three years. It depends on cross-section dimension of elements and conditions of structure exploitation. Addition of short-cut polymer fibers can improve (reduce) shrinkage-induced stresses during curing and increase early and ultimate compression strength. Properly curing concrete leads to increased strength and lower permeability and avoids cracking where the surface dries out prematurely. Care must also be taken to avoid freezing or overheating due to the exothermic setting of cement. Improper curing can cause spalling, reduced strength, poor abrasion resistance and cracking. Curing techniques avoiding water loss by evaporation During the curing period, concrete is ideally maintained at controlled temperature and humidity. To ensure full hydration during curing, concrete slabs are often sprayed with "curing compounds" that create a water-retaining film over the concrete. Typical films are made of wax or related hydrophobic compounds. After the concrete is sufficiently cured, the film is allowed to abrade from the concrete through normal use. Traditional conditions for curing involve spraying or ponding the concrete surface with water. The adjacent picture shows one of many ways to achieve this, ponding—submerging setting concrete in water and wrapping in plastic to prevent dehydration. Additional common curing methods include wet burlap and plastic sheeting covering the fresh concrete. For higher-strength applications, accelerated curing techniques may be applied to the concrete. 
A common technique involves heating the poured concrete with steam, which serves to both keep it damp and raise the temperature so that the hydration process proceeds more quickly and more thoroughly. Alternative types Asphalt Asphalt concrete (commonly called asphalt, blacktop, or pavement in North America, and tarmac, bitumen macadam, or rolled asphalt in the United Kingdom and the Republic of Ireland) is a composite material commonly used to surface roads, parking lots, airports, as well as the core of embankment dams. Asphalt mixtures have been used in pavement construction since the beginning of the twentieth century. It consists of mineral aggregate bound together with asphalt, laid in layers, and compacted. The process was refined and enhanced by Belgian inventor and U.S. immigrant Edward De Smedt. The terms asphalt (or asphaltic) concrete, bituminous asphalt concrete, and bituminous mixture are typically used only in engineering and construction documents, which define concrete as any composite material composed of mineral aggregate adhered with a binder. The abbreviation, AC, is sometimes used for asphalt concrete but can also denote asphalt content or asphalt cement, referring to the liquid asphalt portion of the composite material. Graphene enhanced concrete Graphene enhanced concretes are standard designs of concrete mixes, except that during the cement-mixing or production process, a small amount of chemically engineered graphene is added. These enhanced graphene concretes are designed around the concrete application. Microbial Bacteria such as Bacillus pasteurii, Bacillus pseudofirmus, Bacillus cohnii, Sporosarcina pasteuri, and Arthrobacter crystallopoietes increase the compression strength of concrete through their biomass. However, some forms of bacteria can also be concrete-destroying. Bacillus sp. CT-5 can reduce corrosion of reinforcement in reinforced concrete by up to four times. Sporosarcina pasteurii reduces water and chloride permeability. B. pasteurii increases resistance to acid. Bacillus pasteurii and B. sphaericus can induce calcium carbonate precipitation on the surface of cracks, adding compression strength. Nanoconcrete Nanoconcrete (also spelled "nano concrete" or "nano-concrete") is a class of materials that contains Portland cement particles that are no greater than 100 μm and particles of silica no greater than 500 μm, which fill voids that would otherwise occur in normal concrete, thereby substantially increasing the material's strength. It is widely used in foot bridges and highway bridges where high flexural and compressive strength are indicated. Pervious Pervious concrete is a mix of specially graded coarse aggregate, cement, water, and little-to-no fine aggregates. This concrete is also known as "no-fines" or porous concrete. Mixing the ingredients in a carefully controlled process creates a paste that coats and bonds the aggregate particles. The hardened concrete contains interconnected air voids totaling approximately 15 to 25 percent. Water runs through the voids in the pavement to the soil underneath. Air entrainment admixtures are often used in freeze-thaw climates to minimize the possibility of frost damage. Pervious concrete also permits rainwater to filter through roads and parking lots, to recharge aquifers, instead of contributing to runoff and flooding. Polymer Polymer concretes are mixtures of aggregate and any of various polymers and may be reinforced. 
The cement is costlier than lime-based cements, but polymer concretes nevertheless have advantages; they have significant tensile strength even without reinforcement, and they are largely impervious to water. Polymer concretes are frequently used for repairs and for the construction of other applications, such as drains. Plant fibers Plant fibers and particles can be used in a concrete mix or as a reinforcement. These materials can increase ductility, but the lignocellulosic particles hydrolyze during concrete curing as a result of the alkaline environment and elevated temperatures. This process, which is difficult to measure, can affect the properties of the resulting concrete. Sulfur concrete Sulfur concrete is a special concrete that uses sulfur as a binder and does not require cement or water. Volcanic Volcanic concrete substitutes volcanic rock for the limestone that is burned to form clinker. It consumes a similar amount of energy, but does not directly emit carbon as a byproduct. Volcanic rock and ash are used as supplementary cementitious materials in concrete to improve the resistance to sulfate, chloride and alkali silica reaction due to pore refinement. Also, they are generally cost effective in comparison to other aggregates, good for semi-lightweight and lightweight concretes, and good for thermal and acoustic insulation. Pyroclastic materials, such as pumice, scoria, and ashes, are formed from cooling magma during explosive volcanic eruptions. They are used as supplementary cementitious materials (SCM) or as aggregates for cements and concretes. They have been extensively used since ancient times to produce materials for building applications. For example, pumice and other volcanic glasses were added as a natural pozzolanic material for mortars and plasters during the construction of the Villa San Marco in the Roman period (89 BC – 79 AD), which remains one of the best-preserved otium villae of the Bay of Naples in Italy. Waste light Waste light is a form of polymer modified concrete. The specific polymer admixture allows the replacement of all the traditional aggregates (gravel, sand, stone) by any mixture of solid waste materials in the grain size of 3–10 mm to form a low-compressive-strength (3–20 N/mm2) product for road and building construction. One cubic meter of waste light concrete contains 1.1–1.3 m3 of shredded waste and no other aggregates. Recycled Aggregate Concrete (RAC) Recycled aggregate concretes are standard concrete mixes with the addition or substitution of natural aggregates with recycled aggregates sourced from construction and demolition wastes, disused pre-cast concretes or masonry. In most cases, recycled aggregate concrete results in higher water absorption levels by capillary action and permeation, which are the prominent determiners of the strength and durability of the resulting concrete. The increase in water absorption levels is mainly caused by the porous adhered mortar that exists in the recycled aggregates. Accordingly, recycled concrete aggregates that have been washed to reduce the quantity of mortar adhered to aggregates show lower water absorption levels compared to untreated recycled aggregates. The quality of the recycled aggregate concrete is determined by several factors, including the size, the number of replacement cycles, and the moisture levels of the recycled aggregates. When the recycled concrete aggregates are crushed into coarser fractions, the mixed concrete shows better permeability levels, resulting in an overall increase in strength. 
In contrast, recycled masonry aggregates provide better qualities when crushed into finer fractions. With each generation of recycled concrete, the resulting compressive strength decreases. Properties Concrete has relatively high compressive strength, but much lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension (often steel). The elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. Concrete has a very low coefficient of thermal expansion and shrinks as it matures. All concrete structures crack to some extent, due to shrinkage and tension. Concrete that is subjected to long-duration forces is prone to creep. Tests can be performed to ensure that the properties of concrete correspond to specifications for the application. The ingredients affect the strengths of the material. Concrete strength values are usually specified as the lower-bound compressive strength of either a cylindrical or cubic specimen as determined by standard test procedures. The strength of concrete is dictated by its function. Very low-strength— or less—concrete may be used when the concrete must be lightweight. Lightweight concrete is often achieved by adding air, foams, or lightweight aggregates, with the side effect that the strength is reduced. For most routine uses, concrete is often used. Concrete is readily commercially available as a more durable, although more expensive, option. Higher-strength concrete is often used for larger civil projects. Strengths above are often used for specific building elements. For example, the lower floor columns of high-rise concrete buildings may use concrete of or more, to keep the size of the columns small. Bridges may use long beams of high-strength concrete to lower the number of spans required. Occasionally, other structural needs may require high-strength concrete. If a structure must be very rigid, concrete of very high strength may be specified, even much stronger than is required to bear the service loads. Strengths as high as have been used commercially for these reasons. Energy efficiency The cement produced for making concrete accounts for about 8% of worldwide CO2 emissions per year (compared to, e.g., global aviation at 1.9%). The two largest sources of CO2 are produced by the cement manufacturing process, arising from (1) the decarbonation reaction of limestone in the cement kiln (T ≈ 950 °C), and (2) the combustion of fossil fuel to reach the sintering temperature (T ≈ 1450 °C) of cement clinker in the kiln. The energy required for extracting, crushing, and mixing the raw materials (construction aggregates used in the concrete production, and also limestone and clay feeding the cement kiln) is lower. The energy requirement for transportation of ready-mix concrete is also lower because it is produced near the construction site from local resources, typically manufactured within 100 kilometers of the job site. The overall embodied energy of concrete, at roughly 1 to 1.5 megajoules per kilogram, is therefore lower than for many structural and construction materials. Once in place, concrete offers great energy efficiency over the lifetime of a building. Concrete walls leak air far less than those made of wood frames. Air leakage accounts for a large percentage of energy loss from a home. The thermal mass properties of concrete increase the efficiency of both residential and commercial buildings. 
By storing and releasing the energy needed for heating or cooling, concrete's thermal mass delivers year-round benefits by reducing temperature swings inside and minimizing heating and cooling costs. While insulation reduces energy loss through the building envelope, thermal mass uses walls to store and release energy. Modern concrete wall systems use both external insulation and thermal mass to create an energy-efficient building. Insulating concrete forms (ICFs) are hollow blocks or panels made of either insulating foam or rastra that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure. Fire safety Concrete buildings are more resistant to fire than those constructed using steel frames, since concrete has lower heat conductivity than steel and can thus last longer under the same fire conditions. Concrete is sometimes used as a fire protection for steel frames, for the same effect as above. Concrete as a fire shield, for example Fondu fyre, can also be used in extreme environments like a missile launch pad. Options for non-combustible construction include floors, ceilings and roofs made of cast-in-place and hollow-core precast concrete. For walls, concrete masonry technology and Insulating Concrete Forms (ICFs) are additional options. ICFs are hollow blocks or panels made of fireproof insulating foam that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure. Concrete also provides good resistance against externally applied forces such as high winds, hurricanes, and tornadoes owing to its lateral stiffness, which results in minimal horizontal movement. However, this stiffness can work against certain types of concrete structures, particularly where a relatively higher flexing structure is required to resist more extreme forces. Earthquake safety As discussed above, concrete is very strong in compression, but weak in tension. Larger earthquakes can generate very large shear loads on structures. These shear loads subject the structure to both tensile and compressional loads. Concrete structures without reinforcement, like other unreinforced masonry structures, can fail during severe earthquake shaking. Unreinforced masonry structures constitute one of the largest earthquake risks globally. These risks can be reduced through seismic retrofitting of at-risk buildings, (e.g. school buildings in Istanbul, Turkey). Construction Concrete is one of the most durable building materials. It provides superior fire resistance compared with wooden construction and gains strength over time. Structures made of concrete can have a long service life. Concrete is used more than any other artificial material in the world. As of 2006, about 7.5 billion cubic meters of concrete are made each year, more than one cubic meter for every person on Earth. Reinforced The use of reinforcement, in the form of iron was introduced in the 1850s by French industrialist François Coignet, and it was not until the 1880s that German civil engineer G. A. Wayss used steel as reinforcement. Concrete is a relatively brittle material that is strong under compression but less in tension. Plain, unreinforced concrete is unsuitable for many structures as it is relatively poor at withstanding stresses induced by vibrations, wind loading, and so on. Hence, to increase its overall strength, steel rods, wires, mesh or cables can be embedded in concrete before it is set. 
This reinforcement, often known as rebar, resists tensile forces. Reinforced concrete (RC) is a versatile composite and one of the most widely used materials in modern construction. It is made up of different constituent materials with very different properties that complement each other. In the case of reinforced concrete, the component materials are almost always concrete and steel. These two materials form a strong bond together and are able to resist a variety of applied forces, effectively acting as a single structural element. Reinforced concrete can be precast or cast-in-place (in situ) concrete, and is used in a wide range of applications such as slab, wall, beam, column, foundation, and frame construction. Reinforcement is generally placed in areas of the concrete that are likely to be subject to tension, such as the lower portion of beams. Usually, there is a minimum of 50 mm cover, both above and below the steel reinforcement, to resist spalling and corrosion which can lead to structural instability. Other types of non-steel reinforcement, such as fibre-reinforced concretes, are used for specialized applications, predominantly as a means of controlling cracking. Precast Precast concrete is concrete which is cast in one place for use elsewhere, making it a mobile material. The largest part of precast production is carried out in the works of specialist suppliers, although in some instances, due to economic and geographical factors, scale of product or difficulty of access, the elements are cast on or adjacent to the construction site. Precasting offers considerable advantages because it is carried out in a controlled environment, protected from the elements, but the downside of this is the contribution to greenhouse gas emission from transportation to the construction site. Advantages to be achieved by employing precast concrete: Preferred dimension schemes exist, with elements of tried and tested designs available from a catalogue. Major savings in time result from manufacture of structural elements apart from the series of events which determine overall duration of the construction, known by planning engineers as the 'critical path'. Availability of laboratory facilities capable of the required control tests, many being certified for specific testing in accordance with National Standards. Equipment with capability suited to specific types of production such as stressing beds with appropriate capacity, moulds and machinery dedicated to particular products. High-quality finishes achieved direct from the mould eliminate the need for interior decoration and ensure low maintenance costs. Mass structures Due to cement's exothermic chemical reaction while setting up, large concrete structures such as dams, navigation locks, large mat foundations, and large breakwaters generate excessive heat during hydration and associated expansion. To mitigate these effects, post-cooling is commonly applied during construction. An early example at Hoover Dam used a network of pipes between vertical concrete placements to circulate cooling water during the curing process to avoid damaging overheating. Similar systems are still used; depending on volume of the pour, the concrete mix used, and ambient air temperature, the cooling process may last for many months after the concrete is placed. Various methods also are used to pre-cool the concrete mix in mass concrete structures. 
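To see why post-cooling matters, a rough, illustrative estimate of the adiabatic temperature rise of a mass pour can be made from the cement content and the heat released by hydration; the heat of hydration, density, and specific heat used below are assumed round numbers for the sake of the sketch, not values for any particular mix.

def adiabatic_temperature_rise(cement_content=350,       # kg of cement per m3 of concrete (assumed)
                               heat_of_hydration=400e3,  # J per kg of cement, assumed total heat released
                               concrete_density=2400,    # kg/m3, assumed
                               specific_heat=1000):      # J/(kg*K), assumed for concrete
    """Upper-bound temperature rise if no heat escapes the pour."""
    heat_per_m3 = cement_content * heat_of_hydration         # J released per m3
    return heat_per_m3 / (concrete_density * specific_heat)  # K

print(round(adiabatic_temperature_rise(), 1), "K")  # on the order of tens of kelvin

A rise of this magnitude, combined with slow heat loss from thick sections, is what drives the cracking risk and the cooling-pipe and pre-cooling measures described above.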
Another approach to mass concrete structures that minimizes cement's thermal by-product is the use of roller-compacted concrete, which uses a dry mix that has a much lower cooling requirement than conventional wet placement. It is deposited in thick layers as a semi-dry material and then roller-compacted into a dense, strong mass. Surface finishes Raw concrete surfaces tend to be porous and have a relatively uninteresting appearance. Many finishes can be applied to improve the appearance and preserve the surface against staining, water penetration, and freezing. Examples of improved appearance include stamped concrete, where the wet concrete has a pattern impressed on the surface to give a paved, cobbled or brick-like effect, which may be accompanied by coloration. Another popular effect for flooring and table tops is polished concrete, where the concrete is polished optically flat with diamond abrasives and sealed with polymers or other sealants. Other finishes can be achieved with chiseling, or more conventional techniques such as painting or covering it with other materials. The proper treatment of the surface of concrete, and therefore its characteristics, is an important stage in the construction and renovation of architectural structures. Prestressed Prestressed concrete is a form of reinforced concrete that builds in compressive stresses during construction to oppose tensile stresses experienced in use. This can greatly reduce the weight of beams or slabs, by better distributing the stresses in the structure to make optimal use of the reinforcement. For example, a horizontal beam tends to sag. Prestressed reinforcement along the bottom of the beam counteracts this. In pre-tensioned concrete, the prestressing is achieved by using steel or polymer tendons or bars that are subjected to a tensile force prior to casting, or for post-tensioned concrete, after casting. There are two different systems being used: Pretensioned concrete is almost always precast, and contains steel wires (tendons) that are held in tension while the concrete is placed and sets around them. Post-tensioned concrete has ducts through it. After the concrete has gained strength, tendons are pulled through the ducts and stressed. The ducts are then filled with grout. Bridges built in this way have experienced considerable corrosion of the tendons, so external post-tensioning may now be used in which the tendons run along the outer surface of the concrete. More than of highways in the United States are paved with this material. Reinforced concrete, prestressed concrete and precast concrete are the most widely used functional extensions of concrete in modern construction. For more information see Brutalist architecture. Placement Once mixed, concrete is typically transported to the place where it is intended to become a structural item. Various methods of transportation and placement are used depending on the distances involved, quantity needed, and other details of application. Large amounts are often transported by truck, poured free under gravity or through a tremie, or pumped through a pipe. Smaller amounts may be carried in a skip (a metal container which can be tilted or opened to release the contents, usually transported by crane or hoist), or wheelbarrow, or carried in toggle bags for manual placement underwater. Cold weather placement Extreme weather conditions (extreme heat or cold, windy conditions, and humidity variations) can significantly alter the quality of concrete. 
Many precautions are observed in cold weather placement. Low temperatures significantly slow the chemical reactions involved in hydration of cement, thus affecting the strength development. Preventing freezing is the most important precaution, as formation of ice crystals can cause damage to the crystalline structure of the hydrated cement paste. If the surface of the concrete pour is insulated from the outside temperatures, the heat of hydration will prevent freezing. The American Concrete Institute (ACI) definition of cold weather placement, ACI 306, is: A period when, for more than three successive days, the average daily air temperature drops below 40 °F (~4.5 °C), and the temperature stays below for more than one-half of any 24-hour period. In Canada, where temperatures tend to be much lower during the cold season, the following criteria are used by CSA A23.1: When the air temperature is ≤ 5 °C, and When there is a probability that the temperature may fall below 5 °C within 24 hours of placing the concrete. The minimum strength before exposing concrete to extreme cold is . CSA A23.1 specifies a compressive strength of 7.0 MPa to be considered safe for exposure to freezing. Underwater placement Concrete may be placed and cured underwater. Care must be taken in the placement method to prevent washing out the cement. Underwater placement methods include the tremie, pumping, skip placement, manual placement using toggle bags, and bagwork. A tremie is a vertical, or near-vertical, pipe with a hopper at the top used to pour concrete underwater in a way that avoids washout of cement from the mix due to turbulent water contact with the concrete while it is flowing. This produces a more reliable strength of the product. The toggle bag method is generally used for placing small quantities and for repairs. Wet concrete is loaded into a reusable canvas bag and squeezed out at the required place by the diver. Care must be taken to avoid washout of the cement and fines. Bagwork is the manual placement by divers of woven cloth bags containing dry mix, followed by piercing the bags with steel rebar pins to tie the bags together after every two or three layers, and create a path for hydration to induce curing, which can typically take about 6 to 12 hours for initial hardening and full hardening by the next day. Bagwork concrete will generally reach full strength within 28 days. Each bag must be pierced by at least one, and preferably up to four, pins. Bagwork is a simple and convenient method of underwater concrete placement which does not require pumps, plant, or formwork, and which can minimise environmental effects from dispersing cement in the water. Prefilled bags are available, which are sealed to prevent premature hydration if stored in suitable dry conditions. The bags may be biodegradable. Grouted aggregate is an alternative method of forming a concrete mass underwater, where the forms are filled with coarse aggregate and the voids then completely filled from the bottom by displacing the water with pumped grout. Roads Concrete roads are more fuel efficient to drive on, more reflective and last significantly longer than other paving surfaces, yet have a much smaller market share than other paving solutions. Modern paving methods and design practices have changed the economics of concrete paving, so that a well-designed and placed concrete pavement will be less expensive on initial costs and significantly less expensive over the life cycle. 
Another major benefit is that pervious concrete can be used, which eliminates the need to place storm drains near the road and reduces the need for a slightly sloped roadway to help rainwater run off. Eliminating the need to discard rainwater through drains also means that less electricity is needed (otherwise, more pumping is required in the water-distribution system), and rainwater does not become polluted, because it no longer mixes with polluted water. Rather, it is immediately absorbed by the ground. Tube forest Cement molded into a forest of tubular structures can be 5.6 times more resistant to cracking/failure than standard concrete. The approach mimics mammalian cortical bone that features elliptical, hollow osteons suspended in an organic matrix, connected by relatively weak "cement lines". Cement lines provide a preferable in-plane crack path. This design fails via a "stepwise toughening mechanism". Cracks are contained within the tube, reducing spreading, by dissipating energy at each tube/step. Environment, health and safety The manufacture and use of concrete produce a wide range of environmental, economic and social impacts. Health and safety Grinding of concrete can produce hazardous dust. Exposure to cement dust can lead to issues such as silicosis, kidney disease, skin irritation and similar effects. The U.S. National Institute for Occupational Safety and Health recommends attaching local exhaust ventilation shrouds to electric concrete grinders to control the spread of this dust. In addition, the Occupational Safety and Health Administration (OSHA) has placed more stringent regulations on companies whose workers regularly come into contact with silica dust. An updated silica rule, which OSHA put into effect 23 September 2017 for construction companies, restricted the amount of breathable crystalline silica workers could legally come into contact with to 50 micrograms per cubic meter of air per 8-hour workday. That same rule went into effect 23 June 2018 for general industry, hydraulic fracturing and maritime. That deadline was extended to 23 June 2021 for engineering controls in the hydraulic fracturing industry. Companies that fail to meet the tightened safety regulations can face financial charges and extensive penalties. The presence of some substances in concrete, including useful and unwanted additives, can cause health concerns due to toxicity and radioactivity. Fresh concrete (before curing is complete) is highly alkaline and must be handled with proper protective equipment. Cement A major component of concrete is cement, a fine powder used mainly to bind sand and coarser aggregates together in concrete. Although a variety of cement types exist, the most common is "Portland cement", which is produced by mixing clinker with smaller quantities of other additives such as gypsum and ground limestone. The production of clinker, the main constituent of cement, is responsible for the bulk of the sector's greenhouse gas emissions, including both energy intensity and process emissions. The cement industry is one of the three primary producers of carbon dioxide, a major greenhouse gas – the other two being the energy production and transportation industries. On average, every tonne of cement produced releases one tonne of CO2 into the atmosphere. Pioneer cement manufacturers have claimed to reach lower carbon intensities, with 590 kg of CO2eq per tonne of cement produced. 
The emissions are due to combustion and calcination processes, which roughly account for 40% and 60% of the greenhouse gases, respectively. Considering that cement is only a fraction of the constituents of concrete, it is estimated that a tonne of concrete is responsible for emitting about 100–200 kg of CO2. Every year more than 10 billion tonnes of concrete are used worldwide. In the coming years, large quantities of concrete will continue to be used, and the mitigation of CO2 emissions from the sector will be even more critical. Concrete is used to create hard surfaces that contribute to surface runoff, which can cause heavy soil erosion, water pollution, and flooding, but conversely can be used to divert, dam, and control flooding. Concrete dust released by building demolition and natural disasters can be a major source of dangerous air pollution. Concrete is a contributor to the urban heat island effect, though less so than asphalt. Climate change mitigation Reducing the cement clinker content might have positive effects on the environmental life-cycle assessment of concrete. Some research work on reducing the cement clinker content in concrete has already been carried out. However, there exist different research strategies. Often replacement of some clinker for large amounts of slag or fly ash was investigated based on conventional concrete technology. This could lead to a waste of scarce raw materials such as slag and fly ash. The aim of other research activities is the efficient use of cement and reactive materials like slag and fly ash in concrete based on a modified mix design approach. The embodied carbon of a precast concrete facade can be reduced by 50% when using the presented fiber reinforced high performance concrete in place of typical reinforced concrete cladding. Studies have been conducted about commercialization of low-carbon concretes. Life cycle assessment (LCA) of low-carbon concrete was investigated according to the ground granulated blast-furnace slag (GGBS) and fly ash (FA) replacement ratios. Global warming potential (GWP) of GGBS decreased by 1.1 kg CO2 eq/m3, while FA decreased by 17.3 kg CO2 eq/m3 when the mineral admixture replacement ratio was increased by 10%. This study also compared the compressive strength properties of binary blended low-carbon concrete according to the replacement ratios, and the applicable range of mixing proportions was derived. Climate change adaptation High-performance building materials will be particularly important for enhancing resilience, including for flood defenses and critical-infrastructure protection. Risks to infrastructure and cities posed by extreme weather events are especially serious for those places exposed to flood and hurricane damage, but also where residents need protection from extreme summer temperatures. Traditional concrete can come under strain when exposed to humidity and higher concentrations of atmospheric CO2. While concrete is likely to remain important in applications where the environment is challenging, novel, smarter and more adaptable materials are also needed. End-of-life: degradation and waste Recycling There have been concerns about the recycling of painted concrete due to possible lead content. Studies have indicated that recycled concrete exhibits lower strength and durability compared to concrete produced using natural aggregates. This deficiency can be addressed by incorporating supplementary materials such as fly ash into the mixture. 
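Using the figures cited above purely for illustration, the carbon footprint of a cubic metre of concrete can be approximated from its cement content and an emission factor per kilogram of clinker-based cement; the cement content, emission factor, and replacement ratio below are assumptions chosen to sit within the ranges quoted in this article, not measured values.

def concrete_co2_per_m3(cement_content=300,         # kg of cementitious material per m3 (assumed)
                        clinker_emission_factor=0.9, # kg CO2 per kg of clinker-based cement (assumed)
                        scm_replacement=0.30):       # fraction replaced by fly ash or slag (assumed)
    """Rough CO2 estimate, ignoring aggregates, transport, and placement."""
    clinker_cement = cement_content * (1 - scm_replacement)
    return clinker_cement * clinker_emission_factor  # kg CO2 per m3 of concrete

print(concrete_co2_per_m3(scm_replacement=0.0))   # ~270 kg CO2/m3 with no replacement
print(concrete_co2_per_m3(scm_replacement=0.30))  # ~189 kg CO2/m3 with 30% replacement

With a typical concrete density of roughly 2.4 tonnes per cubic metre, these per-cubic-metre figures are broadly consistent with the 100–200 kg of CO2 per tonne of concrete quoted above, and they illustrate why clinker substitution is a central mitigation lever.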
World records The world record for the largest concrete pour in a single project is the Three Gorges Dam in Hubei Province, China by the Three Gorges Corporation. The amount of concrete used in the construction of the dam is estimated at 16 million cubic meters over 17 years. The previous record was 12.3 million cubic meters held by Itaipu hydropower station in Brazil. The world record for concrete pumping was set on 7 August 2009 during the construction of the Parbati Hydroelectric Project, near the village of Suind, Himachal Pradesh, India, when the concrete mix was pumped through a vertical height of . The Polavaram dam works in Andhra Pradesh on 6 January 2019 entered the Guinness World Records by pouring 32,100 cubic metres of concrete in 24 hours. The world record for the largest continuously poured concrete raft was achieved in August 2007 in Abu Dhabi by contracting firm Al Habtoor-CCC Joint Venture and the concrete supplier is Unibeton Ready Mix. The pour (a part of the foundation for the Abu Dhabi's Landmark Tower) was 16,000 cubic meters of concrete poured within a two-day period. The previous record, 13,200 cubic meters poured in 54 hours despite a severe tropical storm requiring the site to be covered with tarpaulins to allow work to continue, was achieved in 1992 by joint Japanese and South Korean consortiums Hazama Corporation and the Samsung C&T Corporation for the construction of the Petronas Towers in Kuala Lumpur, Malaysia. The world record for largest continuously poured concrete floor was completed 8 November 1997, in Louisville, Kentucky by design-build firm EXXCEL Project Management. The monolithic placement consisted of of concrete placed in 30 hours, finished to a flatness tolerance of FF 54.60 and a levelness tolerance of FL 43.83. This surpassed the previous record by 50% in total volume and 7.5% in total area. The record for the largest continuously placed underwater concrete pour was completed 18 October 2010, in New Orleans, Louisiana by contractor C. J. Mahan Construction Company, LLC of Grove City, Ohio. The placement consisted of 10,251 cubic yards of concrete placed in 58.5 hours using two concrete pumps and two dedicated concrete batch plants. Upon curing, this placement allows the cofferdam to be dewatered approximately below sea level to allow the construction of the Inner Harbor Navigation Canal Sill & Monolith Project to be completed in the dry. See also Eurocode 2: Design of concrete structures References Further reading External links Advantage and Disadvantage of Concrete Release of ultrafine particles from three simulated building processes Concrete: The Quest for Greener Alternatives Building materials Composite materials Heterogeneous chemical mixtures Masonry Pavements Roofing materials Sculpture materials Articles containing video clips
Concrete
[ "Physics", "Chemistry", "Engineering" ]
13,053
[ "Structural engineering", "Masonry", "Building engineering", "Composite materials", "Architecture", "Construction", "Materials", "Chemical mixtures", "Heterogeneous chemical mixtures", "Concrete", "Matter", "Building materials" ]
5,863
https://en.wikipedia.org/wiki/Copenhagen%20interpretation
The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics, stemming from the work of Niels Bohr, Werner Heisenberg, Max Born, and others. While "Copenhagen" refers to the Danish city, the use as an "interpretation" was apparently coined by Heisenberg during the 1950s to refer to ideas developed in the 1925–1927 period, glossing over his disagreements with Bohr. Consequently, there is no definitive historical statement of what the interpretation entails. Features common across versions of the Copenhagen interpretation include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that objects have certain pairs of complementary properties that cannot all be observed or measured simultaneously. Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object except according to the results of its measurement (that is, the Copenhagen interpretation rejects counterfactual definiteness). Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of physicists' personal beliefs and other arbitrary mental factors. Over the years, there have been many objections to aspects of Copenhagen-type interpretations, including the discontinuous and stochastic nature of the "observation" or "measurement" process, the difficulty of defining what might count as a measuring device, and the seeming reliance upon classical physics in describing such devices. Still, including all the variations, the interpretation remains one of the most commonly taught. Background Starting in 1900, investigations into atomic and subatomic phenomena forced a revision to the basic concepts of classical physics. However, it was not until a quarter-century had elapsed that the revision reached the status of a coherent theory. During the intervening period, now known as the time of the "old quantum theory", physicists worked with approximations and heuristic corrections to classical physics. Notable results from this period include Max Planck's calculation of the blackbody radiation spectrum, Albert Einstein's explanation of the photoelectric effect, Einstein and Peter Debye's work on the specific heat of solids, Niels Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, Bohr's model of the hydrogen atom and Arnold Sommerfeld's extension of the Bohr model to include relativistic effects. From 1922 through 1925, this method of heuristic corrections encountered increasing difficulties; for example, the Bohr–Sommerfeld model could not be extended from hydrogen to the next simplest case, the helium atom. The transition from the old quantum theory to full-fledged quantum physics began in 1925, when Werner Heisenberg presented a treatment of electron behavior based on discussing only "observable" quantities, meaning to Heisenberg the frequencies of light that atoms absorbed and emitted. Max Born then realized that in Heisenberg's theory, the classical variables of position and momentum would instead be represented by matrices, mathematical objects that can be multiplied together like numbers with the crucial difference that the order of multiplication matters. 
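In the matrix mechanics that grew out of this work, the fact that the order of multiplication matters is captured by the canonical commutation relation between the position and momentum matrices,

\hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar,

a relation that has no counterpart for ordinary numbers and that underlies Heisenberg's uncertainty principle.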
Erwin Schrödinger presented an equation that treated the electron as a wave, and Born discovered that the way to successfully interpret the wave function that appeared in the Schrödinger equation was as a tool for calculating probabilities. Quantum mechanics cannot easily be reconciled with everyday language and observation, and has often seemed counter-intuitive to physicists, including its inventors. The ideas grouped together as the Copenhagen interpretation suggest a way to think about how the mathematics of quantum theory relates to physical reality. Origin and use of the term The 'Copenhagen' part of the term refers to the city of Copenhagen in Denmark. During the mid-1920s, Heisenberg had been an assistant to Bohr at his institute in Copenhagen. Together they helped originate quantum mechanical theory. At the 1927 Solvay Conference, a dual talk by Max Born and Heisenberg declared "we consider quantum mechanics to be a closed theory, whose fundamental physical and mathematical assumptions are no longer susceptible of any modification." In 1929, Heisenberg gave a series of invited lectures at the University of Chicago explaining the new field of quantum mechanics. The lectures then served as the basis for his textbook, The Physical Principles of the Quantum Theory, published in 1930. In the book's preface, Heisenberg wrote: On the whole, the book contains nothing that is not to be found in previous publications, particularly in the investigations of Bohr. The purpose of the book seems to me to be fulfilled if it contributes somewhat to the diffusion of that 'Kopenhagener Geist der Quantentheorie' [Copenhagen spirit of quantum theory] if I may so express myself, which has directed the entire development of modern atomic physics. The term 'Copenhagen interpretation' suggests something more than just a spirit, such as some definite set of rules for interpreting the mathematical formalism of quantum mechanics, presumably dating back to the 1920s. However, no such text exists, and the writings of Bohr and Heisenberg contradict each other on several important issues. It appears that the particular term, with its more definite sense, was coined by Heisenberg around 1955, while criticizing alternative "interpretations" (e.g., David Bohm's) that had been developed. Lectures with the titles 'The Copenhagen Interpretation of Quantum Theory' and 'Criticisms and Counterproposals to the Copenhagen Interpretation', that Heisenberg delivered in 1955, are reprinted in the collection Physics and Philosophy. Before the book was released for sale, Heisenberg privately expressed regret for having used the term, due to its suggestion of the existence of other interpretations, that he considered to be "nonsense". In a 1960 review of Heisenberg's book, Bohr's close collaborator Léon Rosenfeld called the term an "ambiguous expression" and suggested it be discarded. However, this did not come to pass, and the term entered widespread use. Bohr's ideas in particular are distinct despite the use of his Copenhagen home in the name of the interpretation. Principles There is no uniquely definitive statement of the Copenhagen interpretation. The term encompasses the views developed by a number of scientists and philosophers during the second quarter of the 20th century. 
This lack of a single, authoritative source that establishes the Copenhagen interpretation is one difficulty with discussing it; another complication is that the philosophical background familiar to Einstein, Bohr, Heisenberg, and contemporaries is much less so to physicists and even philosophers of physics in more recent times. Bohr and Heisenberg never totally agreed on how to understand the mathematical formalism of quantum mechanics, and Bohr distanced himself from what he considered Heisenberg's more subjective interpretation. Bohr offered an interpretation that is independent of a subjective observer, or measurement, or collapse; instead, an "irreversible" or effectively irreversible process causes the decay of quantum coherence which imparts the classical behavior of "observation" or "measurement". Different commentators and researchers have associated various ideas with the term. Asher Peres remarked that very different, sometimes opposite, views are presented as "the Copenhagen interpretation" by different authors. N. David Mermin coined the phrase "Shut up and calculate!" to summarize Copenhagen-type views, a saying often misattributed to Richard Feynman and which Mermin later found insufficiently nuanced. Mermin described the Copenhagen interpretation as coming in different "versions", "varieties", or "flavors". Some basic principles generally accepted as part of the interpretation include the following: Quantum mechanics is intrinsically indeterministic. The correspondence principle: in the appropriate limit, quantum theory comes to resemble classical physics and reproduces the classical predictions. The Born rule: the wave function of a system yields probabilities for the outcomes of measurements upon that system. Complementarity: certain properties cannot be jointly defined for the same system at the same time. In order to talk about a specific property of a system, that system must be considered within the context of a specific laboratory arrangement. Observable quantities corresponding to mutually exclusive laboratory arrangements cannot be predicted together, but the consideration of multiple such mutually exclusive experiments is necessary to characterize a system. Hans Primas and Roland Omnès give a more detailed breakdown that, in addition to the above, includes the following: Quantum physics applies to individual objects. The probabilities computed by the Born rule do not require an ensemble or collection of "identically prepared" systems to understand. The results provided by measuring devices are essentially classical, and should be described in ordinary language. This was particularly emphasized by Bohr, and was accepted by Heisenberg. Per the above point, the device used to observe a system must be described in classical language, while the system under observation is treated in quantum terms. This is a particularly subtle issue for which Bohr and Heisenberg came to differing conclusions. According to Heisenberg, the boundary between classical and quantum can be shifted in either direction at the observer's discretion. That is, the observer has the freedom to move what would become known as the "Heisenberg cut" without changing any physically meaningful predictions. On the other hand, Bohr argued both systems are quantum in principle, and the object-instrument distinction (the "cut") is dictated by the experimental arrangement. 
For Bohr, the "cut" was not a change in the dynamical laws that govern the systems in question, but a change in the language applied to them. During an observation, the system must interact with a laboratory device. When that device makes a measurement, the wave function of the system collapses, irreversibly reducing to an eigenstate of the observable that is registered. The result of this process is a tangible record of the event, made by a potentiality becoming an actuality. Statements about measurements that are not actually made do not have meaning. For example, there is no meaning to the statement that a photon traversed the upper path of a Mach–Zehnder interferometer unless the interferometer were actually built in such a way that the path taken by the photon is detected and registered. Wave functions are objective, in that they do not depend upon personal opinions of individual physicists or other such arbitrary influences. There are some fundamental agreements and disagreements between the views of Bohr and Heisenberg. For example, Heisenberg emphasized a sharp "cut" between the observer (or the instrument) and the system being observed, while Bohr offered an interpretation that is independent of a subjective observer or measurement or collapse, which relies on an "irreversible" or effectively irreversible process, which could take place within the quantum system. Another issue of importance where Bohr and Heisenberg disagreed is wave–particle duality. Bohr maintained that the distinction between a wave view and a particle view was defined by a distinction between experimental setups, whereas Heisenberg held that it was defined by the possibility of viewing the mathematical formulas as referring to waves or particles. Bohr thought that a particular experimental setup would display either a wave picture or a particle picture, but not both. Heisenberg thought that every mathematical formulation was capable of both wave and particle interpretations. Nature of the wave function A wave function is a mathematical entity that provides a probability distribution for the outcomes of each possible measurement on a system. Knowledge of the wave function together with the rules for the system's evolution in time exhausts all that can be predicted about the system's behavior. Generally, Copenhagen-type interpretations deny that the wave function provides a directly apprehensible image of an ordinary material body or a discernible component of some such, or anything more than a theoretical concept. Probabilities via the Born rule The Born rule is essential to the Copenhagen interpretation. Formulated by Max Born in 1926, it gives the probability that a measurement of a quantum system will yield a given result. In its simplest form, it states that the probability density of finding a particle at a given point, when measured, is proportional to the square of the magnitude of the particle's wave function at that point. Collapse The concept of wave function collapse postulates that the wave function of a system can change suddenly and discontinuously upon measurement. Prior to a measurement, a wave function involves the various probabilities for the different potential outcomes of that measurement. But when the apparatus registers one of those outcomes, no traces of the others linger. Since Bohr did not view the wavefunction as something physical, he never talks about "collapse". Nevertheless, many physicists and philosophers associate collapse with the Copenhagen interpretation. 
Heisenberg spoke of the wave function as representing available knowledge of a system, and did not use the term "collapse", but instead termed it "reduction" of the wave function to a new state representing the change in available knowledge which occurs once a particular phenomenon is registered by the apparatus. Role of the observer Because they assert that the existence of an observed value depends upon the intercession of the observer, Copenhagen-type interpretations are sometimes called "subjective". All of the original Copenhagen protagonists considered the process of observation as mechanical and independent of the individuality of the observer. Wolfgang Pauli, for example, insisted that measurement results could be obtained and recorded by "objective registering apparatus". As Heisenberg wrote, In the 1970s and 1980s, the theory of decoherence helped to explain the appearance of quasi-classical realities emerging from quantum theory, but was insufficient to provide a technical explanation for the apparent wave function collapse. Completion by hidden variables? In metaphysical terms, the Copenhagen interpretation views quantum mechanics as providing knowledge of phenomena, but not as pointing to 'really existing objects', which it regards as residues of ordinary intuition. This makes it an epistemic theory. This may be contrasted with Einstein's view, that physics should look for 'really existing objects', making itself an ontic theory. The metaphysical question is sometimes asked: "Could quantum mechanics be extended by adding so-called "hidden variables" to the mathematical formalism, to convert it from an epistemic to an ontic theory?" The Copenhagen interpretation answers this with a strong 'No'. It is sometimes alleged, for example by J.S. Bell, that Einstein opposed the Copenhagen interpretation because he believed that the answer to that question of "hidden variables" was "yes". By contrast, Max Jammer writes "Einstein never proposed a hidden variable theory." Einstein explored the possibility of a hidden variable theory, and wrote a paper describing his exploration, but withdrew it from publication because he felt it was faulty. Acceptance among physicists During the 1930s and 1940s, views about quantum mechanics attributed to Bohr and emphasizing complementarity became commonplace among physicists. Textbooks of the time generally maintained the principle that the numerical value of a physical quantity is not meaningful or does not exist until it is measured. Prominent physicists associated with Copenhagen-type interpretations have included Lev Landau, Wolfgang Pauli, Rudolf Peierls, Asher Peres, Léon Rosenfeld, and Ray Streater. Throughout much of the 20th century, the Copenhagen tradition had overwhelming acceptance among physicists. According to a very informal poll (some people voted for multiple interpretations) conducted at a quantum mechanics conference in 1997, the Copenhagen interpretation remained the most widely accepted label that physicists applied to their own views. A similar result was found in a poll conducted in 2011. Consequences The nature of the Copenhagen interpretation is exposed by considering a number of experiments and paradoxes. Schrödinger's cat This thought experiment highlights the implications that accepting uncertainty at the microscopic level has on macroscopic objects. A cat is put in a sealed box, with its life or death made dependent on the state of a subatomic particle. 
Thus a description of the cat during the course of the experiment—having been entangled with the state of a subatomic particle—becomes a "blur" of "living and dead cat." But this cannot be accurate because it implies the cat is actually both dead and alive until the box is opened to check on it. But the cat, if it survives, will only remember being alive. Schrödinger resists "so naively accepting as valid a 'blurred model' for representing reality." How can the cat be both alive and dead? In Copenhagen-type views, the wave function reflects our knowledge of the system. The wave function (|alive⟩ + |dead⟩)/√2 means that, once the cat is observed, there is a 50% chance it will be dead, and 50% chance it will be alive. (Some versions of the Copenhagen interpretation reject the idea that a wave function can be assigned to a physical system that meets the everyday definition of "cat"; in this view, the correct quantum-mechanical description of the cat-and-particle system must include a superselection rule.) Wigner's friend "Wigner's friend" is a thought experiment intended to make that of Schrödinger's cat more striking by involving two conscious beings, traditionally known as Wigner and his friend. (In more recent literature, they may also be known as Alice and Bob, per the convention of describing protocols in information theory.) Wigner puts his friend in with the cat. The external observer believes the system is in the state (|alive⟩ + |dead⟩)/√2. However, his friend is convinced that the cat is alive, i.e. for him, the cat is in the state |alive⟩. How can Wigner and his friend see different wave functions? In a Heisenbergian view, the answer depends on the positioning of the Heisenberg cut, which can be placed arbitrarily (at least according to Heisenberg, though not to Bohr). If Wigner's friend is positioned on the same side of the cut as the external observer, his measurements collapse the wave function for both observers. If he is positioned on the cat's side, his interaction with the cat is not considered a measurement. Different Copenhagen-type interpretations take different positions as to whether observers can be placed on the quantum side of the cut. Double-slit experiment In the basic version of this experiment, a light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles (not waves); the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). Such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. According to Bohr's complementarity principle, light is neither a wave nor a stream of particles. A particular experiment can demonstrate particle behavior (passing through a definite slit) or wave behavior (interference), but not both at the same time. The same experiment has been performed for light, electrons, atoms, and molecules. 
The extremely small de Broglie wavelength of objects with larger mass makes experiments increasingly difficult, but in general quantum mechanics considers all matter as possessing both particle and wave behaviors. Einstein–Podolsky–Rosen paradox This thought experiment involves a pair of particles prepared in what later authors would refer to as an entangled state. In a 1935 paper, Einstein, Boris Podolsky, and Nathan Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is forbidden by the theory of relativity. They invoked a principle, later known as the "Einstein–Podolsky–Rosen (EPR) criterion of reality", positing that, "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity". From this, they inferred that the second particle must have a definite value of position and of momentum prior to either being measured. Bohr's response to the EPR paper was published in the Physical Review later that same year. He argued that EPR had reasoned fallaciously. Because measurements of position and of momentum are complementary, making the choice to measure one excludes the possibility of measuring the other. Consequently, a fact deduced regarding one arrangement of laboratory apparatus could not be combined with a fact deduced by means of the other, and so, the inference of predetermined position and momentum values for the second particle was not valid. Bohr concluded that EPR's "arguments do not justify their conclusion that the quantum description turns out to be essentially incomplete." Criticism Incompleteness and indeterminism Einstein was an early and persistent supporter of objective reality. Bohr and Heisenberg advanced the position that no physical property could be understood without an act of measurement, while Einstein refused to accept this. Abraham Pais recalled a walk with Einstein when the two discussed quantum mechanics: "Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it." While Einstein did not doubt that quantum mechanics was a correct physical theory in that it gave correct predictions, he maintained that it could not be a complete theory. The most famous product of his efforts to argue the incompleteness of quantum theory is the Einstein–Podolsky–Rosen thought experiment, which was intended to show that physical properties like position and momentum have values even if not measured. The argument of EPR was not generally persuasive to other physicists. Carl Friedrich von Weizsäcker, while participating in a colloquium at Cambridge, denied that the Copenhagen interpretation asserted "What cannot be observed does not exist". Instead, he suggested that the Copenhagen interpretation follows the principle "What is observed certainly exists; about what is not observed we are still free to make suitable assumptions. We use that freedom to avoid paradoxes." 
Einstein was likewise dissatisfied with the indeterminism of quantum theory. Regarding the possibility of randomness in nature, Einstein said that he was "convinced that He [God] does not throw dice." Bohr, in response, reputedly said that "it cannot be for us to tell God, how he is to run the world". The Heisenberg cut Much criticism of Copenhagen-type interpretations has focused on the need for a classical domain where observers or measuring devices can reside, and the imprecision of how the boundary between quantum and classical might be defined. This boundary came to be termed the Heisenberg cut (while John Bell derisively called it the "shifty split"). As typically portrayed, Copenhagen-type interpretations involve two different kinds of time evolution for wave functions, the deterministic flow according to the Schrödinger equation and the probabilistic jump during measurement, without a clear criterion for when each kind applies. Why should these two different processes exist, when physicists and laboratory equipment are made of the same matter as the rest of the universe? And if there is somehow a split, where should it be placed? Steven Weinberg writes that the traditional presentation gives "no way to locate the boundary between the realms in which [...] quantum mechanics does or does not apply." The problem of thinking in terms of classical measurements of a quantum system becomes particularly acute in the field of quantum cosmology, where the quantum system is the universe. How does an observer stand outside the universe in order to measure it, and who was there to observe the universe in its earliest stages? Advocates of Copenhagen-type interpretations have disputed the seriousness of these objections. Rudolf Peierls noted that "the observer does not have to be contemporaneous with the event"; for example, we study the early universe through the cosmic microwave background, and we can apply quantum mechanics to that just as well as to any electromagnetic field. Likewise, Asher Peres argued that physicists are, conceptually, outside those degrees of freedom that cosmology studies, and applying quantum mechanics to the radius of the universe while neglecting the physicists in it is no different from quantizing the electric current in a superconductor while neglecting the atomic-level details. Alternatives A large number of alternative interpretations have appeared, sharing some aspects of the Copenhagen interpretation while providing alternatives to other aspects. The ensemble interpretation is similar; it offers an interpretation of the wave function, but not for single particles. The consistent histories interpretation advertises itself as "Copenhagen done right". More recently, interpretations inspired by quantum information theory like QBism and relational quantum mechanics have appeared. Experts on quantum foundational issues continue to favor the Copenhagen interpretation over other alternatives. Physicists who have suggested that the Copenhagen tradition needs to be built upon or extended include Rudolf Haag and Anton Zeilinger. Under realism and determinism, if the wave function is regarded as ontologically real, and collapse is entirely rejected, a many-worlds interpretation results. If wave function collapse is regarded as ontologically real as well, an objective collapse theory is obtained. Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. 
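For a single spinless particle of mass m, this deterministic reformulation is conventionally expressed by supplementing the Schrödinger equation with a guiding equation for the particle's position Q(t); the form quoted below is the standard single-particle textbook version, added here only for concreteness (the many-particle version is analogous but couples all the positions nonlocally).

```latex
% De Broglie–Bohm guiding equation for a single spinless particle of mass m:
\[ \frac{d\mathbf{Q}}{dt}
   = \frac{\hbar}{m}\,
     \operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)
     \Bigg|_{\mathbf{x}=\mathbf{Q}(t)} \]
```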
It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. The transactional interpretation is also explicitly nonlocal. Some physicists espoused views in the "Copenhagen spirit" and then went on to advocate other interpretations. For example, David Bohm and Alfred Landé both wrote textbooks that put forth ideas in the Bohr–Heisenberg tradition, and later promoted nonlocal hidden variables and an ensemble interpretation respectively. John Archibald Wheeler began his career as an "apostle of Niels Bohr"; he then supervised the PhD thesis of Hugh Everett that proposed the many-worlds interpretation. After supporting Everett's work for several years, he began to distance himself from the many-worlds interpretation in the 1970s. Late in life, he wrote that while the Copenhagen interpretation might fairly be called "the fog from the north", it "remains the best interpretation of the quantum that we have". Other physicists, while influenced by the Copenhagen tradition, have expressed frustration at how it took the mathematical formalism of quantum theory as given, rather than trying to understand how it might arise from something more fundamental. (E. T. Jaynes described the mathematical formalism of quantum physics as "a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature—all scrambled up together by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble".) This dissatisfaction has motivated new interpretative variants as well as technical work in quantum foundations. See also Bohr–Einstein debates Einstein's thought experiments Fifth Solvay Conference Philosophical interpretation of classical physics Physical ontology Popper's experiment Von Neumann–Wigner interpretation Notes References Further reading Interpretations of quantum mechanics Quantum measurement University of Copenhagen
Copenhagen interpretation
[ "Physics" ]
5,739
[ "Interpretations of quantum mechanics", "Quantum measurement", "Quantum mechanics" ]
5,869
https://en.wikipedia.org/wiki/Category%20theory
Category theory is a general theory of mathematical structures and their relations. It was introduced by Samuel Eilenberg and Saunders Mac Lane in the middle of the 20th century in their foundational work on algebraic topology. Category theory is used in almost all areas of mathematics. In particular, many constructions of new mathematical objects from previous ones that appear similarly in several contexts are conveniently expressed and unified in terms of categories. Examples include quotient spaces, direct products, completion, and duality. Many areas of computer science also rely on category theory, such as functional programming and semantics. A category is formed by two sorts of objects: the objects of the category, and the morphisms, which relate two objects called the source and the target of the morphism. Metaphorically, a morphism is an arrow that maps its source to its target. Morphisms can be composed if the target of the first morphism equals the source of the second one. Morphism composition has similar properties as function composition (associativity and existence of an identity morphism for each object). Morphisms are often some sort of functions, but this is not always the case. For example, a monoid may be viewed as a category with a single object, whose morphisms are the elements of the monoid. The second fundamental concept of category theory is the concept of a functor, which plays the role of a morphism between two categories and : it maps objects of to objects of and morphisms of to morphisms of in such a way that sources are mapped to sources, and targets are mapped to targets (or, in the case of a contravariant functor, sources are mapped to targets and vice-versa). A third fundamental concept is a natural transformation that may be viewed as a morphism of functors. Categories, objects, and morphisms Categories A category consists of the following three mathematical entities: A class , whose elements are called objects; A class , whose elements are called morphisms or maps or arrows. Each morphism has a source object and target object .The expression would be verbally stated as " is a morphism from to ".The expression – alternatively expressed as , , or – denotes the hom-class of all morphisms from to . A binary operation , called composition of morphisms, such that for any three objects , , and , we haveThe composition of and is written as or , governed by two axioms: Associativity: If , , and then Identity: For every object , there exists a morphism (also denoted as ) called the identity morphism for , such that for every morphism , we haveFrom the axioms, it can be proved that there is exactly one identity morphism for every object. Examples The category Set As the class of objects , we choose the class of all sets. As the class of morphisms , we choose the class of all functions. Therefore, for two objects and , i.e. sets, we have to be the class of all functions such that . The composition of morphisms is simply the usual function composition, i.e. for two morphisms and , we have , , which is obviously associative. Furthermore, for every object we have the identity morphism to be the identity map , on Morphisms Relations among morphisms (such as ) are often depicted using commutative diagrams, with "points" (corners) representing objects and "arrows" representing morphisms. Morphisms can have any of the following properties. A morphism is: a monomorphism (or monic) if implies for all morphisms . an epimorphism (or epic) if implies for all morphisms . 
a bimorphism if f is both epic and monic. an isomorphism if there exists a morphism such that . an endomorphism if . end(a) denotes the class of endomorphisms of a. an automorphism if f is both an endomorphism and an isomorphism. aut(a) denotes the class of automorphisms of a. a retraction if a right inverse of f exists, i.e. if there exists a morphism with . a section if a left inverse of f exists, i.e. if there exists a morphism with . Every retraction is an epimorphism, and every section is a monomorphism. Furthermore, the following three statements are equivalent: f is a monomorphism and a retraction; f is an epimorphism and a section; f is an isomorphism. Functors Functors are structure-preserving maps between categories. They can be thought of as morphisms in the category of all (small) categories. A (covariant) functor F from a category C to a category D, written , consists of: for each object x in C, an object F(x) in D; and for each morphism in C, a morphism in D, such that the following two properties hold: For every object x in C, ; For all morphisms and , . A contravariant functor is like a covariant functor, except that it "turns morphisms around" ("reverses all the arrows"). More specifically, every morphism in C must be assigned to a morphism in D. In other words, a contravariant functor acts as a covariant functor from the opposite category Cop to D. Natural transformations A natural transformation is a relation between two functors. Functors often describe "natural constructions" and natural transformations then describe "natural homomorphisms" between two such constructions. Sometimes two quite different constructions yield "the same" result; this is expressed by a natural isomorphism between the two functors. If F and G are (covariant) functors between the categories C and D, then a natural transformation η from F to G associates to every object X in C a morphism in D such that for every morphism in C, we have ; this means that the following diagram is commutative: The two functors F and G are called naturally isomorphic if there exists a natural transformation from F to G such that ηX is an isomorphism for every object X in C. Other concepts Universal constructions, limits, and colimits Using the language of category theory, many areas of mathematical study can be categorized. Categories include sets, groups and topologies. Each category is distinguished by properties that all its objects have in common, such as the empty set or the product of two topologies, yet in the definition of a category, objects are considered atomic, i.e., we do not know whether an object A is a set, a topology, or any other abstract concept. Hence, the challenge is to define special objects without referring to the internal structure of those objects. To define the empty set without referring to elements, or the product topology without referring to open sets, one can characterize these objects in terms of their relations to other objects, as given by the morphisms of the respective categories. Thus, the task is to find universal properties that uniquely determine the objects of interest. Numerous important constructions can be described in a purely categorical way if the category limit can be developed and dualized to yield the notion of a colimit. Equivalent categories It is a natural question to ask: under which conditions can two categories be considered essentially the same, in the sense that theorems about one category can readily be transformed into theorems about the other category? 
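Before turning to that question, the functor laws and the naturality condition described above can be illustrated informally in code. The sketch below is written in Python (an assumption of this illustration, not something drawn from the sources above), treating Python functions as morphisms in Set, the list construction as a functor, and the "first element, if any" operation as a component of a natural transformation from the list functor to the "maybe" functor.

```python
from typing import Callable, List, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(g: Callable[[B], C], f: Callable[[A], B]) -> Callable[[A], C]:
    """Composition of morphisms: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

def list_map(f: Callable[[A], B]) -> Callable[[List[A]], List[B]]:
    """Action of the list functor on a morphism f : A -> B."""
    return lambda xs: [f(x) for x in xs]

def option_map(f: Callable[[A], B]) -> Callable[[Optional[A]], Optional[B]]:
    """Action of the 'maybe' functor on a morphism f : A -> B."""
    return lambda x: None if x is None else f(x)

def safe_head(xs: List[A]) -> Optional[A]:
    """A component of a natural transformation List => Maybe."""
    return xs[0] if xs else None

f = lambda n: n + 1   # a morphism int -> int
g = str               # a morphism int -> str
xs = [1, 2, 3]

# Functor laws: identities and composition are preserved.
assert list_map(lambda x: x)(xs) == xs
assert list_map(compose(g, f))(xs) == compose(list_map(g), list_map(f))(xs)

# Naturality: mapping then taking the head equals taking the head then mapping.
assert safe_head(list_map(f)(xs)) == option_map(f)(safe_head(xs))
```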
The major tool one employs to describe such a situation is called equivalence of categories, which is given by appropriate functors between two categories. Categorical equivalence has found numerous applications in mathematics. Further concepts and results The definitions of categories and functors provide only the very basics of categorical algebra; additional important topics are listed below. Although there are strong interrelations between all of these topics, the given order can be considered as a guideline for further reading. The functor category DC has as objects the functors from C to D and as morphisms the natural transformations of such functors. The Yoneda lemma is one of the most famous basic results of category theory; it describes representable functors in functor categories. Duality: Every statement, theorem, or definition in category theory has a dual which is essentially obtained by "reversing all the arrows". If one statement is true in a category C then its dual is true in the dual category Cop. This duality, which is transparent at the level of category theory, is often obscured in applications and can lead to surprising relationships. Adjoint functors: A functor can be left (or right) adjoint to another functor that maps in the opposite direction. Such a pair of adjoint functors typically arises from a construction defined by a universal property; this can be seen as a more abstract and powerful view on universal properties. Higher-dimensional categories Many of the above concepts, especially equivalence of categories, adjoint functor pairs, and functor categories, can be situated into the context of higher-dimensional categories. Briefly, if we consider a morphism between two objects as a "process taking us from one object to another", then higher-dimensional categories allow us to profitably generalize this by considering "higher-dimensional processes". For example, a (strict) 2-category is a category together with "morphisms between morphisms", i.e., processes which allow us to transform one morphism into another. We can then "compose" these "bimorphisms" both horizontally and vertically, and we require a 2-dimensional "exchange law" to hold, relating the two composition laws. In this context, the standard example is Cat, the 2-category of all (small) categories, and in this example, bimorphisms of morphisms are simply natural transformations of morphisms in the usual sense. Another basic example is to consider a 2-category with a single object; these are essentially monoidal categories. Bicategories are a weaker notion of 2-dimensional categories in which the composition of morphisms is not strictly associative, but only associative "up to" an isomorphism. This process can be extended for all natural numbers n, and these are called n-categories. There is even a notion of ω-category corresponding to the ordinal number ω. Higher-dimensional categories are part of the broader mathematical field of higher-dimensional algebra, a concept introduced by Ronald Brown. For a conversational introduction to these ideas, see John Baez, 'A Tale of n-categories' (1996). Historical notes Whilst specific examples of functors and natural transformations had been given by Samuel Eilenberg and Saunders Mac Lane in a 1942 paper on group theory, these concepts were introduced in a more general sense, together with the additional notion of categories, in a 1945 paper by the same authors (who discussed applications of category theory to the field of algebraic topology). 
Their work was an important part of the transition from intuitive and geometric homology to homological algebra, Eilenberg and Mac Lane later writing that their goal was to understand natural transformations, which first required the definition of functors, then categories. Stanislaw Ulam, and some writing on his behalf, have claimed that related ideas were current in the late 1930s in Poland. Eilenberg was Polish, and studied mathematics in Poland in the 1930s. Category theory is also, in some sense, a continuation of the work of Emmy Noether (one of Mac Lane's teachers) in formalizing abstract processes; Noether realized that understanding a type of mathematical structure requires understanding the processes that preserve that structure (homomorphisms). Eilenberg and Mac Lane introduced categories for understanding and formalizing the processes (functors) that relate topological structures to algebraic structures (topological invariants) that characterize them. Category theory was originally introduced for the need of homological algebra, and widely extended for the need of modern algebraic geometry (scheme theory). Category theory may be viewed as an extension of universal algebra, as the latter studies algebraic structures, and the former applies to any kind of mathematical structure and studies also the relationships between structures of different nature. For this reason, it is used throughout mathematics. Applications to mathematical logic and semantics (categorical abstract machine) came later. Certain categories called topoi (singular topos) can even serve as an alternative to axiomatic set theory as a foundation of mathematics. A topos can also be considered as a specific type of category with two additional topos axioms. These foundational applications of category theory have been worked out in fair detail as a basis for, and justification of, constructive mathematics. Topos theory is a form of abstract sheaf theory, with geometric origins, and leads to ideas such as pointless topology. Categorical logic is now a well-defined field based on type theory for intuitionistic logics, with applications in functional programming and domain theory, where a cartesian closed category is taken as a non-syntactic description of a lambda calculus. At the very least, category theoretic language clarifies what exactly these related areas have in common (in some abstract sense). Category theory has been applied in other fields as well, see applied category theory. For example, John Baez has shown a link between Feynman diagrams in physics and monoidal categories. Another application of category theory, more specifically topos theory, has been made in mathematical music theory, see for example the book The Topos of Music, Geometric Logic of Concepts, Theory, and Performance by Guerino Mazzola. More recent efforts to introduce undergraduates to categories as a foundation for mathematics include those of William Lawvere and Rosebrugh (2003) and Lawvere and Stephen Schanuel (1997) and Mirroslav Yotov (2012). See also Domain theory Enriched category theory Glossary of category theory Group theory Higher category theory Higher-dimensional algebra Important publications in category theory Lambda calculus Outline of category theory Timeline of category theory and related mathematics Applied category theory Notes References Citations Sources . . . Notes for a course offered as part of the MSc. in Mathematical Logic, Manchester University. . , draft of a book. Based on . 
Further reading External links Theory and Application of Categories, an electronic journal of category theory, full text, free, since 1995. Cahiers de Topologie et Géométrie Différentielle Catégoriques, an electronic journal of category theory, full text, free, founded in 1957. nLab, a wiki project on mathematics, physics and philosophy with emphasis on the n-categorical point of view. The n-Category Café, essentially a colloquium on topics in category theory. Category Theory, a web page of links to lecture notes and freely available books on category theory. List of academic conferences on category theory. WildCats is a category theory package for Mathematica. Manipulation and visualization of objects, morphisms, categories, functors, natural transformations, universal properties. Video archive of recorded talks relevant to categories, logic and the foundations of physics. Interactive Web page which generates examples of categorical constructions in the category of finite sets. Category Theory for the Sciences, an instruction on category theory as a tool throughout the sciences. Category Theory for Programmers, a book in blog form explaining category theory for computer programmers. Introduction to category theory. Higher category theory Foundations of mathematics
Category theory
[ "Mathematics" ]
3,259
[ "Functions and mappings", "Mathematical structures", "Foundations of mathematics", "Mathematical objects", "Higher category theory", "Fields of abstract algebra", "Category theory", "Mathematical relations" ]
5,905
https://en.wikipedia.org/wiki/Chalcogen
The chalcogens (ore forming) are the chemical elements in group 16 of the periodic table. This group is also known as the oxygen family. Group 16 consists of the elements oxygen (O), sulfur (S), selenium (Se), tellurium (Te), and the radioactive elements polonium (Po) and livermorium (Lv). Often, oxygen is treated separately from the other chalcogens, sometimes even excluded from the scope of the term "chalcogen" altogether, due to its very different chemical behavior from sulfur, selenium, tellurium, and polonium. The word "chalcogen" is derived from a combination of the Greek word khalkós, principally meaning copper (the term was also used for bronze, brass, any metal in the poetic sense, ore and coin), and the Latinized Greek word genēs, meaning born or produced. Sulfur has been known since antiquity, and oxygen was recognized as an element in the 18th century. Selenium, tellurium and polonium were discovered in the 19th century, and livermorium in 2000. All of the chalcogens have six valence electrons, leaving them two electrons short of a full outer shell. Their most common oxidation states are −2, +2, +4, and +6. They have relatively low atomic radii, especially the lighter ones. All of the naturally occurring chalcogens have some role in biological functions, either as a nutrient or a toxin. Selenium is an important nutrient (among others as a building block of selenocysteine) but is also commonly toxic. Tellurium often has unpleasant effects (although some organisms can use it), and polonium (especially the isotope polonium-210) is always harmful as a result of its radioactivity. Sulfur has more than 20 allotropes, oxygen has nine, selenium has at least eight, polonium has two, and only one crystal structure of tellurium has so far been discovered. There are numerous organic chalcogen compounds. Not counting oxygen, organic sulfur compounds are generally the most common, followed by organic selenium compounds and organic tellurium compounds. This trend also occurs with chalcogen pnictides and compounds containing chalcogens and carbon group elements. Oxygen is generally obtained by separation of air into nitrogen and oxygen. Sulfur is extracted from oil and natural gas. Selenium and tellurium are produced as byproducts of copper refining. Polonium is most available in naturally occurring actinide-containing materials. Livermorium has been synthesized in particle accelerators. The primary use of elemental oxygen is in steelmaking. Sulfur is mostly converted into sulfuric acid, which is heavily used in the chemical industry. Selenium's most common application is glassmaking. Tellurium compounds are mostly used in optical disks, electronic devices, and solar cells. Some of polonium's applications are due to its radioactivity. Properties Atomic and physical Chalcogens show similar patterns in electron configuration, especially in the outermost shells, where they all have the same number of valence electrons, resulting in similar trends in chemical behavior: All chalcogens have six valence electrons. All of the solid, stable chalcogens are soft and do not conduct heat well. Electronegativity decreases towards the chalcogens with higher atomic numbers. Density, melting and boiling points, and atomic and ionic radii tend to increase towards the chalcogens with higher atomic numbers. 
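As an added illustration (the outer-shell configurations listed below are standard textbook values, not figures drawn from this article, and synthetic livermorium is omitted), the shared ns²np⁴ valence pattern behind these trends can be tabulated as follows.

```python
# Outer-shell (valence) configurations of the naturally occurring chalcogens:
# all follow the ns^2 np^4 pattern, i.e. six valence electrons,
# two short of a filled outer shell.
valence_configurations = {
    "O":  "2s2 2p4",
    "S":  "3s2 3p4",
    "Se": "4s2 4p4",
    "Te": "5s2 5p4",
    "Po": "6s2 6p4",
}

for symbol, config in valence_configurations.items():
    s_electrons, p_electrons = 2, 4
    total = s_electrons + p_electrons
    print(f"{symbol:>2}: {config}  ({total} valence electrons, "
          f"{8 - total} short of a full outer shell)")
```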
Isotopes Out of the six known chalcogens, one (oxygen) has an atomic number equal to a nuclear magic number, which means that their atomic nuclei tend to have increased stability towards radioactive decay. Oxygen has three stable isotopes, and 14 unstable ones. Sulfur has four stable isotopes, 20 radioactive ones, and one isomer. Selenium has six observationally stable or nearly stable isotopes, 26 radioactive isotopes, and 9 isomers. Tellurium has eight stable or nearly stable isotopes, 31 unstable ones, and 17 isomers. Polonium has 42 isotopes, none of which are stable. It has an additional 28 isomers. In addition to the stable isotopes, some radioactive chalcogen isotopes occur in nature, either because they are decay products, such as 210Po, because they are primordial, such as 82Se, because of cosmic ray spallation, or via nuclear fission of uranium. Livermorium isotopes 288Lv through 293Lv have been discovered; the most stable livermorium isotope is 293Lv, which has a half-life of 0.061 seconds. With the exception of livermorium, all chalcogens have at least one naturally occurring radioisotope: oxygen has trace 15O, sulfur has trace 35S, selenium has 82Se, tellurium has 128Te and 130Te, and polonium has 210Po. Among the lighter chalcogens (oxygen and sulfur), the most neutron-poor isotopes undergo proton emission, the moderately neutron-poor isotopes undergo electron capture or β+ decay, the moderately neutron-rich isotopes undergo β− decay, and the most neutron rich isotopes undergo neutron emission. The middle chalcogens (selenium and tellurium) have similar decay tendencies as the lighter chalcogens, but no proton-emitting isotopes have been observed, and some of the most neutron-deficient isotopes of tellurium undergo alpha decay. Polonium isotopes tend to decay via alpha or beta decay. Isotopes with nonzero nuclear spins are more abundant in nature among the chalcogens selenium and tellurium than they are with sulfur. Allotropes Oxygen's most common allotrope is diatomic oxygen, or O2, a reactive paramagnetic molecule that is ubiquitous to aerobic organisms and has a blue color in its liquid state. Another allotrope is O3, or ozone, which is three oxygen atoms bonded together in a bent formation. There is also an allotrope called tetraoxygen, or O4, and six allotropes of solid oxygen including "red oxygen", which has the formula O8. Sulfur has over 20 known allotropes, which is more than any other element except carbon. The most common allotropes are in the form of eight-atom rings, but other molecular allotropes that contain as few as two atoms or as many as 20 are known. Other notable sulfur allotropes include rhombic sulfur and monoclinic sulfur. Rhombic sulfur is the more stable of the two allotropes. Monoclinic sulfur takes the form of long needles and is formed when liquid sulfur is cooled to slightly below its melting point. The atoms in liquid sulfur are generally in the form of long chains, but above 190 °C, the chains begin to break down. If liquid sulfur above 190 °C is frozen very rapidly, the resulting sulfur is amorphous or "plastic" sulfur. Gaseous sulfur is a mixture of diatomic sulfur (S2) and 8-atom rings. Selenium has at least eight distinct allotropes. The gray allotrope, commonly referred to as the "metallic" allotrope, despite not being a metal, is stable and has a hexagonal crystal structure. The gray allotrope of selenium is soft, with a Mohs hardness of 2, and brittle. Four other allotropes of selenium are metastable. 
These include two monoclinic red allotropes and two amorphous allotropes, one of which is red and one of which is black. The red allotrope converts to the black allotrope in the presence of heat. The gray allotrope of selenium is made from spirals on selenium atoms, while one of the red allotropes is made of stacks of selenium rings (Se8). Tellurium is not known to have any allotropes, although its typical form is hexagonal. Polonium has two allotropes, which are known as α-polonium and β-polonium. α-polonium has a cubic crystal structure and converts to the rhombohedral β-polonium at 36 °C. The chalcogens have varying crystal structures. Oxygen's crystal structure is monoclinic, sulfur's is orthorhombic, selenium and tellurium have the hexagonal crystal structure, while polonium has a cubic crystal structure. Chemical Oxygen, sulfur, and selenium are nonmetals, and tellurium is a metalloid, meaning that its chemical properties are between those of a metal and those of a nonmetal. It is not certain whether polonium is a metal or a metalloid. Some sources refer to polonium as a metalloid, although it has some metallic properties. Also, some allotropes of selenium display characteristics of a metalloid, even though selenium is usually considered a nonmetal. Even though oxygen is a chalcogen, its chemical properties are different from those of other chalcogens. One reason for this is that the heavier chalcogens have vacant d-orbitals. Oxygen's electronegativity is also much higher than those of the other chalcogens. This makes oxygen's electric polarizability several times lower than those of the other chalcogens. For covalent bonding a chalcogen may accept two electrons according to the octet rule, leaving two lone pairs. When an atom forms two single bonds, they form an angle between 90° and 120°. In 1+ cations, such as , a chalcogen forms three molecular orbitals arranged in a trigonal pyramidal fashion and one lone pair. Double bonds are also common in chalcogen compounds, for example in chalcogenates (see below). The oxidation number of the most common chalcogen compounds with positive metals is −2. However the tendency for chalcogens to form compounds in the −2 state decreases towards the heavier chalcogens. Other oxidation numbers, such as −1 in pyrite and peroxide, do occur. The highest formal oxidation number is +6. This oxidation number is found in sulfates, selenates, tellurates, polonates, and their corresponding acids, such as sulfuric acid. Oxygen is the most electronegative element except for fluorine, and forms compounds with almost all of the chemical elements, including some of the noble gases. It commonly bonds with many metals and metalloids to form oxides, including iron oxide, titanium oxide, and silicon oxide. Oxygen's most common oxidation state is −2, and the oxidation state −1 is also relatively common. With hydrogen it forms water and hydrogen peroxide. Organic oxygen compounds are ubiquitous in organic chemistry. Sulfur's oxidation states are −2, +2, +4, and +6. Sulfur-containing analogs of oxygen compounds often have the prefix thio-. Sulfur's chemistry is similar to oxygen's, in many ways. One difference is that sulfur-sulfur double bonds are far weaker than oxygen-oxygen double bonds, but sulfur-sulfur single bonds are stronger than oxygen-oxygen single bonds. Organic sulfur compounds such as thiols have a strong specific smell, and a few are utilized by some organisms. Selenium's oxidation states are −2, +4, and +6. 
Selenium, like most chalcogens, bonds with oxygen. There are some organic selenium compounds, such as selenoproteins. Tellurium's oxidation states are −2, +2, +4, and +6. Tellurium forms the oxides tellurium monoxide, tellurium dioxide, and tellurium trioxide. Polonium's oxidation states are +2 and +4. There are many acids containing chalcogens, including sulfuric acid, sulfurous acid, selenic acid, and telluric acid. All hydrogen chalcogenides are toxic except for water. Oxygen ions often come in the forms of oxide ions (), peroxide ions (), and hydroxide ions (). Sulfur ions generally come in the form of sulfides (), bisulfides (), sulfites (), sulfates (), and thiosulfates (). Selenium ions usually come in the form of selenides (), selenites () and selenates (). Tellurium ions often come in the form of tellurates (). Molecules containing metal bonded to chalcogens are common as minerals. For example, pyrite (FeS2) is an iron ore, and the rare mineral calaverite is the ditelluride . Although all group 16 elements of the periodic table, including oxygen, can be defined as chalcogens, oxygen and oxides are usually distinguished from chalcogens and chalcogenides. The term chalcogenide is more commonly reserved for sulfides, selenides, and tellurides, rather than for oxides. Except for polonium, the chalcogens are all fairly similar to each other chemically. They all form X2− ions when reacting with electropositive metals. Sulfide minerals and analogous compounds produce gases upon reaction with oxygen. Compounds With halogens Chalcogens also form compounds with halogens known as chalcohalides, or chalcogen halides. The majority of simple chalcogen halides are well-known and widely used as chemical reagents. However, more complicated chalcogen halides, such as sulfenyl, sulfonyl, and sulfuryl halides, are less well known to science. Out of the compounds consisting purely of chalcogens and halogens, there are a total of 13 chalcogen fluorides, nine chalcogen chlorides, eight chalcogen bromides, and six chalcogen iodides that are known. The heavier chalcogen halides often have significant molecular interactions. Sulfur fluorides with low valences are fairly unstable and little is known about their properties. However, sulfur fluorides with high valences, such as sulfur hexafluoride, are stable and well-known. Sulfur tetrafluoride is also a well-known sulfur fluoride. Certain selenium fluorides, such as selenium difluoride, have been produced in small amounts. The crystal structures of both selenium tetrafluoride and tellurium tetrafluoride are known. Chalcogen chlorides and bromides have also been explored. In particular, selenium dichloride and sulfur dichloride can react to form organic selenium compounds. Dichalcogen dihalides, such as Se2Cl2 also are known to exist. There are also mixed chalcogen-halogen compounds. These include SeSX, with X being chlorine or bromine. Such compounds can form in mixtures of sulfur dichloride and selenium halides. These compounds have been fairly recently structurally characterized, as of 2008. In general, diselenium and disulfur chlorides and bromides are useful chemical reagents. Chalcogen halides with attached metal atoms are soluble in organic solutions. One example of such a compound is . Unlike selenium chlorides and bromides, selenium iodides have not been isolated, as of 2008, although it is likely that they occur in solution. Diselenium diiodide, however, does occur in equilibrium with selenium atoms and iodine molecules. 
Some tellurium halides with low valences, such as and , form polymers when in the solid state. These tellurium halides can be synthesized by the reduction of pure tellurium with superhydride and reacting the resulting product with tellurium tetrahalides. Ditellurium dihalides tend to get less stable as the halides become lower in atomic number and atomic mass. Tellurium also forms iodides with even fewer iodine atoms than diiodides. These include TeI and Te2I. These compounds have extended structures in the solid state. Halogens and chalcogens can also form halochalcogenate anions. Organic Alcohols, phenols and other similar compounds contain oxygen. However, in thiols, selenols and tellurols; sulfur, selenium, and tellurium replace oxygen. Thiols are better known than selenols or tellurols. Aside from alcohols, thiols are the most stable chalcogenols and tellurols are the least stable, being unstable in heat or light. Other organic chalcogen compounds include thioethers, selenoethers and telluroethers. Some of these, such as dimethyl sulfide, diethyl sulfide, and dipropyl sulfide are commercially available. Selenoethers are in the form of R2Se or RSeR. Telluroethers such as dimethyl telluride are typically prepared in the same way as thioethers and selenoethers. Organic chalcogen compounds, especially organic sulfur compounds, have the tendency to smell unpleasant. Dimethyl telluride also smells unpleasant, and selenophenol is renowned for its "metaphysical stench". There are also thioketones, selenoketones, and telluroketones. Out of these, thioketones are the most well-studied with 80% of chalcogenoketones papers being about them. Selenoketones make up 16% of such papers and telluroketones make up 4% of them. Thioketones have well-studied non-linear electric and photophysical properties. Selenoketones are less stable than thioketones and telluroketones are less stable than selenoketones. Telluroketones have the highest level of polarity of chalcogenoketones. With metals There is a very large number of metal chalcogenides. There are also ternary compounds containing alkali metals and transition metals. Highly metal-rich metal chalcogenides, such as Lu7Te and Lu8Te have domains of the metal's crystal lattice containing chalcogen atoms. While these compounds do exist, analogous chemicals that contain lanthanum, praseodymium, gadolinium, holmium, terbium, or ytterbium have not been discovered, as of 2008. The boron group metals aluminum, gallium, and indium also form bonds to chalcogens. The Ti3+ ion forms chalcogenide dimers such as TiTl5Se8. Metal chalcogenide dimers also occur as lower tellurides, such as Zr5Te6. Elemental chalcogens react with certain lanthanide compounds to form lanthanide clusters rich in chalcogens. Uranium(IV) chalcogenol compounds also exist. There are also transition metal chalcogenols which have potential to serve as catalysts and stabilize nanoparticles. With pnictogens Compounds with chalcogen-phosphorus bonds have been explored for more than 200 years. These compounds include unsophisticated phosphorus chalcogenides as well as large molecules with biological roles and phosphorus-chalcogen compounds with metal clusters. These compounds have numerous applications, including organo-phosphate insecticides, strike-anywhere matches and quantum dots. A total of 130,000 compounds with at least one phosphorus-sulfur bond, 6000 compounds with at least one phosphorus-selenium bond, and 350 compounds with at least one phosphorus-tellurium bond have been discovered. 
The decrease in the number of chalcogen-phosphorus compounds further down the periodic table is due to diminishing bond strength. Such compounds tend to have at least one phosphorus atom in the center, surrounded by four chalcogens and side chains. However, some phosphorus-chalcogen compounds also contain hydrogen (such as secondary phosphine chalcogenides) or nitrogen (such as dichalcogenoimidodiphosphates). Phosphorus selenides are typically harder to handle that phosphorus sulfides, and compounds in the form PxTey have not been discovered. Chalcogens also bond with other pnictogens, such as arsenic, antimony, and bismuth. Heavier chalcogen pnictides tend to form ribbon-like polymers instead of individual molecules. Chemical formulas of these compounds include Bi2S3 and Sb2Se3. Ternary chalcogen pnictides are also known. Examples of these include P4O6Se and P3SbS3. salts containing chalcogens and pnictogens also exist. Almost all chalcogen pnictide salts are typically in the form of [PnxE4x]3−, where Pn is a pnictogen and E is a chalcogen. Tertiary phosphines can react with chalcogens to form compounds in the form of R3PE, where E is a chalcogen. When E is sulfur, these compounds are relatively stable, but they are less so when E is selenium or tellurium. Similarly, secondary phosphines can react with chalcogens to form secondary phosphine chalcogenides. However, these compounds are in a state of equilibrium with chalcogenophosphinous acid. Secondary phosphine chalcogenides are weak acids. Binary compounds consisting of antimony or arsenic and a chalcogen. These compounds tend to be colorful and can be created by a reaction of the constituent elements at temperatures of . Other Chalcogens form single bonds and double bonds with other carbon group elements than carbon, such as silicon, germanium, and tin. Such compounds typically form from a reaction of carbon group halides and chalcogenol salts or chalcogenol bases. Cyclic compounds with chalcogens, carbon group elements, and boron atoms exist, and occur from the reaction of boron dichalcogenates and carbon group metal halides. Compounds in the form of M-E, where M is silicon, germanium, or tin, and E is sulfur, selenium or tellurium have been discovered. These form when carbon group hydrides react or when heavier versions of carbenes react. Sulfur and tellurium can bond with organic compounds containing both silicon and phosphorus. All of the chalcogens form hydrides. In some cases this occurs with chalcogens bonding with two hydrogen atoms. However tellurium hydride and polonium hydride are both volatile and highly labile. Also, oxygen can bond to hydrogen in a 1:1 ratio as in hydrogen peroxide, but this compound is unstable. Chalcogen compounds form a number of interchalcogens. For instance, sulfur forms the toxic sulfur dioxide and sulfur trioxide. Tellurium also forms oxides. There are some chalcogen sulfides as well. These include selenium sulfide, an ingredient in some shampoos. Since 1990, a number of borides with chalcogens bonded to them have been detected. The chalcogens in these compounds are mostly sulfur, although some do contain selenium instead. One such chalcogen boride consists of two molecules of dimethyl sulfide attached to a boron-hydrogen molecule. Other important boron-chalcogen compounds include macropolyhedral systems. Such compounds tend to feature sulfur as the chalcogen. There are also chalcogen borides with two, three, or four chalcogens. 
Many of these contain sulfur, but some, such as Na2B2Se7, contain selenium instead. History Early discoveries Sulfur has been known since ancient times and is mentioned in the Bible fifteen times. It was known to the ancient Greeks and commonly mined by the ancient Romans. In the Middle Ages, it was a key part of alchemical experiments. In the 1700s and 1800s, scientists Joseph Louis Gay-Lussac and Louis-Jacques Thénard proved sulfur to be a chemical element. Early attempts to separate oxygen from air were hampered by the fact that air was thought of as a single element up to the 17th and 18th centuries. Robert Hooke, Mikhail Lomonosov, Ole Borch, and Pierre Bayen all successfully created oxygen, but did not realize it at the time. Oxygen was discovered by Joseph Priestley in 1774 when he focused sunlight on a sample of mercuric oxide and collected the resulting gas. Carl Wilhelm Scheele had also created oxygen in 1771 by the same method, but Scheele did not publish his results until 1777. Tellurium was first discovered in 1783 by Franz Joseph Müller von Reichenstein. He discovered tellurium in a sample of what is now known as calaverite. Müller assumed at first that the sample was pure antimony, but tests he ran on the sample did not agree with this. Müller then guessed that the sample was bismuth sulfide, but tests confirmed that it was not. For some years, Müller pondered the problem. Eventually he realized that the sample was gold bonded with an unknown element. In 1796, Müller sent part of the sample to the German chemist Martin Klaproth, who purified the undiscovered element. Klaproth decided to call the element tellurium after the Latin word for earth. Selenium was discovered in 1817 by Jöns Jacob Berzelius. Berzelius noticed a reddish-brown sediment at a sulfuric acid manufacturing plant. The sample was thought to contain arsenic. Berzelius initially thought that the sediment contained tellurium, but came to realize that it also contained a new element, which he named selenium after the Greek moon goddess Selene. Periodic table placing Three of the chalcogens (sulfur, selenium, and tellurium) were part of the discovery of periodicity, as they are among a series of triads of elements in the same group that were noted by Johann Wolfgang Döbereiner as having similar properties. Around 1865, John Newlands produced a series of papers in which he listed the elements in order of increasing atomic weight and noted that similar physical and chemical properties recurred at intervals of eight; he likened such periodicity to the octaves of music. His version included a "group b" consisting of oxygen, sulfur, selenium, tellurium, and osmium. After 1869, Dmitri Mendeleev proposed his periodic table placing oxygen at the top of "group VI" above sulfur, selenium, and tellurium. Chromium, molybdenum, tungsten, and uranium were sometimes included in this group, but they would later be rearranged as part of group VIB; uranium would later be moved to the actinide series. Oxygen, along with sulfur, selenium, tellurium, and later polonium, would be grouped in group VIA, until the group's name was changed to group 16 in 1988. Modern discoveries In the late 19th century, Marie Curie and Pierre Curie discovered that a sample of pitchblende was emitting four times as much radioactivity as could be explained by the presence of uranium alone. The Curies gathered several tons of pitchblende and refined it for several months until they had a pure sample of polonium. The discovery officially took place in 1898. 
Prior to the invention of particle accelerators, the only way to produce polonium was to extract it over several months from uranium ore. The first attempt at creating livermorium was from 1976 to 1977 at the LBNL, who bombarded curium-248 with calcium-48, but were not successful. After several failed attempts in 1977, 1998, and 1999 by research groups in Russia, Germany, and the US, livermorium was created successfully in 2000 at the Joint Institute for Nuclear Research by bombarding curium-248 atoms with calcium-48 atoms. The element was known as ununhexium until it was officially named livermorium in 2012. Names and etymology In the 19th century, Jons Jacob Berzelius suggested calling the elements in group 16 "amphigens", as the elements in the group formed amphid salts (salts of oxyacids, formerly regarded as composed of two oxides, an acid and a basic oxide). The term received some use in the early 1800s but is now obsolete. The name chalcogen comes from the Greek words (, literally "copper"), and (, born, gender, kindle). It was first used in 1932 by Wilhelm Biltz's group at Leibniz University Hannover, where it was proposed by Werner Fischer. The word "chalcogen" gained popularity in Germany during the 1930s because the term was analogous to "halogen". Although the literal meanings of the modern Greek words imply that chalcogen means "copper-former", this is misleading because the chalcogens have nothing to do with copper in particular. "Ore-former" has been suggested as a better translation, as the vast majority of metal ores are chalcogenides and the word in ancient Greek was associated with metals and metal-bearing rock in general; copper, and its alloy bronze, was one of the first metals to be used by humans. Oxygen's name comes from the Greek words oxy genes, meaning "acid-forming". Sulfur's name comes from either the Latin word or the Sanskrit word ; both of those terms are ancient words for sulfur. Selenium is named after the Greek goddess of the moon, Selene, to match the previously discovered element tellurium, whose name comes from the Latin word , meaning earth. Polonium is named after Marie Curie's country of birth, Poland. Livermorium is named for the Lawrence Livermore National Laboratory. Occurrence The four lightest chalcogens (oxygen, sulfur, selenium, and tellurium) are all primordial elements on Earth. Sulfur and oxygen occur as constituent copper ores and selenium and tellurium occur in small traces in such ores. Polonium forms naturally from the decay of other elements, even though it is not primordial. Livermorium does not occur naturally at all. Oxygen makes up 21% of the atmosphere by weight, 89% of water by weight, 46% of the Earth's crust by weight, and 65% of the human body. Oxygen also occurs in many minerals, being found in all oxide minerals and hydroxide minerals, and in numerous other mineral groups. Stars of at least eight times the mass of the Sun also produce oxygen in their cores via nuclear fusion. Oxygen is the third-most abundant element in the universe, making up 1% of the universe by weight. Sulfur makes up 0.035% of the Earth's crust by weight, making it the 17th most abundant element there and makes up 0.25% of the human body. It is a major component of soil. Sulfur makes up 870 parts per million of seawater and about 1 part per billion of the atmosphere. Sulfur can be found in elemental form or in the form of sulfide minerals, sulfate minerals, or sulfosalt minerals. 
Stars of at least 12 times the mass of the Sun produce sulfur in their cores via nuclear fusion. Sulfur is the tenth most abundant element in the universe, making up 500 parts per million of the universe by weight. Selenium makes up 0.05 parts per million of the Earth's crust by weight. This makes it the 67th most abundant element in the Earth's crust. Selenium makes up on average 5 parts per million of the soils. Seawater contains around 200 parts per trillion of selenium. The atmosphere contains 1 nanogram of selenium per cubic meter. There are mineral groups known as selenates and selenites, but there are not many minerals in these groups. Selenium is not produced directly by nuclear fusion. Selenium makes up 30 parts per billion of the universe by weight. There are only 5 parts per billion of tellurium in the Earth's crust and 15 parts per billion of tellurium in seawater. Tellurium is one of the eight or nine least abundant elements in the Earth's crust. There are a few dozen tellurate minerals and telluride minerals, and tellurium occurs in some minerals with gold, such as sylvanite and calaverite. Tellurium makes up 9 parts per billion of the universe by weight. Polonium only occurs in trace amounts on Earth, via radioactive decay of uranium and thorium. It is present in uranium ores in concentrations of 100 micrograms per metric ton. Very minute amounts of polonium exist in the soil and thus in most food, and thus in the human body. The Earth's crust contains less than 1 part per billion of polonium, making it one of the ten rarest metals on Earth. Livermorium is always produced artificially in particle accelerators. Even when it is produced, only a small number of atoms are synthesized at a time. Chalcophile elements Chalcophile elements are those that remain on or close to the surface because they combine readily with chalcogens other than oxygen, forming compounds which do not sink into the core. Chalcophile ("chalcogen-loving") elements in this context are those metals and heavier nonmetals that have a low affinity for oxygen and prefer to bond with the heavier chalcogen sulfur as sulfides. Because sulfide minerals are much denser than the silicate minerals formed by lithophile elements, chalcophile elements separated below the lithophiles at the time of the first crystallisation of the Earth's crust. This has led to their depletion in the Earth's crust relative to their solar abundances, though this depletion has not reached the levels found with siderophile elements. Production Approximately 100 million metric tons of oxygen are produced yearly. Oxygen is most commonly produced by fractional distillation, in which air is cooled to a liquid, then warmed, allowing all the components of air except for oxygen to turn to gases and escape. Fractionally distilling air several times can produce 99.5% pure oxygen. Another method with which oxygen is produced is to send a stream of dry, clean air through a bed of molecular sieves made of zeolite, which absorbs the nitrogen in the air, leaving 90 to 93% pure oxygen. Sulfur can be mined in its elemental form, although this method is no longer as popular as it used to be. In 1865 a large deposit of elemental sulfur was discovered in the U.S. states of Louisiana and Texas, but it was difficult to extract at the time. In the 1890s, Herman Frasch came up with the solution of liquefying the sulfur with superheated steam and pumping the sulfur up to the surface. These days sulfur is instead more often extracted from oil, natural gas, and tar. 
The world production of selenium is around 1500 metric tons per year, out of which roughly 10% is recycled. Japan is the largest producer, producing 800 metric tons of selenium per year. Other large producers include Belgium (300 metric tons per year), the United States (over 200 metric tons per year), Sweden (130 metric tons per year), and Russia (100 metric tons per year). Selenium can be extracted from the waste from the process of electrolytically refining copper. Another method of producing selenium is to farm selenium-gathering plants such as milk vetch. This method could produce three kilograms of selenium per acre, but is not commonly practiced. Tellurium is mostly produced as a by-product of the processing of copper. Tellurium can also be refined by electrolytic reduction of sodium telluride. The world production of tellurium is between 150 and 200 metric tons per year. The United States is one of the largest producers of tellurium, producing around 50 metric tons per year. Peru, Japan, and Canada are also large producers of tellurium. Until the creation of nuclear reactors, all polonium had to be extracted from uranium ore. In modern times, most isotopes of polonium are produced by bombarding bismuth with neutrons. Polonium can also be produced by high neutron fluxes in nuclear reactors. Approximately 100 grams of polonium are produced yearly. All the polonium produced for commercial purposes is made in the Ozersk nuclear reactor in Russia. From there, it is taken to Samara, Russia for purification, and from there to St. Petersburg for distribution. The United States is the largest consumer of polonium. All livermorium is produced artificially in particle accelerators. The first successful production of livermorium was achieved by bombarding curium-248 atoms with calcium-48 atoms. As of 2011, roughly 25 atoms of livermorium had been synthesized. Applications Metabolism is the most important source and use of oxygen. Minor industrial uses include Steelmaking (55% of all purified oxygen produced), the chemical industry (25% of all purified oxygen), medical use, water treatment (as oxygen kills some types of bacteria), rocket fuel (in liquid form), and metal cutting. Most sulfur produced is transformed into sulfur dioxide, which is further transformed into sulfuric acid, a very common industrial chemical. Other common uses include being a key ingredient of gunpowder and Greek fire, and being used to change soil pH. Sulfur is also mixed into rubber to vulcanize it. Sulfur is used in some types of concrete and fireworks. 60% of all sulfuric acid produced is used to generate phosphoric acid. Sulfur is used as a pesticide (specifically as an acaricide and fungicide) on "orchard, ornamental, vegetable, grain, and other crops." Around 40% of all selenium produced goes to glassmaking. 30% of all selenium produced goes to metallurgy, including manganese production. 15% of all selenium produced goes to agriculture. Electronics such as photovoltaic materials claim 10% of all selenium produced. Pigments account for 5% of all selenium produced. Historically, machines such as photocopiers and light meters used one-third of all selenium produced, but this application is in steady decline. Tellurium suboxide, a mixture of tellurium and tellurium dioxide, is used in the rewritable data layer of some CD-RW disks and DVD-RW disks. Bismuth telluride is also used in many microelectronic devices, such as photoreceptors. Tellurium is sometimes used as an alternative to sulfur in vulcanized rubber. 
Cadmium telluride is used as a high-efficiency material in solar panels. Some of polonium's applications relate to the element's radioactivity. For instance, polonium is used as an alpha-particle generator for research. Polonium alloyed with beryllium provides an efficient neutron source. Polonium is also used in nuclear batteries. Most polonium is used in antistatic devices. Livermorium does not have any uses whatsoever due to its extreme rarity and short half-life. Organochalcogen compounds are involved in semiconductor processing. These compounds also feature in ligand chemistry and biochemistry. One application of chalcogens themselves is to manipulate redox couples in supramolecular chemistry (chemistry involving non-covalent bond interactions). This leads on to further applications such as crystal packing, assembly of large molecules, and biological recognition of patterns. The secondary bonding interactions of the larger chalcogens, selenium and tellurium, can create organic solvent-holding acetylene nanotubes. Chalcogen interactions are useful for conformational analysis and stereoelectronic effects, among other things. Chalcogenides with through-bonds also have applications. For instance, divalent sulfur can stabilize carbanions, cationic centers, and radicals. Chalcogens can confer upon ligands (such as DCTO) properties such as being able to transform Cu(II) to Cu(I). Studying chalcogen interactions gives access to radical cations, which are used in mainstream synthetic chemistry. Metallic redox centers of biological importance are tunable by interactions of ligands containing chalcogens, such as methionine and selenocysteine. Also, chalcogen through-bonds can provide insight into the process of electron transfer. Biological role Oxygen is needed by almost all organisms for the purpose of generating ATP. It is also a key component of most other biological compounds, such as water, amino acids and DNA. Human blood contains a large amount of oxygen. Human bones contain 28% oxygen. Human tissue contains 16% oxygen. A typical 70-kilogram human contains 43 kilograms of oxygen, mostly in the form of water. All animals need significant amounts of sulfur. Some amino acids, such as cysteine and methionine, contain sulfur. Plant roots take up sulfate ions from the soil and reduce them to sulfide ions. Metalloproteins also use sulfur to attach to useful metal atoms in the body, and sulfur similarly attaches itself to poisonous metal atoms like cadmium to haul them to the safety of the liver. On average, humans consume 900 milligrams of sulfur each day. Sulfur compounds, such as those found in skunk spray, often have strong odors. All animals and some plants need trace amounts of selenium, but only for some specialized enzymes. Humans consume on average between 6 and 200 micrograms of selenium per day. Mushrooms and Brazil nuts are especially noted for their high selenium content. Selenium in foods is most commonly found in the form of amino acids such as selenocysteine and selenomethionine. Selenium can protect against heavy metal poisoning. Tellurium is not known to be needed for animal life, although a few fungi can incorporate it in compounds in place of selenium. Microorganisms also absorb tellurium and emit dimethyl telluride. Most tellurium in the blood stream is excreted slowly in urine, but some is converted to dimethyl telluride and released through the lungs. On average, humans ingest about 600 micrograms of tellurium daily. Plants can take up some tellurium from the soil. 
Onions and garlic have been found to contain as much as 300 parts per million of tellurium in dry weight. Polonium has no biological role, and is highly toxic on account of being radioactive. Toxicity Oxygen is generally nontoxic, but oxygen toxicity has been reported when it is used in high concentrations. In both elemental gaseous form and as a component of water, it is vital to almost all life on Earth. Despite this, liquid oxygen is highly dangerous. Even gaseous oxygen is dangerous in excess. For instance, sports divers have occasionally drowned from convulsions caused by breathing pure oxygen at a depth of more than underwater. Oxygen is also toxic to some bacteria. Ozone, an allotrope of oxygen, is toxic to most life. It can cause lesions in the respiratory tract. Sulfur is generally nontoxic and is even a vital nutrient for humans. However, in its elemental form it can cause redness in the eyes and skin, a burning sensation and a cough if inhaled, a burning sensation and diarrhoea and/or catharsis if ingested, and can irritate the mucous membranes. An excess of sulfur can be toxic for cows because microbes in the rumens of cows produce toxic hydrogen sulfide upon reaction with sulfur. Many sulfur compounds, such as hydrogen sulfide (H2S) and sulfur dioxide (SO2) are highly toxic. Selenium is a trace nutrient required by humans on the order of tens or hundreds of micrograms per day. A dose of over 450 micrograms can be toxic, resulting in bad breath and body odor. Extended, low-level exposure, which can occur at some industries, results in weight loss, anemia, and dermatitis. In many cases of selenium poisoning, selenous acid is formed in the body. Hydrogen selenide (H2Se) is highly toxic. Exposure to tellurium can produce unpleasant side effects. As little as 10 micrograms of tellurium per cubic meter of air can cause notoriously unpleasant breath, described as smelling like rotten garlic. Acute tellurium poisoning can cause vomiting, gut inflammation, internal bleeding, and respiratory failure. Extended, low-level exposure to tellurium causes tiredness and indigestion. Sodium tellurite (Na2TeO3) is lethal in amounts of around 2 grams. Polonium is dangerous as an alpha particle emitter. If ingested, polonium-210 is a million times as toxic as hydrogen cyanide by weight; it has been used as a murder weapon in the past, most famously to kill Alexander Litvinenko. Polonium poisoning can cause nausea, vomiting, anorexia, and lymphopenia. It can also damage hair follicles and white blood cells. Polonium-210 is only dangerous if ingested or inhaled because its alpha particle emissions cannot penetrate human skin. Polonium-209 is also toxic, and can cause leukemia. Amphid salts Amphid salts was a name given by Jons Jacob Berzelius in the 19th century for chemical salts derived from the 16th group of the periodic table which included oxygen, sulfur, selenium, and tellurium. The term received some use in the early 1800s but is now obsolete. The current term in use for the 16th group is chalcogens. See also Chalcogenide Gold chalcogenides Halogen Interchalcogen Pnictogen References External links Periodic table Groups (periodic table)
Chalcogen
[ "Chemistry" ]
9,678
[ "Periodic table", "Groups (periodic table)" ]
5,906
https://en.wikipedia.org/wiki/Carbon%20dioxide
Carbon dioxide is a chemical compound with the chemical formula . It is made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is found in the gas state at room temperature and at normally-encountered concentrations it is odorless. As the source of carbon in the carbon cycle, atmospheric is the primary carbon source for life on Earth. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater. It is a trace gas in Earth's atmosphere at 421 parts per million (ppm), or about 0.042% (as of May 2022) having risen from pre-industrial levels of 280 ppm or about 0.028%. Burning fossil fuels is the main cause of these increased concentrations, which are the primary cause of climate change. Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian was regulated by organisms and geological features. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. is released from organic materials when they decay or combust, such as in forest fires. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (), which causes ocean acidification as atmospheric levels increase. Carbon dioxide is 53% more dense than dry air, but is long lived and thoroughly mixes in the atmosphere. About half of excess emissions to the atmosphere are absorbed by land and ocean carbon sinks. These sinks can become saturated and are volatile, as decay and wildfires result in the being released back into the atmosphere. is eventually sequestered (stored for the long term) in rocks and organic deposits like coal, petroleum and natural gas. Nearly all produced by humans goes into the atmosphere. Less than 1% of produced annually is put to commercial use, mostly in the fertilizer industry and in the oil and gas industry for enhanced oil recovery. Other commercial applications include food and beverage production, metal fabrication, cooling, fire suppression and stimulating plant growth in greenhouses. Chemical and physical properties Structure, bonding and molecular vibrations The symmetry of a carbon dioxide molecule is linear and centrosymmetric at its equilibrium geometry. The length of the carbon–oxygen bond in carbon dioxide is 116.3 pm, noticeably shorter than the roughly 140 pm length of a typical single C–O bond, and shorter than most other C–O multiply bonded functional groups such as carbonyls. Since it is centrosymmetric, the molecule has no electric dipole moment. As a linear triatomic molecule, has four vibrational modes as shown in the diagram. In the symmetric and the antisymmetric stretching modes, the atoms move along the axis of the molecule. There are two bending modes, which are degenerate, meaning that they have the same frequency and same energy, because of the symmetry of the molecule. When a molecule touches a surface or touches another molecule, the two bending modes can differ in frequency because the interaction is different for the two modes. 
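The count of four modes follows from the general rule that a molecule of N atoms has 3N − 5 vibrational modes if it is linear and 3N − 6 otherwise. A minimal Python sketch of that bookkeeping (the function name and the comparison with water are illustrative additions, not from the article):

```python
# Minimal sketch: counting vibrational modes of a small molecule.
# A molecule of N atoms has 3N total degrees of freedom; removing 3 for
# translation and 3 (or 2, if linear) for rotation leaves the vibrations.
def vibrational_mode_count(n_atoms: int, linear: bool) -> int:
    rotational = 2 if linear else 3
    return 3 * n_atoms - 3 - rotational

# Carbon dioxide: linear triatomic -> 3*3 - 5 = 4 modes
# (symmetric stretch, antisymmetric stretch, and a degenerate pair of bends).
print(vibrational_mode_count(3, linear=True))   # 4
# Water, for comparison: bent triatomic -> 3*3 - 6 = 3 modes.
print(vibrational_mode_count(3, linear=False))  # 3
```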
Some of the vibrational modes are observed in the infrared (IR) spectrum: the antisymmetric stretching mode at wavenumber 2349 cm−1 (wavelength 4.25 μm) and the degenerate pair of bending modes at 667 cm−1 (wavelength 15.0 μm). The symmetric stretching mode does not create an electric dipole so is not observed in IR spectroscopy, but it is detected in Raman spectroscopy at 1388 cm−1 (wavelength 7.20 μm), with a Fermi resonance doublet at 1285 cm−1. In the gas phase, carbon dioxide molecules undergo significant vibrational motions and do not keep a fixed structure. However, in a Coulomb explosion imaging experiment, an instantaneous image of the molecular structure can be deduced. Such an experiment has been performed for carbon dioxide. The result of this experiment, and the conclusion of theoretical calculations based on an ab initio potential energy surface of the molecule, is that none of the molecules in the gas phase are ever exactly linear. This counter-intuitive result is trivially due to the fact that the nuclear motion volume element vanishes for linear geometries. This is so for all molecules except diatomic molecules. In aqueous solution Carbon dioxide is soluble in water, in which it reversibly forms (carbonic acid), which is a weak acid, because its ionization in water is incomplete. The hydration equilibrium constant of carbonic acid is, at 25 °C: Hence, the majority of the carbon dioxide is not converted into carbonic acid, but remains as molecules, not affecting the pH. The relative concentrations of , , and the deprotonated forms (bicarbonate) and (carbonate) depend on the pH. As shown in a Bjerrum plot, in neutral or slightly alkaline water (pH > 6.5), the bicarbonate form predominates (>50%) becoming the most prevalent (>95%) at the pH of seawater. In very alkaline water (pH > 10.4), the predominant (>50%) form is carbonate. The oceans, being mildly alkaline with typical pH = 8.2–8.5, contain about 120 mg of bicarbonate per liter. Being diprotic, carbonic acid has two acid dissociation constants, the first one for the dissociation into the bicarbonate (also called hydrogen carbonate) ion (): Ka1 = 2.5 × 10−4 mol/L; pKa1 = 3.6 at 25 °C. This is the true first acid dissociation constant, defined as where the denominator includes only covalently bound and does not include hydrated (aq). The much smaller and often-quoted value near 4.16 × 10−7 (or pKa1 = 6.38) is an apparent value calculated on the (incorrect) assumption that all dissolved is present as carbonic acid, so that Since most of the dissolved remains as molecules, Ka1(apparent) has a much larger denominator and a much smaller value than the true Ka1. The bicarbonate ion is an amphoteric species that can act as an acid or as a base, depending on pH of the solution. At high pH, it dissociates significantly into the carbonate ion (): Ka2 = 4.69 × 10−11 mol/L; pKa2 = 10.329 In organisms, carbonic acid production is catalysed by the enzyme known as carbonic anhydrase. In addition to altering its acidity, the presence of carbon dioxide in water also affects its electrical properties. When carbon dioxide dissolves in desalinated water, the electrical conductivity increases significantly from below 1 μS/cm to nearly 30 μS/cm. When heated, the water begins to gradually lose the conductivity induced by the presence of , especially noticeable as temperatures exceed 30 °C. The temperature dependence of the electrical conductivity of fully deionized water without saturation is comparably low in relation to these data. 
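The pH-dependent speciation summarized by the Bjerrum plot can be reproduced from the apparent dissociation constants quoted above. The following is a rough Python sketch under the usual convention that dissolved CO2 and carbonic acid are lumped together ("CO2*"); the function name and rounding are illustrative, and the pKa values are the apparent ones cited in this section:

```python
# Minimal sketch of the carbonate speciation fractions behind a Bjerrum plot,
# using the apparent pKa values quoted above. "CO2*" lumps dissolved CO2 and
# carbonic acid together, which is the assumption behind the apparent Ka1.
def carbonate_fractions(pH, pKa1=6.38, pKa2=10.329):
    h = 10.0 ** (-pH)
    ka1, ka2 = 10.0 ** (-pKa1), 10.0 ** (-pKa2)
    denom = h * h + ka1 * h + ka1 * ka2
    co2_star    = h * h / denom        # CO2(aq) + H2CO3
    bicarbonate = ka1 * h / denom      # HCO3-
    carbonate   = ka1 * ka2 / denom    # CO3 2-
    return co2_star, bicarbonate, carbonate

# At a typical seawater pH of 8.2, bicarbonate dominates (>95%),
# matching the description of the Bjerrum plot above.
print([round(x, 3) for x in carbonate_fractions(8.2)])
```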
Chemical reactions is a potent electrophile having an electrophilic reactivity that is comparable to benzaldehyde or strongly electrophilic α,β-unsaturated carbonyl compounds. However, unlike electrophiles of similar reactivity, the reactions of nucleophiles with are thermodynamically less favored and are often found to be highly reversible. The reversible reaction of carbon dioxide with amines to make carbamates is used in scrubbers and has been suggested as a possible starting point for carbon capture and storage by amine gas treating. Only very strong nucleophiles, like the carbanions provided by Grignard reagents and organolithium compounds react with to give carboxylates: where M = Li or MgBr and R = alkyl or aryl. In metal carbon dioxide complexes, serves as a ligand, which can facilitate the conversion of to other chemicals. The reduction of to CO is ordinarily a difficult and slow reaction: The redox potential for this reaction near pH 7 is about −0.53 V versus the standard hydrogen electrode. The nickel-containing enzyme carbon monoxide dehydrogenase catalyses this process. Photoautotrophs (i.e. plants and cyanobacteria) use the energy contained in sunlight to photosynthesize simple sugars from absorbed from the air and water: Physical properties Carbon dioxide is colorless. At low concentrations, the gas is odorless; however, at sufficiently high concentrations, it has a sharp, acidic odor. At standard temperature and pressure, the density of carbon dioxide is around 1.98 kg/m3, about 1.53 times that of air. Carbon dioxide has no liquid state at pressures below 0.51795(10) MPa (5.11177(99) atm). At a pressure of 1 atm (0.101325 MPa), the gas deposits directly to a solid at temperatures below 194.6855(30) K (−78.4645(30) °C) and the solid sublimes directly to a gas above this temperature. In its solid state, carbon dioxide is commonly called dry ice. Liquid carbon dioxide forms only at pressures above 0.51795(10) MPa (5.11177(99) atm); the triple point of carbon dioxide is 216.592(3) K (−56.558(3) °C) at 0.51795(10) MPa (5.11177(99) atm) (see phase diagram). The critical point is 304.128(15) K (30.978(15) °C) at 7.3773(30) MPa (72.808(30) atm). Another form of solid carbon dioxide observed at high pressure is an amorphous glass-like solid. This form of glass, called carbonia, is produced by supercooling heated at extreme pressures (40–48 GPa, or about 400,000 atmospheres) in a diamond anvil. This discovery confirmed the theory that carbon dioxide could exist in a glass state similar to other members of its elemental family, like silicon dioxide (silica glass) and germanium dioxide. Unlike silica and germania glasses, however, carbonia glass is not stable at normal pressures and reverts to gas when pressure is released. At temperatures and pressures above the critical point, carbon dioxide behaves as a supercritical fluid known as supercritical carbon dioxide. Table of thermal and physical properties of saturated liquid carbon dioxide: Table of thermal and physical properties of carbon dioxide () at atmospheric pressure: Biological role Carbon dioxide is an end product of cellular respiration in organisms that obtain energy by breaking down sugars, fats and amino acids with oxygen as part of their metabolism. This includes all plants, algae and animals and aerobic fungi and bacteria. 
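As a cross-check on the physical properties quoted above, the gas-phase density and the ratio to air follow, to within a couple of percent, from the ideal-gas law ρ = PM/RT. A small sketch, assuming 0 °C and 1 atm as the reference conditions (conventions for "standard temperature and pressure" vary):

```python
# Minimal sketch: estimating gas densities from the ideal-gas law,
# rho = P*M / (R*T), to check the figures quoted above.
R = 8.314        # J/(mol*K)
P = 101325.0     # Pa (1 atm)
T = 273.15       # K (0 degrees C, one common "standard temperature")

def ideal_gas_density(molar_mass_kg_per_mol: float) -> float:
    return P * molar_mass_kg_per_mol / (R * T)

rho_co2 = ideal_gas_density(0.04401)   # CO2, 44.01 g/mol
rho_air = ideal_gas_density(0.02897)   # dry air, ~28.97 g/mol
print(round(rho_co2, 2), round(rho_co2 / rho_air, 2))
# ~1.96 kg/m^3 and ~1.52x air; the quoted 1.98 kg/m^3 is slightly higher
# because real CO2 deviates a little from ideal-gas behaviour.
```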
In vertebrates, the carbon dioxide travels in the blood from the body's tissues to the skin (e.g., amphibians) or the gills (e.g., fish), from where it dissolves in the water, or to the lungs from where it is exhaled. During active photosynthesis, plants can absorb more carbon dioxide from the atmosphere than they release in respiration. Photosynthesis and carbon fixation Carbon fixation is a biochemical process by which atmospheric carbon dioxide is incorporated by plants, algae and cyanobacteria into energy-rich organic molecules such as glucose, thus creating their own food by photosynthesis. Photosynthesis uses carbon dioxide and water to produce sugars from which other organic compounds can be constructed, and oxygen is produced as a by-product. Ribulose-1,5-bisphosphate carboxylase oxygenase, commonly abbreviated to RuBisCO, is the enzyme involved in the first major step of carbon fixation, the production of two molecules of 3-phosphoglycerate from and ribulose bisphosphate, as shown in the diagram at left. RuBisCO is thought to be the single most abundant protein on Earth. Phototrophs use the products of their photosynthesis as internal food sources and as raw material for the biosynthesis of more complex organic molecules, such as polysaccharides, nucleic acids, and proteins. These are used for their own growth, and also as the basis of the food chains and webs that feed other organisms, including animals such as ourselves. Some important phototrophs, the coccolithophores synthesise hard calcium carbonate scales. A globally significant species of coccolithophore is Emiliania huxleyi whose calcite scales have formed the basis of many sedimentary rocks such as limestone, where what was previously atmospheric carbon can remain fixed for geological timescales. Plants can grow as much as 50% faster in concentrations of 1,000 ppm when compared with ambient conditions, though this assumes no change in climate and no limitation on other nutrients. Elevated levels cause increased growth reflected in the harvestable yield of crops, with wheat, rice and soybean all showing increases in yield of 12–14% under elevated in FACE experiments. Increased atmospheric concentrations result in fewer stomata developing on plants which leads to reduced water usage and increased water-use efficiency. Studies using FACE have shown that enrichment leads to decreased concentrations of micronutrients in crop plants. This may have knock-on effects on other parts of ecosystems as herbivores will need to eat more food to gain the same amount of protein. The concentration of secondary metabolites such as phenylpropanoids and flavonoids can also be altered in plants exposed to high concentrations of . Plants also emit during respiration, and so the majority of plants and algae, which use C3 photosynthesis, are only net absorbers during the day. Though a growing forest will absorb many tons of each year, a mature forest will produce as much from respiration and decomposition of dead specimens (e.g., fallen branches) as is used in photosynthesis in growing plants. Contrary to the long-standing view that they are carbon neutral, mature forests can continue to accumulate carbon and remain valuable carbon sinks, helping to maintain the carbon balance of Earth's atmosphere. Additionally, and crucially to life on earth, photosynthesis by phytoplankton consumes dissolved in the upper ocean and thereby promotes the absorption of from the atmosphere. 
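The conventional overall equation for oxygenic photosynthesis, 6 CO2 + 6 H2O → C6H12O6 + 6 O2, fixes how much carbon dioxide must be drawn down per unit of sugar formed. A minimal sketch of that stoichiometry (the molar masses are standard values rather than figures from this article):

```python
# Minimal sketch: CO2 drawdown implied by the overall photosynthesis equation
#   6 CO2 + 6 H2O -> C6H12O6 + 6 O2
M_CO2, M_GLUCOSE, M_O2 = 44.01, 180.16, 32.00   # g/mol

def co2_fixed_per_gram_glucose() -> float:
    return 6 * M_CO2 / M_GLUCOSE     # ~1.47 g CO2 per g glucose

def o2_released_per_gram_glucose() -> float:
    return 6 * M_O2 / M_GLUCOSE      # ~1.07 g O2 per g glucose

print(round(co2_fixed_per_gram_glucose(), 2),
      round(o2_released_per_gram_glucose(), 2))
```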
Toxicity Carbon dioxide content in fresh air (averaged between sea-level and 10 kPa level, i.e., about altitude) varies between 0.036% (360 ppm) and 0.041% (412 ppm), depending on the location. In humans, exposure to CO2 at concentrations greater than 5% causes the development of hypercapnia and respiratory acidosis. Concentrations of 7% to 10% (70,000 to 100,000 ppm) may cause suffocation, even in the presence of sufficient oxygen, manifesting as dizziness, headache, visual and hearing dysfunction, and unconsciousness within a few minutes to an hour. Concentrations of more than 10% may cause convulsions, coma, and death. CO2 levels of more than 30% act rapidly leading to loss of consciousness in seconds. Because it is heavier than air, in locations where the gas seeps from the ground (due to sub-surface volcanic or geothermal activity) in relatively high concentrations, without the dispersing effects of wind, it can collect in sheltered/pocketed locations below average ground level, causing animals located therein to be suffocated. Carrion feeders attracted to the carcasses are then also killed. Children have been killed in the same way near the city of Goma by emissions from the nearby volcano Mount Nyiragongo. The Swahili term for this phenomenon is . Adaptation to increased concentrations of occurs in humans, including modified breathing and kidney bicarbonate production, in order to balance the effects of blood acidification (acidosis). Several studies suggested that 2.0 percent inspired concentrations could be used for closed air spaces (e.g. a submarine) since the adaptation is physiological and reversible, as deterioration in performance or in normal physical activity does not happen at this level of exposure for five days. Yet, other studies show a decrease in cognitive function even at much lower levels. Also, with ongoing respiratory acidosis, adaptation or compensatory mechanisms will be unable to reverse the condition. Below 1% There are few studies of the health effects of long-term continuous exposure on humans and animals at levels below 1%. Occupational exposure limits have been set in the United States at 0.5% (5000 ppm) for an eight-hour period. At this concentration, International Space Station crew experienced headaches, lethargy, mental slowness, emotional irritation, and sleep disruption. Studies in animals at 0.5% have demonstrated kidney calcification and bone loss after eight weeks of exposure. A study of humans exposed in 2.5 hour sessions demonstrated significant negative effects on cognitive abilities at concentrations as low as 0.1% (1000ppm) likely due to induced increases in cerebral blood flow. Another study observed a decline in basic activity level and information usage at 1000 ppm, when compared to 500 ppm. However a review of the literature found that a reliable subset of studies on the phenomenon of carbon dioxide induced cognitive impairment to only show a small effect on high-level decision making (for concentrations below 5000 ppm). Most of the studies were confounded by inadequate study designs, environmental comfort, uncertainties in exposure doses and differing cognitive assessments used. Similarly a study on the effects of the concentration of in motorcycle helmets has been criticized for having dubious methodology in not noting the self-reports of motorcycle riders and taking measurements using mannequins. 
Further when normal motorcycle conditions were achieved (such as highway or city speeds) or the visor was raised the concentration of declined to safe levels (0.2%). Ventilation Poor ventilation is one of the main causes of excessive concentrations in closed spaces, leading to poor indoor air quality. Carbon dioxide differential above outdoor concentrations at steady state conditions (when the occupancy and ventilation system operation are sufficiently long that concentration has stabilized) are sometimes used to estimate ventilation rates per person. Higher concentrations are associated with occupant health, comfort and performance degradation. ASHRAE Standard 62.1–2007 ventilation rates may result in indoor concentrations up to 2,100 ppm above ambient outdoor conditions. Thus if the outdoor concentration is 400 ppm, indoor concentrations may reach 2,500 ppm with ventilation rates that meet this industry consensus standard. Concentrations in poorly ventilated spaces can be found even higher than this (range of 3,000 or 4,000 ppm). Miners, who are particularly vulnerable to gas exposure due to insufficient ventilation, referred to mixtures of carbon dioxide and nitrogen as "blackdamp", "choke damp" or "stythe". Before more effective technologies were developed, miners would frequently monitor for dangerous levels of blackdamp and other gases in mine shafts by bringing a caged canary with them as they worked. The canary is more sensitive to asphyxiant gases than humans, and as it became unconscious would stop singing and fall off its perch. The Davy lamp could also detect high levels of blackdamp (which sinks, and collects near the floor) by burning less brightly, while methane, another suffocating gas and explosion risk, would make the lamp burn more brightly. In February 2020, three people died from suffocation at a party in Moscow when dry ice (frozen ) was added to a swimming pool to cool it down. A similar accident occurred in 2018 when a woman died from fumes emanating from the large amount of dry ice she was transporting in her car. Indoor air Humans spend more and more time in a confined atmosphere (around 80-90% of the time in a building or vehicle). According to the French Agency for Food, Environmental and Occupational Health & Safety (ANSES) and various actors in France, the rate in the indoor air of buildings (linked to human or animal occupancy and the presence of combustion installations), weighted by air renewal, is "usually between about 350 and 2,500 ppm". In homes, schools, nurseries and offices, there are no systematic relationships between the levels of and other pollutants, and indoor is statistically not a good predictor of pollutants linked to outdoor road (or air, etc.) traffic. is the parameter that changes the fastest (with hygrometry and oxygen levels when humans or animals are gathered in a closed or poorly ventilated room). In poor countries, many open hearths are sources of and CO emitted directly into the living environment. Outdoor areas with elevated concentrations Local concentrations of carbon dioxide can reach high values near strong sources, especially those that are isolated by surrounding terrain. At the Bossoleto hot spring near Rapolano Terme in Tuscany, Italy, situated in a bowl-shaped depression about in diameter, concentrations of rise to above 75% overnight, sufficient to kill insects and small animals. After sunrise the gas is dispersed by convection. 
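The steady-state CO2 differential described under Ventilation above is turned into a per-person ventilation estimate with a simple mass balance: the outdoor-air supply per occupant equals the per-person CO2 generation rate divided by the indoor–outdoor concentration difference. A sketch under that assumption; the generation rate used is an assumed typical value for a seated adult, not a figure from this article:

```python
# Minimal sketch of the steady-state mass balance used to estimate per-person
# ventilation from a CO2 differential: Q = G / (C_indoor - C_outdoor).
# The generation rate G (~0.0052 L/s for a seated adult) is an assumed,
# activity-dependent value.
def ventilation_per_person_ls(indoor_ppm, outdoor_ppm, gen_l_per_s=0.0052):
    delta = (indoor_ppm - outdoor_ppm) * 1e-6   # ppm -> volume fraction
    if delta <= 0:
        raise ValueError("indoor concentration must exceed outdoor at steady state")
    return gen_l_per_s / delta                  # litres of outdoor air per second

# An indoor level of 2,500 ppm against 400 ppm outdoors (the 2,100 ppm
# differential cited above) implies roughly 2.5 L/s of outdoor air per person.
print(round(ventilation_per_person_ls(2500, 400), 2))
```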
High concentrations of produced by disturbance of deep lake water saturated with are thought to have caused 37 fatalities at Lake Monoun, Cameroon in 1984 and 1700 casualties at Lake Nyos, Cameroon in 1986. Human physiology Content The body produces approximately of carbon dioxide per day per person, containing of carbon. In humans, this carbon dioxide is carried through the venous system and is breathed out through the lungs, resulting in lower concentrations in the arteries. The carbon dioxide content of the blood is often given as the partial pressure, which is the pressure which carbon dioxide would have had if it alone occupied the volume. In humans, the blood carbon dioxide contents are shown in the adjacent table. Transport in the blood is carried in blood in three different ways. Exact percentages vary between arterial and venous blood. Majority (about 70% to 80%) is converted to bicarbonate ions by the enzyme carbonic anhydrase in the red blood cells, by the reaction: 5–10% is dissolved in blood plasma 5–10% is bound to hemoglobin as carbamino compounds Hemoglobin, the main oxygen-carrying molecule in red blood cells, carries both oxygen and carbon dioxide. However, the bound to hemoglobin does not bind to the same site as oxygen. Instead, it combines with the N-terminal groups on the four globin chains. However, because of allosteric effects on the hemoglobin molecule, the binding of decreases the amount of oxygen that is bound for a given partial pressure of oxygen. This is known as the Haldane Effect, and is important in the transport of carbon dioxide from the tissues to the lungs. Conversely, a rise in the partial pressure of or a lower pH will cause offloading of oxygen from hemoglobin, which is known as the Bohr effect. Regulation of respiration Carbon dioxide is one of the mediators of local autoregulation of blood supply. If its concentration is high, the capillaries expand to allow a greater blood flow to that tissue. Bicarbonate ions are crucial for regulating blood pH. A person's breathing rate influences the level of in their blood. Breathing that is too slow or shallow causes respiratory acidosis, while breathing that is too rapid leads to hyperventilation, which can cause respiratory alkalosis. Although the body requires oxygen for metabolism, low oxygen levels normally do not stimulate breathing. Rather, breathing is stimulated by higher carbon dioxide levels. As a result, breathing low-pressure air or a gas mixture with no oxygen at all (such as pure nitrogen) can lead to loss of consciousness without ever experiencing air hunger. This is especially perilous for high-altitude fighter pilots. It is also why flight attendants instruct passengers, in case of loss of cabin pressure, to apply the oxygen mask to themselves first before helping others; otherwise, one risks losing consciousness. The respiratory centers try to maintain an arterial pressure of 40 mmHg. With intentional hyperventilation, the content of arterial blood may be lowered to 10–20 mmHg (the oxygen content of the blood is little affected), and the respiratory drive is diminished. This is why one can hold one's breath longer after hyperventilating than without hyperventilating. This carries the risk that unconsciousness may result before the need to breathe becomes overwhelming, which is why hyperventilation is particularly dangerous before free diving. 
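The coupling between arterial CO2 partial pressure and blood pH outlined above is commonly quantified with the Henderson–Hasselbalch relation for the bicarbonate buffer. A minimal sketch; the pKa of 6.1 and the CO2 solubility factor of 0.03 mmol/(L·mmHg) are standard textbook constants rather than values from this article:

```python
import math

# Minimal sketch of the Henderson-Hasselbalch relation often used to connect
# arterial CO2 partial pressure to blood pH:
#   pH = 6.1 + log10([HCO3-] / (0.03 * pCO2))
def blood_ph(bicarbonate_mmol_l: float = 24.0, pco2_mmHg: float = 40.0) -> float:
    return 6.1 + math.log10(bicarbonate_mmol_l / (0.03 * pco2_mmHg))

print(round(blood_ph(24.0, 40.0), 2))  # ~7.40 at the normal 40 mmHg mentioned above
print(round(blood_ph(24.0, 20.0), 2))  # hyperventilation (low pCO2) -> alkalosis, ~7.70
print(round(blood_ph(24.0, 60.0), 2))  # hypoventilation (high pCO2) -> acidosis, ~7.22
```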
Concentrations and role in the environment Atmosphere Oceans Ocean acidification Carbon dioxide dissolves in the ocean to form carbonic acid (), bicarbonate (), and carbonate (). There is about fifty times as much carbon dioxide dissolved in the oceans as exists in the atmosphere. The oceans act as an enormous carbon sink, and have taken up about a third of emitted by human activity. Hydrothermal vents Carbon dioxide is also introduced into the oceans through hydrothermal vents. The Champagne hydrothermal vent, found at the Northwest Eifuku volcano in the Mariana Trench, produces almost pure liquid carbon dioxide, one of only two known sites in the world as of 2004, the other being in the Okinawa Trough. The finding of a submarine lake of liquid carbon dioxide in the Okinawa Trough was reported in 2006. Sources The burning of fossil fuels for energy produces 36.8 billion tonnes of per year as of 2023. Nearly all of this goes into the atmosphere, where approximately half is subsequently absorbed into natural carbon sinks. Less than 1% of produced annually is put to commercial use. Biological processes Carbon dioxide is a by-product of the fermentation of sugar in the brewing of beer, whisky and other alcoholic beverages and in the production of bioethanol. Yeast metabolizes sugar to produce and ethanol, also known as alcohol, as follows: All aerobic organisms produce when they oxidize carbohydrates, fatty acids, and proteins. The large number of reactions involved are exceedingly complex and not described easily. Refer to cellular respiration, anaerobic respiration and photosynthesis. The equation for the respiration of glucose and other monosaccharides is: Anaerobic organisms decompose organic material producing methane and carbon dioxide together with traces of other compounds. Regardless of the type of organic material, the production of gases follows well defined kinetic pattern. Carbon dioxide comprises about 40–45% of the gas that emanates from decomposition in landfills (termed "landfill gas"). Most of the remaining 50–55% is methane. Combustion The combustion of all carbon-based fuels, such as methane (natural gas), petroleum distillates (gasoline, diesel, kerosene, propane), coal, wood and generic organic matter produces carbon dioxide and, except in the case of pure carbon, water. As an example, the chemical reaction between methane and oxygen: Iron is reduced from its oxides with coke in a blast furnace, producing pig iron and carbon dioxide: By-product from hydrogen production Carbon dioxide is a byproduct of the industrial production of hydrogen by steam reforming and the water gas shift reaction in ammonia production. These processes begin with the reaction of water and natural gas (mainly methane). Thermal decomposition of limestone It is produced by thermal decomposition of limestone, by heating (calcining) at about , in the manufacture of quicklime (calcium oxide, CaO), a compound that has many industrial uses: Acids liberate from most metal carbonates. Consequently, it may be obtained directly from natural carbon dioxide springs, where it is produced by the action of acidified water on limestone or dolomite. The reaction between hydrochloric acid and calcium carbonate (limestone or chalk) is shown below: The carbonic acid () then decomposes to water and : Such reactions are accompanied by foaming or bubbling, or both, as the gas is released. They have widespread uses in industry because they can be used to neutralize waste acid streams. 
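Because complete combustion sends every carbon atom in the fuel into one molecule of CO2, the mass of carbon dioxide released follows directly from the fuel's carbon content. A minimal sketch of that stoichiometry (the example fuels are illustrative):

```python
# Minimal sketch: stoichiometric CO2 yield from complete combustion, based on
# the carbon content of the fuel (each carbon atom ends up in one CO2).
M_C, M_CO2 = 12.011, 44.009   # g/mol

def co2_per_kg_fuel(carbon_mass_fraction: float) -> float:
    """kg of CO2 produced per kg of fuel burned completely."""
    return carbon_mass_fraction * M_CO2 / M_C

# Methane, CH4: carbon is 12.011/16.043 of the mass -> about 2.74 kg CO2 per kg.
print(round(co2_per_kg_fuel(12.011 / 16.043), 2))
# Pure carbon (an idealised coke): about 3.66 kg CO2 per kg.
print(round(co2_per_kg_fuel(1.0), 2))
```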
Commercial uses Around 230 Mt of CO2 are used each year, mostly in the fertiliser industry for urea production (130 million tonnes) and in the oil and gas industry for enhanced oil recovery (70 to 80 million tonnes). Other commercial applications include food and beverage production, metal fabrication, cooling, fire suppression and stimulating plant growth in greenhouses. Technology exists to capture CO2 from industrial flue gas or from the air. Research is ongoing on ways to use captured CO2 in products, and some of these processes have been deployed commercially. However, the potential for using CO2 in products is very small compared to the total volume of CO2 that could foreseeably be captured. The vast majority of captured CO2 is considered a waste product and sequestered in underground geologic formations. Precursor to chemicals In the chemical industry, carbon dioxide is mainly consumed as an ingredient in the production of urea, with a smaller fraction being used to produce methanol and a range of other products. Some carboxylic acid derivatives such as sodium salicylate are prepared using CO2 by the Kolbe–Schmitt reaction. Captured CO2 could be used to produce methanol or electrofuels. To be carbon-neutral, the CO2 would need to come from bioenergy production or direct air capture. Fossil fuel recovery Carbon dioxide is used in enhanced oil recovery, where it is injected into or adjacent to producing oil wells, usually under supercritical conditions, when it becomes miscible with the oil. This approach can increase original oil recovery by reducing residual oil saturation by 7–23% additional to primary extraction. It acts as a pressurizing agent and, when dissolved into the underground crude oil, significantly reduces its viscosity and changes the surface chemistry, enabling the oil to flow more rapidly through the reservoir to the removal well. Most CO2 injected in CO2-EOR projects comes from naturally occurring underground CO2 deposits. Some CO2 used in EOR is captured from industrial facilities such as natural gas processing plants, using carbon capture technology, and transported to the oilfield in pipelines. Agriculture Plants require carbon dioxide to conduct photosynthesis. The atmospheres of greenhouses may (if of large size, must) be enriched with additional CO2 to sustain and increase the rate of plant growth. At very high concentrations (100 times atmospheric concentration, or greater), carbon dioxide can be toxic to animal life, so raising the concentration to 10,000 ppm (1%) or higher for several hours will eliminate pests such as whiteflies and spider mites in a greenhouse. Some plants respond more favorably to rising carbon dioxide concentrations than others, which can lead to vegetation regime shifts like woody plant encroachment. Foods Carbon dioxide is a food additive used as a propellant and acidity regulator in the food industry. It is approved for use in the EU (listed as E number E290), US, Australia and New Zealand (listed by its INS number 290). A candy called Pop Rocks is pressurized with carbon dioxide gas. When placed in the mouth, it dissolves (just like other hard candy) and releases the gas bubbles with an audible pop. Leavening agents cause dough to rise by producing carbon dioxide. Baker's yeast produces carbon dioxide by fermentation of sugars within the dough, while chemical leaveners such as baking powder and baking soda release carbon dioxide when heated or if exposed to acids. Beverages Carbon dioxide is used to produce carbonated soft drinks and soda water. 
Traditionally, the carbonation of beer and sparkling wine came about through natural fermentation, but many manufacturers carbonate these drinks with carbon dioxide recovered from the fermentation process. In the case of bottled and kegged beer, the most common method used is carbonation with recycled carbon dioxide. With the exception of British real ale, draught beer is usually transferred from kegs in a cold room or cellar to dispensing taps on the bar using pressurized carbon dioxide, sometimes mixed with nitrogen. The taste of soda water (and related taste sensations in other carbonated beverages) is an effect of the dissolved carbon dioxide rather than the bursting bubbles of the gas. Carbonic anhydrase 4 converts carbon dioxide to carbonic acid leading to a sour taste, and also the dissolved carbon dioxide induces a somatosensory response. Winemaking Carbon dioxide in the form of dry ice is often used during the cold soak phase in winemaking to cool clusters of grapes quickly after picking to help prevent spontaneous fermentation by wild yeast. The main advantage of using dry ice over water ice is that it cools the grapes without adding any additional water that might decrease the sugar concentration in the grape must, and thus the alcohol concentration in the finished wine. Carbon dioxide is also used to create a hypoxic environment for carbonic maceration, the process used to produce Beaujolais wine. Carbon dioxide is sometimes used to top up wine bottles or other storage vessels such as barrels to prevent oxidation, though it has the problem that it can dissolve into the wine, making a previously still wine slightly fizzy. For this reason, other gases such as nitrogen or argon are preferred for this process by professional wine makers. Stunning animals Carbon dioxide is often used to "stun" animals before slaughter. "Stunning" may be a misnomer, as the animals are not knocked out immediately and may suffer distress. Inert gas Carbon dioxide is one of the most commonly used compressed gases for pneumatic (pressurized gas) systems in portable pressure tools. Carbon dioxide is also used as an atmosphere for welding, although in the welding arc, it reacts to oxidize most metals. Use in the automotive industry is common despite significant evidence that welds made in carbon dioxide are more brittle than those made in more inert atmospheres. When used for MIG welding, use is sometimes referred to as MAG welding, for Metal Active Gas, as can react at these high temperatures. It tends to produce a hotter puddle than truly inert atmospheres, improving the flow characteristics. Although, this may be due to atmospheric reactions occurring at the puddle site. This is usually the opposite of the desired effect when welding, as it tends to embrittle the site, but may not be a problem for general mild steel welding, where ultimate ductility is not a major concern. Carbon dioxide is used in many consumer products that require pressurized gas because it is inexpensive and nonflammable, and because it undergoes a phase transition from gas to liquid at room temperature at an attainable pressure of approximately , allowing far more carbon dioxide to fit in a given container than otherwise would. Life jackets often contain canisters of pressured carbon dioxide for quick inflation. Aluminium capsules of are also sold as supplies of compressed gas for air guns, paintball markers/guns, inflating bicycle tires, and for making carbonated water. 
High concentrations of carbon dioxide can also be used to kill pests. Liquid carbon dioxide is used in supercritical drying of some food products and technological materials, in the preparation of specimens for scanning electron microscopy and in the decaffeination of coffee beans. Fire extinguisher Carbon dioxide can be used to extinguish flames by flooding the environment around the flame with the gas. It does not itself react to extinguish the flame, but starves the flame of oxygen by displacing it. Some fire extinguishers, especially those designed for electrical fires, contain liquid carbon dioxide under pressure. Carbon dioxide extinguishers work well on small flammable liquid and electrical fires, but not on ordinary combustible fires, because they do not cool the burning substances significantly, and when the carbon dioxide disperses, they can catch fire upon exposure to atmospheric oxygen. They are mainly used in server rooms. Carbon dioxide has also been widely used as an extinguishing agent in fixed fire-protection systems for local application of specific hazards and total flooding of a protected space. International Maritime Organization standards recognize carbon dioxide systems for fire protection of ship holds and engine rooms. Carbon dioxide-based fire-protection systems have been linked to several deaths, because it can cause suffocation in sufficiently high concentrations. A review of systems identified 51 incidents between 1975 and the date of the report (2000), causing 72 deaths and 145 injuries. Supercritical as solvent Liquid carbon dioxide is a good solvent for many lipophilic organic compounds and is used to decaffeinate coffee. Carbon dioxide has attracted attention in the pharmaceutical and other chemical processing industries as a less toxic alternative to more traditional solvents such as organochlorides. It is also used by some dry cleaners for this reason. It is used in the preparation of some aerogels because of the properties of supercritical carbon dioxide. Refrigerant Liquid and solid carbon dioxide are important refrigerants, especially in the food industry, where they are employed during the transportation and storage of ice cream and other frozen foods. Solid carbon dioxide is called "dry ice" and is used for small shipments where refrigeration equipment is not practical. Solid carbon dioxide is always below at regular atmospheric pressure, regardless of the air temperature. Liquid carbon dioxide (industry nomenclature R744 or R-744) was used as a refrigerant prior to the use of dichlorodifluoromethane (R12, a chlorofluorocarbon (CFC) compound). might enjoy a renaissance because one of the main substitutes to CFCs, 1,1,1,2-tetrafluoroethane (R134a, a hydrofluorocarbon (HFC) compound) contributes to climate change more than does. physical properties are highly favorable for cooling, refrigeration, and heating purposes, having a high volumetric cooling capacity. Due to the need to operate at pressures of up to , systems require highly mechanically resistant reservoirs and components that have already been developed for mass production in many sectors. In automobile air conditioning, in more than 90% of all driving conditions for latitudes higher than 50°, (R744) operates more efficiently than systems using HFCs (e.g., R134a). Its environmental advantages (GWP of 1, non-ozone depleting, non-toxic, non-flammable) could make it the future working fluid to replace current HFCs in cars, supermarkets, and heat pump water heaters, among others. 
Coca-Cola has fielded CO2-based beverage coolers, and the U.S. Army has shown interest in CO2 refrigeration and heating technology. Minor uses Carbon dioxide is the lasing medium in a carbon-dioxide laser, which is one of the earliest types of lasers. Carbon dioxide can be used as a means of controlling the pH of swimming pools, by continuously adding the gas to the water, thus keeping the pH from rising. Among the advantages of this is the avoidance of handling (more hazardous) acids. Similarly, it is also used in maintaining reef aquaria, where it is commonly used in calcium reactors to temporarily lower the pH of water being passed over calcium carbonate in order to allow the calcium carbonate to dissolve into the water more freely, where it is used by some corals to build their skeletons. Carbon dioxide is used as the primary coolant in the British advanced gas-cooled reactor for nuclear power generation. Carbon dioxide induction is commonly used for the euthanasia of laboratory research animals. Methods of administration include placing animals directly into a closed, prefilled chamber containing CO2, or exposure to a gradually increasing concentration of CO2. The American Veterinary Medical Association's 2020 guidelines for carbon dioxide induction state that a displacement rate of 30–70% of the chamber or cage volume per minute is optimal for the humane euthanasia of small rodents. Percentages of CO2 vary for different species, based on identified optimal percentages to minimize distress. Carbon dioxide is also used in several related cleaning and surface-preparation techniques. History of discovery Carbon dioxide was the first gas to be described as a discrete substance. In about 1640, the Flemish chemist Jan Baptist van Helmont observed that when he burned charcoal in a closed vessel, the mass of the resulting ash was much less than that of the original charcoal. His interpretation was that the rest of the charcoal had been transmuted into an invisible substance he termed a "gas" (from Greek "chaos") or "wild spirit" (spiritus sylvestris). The properties of carbon dioxide were further studied in the 1750s by the Scottish physician Joseph Black. He found that limestone (calcium carbonate) could be heated or treated with acids to yield a gas he called "fixed air". He observed that the fixed air was denser than air and supported neither flame nor animal life. Black also found that when the gas was bubbled through limewater (a saturated aqueous solution of calcium hydroxide), it would precipitate calcium carbonate. He used this phenomenon to illustrate that carbon dioxide is produced by animal respiration and microbial fermentation. In 1772, English chemist Joseph Priestley published a paper entitled Impregnating Water with Fixed Air in which he described a process of dripping sulfuric acid (or oil of vitriol as Priestley knew it) on chalk in order to produce carbon dioxide, and forcing the gas to dissolve by agitating a bowl of water in contact with the gas. Carbon dioxide was first liquefied (at elevated pressures) in 1823 by Humphry Davy and Michael Faraday. The earliest description of solid carbon dioxide (dry ice) was given by the French inventor Adrien-Jean-Pierre Thilorier, who in 1835 opened a pressurized container of liquid carbon dioxide, only to find that the cooling produced by the rapid evaporation of the liquid yielded a "snow" of solid CO2. Carbon dioxide in combination with nitrogen was known from earlier times as blackdamp, stythe or choke damp.
Along with the other types of damp it was encountered in mining operations and well sinking. Slow oxidation of coal and biological processes replaced the oxygen to create a suffocating mixture of nitrogen and carbon dioxide. See also (from the atmosphere) (early work on and climate change) List of countries by carbon dioxide emissions List of least carbon efficient power stations NASA's Notes References External links Current global map of carbon dioxide concentration CDC – NIOSH Pocket Guide to Chemical Hazards – Carbon Dioxide Trends in Atmospheric Carbon Dioxide (NOAA) The rediscovery of CO2: History, What is Shecco? - as refrigerant Acid anhydrides Acidic oxides Coolants Fire suppression agents Greenhouse gases Household chemicals Inorganic solvents Laser gain media Nuclear reactor coolants Oxocarbons Propellants Refrigerants Gaseous signaling molecules E-number additives Triatomic molecules
Carbon dioxide
[ "Physics", "Chemistry", "Environmental_science" ]
8,921
[ "Molecules", "Environmental chemistry", "Signal transduction", "Gaseous signaling molecules", "Triatomic molecules", "Greenhouse gases", "Carbon dioxide", "Matter" ]
5,910
https://en.wikipedia.org/wiki/Cyanide
In chemistry, cyanide is a chemical compound that contains a C≡N functional group. This group, known as the cyano group, consists of a carbon atom triple-bonded to a nitrogen atom. In inorganic cyanides, the cyanide group is present as the cyanide anion CN−. This anion is extremely poisonous. Soluble salts such as sodium cyanide (NaCN) and potassium cyanide (KCN) are highly toxic. Hydrocyanic acid, also known as hydrogen cyanide, or HCN, is a highly volatile liquid that is produced on a large scale industrially. It is obtained by acidification of cyanide salts. Organic cyanides are usually called nitriles. In nitriles, the cyano group is linked by a single covalent bond to carbon. For example, in acetonitrile (CH3CN), the cyanide group is bonded to methyl (CH3). Although nitriles generally do not release cyanide ions, the cyanohydrins do and are thus toxic. Bonding The cyanide ion is isoelectronic with carbon monoxide and with molecular nitrogen N≡N. A triple bond exists between C and N. The negative charge is concentrated on carbon. Occurrence In nature Cyanides are produced by certain bacteria, fungi, and algae. Cyanide is an antifeedant in a number of plants. Cyanides are found in substantial amounts in certain seeds and fruit stones, e.g., those of bitter almonds, apricots, apples, and peaches. Chemical compounds that can release cyanide are known as cyanogenic compounds. In plants, cyanides are usually bound to sugar molecules in the form of cyanogenic glycosides and defend the plant against herbivores. Cassava roots (also called manioc), an important potato-like food grown in tropical countries (and the base from which tapioca is made), also contain cyanogenic glycosides. The Madagascar bamboo Cathariostachys madagascariensis produces cyanide as a deterrent to grazing. In response, the golden bamboo lemur, which eats the bamboo, has developed a high tolerance to cyanide. The hydrogenase enzymes contain cyanide ligands attached to iron in their active sites. The biosynthesis of cyanide in the NiFe hydrogenases proceeds from carbamoyl phosphate, which converts to cysteinyl thiocyanate, the CN− donor. Interstellar medium The cyanide radical •CN has been identified in interstellar space. Cyanogen, (CN)2, is used to measure the temperature of interstellar gas clouds. Pyrolysis and combustion product Hydrogen cyanide is produced by the combustion or pyrolysis of certain materials under oxygen-deficient conditions. For example, it can be detected in the exhaust of internal combustion engines and tobacco smoke. Certain plastics, especially those derived from acrylonitrile, release hydrogen cyanide when heated or burnt. Organic derivatives In IUPAC nomenclature, organic compounds that have a –C≡N functional group are called nitriles. An example of a nitrile is acetonitrile, CH3CN. Nitriles usually do not release cyanide ions. A functional group with a hydroxyl and cyanide bonded to the same carbon atom is called a cyanohydrin. Unlike nitriles, cyanohydrins do release poisonous hydrogen cyanide. Reactions Protonation Cyanide is basic. The pKa of hydrogen cyanide is 9.21. Thus, addition of acids stronger than hydrogen cyanide to solutions of cyanide salts releases hydrogen cyanide. Hydrolysis Cyanide is unstable in water, but the reaction is slow until about 170 °C. It undergoes hydrolysis to give ammonia and formate, which are far less toxic than cyanide: CN− + 2 H2O → HCO2− + NH3. Cyanide hydrolase is an enzyme that catalyzes this reaction.
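The pKa quoted in the protonation paragraph above fixes how dissolved cyanide partitions between HCN and CN− at a given pH, which is why alkaline cyanide solutions are the safer form to handle (see the toxicity discussion further on). Below is a minimal Python sketch of the Henderson–Hasselbalch relation; the pH values are illustrative.

```python
# Speciation of cyanide between HCN and CN- as a function of pH, using the
# Henderson-Hasselbalch relation and the pKa of 9.21 quoted above. The pH
# values chosen are illustrative.
pKa = 9.21

def fraction_hcn(pH: float) -> float:
    """Fraction of total dissolved cyanide present as HCN at the given pH."""
    ratio = 10 ** (pH - pKa)        # [CN-]/[HCN]
    return 1.0 / (1.0 + ratio)

for pH in (7.0, 9.21, 11.0):
    print(f"pH {pH:>5}: {fraction_hcn(pH):6.1%} of dissolved cyanide is HCN")
# At neutral pH nearly all cyanide exists as volatile HCN; strongly alkaline
# solutions keep most of it as the non-volatile CN- anion.
```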
Alkylation Because of the cyanide anion's high nucleophilicity, cyano groups are readily introduced into organic molecules by displacement of a halide group (e.g., the chloride on methyl chloride). In general, organic cyanides are called nitriles. In organic synthesis, cyanide is a C-1 synthon; i.e., it can be used to lengthen a carbon chain by one, while retaining the ability to be functionalized. Redox The cyanide ion is a reductant and is oxidized by strong oxidizing agents such as molecular chlorine (Cl2), hypochlorite (ClO−), and hydrogen peroxide (H2O2). These oxidizers are used to destroy cyanides in effluents from gold mining. Metal complexation The cyanide anion reacts with transition metals to form M-CN bonds. This reaction is the basis of cyanide's toxicity. The high affinities of metals for this anion can be attributed to its negative charge, compactness, and ability to engage in π-bonding. Among the most important cyanide coordination compounds are potassium ferrocyanide and the pigment Prussian blue, which are both essentially nontoxic due to the tight binding of the cyanides to a central iron atom. Prussian blue was first accidentally made around 1706, by heating substances containing iron, carbon, and nitrogen; other cyanides were made subsequently (and named after it). Among its many uses, Prussian blue gives the blue color to blueprints, bluing, and cyanotypes. Manufacture The principal process used to manufacture cyanides is the Andrussow process, in which gaseous hydrogen cyanide is produced from methane and ammonia in the presence of oxygen and a platinum catalyst. Sodium cyanide, the precursor to most cyanides, is produced by treating hydrogen cyanide with sodium hydroxide: HCN + NaOH → NaCN + H2O. Toxicity Among the most toxic cyanides are hydrogen cyanide (HCN), sodium cyanide (NaCN), potassium cyanide (KCN), and calcium cyanide (Ca(CN)2). The cyanide anion is an inhibitor of the enzyme cytochrome c oxidase (also known as aa3), the fourth complex of the electron transport chain found in the inner membrane of the mitochondria of eukaryotic cells. It attaches to the iron within this protein. The binding of cyanide to this enzyme prevents transport of electrons from cytochrome c to oxygen. As a result, the electron transport chain is disrupted, meaning that the cell can no longer aerobically produce ATP for energy. Tissues that depend highly on aerobic respiration, such as the central nervous system and the heart, are particularly affected. This is an example of histotoxic hypoxia. The most hazardous compound is hydrogen cyanide, which is a gas and kills by inhalation. For this reason, working with hydrogen cyanide requires wearing an air respirator supplied by an external oxygen source. Hydrogen cyanide is produced by adding acid to a solution containing a cyanide salt. Alkaline solutions of cyanide are safer to use because they do not evolve hydrogen cyanide gas. Hydrogen cyanide may be produced in the combustion of polyurethanes; for this reason, polyurethanes are not recommended for use in domestic and aircraft furniture. Oral ingestion of a small quantity of solid cyanide or a cyanide solution of as little as 200 mg, or exposure to airborne cyanide of 270 ppm, is sufficient to cause death within minutes. Organic nitriles do not readily release cyanide ions, and so have low toxicities. By contrast, compounds such as trimethylsilyl cyanide readily release HCN or the cyanide ion upon contact with water.
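The airborne figure just quoted is a volume ratio; converting it to a mass concentration is a standard calculation. The sketch below assumes the gas is HCN and ideal-gas behaviour at 25 °C and 1 atm; it is illustrative, not a regulatory conversion.

```python
# Converting the airborne cyanide figure quoted above (270 ppm) from a
# volume ratio to a mass concentration, assuming the gas is HCN and using
# the molar volume of an ideal gas at 25 degC and 1 atm (about 24.45 L/mol).
M_HCN = 27.03        # g/mol, molar mass of hydrogen cyanide
V_m = 24.45          # L/mol, molar volume of an ideal gas at 25 degC, 1 atm
ppm = 270.0          # airborne concentration cited in the text

mg_per_m3 = ppm * M_HCN / V_m
print(f"{ppm:.0f} ppm of HCN is roughly {mg_per_m3:.0f} mg per cubic metre of air")
# -> about 300 mg/m3
```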
Antidote Hydroxocobalamin reacts with cyanide to form cyanocobalamin, which can be safely eliminated by the kidneys. This method has the advantage of avoiding the formation of methemoglobin (see below). This antidote kit is sold under the brand name Cyanokit and was approved by the U.S. FDA in 2006. An older cyanide antidote kit included administration of three substances: amyl nitrite pearls (administered by inhalation), sodium nitrite, and sodium thiosulfate. The goal of the antidote was to generate a large pool of ferric iron (Fe3+) to compete for cyanide with cytochrome a3 (so that cyanide will bind to the antidote rather than the enzyme). The nitrites oxidize hemoglobin to methemoglobin, which competes with cytochrome oxidase for the cyanide ion. Cyanmethemoglobin is formed and the cytochrome oxidase enzyme is restored. The major mechanism to remove the cyanide from the body is by enzymatic conversion to thiocyanate by the mitochondrial enzyme rhodanese. Thiocyanate is a relatively non-toxic molecule and is excreted by the kidneys. To accelerate this detoxification, sodium thiosulfate is administered to provide a sulfur donor for rhodanese, needed in order to produce thiocyanate. Sensitivity Minimum risk levels (MRLs) may not protect against delayed health effects or health effects acquired following repeated sublethal exposure, such as hypersensitivity, asthma, or bronchitis. MRLs may be revised after sufficient data accumulates. Applications Mining Cyanide is mainly produced for the mining of silver and gold: it helps dissolve these metals, allowing separation from the other solids. In the cyanide process, finely ground high-grade ore is mixed with the cyanide (at a ratio of about 1:500 parts NaCN to ore); low-grade ores are stacked into heaps and sprayed with a cyanide solution (at a ratio of about 1:1000 parts NaCN to ore). The precious metals are complexed by the cyanide anions to form soluble derivatives, e.g., [Ag(CN)2]− (dicyanoargentate(I)) and [Au(CN)2]− (dicyanoaurate(I)). Silver is less "noble" than gold and often occurs as the sulfide, in which case redox is not invoked (no O2 is required). Instead, a displacement reaction occurs: Ag2S + 4 NaCN + H2O -> 2 Na[Ag(CN)2] + NaSH + NaOH 4 Au + 8 NaCN + O2 + 2 H2O -> 4 Na[Au(CN)2] + 4 NaOH The "pregnant liquor" containing these ions is separated from the solids, which are discarded to a tailing pond or spent heap, the recoverable gold having been removed. The metal is recovered from the "pregnant solution" by reduction with zinc dust or by adsorption onto activated carbon. This process can result in environmental and health problems. A number of environmental disasters have followed the overflow of tailing ponds at gold mines. Cyanide contamination of waterways has resulted in numerous cases of human and aquatic species mortality. Aqueous cyanide is hydrolyzed rapidly, especially in sunlight. It can mobilize some heavy metals such as mercury if present. Gold can also be associated with arsenopyrite (FeAsS), which is similar to iron pyrite (fool's gold), wherein half of the sulfur atoms are replaced by arsenic. Gold-containing arsenopyrite ores are similarly reactive toward inorganic cyanide. Industrial organic chemistry The second major application of alkali metal cyanides (after mining) is in the production of CN-containing compounds, usually nitriles. Acyl cyanides are produced from acyl chlorides and cyanide. Cyanogen, cyanogen chloride, and the trimer cyanuric chloride are derived from alkali metal cyanides.
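Returning briefly to the mixing ratios quoted in the mining paragraph above, the implied reagent consumption per tonne of ore is simple arithmetic; the sketch below is illustrative only and does not account for losses, recycling, or solution strength.

```python
# Rough NaCN requirement per tonne of ore implied by the ratios quoted above
# (about 1:500 NaCN-to-ore for milled high-grade ore, 1:1000 for heap
# leaching of low-grade ore). Purely illustrative arithmetic.
ORE_TONNE_KG = 1000.0

def nacn_per_tonne(ratio: float) -> float:
    """kg of NaCN per tonne of ore for a 1:ratio NaCN-to-ore mass ratio."""
    return ORE_TONNE_KG / ratio

print(f"High-grade (1:500):  {nacn_per_tonne(500):.1f} kg NaCN per tonne of ore")
print(f"Low-grade  (1:1000): {nacn_per_tonne(1000):.1f} kg NaCN per tonne of ore")
```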
Medical uses The cyanide compound sodium nitroprusside is used mainly in clinical chemistry to measure urine ketone bodies mainly as a follow-up to diabetic patients. On occasion, it is used in emergency medical situations to produce a rapid decrease in blood pressure in humans; it is also used as a vasodilator in vascular research. The cobalt in artificial vitamin B12 contains a cyanide ligand as an artifact of the purification process; this must be removed by the body before the vitamin molecule can be activated for biochemical use. During World War I, a copper cyanide compound was briefly used by Japanese physicians for the treatment of tuberculosis and leprosy. Illegal fishing and poaching Cyanides are illegally used to capture live fish near coral reefs for the aquarium and seafood markets. The practice is controversial, dangerous, and damaging but is driven by the lucrative exotic fish market. Poachers in Africa have been known to use cyanide to poison waterholes, to kill elephants for their ivory. Pest control M44 cyanide devices are used in the United States to kill coyotes and other canids. Cyanide is also used for pest control in New Zealand, particularly for possums, an introduced marsupial that threatens the conservation of native species and spreads tuberculosis amongst cattle. Possums can become bait shy but the use of pellets containing the cyanide reduces bait shyness. Cyanide has been known to kill native birds, including the endangered kiwi. Cyanide is also effective for controlling the dama wallaby, another introduced marsupial pest in New Zealand. A licence is required to store, handle and use cyanide in New Zealand. Cyanides are used as insecticides for fumigating ships. Cyanide salts are used for killing ants, and have in some places been used as rat poison (the less toxic poison arsenic is more common). Niche uses Potassium ferrocyanide is used to achieve a blue color on cast bronze sculptures during the final finishing stage of the sculpture. On its own, it will produce a very dark shade of blue and is often mixed with other chemicals to achieve the desired tint and hue. It is applied using a torch and paint brush while wearing the standard safety equipment used for any patina application: rubber gloves, safety glasses, and a respirator. The actual amount of cyanide in the mixture varies according to the recipes used by each foundry. Cyanide is also used in jewelry-making and certain kinds of photography such as sepia toning. Although usually thought to be toxic, cyanide and cyanohydrins increase germination in various plant species. Human poisoning Deliberate cyanide poisoning of humans has occurred many times throughout history. Common salts such as sodium cyanide are involatile but water-soluble, so are poisonous by ingestion. Hydrogen cyanide is a gas, making it more indiscriminately dangerous, however it is lighter than air and rapidly disperses up into the atmosphere, which makes it ineffective as a chemical weapon. Food additive Because of the high stability of their complexation with iron, ferrocyanides (Sodium ferrocyanide E535, Potassium ferrocyanide E536, and Calcium ferrocyanide E538) do not decompose to lethal levels in the human body and are used in the food industry as, e.g., an anticaking agent in table salt. Chemical tests for cyanide Cyanide is quantified by potentiometric titration, a method widely used in gold mining. It can also be determined by titration with silver ion. 
Some analyses begin with an air-purge of an acidified boiling solution, sweeping the vapors into a basic absorber solution. The cyanide salt absorbed in the basic solution is then analyzed. Qualitative tests Because of the notorious toxicity of cyanide, many methods have been investigated. Benzidine gives a blue coloration in the presence of ferricyanide. Iron(II) sulfate added to a solution of cyanide, such as the filtrate from the sodium fusion test, gives prussian blue. A solution of para-benzoquinone in DMSO reacts with inorganic cyanide to form a cyanophenol, which is fluorescent. Illumination with a UV light gives a green/blue glow if the test is positive. References External links ATSDR medical management guidelines for cyanide poisoning (US) HSE recommendations for first aid treatment of cyanide poisoning (UK) Hydrogen cyanide and cyanides (CICAD 61) IPCS/CEC Evaluation of antidotes for poisoning by cyanides National Pollutant Inventory – Cyanide compounds fact sheet Eating apple seeds is safe despite the small amount of cyanide Toxicological Profile for Cyanide, U.S. Department of Health and Human Services, July 2006 Safety data (French) Institut national de recherche et de sécurité (1997). "Cyanure d'hydrogène et solutions aqueuses". Fiche toxicologique n° 4, Paris: INRS, 5 pp. (PDF file, ) Institut national de recherche et de sécurité (1997). "Cyanure de sodium. Cyanure de potassium". Fiche toxicologique n° 111, Paris: INRS, 6 pp. (PDF file, ) Cyanides Anions Blood agents Mitochondrial toxins Nitrogen(−III) compounds Toxicology
Cyanide
[ "Physics", "Chemistry", "Environmental_science" ]
3,671
[ "Matter", "Toxicology", "Anions", "Chemical weapons", "Blood agents", "Ions" ]
5,914
https://en.wikipedia.org/wiki/Catalysis
Catalysis is the increase in rate of a chemical reaction due to an added substance known as a catalyst. Catalysts are not consumed by the reaction and remain unchanged after it. If the reaction is rapid and the catalyst recycles quickly, very small amounts of catalyst often suffice; mixing, surface area, and temperature are important factors in reaction rate. Catalysts generally react with one or more reactants to form intermediates that subsequently give the final reaction product, in the process regenerating the catalyst. The rate increase occurs because the catalyst allows the reaction to occur by an alternative mechanism which may be much faster than the non-catalyzed mechanism. However, the non-catalyzed mechanism remains possible, so the total rate (catalyzed plus non-catalyzed) can only increase in the presence of the catalyst, never decrease. Catalysis may be classified as either homogeneous, whose components are dispersed in the same phase (usually gaseous or liquid) as the reactant, or heterogeneous, whose components are not in the same phase. Enzymes and other biocatalysts are often considered as a third category. Catalysis is ubiquitous in the chemical industry of all kinds. Estimates are that 90% of all commercially produced chemical products involve catalysts at some stage in the process of their manufacture. The term "catalyst" is derived from Greek καταλύειν, kataluein, meaning "loosen" or "untie". The concept of catalysis was invented by chemist Elizabeth Fulhame, based on her novel work in oxidation-reduction experiments. General principles Example An illustrative example is the effect of catalysts to speed the decomposition of hydrogen peroxide into water and oxygen: 2 H2O2 → 2 H2O + O2 This reaction proceeds because the reaction products are more stable than the starting compound, but this decomposition is so slow that hydrogen peroxide solutions are commercially available. In the presence of a catalyst such as manganese dioxide this reaction proceeds much more rapidly. This effect is readily seen by the effervescence of oxygen. The catalyst is not consumed in the reaction, and may be recovered unchanged and re-used indefinitely. Accordingly, manganese dioxide is said to catalyze this reaction. In living organisms, this reaction is catalyzed by enzymes (proteins that serve as catalysts) such as catalase. Another example is the effect of catalysts on air pollution, such as reducing the amount of carbon monoxide. Development of active and selective catalysts for the conversion of carbon monoxide into desirable products is one of the most important roles of catalysts. Using catalysts for hydrogenation of carbon monoxide helps to remove this toxic gas and also attain useful materials. Units The SI derived unit for measuring the catalytic activity of a catalyst is the katal, which is quantified in moles per second. The productivity of a catalyst can be described by the turnover number (or TON) and the catalytic activity by the turnover frequency (TOF), which is the TON per time unit. The biochemical equivalent is the enzyme unit. For more information on the efficiency of enzymatic catalysis, see the article on enzymes. Catalytic reaction mechanisms In general, chemical reactions occur faster in the presence of a catalyst because the catalyst provides an alternative reaction mechanism (reaction pathway) having a lower activation energy than the non-catalyzed mechanism. In catalyzed mechanisms, the catalyst is regenerated.
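The effect of that lower activation energy on rate can be made concrete with the Arrhenius equation, k = A·exp(−Ea/RT). In the sketch below the two barrier heights are illustrative assumptions, not values from the text, and the pre-exponential factor A is taken to be the same for both pathways.

```python
# How much a lower activation energy speeds a reaction, via the Arrhenius
# equation k = A * exp(-Ea / (R*T)). The barrier heights are illustrative
# assumptions; A is assumed identical for the catalyzed and uncatalyzed paths,
# so it cancels out of the ratio.
import math

R = 8.314        # J/(mol*K)
T = 298.15       # K, room temperature

Ea_uncatalyzed = 75e3   # J/mol (assumed)
Ea_catalyzed = 50e3     # J/mol (assumed)

enhancement = math.exp((Ea_uncatalyzed - Ea_catalyzed) / (R * T))
print(f"Lowering the barrier by 25 kJ/mol speeds the reaction ~{enhancement:.1e}-fold")
# -> roughly 2e4-fold at room temperature
```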
As a simple example occurring in the gas phase, the reaction 2 SO2 + O2 → 2 SO3 can be catalyzed by adding nitric oxide. The reaction occurs in two steps: 2 NO + O2 → 2 NO2 (rate-determining) and NO2 + SO2 → NO + SO3 (fast). The NO catalyst is regenerated. The overall rate is the rate of the slow step, v = 2k1[NO]^2[O2]. An example of heterogeneous catalysis is the reaction of oxygen and hydrogen on the surface of titanium dioxide (TiO2, or titania) to produce water. Scanning tunneling microscopy showed that the molecules undergo adsorption and dissociation. The dissociated, surface-bound O and H atoms diffuse together. The intermediate reaction states are: HO2, H2O2, then H3O2 and the reaction product (water molecule dimers), after which the water molecule desorbs from the catalyst surface. Reaction energetics Catalysts enable pathways that differ from the uncatalyzed reactions. These pathways have lower activation energy. Consequently, more molecular collisions have the energy needed to reach the transition state. Hence, catalysts can enable reactions that would otherwise be blocked or slowed by a kinetic barrier. The catalyst may increase the reaction rate or selectivity, or enable the reaction at lower temperatures. This effect can be illustrated with an energy profile diagram. In the catalyzed elementary reaction, catalysts do not change the extent of a reaction: they have no effect on the chemical equilibrium of a reaction. The ratio of the forward and the reverse reaction rates is unaffected (see also thermodynamics). The second law of thermodynamics describes why a catalyst does not change the chemical equilibrium of a reaction. Suppose there were such a catalyst that shifted an equilibrium. Introducing the catalyst to the system would result in a reaction moving to the new equilibrium, producing energy. Production of energy is a necessary result since reactions are spontaneous only if Gibbs free energy is produced, and if there is no energy barrier, there is no need for a catalyst. Then, removing the catalyst would also result in a reaction, producing energy; i.e. the addition and its reverse process, removal, would both produce energy. Thus, a catalyst that could change the equilibrium would be a perpetual motion machine, a contradiction to the laws of thermodynamics. Thus, catalysts do not alter the equilibrium constant. (A catalyst can however change the equilibrium concentrations by reacting in a subsequent step. It is then consumed as the reaction proceeds, and thus it is also a reactant. Illustrative is the base-catalyzed hydrolysis of esters, where the produced carboxylic acid immediately reacts with the base catalyst and thus the reaction equilibrium is shifted towards hydrolysis.) The catalyst stabilizes the transition state more than it stabilizes the starting material. It decreases the kinetic barrier by decreasing the difference in energy between starting material and the transition state. It does not change the energy difference between starting materials and products (thermodynamic barrier), or the available energy (this is provided by the environment as heat or light). Related concepts Some so-called catalysts are really precatalysts. Precatalysts convert to catalysts in the reaction. For example, Wilkinson's catalyst RhCl(PPh3)3 loses one triphenylphosphine ligand before entering the true catalytic cycle. Precatalysts are easier to store but are easily activated in situ. Because of this preactivation step, many catalytic reactions involve an induction period.
In cooperative catalysis, chemical species that improve catalytic activity are called cocatalysts or promoters. In tandem catalysis two or more different catalysts are coupled in a one-pot reaction. In autocatalysis, the catalyst is a product of the overall reaction, in contrast to all other types of catalysis considered in this article. The simplest example of autocatalysis is a reaction of type A + B → 2 B, in one or in several steps. The overall reaction is just A → B, so that B is a product. But since B is also a reactant, it may be present in the rate equation and affect the reaction rate. As the reaction proceeds, the concentration of B increases and can accelerate the reaction as a catalyst. In effect, the reaction accelerates itself or is autocatalyzed. An example is the hydrolysis of an ester such as aspirin to a carboxylic acid and an alcohol. In the absence of added acid catalysts, the carboxylic acid product catalyzes the hydrolysis. Switchable catalysis refers to a type of catalysis where the catalyst can be toggled between different ground states possessing distinct reactivity, typically by applying an external stimulus. This ability to reversibly switch the catalyst allows for spatiotemporal control over catalytic activity and selectivity. The external stimuli used to switch the catalyst can include changes in temperature, pH, light, electric fields, or the addition of chemical agents. A true catalyst can work in tandem with a sacrificial catalyst. The true catalyst is consumed in the elementary reaction and turned into a deactivated form. The sacrificial catalyst regenerates the true catalyst for another cycle. The sacrificial catalyst is consumed in the reaction, and as such, it is not really a catalyst, but a reagent. For example, osmium tetroxide (OsO4) is a good reagent for dihydroxylation, but it is highly toxic and expensive. In Upjohn dihydroxylation, the sacrificial catalyst N-methylmorpholine N-oxide (NMMO) regenerates OsO4, and only catalytic quantities of OsO4 are needed. Classification Catalysis may be classified as either homogeneous or heterogeneous. A homogeneous catalysis is one whose components are dispersed in the same phase (usually gaseous or liquid) as the reactant's molecules. A heterogeneous catalysis is one where the reaction components are not in the same phase. Enzymes and other biocatalysts are often considered as a third category. Similar mechanistic principles apply to heterogeneous, homogeneous, and biocatalysis. Heterogeneous catalysis Heterogeneous catalysts act in a different phase than the reactants. Most heterogeneous catalysts are solids that act on substrates in a liquid or gaseous reaction mixture. Important heterogeneous catalysts include zeolites, alumina, higher-order oxides, graphitic carbon, transition metal oxides, metals such as Raney nickel for hydrogenation, and vanadium(V) oxide for oxidation of sulfur dioxide into sulfur trioxide by the contact process. Diverse mechanisms for reactions on surfaces are known, depending on how the adsorption takes place (Langmuir-Hinshelwood, Eley-Rideal, and Mars-van Krevelen). The total surface area of a solid has an important effect on the reaction rate. The smaller the catalyst particle size, the larger the surface area for a given mass of particles. A heterogeneous catalyst has active sites, which are the atoms or crystal faces where the substrate actually binds. Active sites are atoms but are often described as a facet (edge, surface, step, etc.) of a solid. 
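Returning briefly to the autocatalytic scheme A + B → 2 B described above: a short numerical sketch shows the characteristic self-acceleration as the product B accumulates. The rate constant, initial concentrations, and step size below are illustrative assumptions, and the integration is a simple forward-Euler scheme rather than anything more refined.

```python
# Numerical sketch of autocatalysis for the schematic reaction A + B -> 2 B
# discussed above, with rate = k*[A]*[B]. Rate constant, initial
# concentrations and step size are illustrative assumptions.
k = 1.0          # L/(mol*s), assumed rate constant
A, B = 1.0, 0.01 # mol/L: mostly A, with a trace of B to seed the reaction
dt = 0.1         # s, forward-Euler integration step

for step in range(201):
    if step % 50 == 0:
        print(f"t = {step * dt:5.1f} s   [A] = {A:.3f}   [B] = {B:.3f}")
    rate = k * A * B          # accelerates as B accumulates
    A -= rate * dt
    B += rate * dt
# [B] grows slowly at first, then rapidly once enough product has formed --
# the reaction effectively catalyzes itself.
```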
Most of the volume but also most of the surface of a heterogeneous catalyst may be catalytically inactive. Finding out the nature of the active site is technically challenging. For example, the catalyst for the Haber process for the synthesis of ammonia from nitrogen and hydrogen is often described as iron. But detailed studies and many optimizations have led to catalysts that are mixtures of iron-potassium-calcium-aluminum-oxide. The reacting gases adsorb onto active sites on the iron particles. Once physically adsorbed, the reagents partially or wholly dissociate and form new bonds. In this way the particularly strong triple bond in nitrogen is broken, which would be extremely uncommon in the gas phase due to its high activation energy. Thus, the activation energy of the overall reaction is lowered, and the rate of reaction increases. Another place where a heterogeneous catalyst is applied is in the oxidation of sulfur dioxide on vanadium(V) oxide for the production of sulfuric acid. Many heterogeneous catalysts are in fact nanomaterials. Heterogeneous catalysts are typically "supported", which means that the catalyst is dispersed on a second material that enhances the effectiveness or minimizes its cost. Supports prevent or minimize agglomeration and sintering of small catalyst particles, exposing more surface area, thus catalysts have a higher specific activity (per gram) on support. Sometimes the support is merely a surface on which the catalyst is spread to increase the surface area. More often, the support and the catalyst interact, affecting the catalytic reaction. Supports can also be used in nanoparticle synthesis by providing sites for individual molecules of catalyst to chemically bind. Supports are porous materials with a high surface area, most commonly alumina, zeolites, or various kinds of activated carbon. Specialized supports include silicon dioxide, titanium dioxide, calcium carbonate, and barium sulfate. Electrocatalysts In the context of electrochemistry, specifically in fuel cell engineering, various metal-containing catalysts are used to enhance the rates of the half reactions that comprise the fuel cell. One common type of fuel cell electrocatalyst is based upon nanoparticles of platinum that are supported on slightly larger carbon particles. When in contact with one of the electrodes in a fuel cell, this platinum increases the rate of oxygen reduction either to water or to hydroxide or hydrogen peroxide. Homogeneous catalysis Homogeneous catalysts function in the same phase as the reactants. Typically homogeneous catalysts are dissolved in a solvent with the substrates. One example of homogeneous catalysis involves the influence of H on the esterification of carboxylic acids, such as the formation of methyl acetate from acetic acid and methanol. High-volume processes requiring a homogeneous catalyst include hydroformylation, hydrosilylation, hydrocyanation. For inorganic chemists, homogeneous catalysis is often synonymous with organometallic catalysts. Many homogeneous catalysts are however not organometallic, illustrated by the use of cobalt salts that catalyze the oxidation of p-xylene to terephthalic acid. Organocatalysis Whereas transition metals sometimes attract most of the attention in the study of catalysis, small organic molecules without metals can also exhibit catalytic properties, as is apparent from the fact that many enzymes lack transition metals. 
Typically, organic catalysts require a higher loading (amount of catalyst per unit amount of reactant, expressed in mol% amount of substance) than transition metal(-ion)-based catalysts, but these catalysts are usually commercially available in bulk, helping to lower costs. In the early 2000s, these organocatalysts were considered "new generation" and are competitive with traditional metal(-ion)-containing catalysts. Organocatalysts are supposed to operate akin to metal-free enzymes utilizing, e.g., non-covalent interactions such as hydrogen bonding. The discipline organocatalysis is divided into the application of covalent (e.g., proline, DMAP) and non-covalent (e.g., thiourea organocatalysis) organocatalysts, referring to the preferred catalyst-substrate binding and interaction, respectively. The Nobel Prize in Chemistry 2021 was awarded jointly to Benjamin List and David W.C. MacMillan "for the development of asymmetric organocatalysis." Photocatalysts Photocatalysis is the phenomenon where the catalyst can receive light to generate an excited state that effects redox reactions. Singlet oxygen is usually produced by photocatalysis. Photocatalysts are components of dye-sensitized solar cells. Enzymes and biocatalysts In biology, enzymes are protein-based catalysts in metabolism and catabolism. Most biocatalysts are enzymes, but other non-protein-based classes of biomolecules also exhibit catalytic properties, including ribozymes and synthetic deoxyribozymes. Biocatalysts can be thought of as an intermediate between homogeneous and heterogeneous catalysts, although strictly speaking soluble enzymes are homogeneous catalysts and membrane-bound enzymes are heterogeneous. Several factors affect the activity of enzymes (and other catalysts) including temperature, pH, the concentration of enzymes, substrate, and products. A particularly important reagent in enzymatic reactions is water, which is the product of many bond-forming reactions and a reactant in many bond-breaking processes. In biocatalysis, enzymes are employed to prepare many commodity chemicals including high-fructose corn syrup and acrylamide. Some monoclonal antibodies whose binding target is a stable molecule that resembles the transition state of a chemical reaction can function as weak catalysts for that chemical reaction by lowering its activation energy. Such catalytic antibodies are sometimes called "abzymes". Significance Estimates are that 90% of all commercially produced chemical products involve catalysts at some stage in the process of their manufacture. In 2005, catalytic processes generated about $900 billion in products worldwide. Catalysis is so pervasive that subareas are not readily classified. Some areas of particular concentration are surveyed below. Energy processing Petroleum refining makes intensive use of catalysis for alkylation, catalytic cracking (breaking long-chain hydrocarbons into smaller pieces), naphtha reforming and steam reforming (conversion of hydrocarbons into synthesis gas). Even the exhaust from the burning of fossil fuels is treated via catalysis: catalytic converters, typically composed of platinum and rhodium, break down some of the more harmful byproducts of automobile exhaust: 2 CO + 2 NO → 2 CO2 + N2 With regard to synthetic fuels, an old but still important process is the Fischer–Tropsch synthesis of hydrocarbons from synthesis gas, which itself is processed via water-gas shift reactions, catalyzed by iron. The Sabatier reaction produces methane from carbon dioxide and hydrogen.
Biodiesel and related biofuels require processing via both inorganic catalysts and biocatalysts. Fuel cells rely on catalysts for both the anodic and cathodic reactions. Catalytic heaters generate flameless heat from a supply of combustible fuel. Bulk chemicals Some of the largest-scale chemicals are produced via catalytic oxidation, often using oxygen. Examples include nitric acid (from ammonia), sulfuric acid (from sulfur dioxide to sulfur trioxide by the contact process), terephthalic acid from p-xylene, acrylic acid from propylene or propane, and acrylonitrile from propane and ammonia. The production of ammonia is one of the largest-scale and most energy-intensive processes. In the Haber process nitrogen is combined with hydrogen over an iron oxide catalyst. Methanol is prepared from carbon monoxide or carbon dioxide, but using copper-zinc catalysts. Bulk polymers derived from ethylene and propylene are often prepared using Ziegler–Natta catalysts. Polyesters, polyamides, and isocyanates are derived via acid–base catalysis. Most carbonylation processes require metal catalysts; examples include the Monsanto acetic acid process and hydroformylation. Fine chemicals Many fine chemicals are prepared via catalysis; methods include those of heavy industry as well as more specialized processes that would be prohibitively expensive on a large scale. Examples include the Heck reaction and Friedel–Crafts reactions. Because most bioactive compounds are chiral, many pharmaceuticals are produced by enantioselective catalysis (catalytic asymmetric synthesis). (R)-1,2-propanediol, the precursor to the antibacterial levofloxacin, can be synthesized efficiently from hydroxyacetone by using catalysts based on BINAP-ruthenium complexes, in Noyori asymmetric hydrogenation. Food processing One of the most obvious applications of catalysis is the hydrogenation (reaction with hydrogen gas) of fats using a nickel catalyst to produce margarine. Many other foodstuffs are prepared via biocatalysis (see above). Environment Catalysis affects the environment by increasing the efficiency of industrial processes, but catalysis also plays a direct role in the environment. A notable example is the catalytic role of chlorine free radicals in the breakdown of ozone. These radicals are formed by the action of ultraviolet radiation on chlorofluorocarbons (CFCs): Cl + O3 → ClO + O2 and ClO + O → Cl + O2 History The term "catalyst", broadly defined as anything that increases the rate of a process, is derived from Greek καταλύειν, meaning "to annul", or "to untie", or "to pick up". The concept of catalysis was invented by chemist Elizabeth Fulhame and described in a 1794 book, based on her novel work in oxidation–reduction reactions. The first chemical reaction in organic chemistry that knowingly used a catalyst was studied in 1811 by Gottlieb Kirchhoff, who discovered the acid-catalyzed conversion of starch to glucose. The term catalysis was later used by Jöns Jakob Berzelius in 1835 to describe reactions that are accelerated by substances that remain unchanged after the reaction. Fulhame, who predated Berzelius, did work with water as opposed to metals in her reduction experiments. Other early chemists who worked in catalysis were Eilhard Mitscherlich, who referred to it as contact processes, and Johann Wolfgang Döbereiner, who spoke of contact action. He developed Döbereiner's lamp, a lighter based on hydrogen and a platinum sponge, which became a commercial success in the 1820s and lives on today.
Humphry Davy discovered the use of platinum in catalysis. In the 1880s, Wilhelm Ostwald at Leipzig University started a systematic investigation into reactions that were catalyzed by the presence of acids and bases, and found that chemical reactions occur at finite rates and that these rates can be used to determine the strengths of acids and bases. For this work, Ostwald was awarded the 1909 Nobel Prize in Chemistry. Vladimir Ipatieff performed some of the earliest industrial scale reactions, including the discovery and commercialization of oligomerization and the development of catalysts for hydrogenation. Inhibitors, poisons, and promoters An added substance that lowers the rate is called a reaction inhibitor if reversible and catalyst poisons if irreversible. Promoters are substances that increase the catalytic activity, even though they are not catalysts by themselves. Inhibitors are sometimes referred to as "negative catalysts" since they decrease the reaction rate. However the term inhibitor is preferred since they do not work by introducing a reaction path with higher activation energy; this would not lower the rate since the reaction would continue to occur by the non-catalyzed path. Instead, they act either by deactivating catalysts or by removing reaction intermediates such as free radicals. In heterogeneous catalysis, coking inhibits the catalyst, which becomes covered by polymeric side products. The inhibitor may modify selectivity in addition to rate. For instance, in the hydrogenation of alkynes to alkenes, a palladium (Pd) catalyst partly "poisoned" with lead(II) acetate (Pb(CHCO)) can be used (Lindlar catalyst). Without the deactivation of the catalyst, the alkene produced would be further hydrogenated to alkane. The inhibitor can produce this effect by, e.g., selectively poisoning only certain types of active sites. Another mechanism is the modification of surface geometry. For instance, in hydrogenation operations, large planes of metal surface function as sites of hydrogenolysis catalysis while sites catalyzing hydrogenation of unsaturates are smaller. Thus, a poison that covers the surface randomly will tend to lower the number of uncontaminated large planes but leave proportionally smaller sites free, thus changing the hydrogenation vs. hydrogenolysis selectivity. Many other mechanisms are also possible. Promoters can cover up the surface to prevent the production of a mat of coke, or even actively remove such material (e.g., rhenium on platinum in platforming). They can aid the dispersion of the catalytic material or bind to reagents. See also References External links Science Aid: Catalysts Page for high school level science W.A. Herrmann Technische Universität presentation Alumite Catalyst, Kameyama-Sakurai Laboratory, Japan Inorganic Chemistry and Catalysis Group, Utrecht University, The Netherlands Centre for Surface Chemistry and Catalysis CarboCat Laboratory, University of Concepcion, Chile NSF CENTC, Center for Enabling New Technologies (through catalysis) "Bubbles turn on chemical catalysts", Science News, April 6, 2009. Catalysis Chemical kinetics Articles containing video clips
Catalysis
[ "Chemistry" ]
5,147
[ "Catalysis", "Chemical kinetics", "Chemical reaction engineering" ]
5,918
https://en.wikipedia.org/wiki/Continuum%20mechanics
Continuum mechanics is a branch of mechanics that deals with the deformation of and transmission of forces through materials modeled as a continuous medium (also called a continuum) rather than as discrete particles. Continuum mechanics deals with deformable bodies, as opposed to rigid bodies. A continuum model assumes that the substance of the object completely fills the space it occupies. While ignoring the fact that matter is made of atoms, this provides a sufficiently accurate description of matter on length scales much greater than that of inter-atomic distances. The concept of a continuous medium allows for intuitive analysis of bulk matter by using differential equations that describe the behavior of such matter according to physical laws, such as mass conservation, momentum conservation, and energy conservation. Information about the specific material is expressed in constitutive relationships. Continuum mechanics treats the physical properties of solids and fluids independently of any particular coordinate system in which they are observed. These properties are represented by tensors, which are mathematical objects with the salient property of being independent of coordinate systems. This permits definition of physical properties at any point in the continuum, according to mathematically convenient continuous functions. The theories of elasticity, plasticity and fluid mechanics are based on the concepts of continuum mechanics. Concept of a continuum The concept of a continuum underlies the mathematical framework for studying large-scale forces and deformations in materials. Although materials are composed of discrete atoms and molecules, separated by empty space or microscopic cracks and crystallographic defects, physical phenomena can often be modeled by considering a substance distributed throughout some region of space. A continuum is a body that can be continually sub-divided into infinitesimal elements with local material properties defined at any particular point. Properties of the bulk material can therefore be described by continuous functions, and their evolution can be studied using the mathematics of calculus. Apart from the assumption of continuity, two other independent assumptions are often employed in the study of continuum mechanics. These are homogeneity (assumption of identical properties at all locations) and isotropy (assumption of directionally invariant vector properties). If these auxiliary assumptions are not globally applicable, the material may be segregated into sections where they are applicable in order to simplify the analysis. For more complex cases, one or both of these assumptions can be dropped. In these cases, computational methods are often used to solve the differential equations describing the evolution of material properties. Major areas An additional area of continuum mechanics comprises elastomeric foams, which exhibit a curious hyperbolic stress-strain relationship. The elastomer is a true continuum, but a homogeneous distribution of voids gives it unusual properties. Formulation of models Continuum mechanics models begin by assigning a region in three-dimensional Euclidean space to the material body being modeled. The points within this region are called particles or material points. Different configurations or states of the body correspond to different regions in Euclidean space. The region corresponding to the body's configuration at time is labeled . 
A particular particle within the body in a particular configuration is characterized by a position vector where are the coordinate vectors in some frame of reference chosen for the problem (See figure 1). This vector can be expressed as a function of the particle position in some reference configuration, for example the configuration at the initial time, so that This function needs to have various properties so that the model makes physical sense. needs to be: continuous in time, so that the body changes in a way which is realistic, globally invertible at all times, so that the body cannot intersect itself, orientation-preserving, as transformations which produce mirror reflections are not possible in nature. For the mathematical formulation of the model, is also assumed to be twice continuously differentiable, so that differential equations describing the motion may be formulated. Forces in a continuum A solid is a deformable body that possesses shear strength, sc. a solid can support shear forces (forces parallel to the material surface on which they act). Fluids, on the other hand, do not sustain shear forces. Following the classical dynamics of Newton and Euler, the motion of a material body is produced by the action of externally applied forces which are assumed to be of two kinds: surface forces and body forces . Thus, the total force applied to a body or to a portion of the body can be expressed as: Surface forces Surface forces or contact forces, expressed as force per unit area, can act either on the bounding surface of the body, as a result of mechanical contact with other bodies, or on imaginary internal surfaces that bound portions of the body, as a result of the mechanical interaction between the parts of the body to either side of the surface (Euler-Cauchy's stress principle). When a body is acted upon by external contact forces, internal contact forces are then transmitted from point to point inside the body to balance their action, according to Newton's third law of motion of conservation of linear momentum and angular momentum (for continuous bodies these laws are called the Euler's equations of motion). The internal contact forces are related to the body's deformation through constitutive equations. The internal contact forces may be mathematically described by how they relate to the motion of the body, independent of the body's material makeup. The distribution of internal contact forces throughout the volume of the body is assumed to be continuous. Therefore, there exists a contact force density or Cauchy traction field that represents this distribution in a particular configuration of the body at a given time . It is not a vector field because it depends not only on the position of a particular material point, but also on the local orientation of the surface element as defined by its normal vector . Any differential area with normal vector of a given internal surface area , bounding a portion of the body, experiences a contact force arising from the contact between both portions of the body on each side of , and it is given by where is the surface traction, also called stress vector, traction, or traction vector. The stress vector is a frame-indifferent vector (see Euler-Cauchy's stress principle). 
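The displayed relation for this contact force is elided in this extract; in commonly used notation (an assumption, since the original symbols are not preserved here), the force on a differential internal surface element of area dA with outward normal n reads as below, with t^(n) the traction vector just described.

```latex
% Assumed standard notation; the original displayed equation is elided in this extract.
d\mathbf{F}_C = \mathbf{t}^{(\mathbf{n})}(\mathbf{x}, t)\, dA
```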
The total contact force on the particular internal surface is then expressed as the sum (surface integral) of the contact forces on all differential surfaces : In continuum mechanics a body is considered stress-free if the only forces present are those inter-atomic forces (ionic, metallic, and van der Waals forces) required to hold the body together and to keep its shape in the absence of all external influences, including gravitational attraction. Stresses generated during manufacture of the body to a specific configuration are also excluded when considering stresses in a body. Therefore, the stresses considered in continuum mechanics are only those produced by deformation of the body, sc. only relative changes in stress are considered, not the absolute values of stress. Body forces Body forces are forces originating from sources outside of the body that act on the volume (or mass) of the body. Saying that body forces are due to outside sources implies that the interaction between different parts of the body (internal forces) are manifested through the contact forces alone. These forces arise from the presence of the body in force fields, e.g. gravitational field (gravitational forces) or electromagnetic field (electromagnetic forces), or from inertial forces when bodies are in motion. As the mass of a continuous body is assumed to be continuously distributed, any force originating from the mass is also continuously distributed. Thus, body forces are specified by vector fields which are assumed to be continuous over the entire volume of the body, i.e. acting on every point in it. Body forces are represented by a body force density (per unit of mass), which is a frame-indifferent vector field. In the case of gravitational forces, the intensity of the force depends on, or is proportional to, the mass density of the material, and it is specified in terms of force per unit mass () or per unit volume (). These two specifications are related through the material density by the equation . Similarly, the intensity of electromagnetic forces depends upon the strength (electric charge) of the electromagnetic field. The total body force applied to a continuous body is expressed as Body forces and contact forces acting on the body lead to corresponding moments of force (torques) relative to a given point. Thus, the total applied torque about the origin is given by In certain situations, not commonly considered in the analysis of the mechanical behavior of materials, it becomes necessary to include two other types of forces: these are couple stresses (surface couples, contact torques) and body moments. Couple stresses are moments per unit area applied on a surface. Body moments, or body couples, are moments per unit volume or per unit mass applied to the volume of the body. Both are important in the analysis of stress for a polarized dielectric solid under the action of an electric field, materials where the molecular structure is taken into consideration (e.g. bones), solids under the action of an external magnetic field, and the dislocation theory of metals. Materials that exhibit body couples and couple stresses in addition to moments produced exclusively by forces are called polar materials. Non-polar materials are then those materials with only moments of forces. In the classical branches of continuum mechanics the development of the theory of stresses is based on non-polar materials. 
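The integrals described in words above are also elided in this extract. In commonly used notation (assumed symbols: t^(n) for the traction, b for the body force per unit mass, ρ for the mass density, r for the position vector), the total contact force, total body force, and total torque about the origin take the forms sketched below.

```latex
% Assumed standard notation; the original displayed equations are elided in this extract.
\mathbf{F}_C = \int_{S} \mathbf{t}^{(\mathbf{n})}\, dA ,
\qquad
\mathbf{F}_B = \int_{V} \rho\,\mathbf{b}\; dV ,
\qquad
\mathbf{M} = \int_{S} \mathbf{r}\times\mathbf{t}^{(\mathbf{n})}\, dA
           + \int_{V} \mathbf{r}\times\rho\,\mathbf{b}\; dV .
```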
Thus, the sum of all applied forces and torques (with respect to the origin of the coordinate system) in the body can be given by Kinematics: motion and deformation A change in the configuration of a continuum body results in a displacement. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration to a current or deformed configuration (Figure 2). The motion of a continuum body is a continuous time sequence of displacements. Thus, the material body will occupy different configurations at different times so that a particle occupies a series of points in space which describe a path line. There is continuity during motion or deformation of a continuum body in the sense that: The material points forming a closed curve at any instant will always form a closed curve at any subsequent time. The material points forming a closed surface at any instant will always form a closed surface at any subsequent time and the matter within the closed surface will always remain within. It is convenient to identify a reference configuration or initial condition which all subsequent configurations are referenced from. The reference configuration need not be one that the body will ever occupy. Often, the configuration at is considered the reference configuration, . The components of the position vector of a particle, taken with respect to the reference configuration, are called the material or reference coordinates. When analyzing the motion or deformation of solids, or the flow of fluids, it is necessary to describe the sequence or evolution of configurations throughout time. One description for motion is made in terms of the material or referential coordinates, called material description or Lagrangian description. Lagrangian description In the Lagrangian description the position and physical properties of the particles are described in terms of the material or referential coordinates and time. In this case the reference configuration is the configuration at . An observer standing in the frame of reference observes the changes in the position and physical properties as the material body moves in space as time progresses. The results obtained are independent of the choice of initial time and reference configuration, . This description is normally used in solid mechanics. In the Lagrangian description, the motion of a continuum body is expressed by the mapping function (Figure 2), which is a mapping of the initial configuration onto the current configuration , giving a geometrical correspondence between them, i.e. giving the position vector that a particle , with a position vector in the undeformed or reference configuration , will occupy in the current or deformed configuration at time . The components are called the spatial coordinates. Physical and kinematic properties , i.e. thermodynamic properties and flow velocity, which describe or characterize features of the material body, are expressed as continuous functions of position and time, i.e. . The material derivative of any property of a continuum, which may be a scalar, vector, or tensor, is the time rate of change of that property for a specific group of particles of the moving continuum body. 
The material derivative is also known as the substantial derivative, or comoving derivative, or convective derivative. It can be thought as the rate at which the property changes when measured by an observer traveling with that group of particles. In the Lagrangian description, the material derivative of is simply the partial derivative with respect to time, and the position vector is held constant as it does not change with time. Thus, we have The instantaneous position is a property of a particle, and its material derivative is the instantaneous flow velocity of the particle. Therefore, the flow velocity field of the continuum is given by Similarly, the acceleration field is given by Continuity in the Lagrangian description is expressed by the spatial and temporal continuity of the mapping from the reference configuration to the current configuration of the material points. All physical quantities characterizing the continuum are described this way. In this sense, the function and are single-valued and continuous, with continuous derivatives with respect to space and time to whatever order is required, usually to the second or third. Eulerian description Continuity allows for the inverse of to trace backwards where the particle currently located at was located in the initial or referenced configuration . In this case the description of motion is made in terms of the spatial coordinates, in which case is called the spatial description or Eulerian description, i.e. the current configuration is taken as the reference configuration. The Eulerian description, introduced by d'Alembert, focuses on the current configuration , giving attention to what is occurring at a fixed point in space as time progresses, instead of giving attention to individual particles as they move through space and time. This approach is conveniently applied in the study of fluid flow where the kinematic property of greatest interest is the rate at which change is taking place rather than the shape of the body of fluid at a reference time. Mathematically, the motion of a continuum using the Eulerian description is expressed by the mapping function which provides a tracing of the particle which now occupies the position in the current configuration to its original position in the initial configuration . A necessary and sufficient condition for this inverse function to exist is that the determinant of the Jacobian matrix, often referred to simply as the Jacobian, should be different from zero. Thus, In the Eulerian description, the physical properties are expressed as where the functional form of in the Lagrangian description is not the same as the form of in the Eulerian description. The material derivative of , using the chain rule, is then The first term on the right-hand side of this equation gives the local rate of change of the property occurring at position . The second term of the right-hand side is the convective rate of change and expresses the contribution of the particle changing position in space (motion). Continuity in the Eulerian description is expressed by the spatial and temporal continuity and continuous differentiability of the flow velocity field. All physical quantities are defined this way at each instant of time, in the current configuration, as a function of the vector position . 
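To make the Eulerian material derivative concrete, the sketch below evaluates D(phi)/Dt = d(phi)/dt + v * d(phi)/dx for an assumed one-dimensional scalar field and flow velocity using finite differences. The particular fields are illustrative choices, not taken from this article.

```python
import numpy as np

# Hedged sketch: material derivative of a scalar field phi(x, t) in the
# Eulerian description, D(phi)/Dt = d(phi)/dt + v * d(phi)/dx, evaluated
# with central finite differences. The fields below are assumed examples.
def phi(x, t):
    return np.sin(x - 0.5 * t)       # a scalar simply advected by the flow

def v(x, t):
    return 0.5                       # uniform flow velocity, assumed

x0, t0 = 1.0, 2.0
dx = dt = 1e-5

local_rate = (phi(x0, t0 + dt) - phi(x0, t0 - dt)) / (2 * dt)          # d(phi)/dt
convective = v(x0, t0) * (phi(x0 + dx, t0) - phi(x0 - dx, t0)) / (2 * dx)

material_derivative = local_rate + convective
print(material_derivative)   # ~ 0: the property is carried unchanged with the particles
```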
Displacement field The vector joining the positions of a particle in the undeformed configuration and deformed configuration is called the displacement vector , in the Lagrangian description, or , in the Eulerian description. A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. It is convenient to do the analysis of deformation or motion of a continuum body in terms of the displacement field, In general, the displacement field is expressed in terms of the material coordinates as or in terms of the spatial coordinates as where are the direction cosines between the material and spatial coordinate systems with unit vectors and , respectively. Thus and the relationship between and is then given by Knowing that then It is common to superimpose the coordinate systems for the undeformed and deformed configurations, which results in , and the direction cosines become Kronecker deltas, i.e. Thus, we have or in terms of the spatial coordinates as Governing equations Continuum mechanics deals with the behavior of materials that can be approximated as continuous for certain length and time scales. The equations that govern the mechanics of such materials include the balance laws for mass, momentum, and energy. Kinematic relations and constitutive equations are needed to complete the system of governing equations. Physical restrictions on the form of the constitutive relations can be applied by requiring that the second law of thermodynamics be satisfied under all conditions. In the continuum mechanics of solids, the second law of thermodynamics is satisfied if the Clausius–Duhem form of the entropy inequality is satisfied. The balance laws express the idea that the rate of change of a quantity (mass, momentum, energy) in a volume must arise from three causes: the physical quantity itself flows through the surface that bounds the volume, there is a source of the physical quantity on the surface of the volume, or/and, there is a source of the physical quantity inside the volume. Let be the body (an open subset of Euclidean space) and let be its surface (the boundary of ). Let the motion of material points in the body be described by the map where is the position of a point in the initial configuration and is the location of the same point in the deformed configuration. The deformation gradient is given by Balance laws Let be a physical quantity that is flowing through the body. Let be sources on the surface of the body and let be sources inside the body. Let be the outward unit normal to the surface . Let be the flow velocity of the physical particles that carry the physical quantity that is flowing. Also, let the speed at which the bounding surface is moving be (in the direction ). Then, balance laws can be expressed in the general form The functions , , and can be scalar valued, vector valued, or tensor valued - depending on the physical quantity that the balance equation deals with. If there are internal boundaries in the body, jump discontinuities also need to be specified in the balance laws. 
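Before the balance laws are specialized to mass, momentum, and energy below, here is a minimal sketch of the displacement vector and deformation gradient defined in this section. The simple-shear motion used is an assumed example, not one given in the article.

```python
import numpy as np

# Hedged sketch: deformation gradient F = d(chi)/dX and displacement
# u = x - X for an assumed simple-shear motion of amount gamma.
gamma = 0.3                            # assumed shear parameter

def chi(X):
    # x1 = X1 + gamma * X2, x2 = X2, x3 = X3
    return np.array([X[0] + gamma * X[1], X[1], X[2]])

X = np.array([1.0, 2.0, 0.5])          # a material point (reference coordinates)
x = chi(X)                             # its position in the deformed configuration
u = x - X                              # displacement vector of that particle

# Deformation gradient of this homogeneous motion (constant over the body).
F = np.array([[1.0, gamma, 0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.0,   1.0]])

J = np.linalg.det(F)                   # Jacobian; must be nonzero (here 1: volume preserved)
print(u, J)                            # displacement [0.6, 0, 0], J = 1.0
```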
If we take the Eulerian point of view, it can be shown that the balance laws of mass, momentum, and energy for a solid can be written as (assuming the source term is zero for the mass and angular momentum equations) In the above equations is the mass density (current), is the material time derivative of , is the particle velocity, is the material time derivative of , is the Cauchy stress tensor, is the body force density, is the internal energy per unit mass, is the material time derivative of , is the heat flux vector, and is an energy source per unit mass. The operators used are defined below. With respect to the reference configuration (the Lagrangian point of view), the balance laws can be written as In the above, is the first Piola-Kirchhoff stress tensor, and is the mass density in the reference configuration. The first Piola-Kirchhoff stress tensor is related to the Cauchy stress tensor by We can alternatively define the nominal stress tensor which is the transpose of the first Piola-Kirchhoff stress tensor such that Then the balance laws become Operators The operators in the above equations are defined as where is a vector field, is a second-order tensor field, and are the components of an orthonormal basis in the current configuration. Also, where is a vector field, is a second-order tensor field, and are the components of an orthonormal basis in the reference configuration. The inner product is defined as Clausius–Duhem inequality The Clausius–Duhem inequality can be used to express the second law of thermodynamics for elastic-plastic materials. This inequality is a statement concerning the irreversibility of natural processes, especially when energy dissipation is involved. Just like in the balance laws in the previous section, we assume that there is a flux of a quantity, a source of the quantity, and an internal density of the quantity per unit mass. The quantity of interest in this case is the entropy. Thus, we assume that there is an entropy flux, an entropy source, an internal mass density and an internal specific entropy (i.e. entropy per unit mass) in the region of interest. Let be such a region and let be its boundary. Then the second law of thermodynamics states that the rate of increase of in this region is greater than or equal to the sum of that supplied to (as a flux or from internal sources) and the change of the internal entropy density due to material flowing in and out of the region. Let move with a flow velocity and let particles inside have velocities . Let be the unit outward normal to the surface . Let be the density of matter in the region, be the entropy flux at the surface, and be the entropy source per unit mass. Then the entropy inequality may be written as The scalar entropy flux can be related to the vector flux at the surface by the relation . Under the assumption of incrementally isothermal conditions, we have where is the heat flux vector, is an energy source per unit mass, and is the absolute temperature of a material point at at time . We then have the Clausius–Duhem inequality in integral form: We can show that the entropy inequality may be written in differential form as In terms of the Cauchy stress and the internal energy, the Clausius–Duhem inequality may be written as Validity The validity of the continuum assumption may be verified by a theoretical analysis, in which either some clear periodicity is identified or statistical homogeneity and ergodicity of the microstructure exist. 
More specifically, the continuum hypothesis hinges on the concepts of a representative elementary volume and separation of scales based on the Hill–Mandel condition. This condition provides a link between an experimentalist's and a theoretician's viewpoint on constitutive equations (linear and nonlinear elastic/inelastic or coupled fields) as well as a way of spatial and statistical averaging of the microstructure. When the separation of scales does not hold, or when one wants to establish a continuum of a finer resolution than the size of the representative volume element (RVE), a statistical volume element (SVE) is employed, which results in random continuum fields. The latter then provide a micromechanics basis for stochastic finite elements (SFE). The levels of SVE and RVE link continuum mechanics to statistical mechanics. Experimentally, the RVE can only be evaluated when the constitutive response is spatially homogenous. Applications Continuum mechanics Solid mechanics Fluid mechanics Engineering Civil engineering Mechanical engineering Aerospace engineering Biomedical engineering Chemical engineering See also Transport phenomena Bernoulli's principle Cauchy elastic material Configurational mechanics Curvilinear coordinates Equation of state Finite deformation tensors Finite strain theory Hyperelastic material Lagrangian and Eulerian specification of the flow field Movable cellular automaton Peridynamics (a non-local continuum theory leading to integral equations) Stress (physics) Stress measures Tensor calculus Tensor derivative (continuum mechanics) Theory of elasticity Knudsen number Explanatory notes References Citations Works cited General references External links "Objectivity in classical continuum mechanics: Motions, Eulerian and Lagrangian functions; Deformation gradient; Lie derivatives; Velocity-addition formula, Coriolis; Objectivity" by Gilles Leborgne, April 7, 2021: "Part IV Velocity-addition formula and Objectivity" Classical mechanics
Continuum mechanics
[ "Physics" ]
4,907
[ "Mechanics", "Classical mechanics", "Continuum mechanics" ]
5,926
https://en.wikipedia.org/wiki/Computation
A computation is any type of arithmetic or non-arithmetic calculation that is well-defined. Common examples of computation are mathematical equation solving and the execution of computer algorithms. Mechanical or electronic devices (or, historically, people) that perform computations are known as computers. Computer science is an academic field that involves the study of computation. Introduction The notion that mathematical statements should be 'well-defined' had been argued by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive. A candidate definition was proposed independently by several mathematicians in the 1930s. The best-known variant was formalised by the mathematician Alan Turing, who defined a well-defined statement or calculation as any statement that could be expressed in terms of the initialisation parameters of a Turing machine. Other (mathematically equivalent) definitions include Alonzo Church's lambda-definability, Herbrand-Gödel-Kleene's general recursiveness and Emil Post's 1-definability. Today, any formal statement or calculation that exhibits this quality of well-definedness is termed computable, while the statement or calculation itself is referred to as a computation. Turing's definition apportioned "well-definedness" to a very large class of mathematical statements, including all well-formed algebraic statements, and all statements written in modern computer programming languages. Despite the widespread uptake of this definition, there are some mathematical concepts that have no well-defined characterisation under this definition. This includes the halting problem and the busy beaver game. It remains an open question as to whether there exists a more powerful definition of 'well-defined' that is able to capture both computable and 'non-computable' statements. Some examples of mathematical statements that are computable include: All statements characterised in modern programming languages, including C++, Python, and Java. All calculations carried by an electronic computer, calculator or abacus. All calculations carried out on an analytical engine. All calculations carried out on a Turing Machine. The majority of mathematical statements and calculations given in maths textbooks. Some examples of mathematical statements that are not computable include: Calculations or statements which are ill-defined, such that they cannot be unambiguously encoded into a Turing machine: ("Paul loves me twice as much as Joe"). Problem statements which do appear to be well-defined, but for which it can be proved that no Turing machine exists to solve them (such as the halting problem). The Physical process of computation Computation can be seen as a purely physical process occurring inside a closed physical system called a computer. Turing's 1937 proof, On Computable Numbers, with an Application to the Entscheidungsproblem, demonstrated that there is a formal equivalence between computable statements and particular physical systems, commonly called computers. Examples of such physical systems are: Turing machines, human mathematicians following strict rules, digital computers, mechanical computers, analog computers and others. Alternative accounts of computation The mapping account An alternative account of computation is found throughout the works of Hilary Putnam and others. Peter Godfrey-Smith has dubbed this the "simple mapping account." 
Gualtiero Piccinini's summary of this account states that a physical system can be said to perform a specific computation when there is a mapping between the state of that system and the computation such that the "microphysical states [of the system] mirror the state transitions between the computational states." The semantic account Philosophers such as Jerry Fodor have suggested various accounts of computation with the restriction that semantic content be a necessary condition for computation (that is, what differentiates an arbitrary physical system from a computing system is that the operands of the computation represent something). This notion attempts to prevent the logical abstraction of the mapping account of pancomputationalism, the idea that everything can be said to be computing everything. The mechanistic account Gualtiero Piccinini proposes an account of computation based on mechanical philosophy. It states that physical computing systems are types of mechanisms that, by design, perform physical computation, or the manipulation (by a functional mechanism) of a "medium-independent" vehicle according to a rule. "Medium-independence" requires that the property can be instantiated by multiple realizers and multiple mechanisms, and that the inputs and outputs of the mechanism also be multiply realizable. In short, medium-independence allows for the use of physical variables with properties other than voltage (as in typical digital computers); this is imperative in considering other types of computation, such as that which occurs in the brain or in a quantum computer. A rule, in this sense, provides a mapping among inputs, outputs, and internal states of the physical computing system. Mathematical models In the theory of computation, a diversity of mathematical models of computation has been developed. Typical mathematical models of computers are the following: State models including Turing machine, pushdown automaton, finite-state automaton, and PRAM Functional models including lambda calculus Logical models including logic programming Concurrent models including actor model and process calculi Giunti calls the models studied by computation theory computational systems, and he argues that all of them are mathematical dynamical systems with discrete time and discrete state space. He maintains that a computational system is a complex object which consists of three parts. First, a mathematical dynamical system with discrete time and discrete state space; second, a computational setup , which is made up of a theoretical part , and a real part ; third, an interpretation , which links the dynamical system with the setup . See also Computability theory Hypercomputation Computational problem Limits of computation Computationalism Notes References Theoretical computer science Computability theory
Computation
[ "Mathematics" ]
1,171
[ "Computability theory", "Applied mathematics", "Mathematical logic", "Theoretical computer science" ]
5,936
https://en.wikipedia.org/wiki/Chemical%20thermodynamics
Chemical thermodynamics is the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics. Chemical thermodynamics involves not only laboratory measurements of various thermodynamic properties, but also the application of mathematical methods to the study of chemical questions and the spontaneity of processes. The structure of chemical thermodynamics is based on the first two laws of thermodynamics. Starting from the first and second laws of thermodynamics, four equations called the "fundamental equations of Gibbs" can be derived. From these four, a multitude of equations, relating the thermodynamic properties of the thermodynamic system can be derived using relatively simple mathematics. This outlines the mathematical framework of chemical thermodynamics. History In 1865, the German physicist Rudolf Clausius, in his Mechanical Theory of Heat, suggested that the principles of thermochemistry, e.g. the heat evolved in combustion reactions, could be applied to the principles of thermodynamics. Building on the work of Clausius, between the years 1873-76 the American mathematical physicist Willard Gibbs published a series of three papers, the most famous one being the paper On the Equilibrium of Heterogeneous Substances. In these papers, Gibbs showed how the first two laws of thermodynamics could be measured graphically and mathematically to determine both the thermodynamic equilibrium of chemical reactions as well as their tendencies to occur or proceed. Gibbs’ collection of papers provided the first unified body of thermodynamic theorems from the principles developed by others, such as Clausius and Sadi Carnot. During the early 20th century, two major publications successfully applied the principles developed by Gibbs to chemical processes and thus established the foundation of the science of chemical thermodynamics. The first was the 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall. This book was responsible for supplanting the chemical affinity with the term free energy in the English-speaking world. The second was the 1933 book Modern Thermodynamics by the methods of Willard Gibbs written by E. A. Guggenheim. In this manner, Lewis, Randall, and Guggenheim are considered as the founders of modern chemical thermodynamics because of the major contribution of these two books in unifying the application of thermodynamics to chemistry. Overview The primary objective of chemical thermodynamics is the establishment of a criterion for determination of the feasibility or spontaneity of a given transformation. In this manner, chemical thermodynamics is typically used to predict the energy exchanges that occur in the following processes: Chemical reactions Phase changes The formation of solutions The following state functions are of primary concern in chemical thermodynamics: Internal energy (U) Enthalpy (H) Entropy (S) Gibbs free energy (G) Most identities in chemical thermodynamics arise from application of the first and second laws of thermodynamics, particularly the law of conservation of energy, to these state functions. The three laws of thermodynamics (global, unspecific forms): 1. The energy of the universe is constant. 2. In any spontaneous process, there is always an increase in entropy of the universe. 3. The entropy of a perfect crystal (well ordered) at 0 Kelvin is zero. 
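As a small worked illustration of the spontaneity criterion mentioned above, the sketch below combines a reaction enthalpy obtained from tabulated standard enthalpies of formation with an entropy change to evaluate ΔG = ΔH − TΔS. The numerical values are approximate textbook figures for the combustion of methane, quoted here only for illustration.

```python
# Hedged sketch: spontaneity check via Delta_G = Delta_H - T * Delta_S for
# CH4 + 2 O2 -> CO2 + 2 H2O(l). Formation enthalpies and the entropy change
# are approximate textbook values, not taken from this article.
dHf = {"CH4": -74.8, "O2": 0.0, "CO2": -393.5, "H2O(l)": -285.8}   # kJ/mol

# Hess's-law style sum: products minus reactants, weighted by stoichiometry.
dH_rxn = (dHf["CO2"] + 2 * dHf["H2O(l)"]) - (dHf["CH4"] + 2 * dHf["O2"])

dS_rxn = -0.243      # kJ/(mol*K), approximate entropy change of reaction
T = 298.15           # K

dG_rxn = dH_rxn - T * dS_rxn
print(round(dH_rxn, 1), round(dG_rxn, 1))   # about -890.3 and -818 kJ/mol
print(dG_rxn < 0)    # negative Delta_G => spontaneous at constant T and P
```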
Chemical energy Chemical energy is the energy that can be released when chemical substances undergo a transformation through a chemical reaction. Breaking and making chemical bonds involves energy release or uptake, often as heat that may be either absorbed by or evolved from the chemical system. Energy released (or absorbed) because of a reaction between chemical substances ("reactants") is equal to the difference between the energy content of the products and the reactants. This change in energy is called the change in internal energy of a chemical system. It can be calculated from , the internal energy of formation of the reactant molecules related to the bond energies of the molecules under consideration, and , the internal energy of formation of the product molecules. The change in internal energy is equal to the heat change if it is measured under conditions of constant volume (at STP condition), as in a closed rigid container such as a bomb calorimeter. However, at constant pressure, as in reactions in vessels open to the atmosphere, the measured heat is usually not equal to the internal energy change, because pressure-volume work also releases or absorbs energy. (The heat change at constant pressure is called the enthalpy change; in this case the widely tabulated enthalpies of formation are used.) A related term is the heat of combustion, which is the chemical energy released due to a combustion reaction and of interest in the study of fuels. Food is similar to hydrocarbon and carbohydrate fuels, and when it is oxidized, its energy release is similar (though assessed differently than for a hydrocarbon fuel — see food energy). In chemical thermodynamics, the term used for the chemical potential energy is chemical potential, and sometimes the Gibbs-Duhem equation is used. Chemical reactions In most cases of interest in chemical thermodynamics there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which create entropy in the universe unless they are at equilibrium or are maintained at a "running equilibrium" through "quasi-static" changes by being coupled to constraining devices, such as pistons or electrodes, to deliver and receive external work. Even for homogeneous "bulk" systems, the free-energy functions depend on the composition, as do all the extensive thermodynamic potentials, including the internal energy. If the quantities { Ni }, the number of chemical species, are omitted from the formulae, it is impossible to describe compositional changes. Gibbs function or Gibbs Energy For an unstructured, homogeneous "bulk" system, there are still various extensive compositional variables { Ni } that G depends on, which specify the composition (the amounts of each chemical substance, expressed as the numbers of molecules present or the numbers of moles). Explicitly, For the case where only PV work is possible, a restatement of the fundamental thermodynamic relation, in which μi is the chemical potential for the i-th component in the system The expression for dG is especially useful at constant T and P, conditions, which are easy to achieve experimentally and which approximate the conditions in living creatures Chemical affinity While this formulation is mathematically defensible, it is not particularly transparent since one does not simply add or remove molecules from a system. 
There is always a process involved in changing the composition; e.g., a chemical reaction (or many), or movement of molecules from one phase (liquid) to another (gas or solid). We should find a notation which does not seem to imply that the amounts of the components ( Ni ) can be changed independently. All real processes obey conservation of mass, and in addition, conservation of the numbers of atoms of each kind. Consequently, we introduce an explicit variable to represent the degree of advancement of a process, a progress variable ξ for the extent of reaction (Prigogine & Defay, p. 18; Prigogine, pp. 4–7; Guggenheim, p. 37.62), and to the use of the partial derivative ∂G/∂ξ (in place of the widely used "ΔG", since the quantity at issue is not a finite change). The result is an understandable expression for the dependence of dG on chemical reactions (or other processes). If there is just one reaction If we introduce the stoichiometric coefficient for the i-th component in the reaction (negative for reactants), which tells how many molecules of i are produced or consumed, we obtain an algebraic expression for the partial derivative where we introduce a concise and historical name for this quantity, the "affinity", symbolized by A, as introduced by Théophile de Donder in 1923.(De Donder; Progogine & Defay, p. 69; Guggenheim, pp. 37, 240) The minus sign ensures that in a spontaneous change, when the change in the Gibbs free energy of the process is negative, the chemical species have a positive affinity for each other. The differential of G takes on a simple form that displays its dependence on composition change If there are a number of chemical reactions going on simultaneously, as is usually the case, with a set of reaction coordinates { ξj }, avoiding the notion that the amounts of the components ( Ni ) can be changed independently. The expressions above are equal to zero at thermodynamic equilibrium, while they are negative when chemical reactions proceed at a finite rate, producing entropy. This can be made even more explicit by introducing the reaction rates dξj/dt. For every physically independent process (Prigogine & Defay, p. 38; Prigogine, p. 24) This is a remarkable result since the chemical potentials are intensive system variables, depending only on the local molecular milieu. They cannot "know" whether temperature and pressure (or any other system variables) are going to be held constant over time. It is a purely local criterion and must hold regardless of any such constraints. Of course, it could have been obtained by taking partial derivatives of any of the other fundamental state functions, but nonetheless is a general criterion for (−T times) the entropy production from that spontaneous process; or at least any part of it that is not captured as external work. (See Constraints below.) We now relax the requirement of a homogeneous "bulk" system by letting the chemical potentials and the affinity apply to any locality in which a chemical reaction (or any other process) is occurring. By accounting for the entropy production due to irreversible processes, the equality for dG is now replaced by or Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings, or it may simply be dissipated, appearing as T times a corresponding increase in the entropy of the system and its surrounding. Or it may go partly toward doing external work and partly toward creating entropy. 
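As a small numeric illustration of the affinity A = −Σ νi μi introduced above (with the stoichiometric coefficients taken negative for reactants), the sketch below uses hypothetical chemical-potential values chosen only to show the sign convention.

```python
# Hedged sketch: chemical affinity A = -sum_i(nu_i * mu_i) for a single
# reaction, with nu_i negative for reactants and positive for products.
# The chemical potentials below are hypothetical placeholder values.
nu = {"A": -1, "B": -2, "C": +1}              # toy reaction A + 2B -> C
mu = {"A": -50.0, "B": -30.0, "C": -125.0}    # kJ/mol, assumed values

affinity = -sum(nu[s] * mu[s] for s in nu)    # equals -(dG/d(xi)) at this composition
print(affinity)   # +15 kJ/mol here: positive affinity, forward reaction proceeds spontaneously
```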
The important point is that the extent of reaction for a chemical reaction may be coupled to the displacement of some external mechanical or electrical quantity in such a way that one can advance only if the other also does. The coupling may occasionally be rigid, but it is often flexible and variable. Solutions In solution chemistry and biochemistry, the Gibbs free energy decrease (∂G/∂ξ, in molar units, denoted cryptically by ΔG) is commonly used as a surrogate for (−T times) the global entropy produced by spontaneous chemical reactions in situations where no work is being done; or at least no "useful" work; i.e., other than perhaps ± P dV. The assertion that all spontaneous reactions have a negative ΔG is merely a restatement of the second law of thermodynamics, giving it the physical dimensions of energy and somewhat obscuring its significance in terms of entropy. When no useful work is being done, it would be less misleading to use the Legendre transforms of the entropy appropriate for constant T, or for constant T and P, the Massieu functions −F/T and −G/T, respectively. Non-equilibrium Generally the systems treated with the conventional chemical thermodynamics are either at equilibrium or near equilibrium. Ilya Prigogine developed the thermodynamic treatment of open systems that are far from equilibrium. In doing so he has discovered phenomena and structures of completely new and completely unexpected types. His generalized, nonlinear and irreversible thermodynamics has found surprising applications in a wide variety of fields. The non-equilibrium thermodynamics has been applied for explaining how ordered structures e.g. the biological systems, can develop from disorder. Even if Onsager's relations are utilized, the classical principles of equilibrium in thermodynamics still show that linear systems close to equilibrium always develop into states of disorder which are stable to perturbations and cannot explain the occurrence of ordered structures. Prigogine called these systems dissipative systems, because they are formed and maintained by the dissipative processes which take place because of the exchange of energy between the system and its environment and because they disappear if that exchange ceases. They may be said to live in symbiosis with their environment. The method which Prigogine used to study the stability of the dissipative structures to perturbations is of very great general interest. It makes it possible to study the most varied problems, such as city traffic problems, the stability of insect communities, the development of ordered biological structures and the growth of cancer cells to mention but a few examples. System constraints In this regard, it is crucial to understand the role of walls and other constraints, and the distinction between independent processes and coupling. Contrary to the clear implications of many reference sources, the previous analysis is not restricted to homogeneous, isotropic bulk systems which can deliver only PdV work to the outside world, but applies even to the most structured systems. There are complex systems with many chemical "reactions" going on at the same time, some of which are really only parts of the same, overall process. An independent process is one that could proceed even if all others were unaccountably stopped in their tracks. Understanding this is perhaps a "thought experiment" in chemical kinetics, but actual examples exist. 
A gas-phase reaction at constant temperature and pressure which results in an increase in the number of molecules will lead to an increase in volume. Inside a cylinder closed with a piston, it can proceed only by doing work on the piston. The extent variable for the reaction can increase only if the piston moves out, and conversely if the piston is pushed inward, the reaction is driven backwards. Similarly, a redox reaction might occur in an electrochemical cell with the passage of current through a wire connecting the electrodes. The half-cell reactions at the electrodes are constrained if no current is allowed to flow. The current might be dissipated as Joule heating, or it might in turn run an electrical device like a motor doing mechanical work. An automobile lead-acid battery can be recharged, driving the chemical reaction backwards. In this case as well, the reaction is not an independent process. Some, perhaps most, of the Gibbs free energy of reaction may be delivered as external work. The hydrolysis of ATP to ADP and phosphate can drive the force-times-distance work delivered by living muscles, and synthesis of ATP is in turn driven by a redox chain in mitochondria and chloroplasts, which involves the transport of ions across the membranes of these cellular organelles. The coupling of processes here, and in the previous examples, is often not complete. Gas can leak slowly past a piston, just as it can slowly leak out of a rubber balloon. Some reaction may occur in a battery even if no external current is flowing. There is usually a coupling coefficient, which may depend on relative rates, which determines what percentage of the driving free energy is turned into external work, or captured as "chemical work", a misnomer for the free energy of another chemical process. See also Thermodynamic databases for pure substances laws of thermodynamics References Further reading Library of Congress Catalog No. 60-5597 Library of Congress Catalog No. 67-29540 Library of Congress Catalog No. 67-20003 External links Chemical Thermodynamics - University of North Carolina Chemical energetics (Introduction to thermodynamics and the First Law) Thermodynamics of chemical equilibrium (Entropy, Second Law and free energy) Physical chemistry Branches of thermodynamics Chemical engineering thermodynamics
Chemical thermodynamics
[ "Physics", "Chemistry", "Engineering" ]
3,312
[ "Applied and interdisciplinary physics", "Chemical engineering", "Thermodynamics", "nan", "Chemical engineering thermodynamics", "Chemical thermodynamics", "Branches of thermodynamics", "Physical chemistry" ]
5,993
https://en.wikipedia.org/wiki/Chemical%20bond
A chemical bond is the association of atoms or ions to form molecules, crystals, and other structures. The bond may result from the electrostatic force between oppositely charged ions as in ionic bonds or through the sharing of electrons as in covalent bonds, or some combination of these effects. Chemical bonds are described as having different strengths: there are "strong bonds" or "primary bonds" such as covalent, ionic and metallic bonds, and "weak bonds" or "secondary bonds" such as dipole–dipole interactions, the London dispersion force, and hydrogen bonding. Since opposite electric charges attract, the negatively charged electrons surrounding the nucleus and the positively charged protons within a nucleus attract each other. Electrons shared between two nuclei will be attracted to both of them. "Constructive quantum mechanical wavefunction interference" stabilizes the paired nuclei (see Theories of chemical bonding). Bonded nuclei maintain an optimal distance (the bond distance) balancing attractive and repulsive effects explained quantitatively by quantum theory. The atoms in molecules, crystals, metals and other forms of matter are held together by chemical bonds, which determine the structure and properties of matter. All bonds can be described by quantum theory, but, in practice, simplified rules and other theories allow chemists to predict the strength, directionality, and polarity of bonds. The octet rule and VSEPR theory are examples. More sophisticated theories are valence bond theory, which includes orbital hybridization and resonance, and molecular orbital theory which includes the linear combination of atomic orbitals and ligand field theory. Electrostatics are used to describe bond polarities and the effects they have on chemical substances. Overview of main types of chemical bonds A chemical bond is an attraction between atoms. This attraction may be seen as the result of different behaviors of the outermost or valence electrons of atoms. These behaviors merge into each other seamlessly in various circumstances, so that there is no clear line to be drawn between them. However it remains useful and customary to differentiate between different types of bond, which result in different properties of condensed matter. In the simplest view of a covalent bond, one or more electrons (often a pair of electrons) are drawn into the space between the two atomic nuclei. Energy is released by bond formation. This is not as a result of reduction in potential energy, because the attraction of the two electrons to the two protons is offset by the electron-electron and proton-proton repulsions. Instead, the release of energy (and hence stability of the bond) arises from the reduction in kinetic energy due to the electrons being in a more spatially distributed (i.e. longer de Broglie wavelength) orbital compared with each electron being confined closer to its respective nucleus. These bonds exist between two particular identifiable atoms and have a direction in space, allowing them to be shown as single connecting lines between atoms in drawings, or modeled as sticks between spheres in models. In a polar covalent bond, one or more electrons are unequally shared between two nuclei. Covalent bonds often result in the formation of small collections of better-connected atoms called molecules, which in solids and liquids are bound to other molecules by forces that are often much weaker than the covalent bonds that hold the molecules internally together. 
Such weak intermolecular bonds give organic molecular substances, such as waxes and oils, their soft bulk character, and their low melting points (in liquids, molecules must cease most structured or oriented contact with each other). When covalent bonds link long chains of atoms in large molecules, however (as in polymers such as nylon), or when covalent bonds extend in networks through solids that are not composed of discrete molecules (such as diamond or quartz or the silicate minerals in many types of rock) then the structures that result may be both strong and tough, at least in the direction oriented correctly with networks of covalent bonds. Also, the melting points of such covalent polymers and networks increase greatly. In a simplified view of an ionic bond, the bonding electron is not shared at all, but transferred. In this type of bond, the outer atomic orbital of one atom has a vacancy which allows the addition of one or more electrons. These newly added electrons potentially occupy a lower energy-state (effectively closer to more nuclear charge) than they experience in a different atom. Thus, one nucleus offers a more tightly bound position to an electron than does another nucleus, with the result that one atom may transfer an electron to the other. This transfer causes one atom to assume a net positive charge, and the other to assume a net negative charge. The bond then results from electrostatic attraction between the positive and negatively charged ions. Ionic bonds may be seen as extreme examples of polarization in covalent bonds. Often, such bonds have no particular orientation in space, since they result from equal electrostatic attraction of each ion to all ions around them. Ionic bonds are strong (and thus ionic substances require high temperatures to melt) but also brittle, since the forces between ions are short-range and do not easily bridge cracks and fractures. This type of bond gives rise to the physical characteristics of crystals of classic mineral salts, such as table salt. A less often mentioned type of bonding is metallic bonding. In this type of bonding, each atom in a metal donates one or more electrons to a "sea" of electrons that reside between many metal atoms. In this sea, each electron is free (by virtue of its wave nature) to be associated with a great many atoms at once. The bond results because the metal atoms become somewhat positively charged due to loss of their electrons while the electrons remain attracted to many atoms, without being part of any given atom. Metallic bonding may be seen as an extreme example of delocalization of electrons over a large system of covalent bonds, in which every atom participates. This type of bonding is often very strong (resulting in the tensile strength of metals). However, metallic bonding is more collective in nature than other types, and so they allow metal crystals to more easily deform, because they are composed of atoms attracted to each other, but not in any particularly-oriented ways. This results in the malleability of metals. The cloud of electrons in metallic bonding causes the characteristically good electrical and thermal conductivity of metals, and also their shiny lustre that reflects most frequencies of white light. History Early speculations about the nature of the chemical bond, from as early as the 12th century, supposed that certain types of chemical species were joined by a type of chemical affinity. 
In 1704, Sir Isaac Newton famously outlined his atomic bonding theory, in "Query 31" of his Opticks, whereby atoms attach to each other by some "force". Specifically, after acknowledging the various popular theories in vogue at the time, of how atoms were reasoned to attach to each other, i.e. "hooked atoms", "glued together by rest", or "stuck together by conspiring motions", Newton states that he would rather infer from their cohesion, that "particles attract one another by some force, which in immediate contact is exceedingly strong, at small distances performs the chemical operations, and reaches not far from the particles with any sensible effect." In 1819, on the heels of the invention of the voltaic pile, Jöns Jakob Berzelius developed a theory of chemical combination stressing the electronegative and electropositive characters of the combining atoms. By the mid 19th century, Edward Frankland, F.A. Kekulé, A.S. Couper, Alexander Butlerov, and Hermann Kolbe, building on the theory of radicals, developed the theory of valency, originally called "combining power", in which compounds were joined owing to an attraction of positive and negative poles. In 1904, Richard Abegg proposed his rule that the difference between the maximum and minimum valencies of an element is often eight. At this point, valency was still an empirical number based only on chemical properties. However the nature of the atom became clearer with Ernest Rutherford's 1911 discovery that of an atomic nucleus surrounded by electrons in which he quoted Nagaoka rejected Thomson's model on the grounds that opposite charges are impenetrable. In 1904, Nagaoka proposed an alternative planetary model of the atom in which a positively charged center is surrounded by a number of revolving electrons, in the manner of Saturn and its rings. Nagaoka's model made two predictions: a very massive atomic center (in analogy to a very massive planet) electrons revolving around the nucleus, bound by electrostatic forces (in analogy to the rings revolving around Saturn, bound by gravitational forces.) Rutherford mentions Nagaoka's model in his 1911 paper in which the atomic nucleus is proposed. At the 1911 Solvay Conference, in the discussion of what could regulate energy differences between atoms, Max Planck stated: "The intermediaries could be the electrons." These nuclear models suggested that electrons determine chemical behavior. Next came Niels Bohr's 1913 model of a nuclear atom with electron orbits. In 1916, chemist Gilbert N. Lewis developed the concept of electron-pair bonds, in which two atoms may share one to six electrons, thus forming the single electron bond, a single bond, a double bond, or a triple bond; in Lewis's own words, "An electron may form a part of the shell of two different atoms and cannot be said to belong to either one exclusively." Also in 1916, Walther Kossel put forward a theory similar to Lewis' only his model assumed complete transfers of electrons between atoms, and was thus a model of ionic bonding. Both Lewis and Kossel structured their bonding models on that of Abegg's rule (1904). Niels Bohr also proposed a model of the chemical bond in 1913. According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei. 
The dynamic equilibrium of the molecular system is achieved through the balance of forces between the forces of attraction of nuclei to the plane of the ring of electrons and the forces of mutual repulsion of the nuclei. The Bohr model of the chemical bond took into account the Coulomb repulsion – the electrons in the ring are at the maximum distance from each other. In 1927, the first mathematically complete quantum description of a simple chemical bond, i.e. that produced by one electron in the hydrogen molecular ion, H2+, was derived by the Danish physicist Øyvind Burrau. This work showed that the quantum approach to chemical bonds could be fundamentally and quantitatively correct, but the mathematical methods used could not be extended to molecules containing more than one electron. A more practical, albeit less quantitative, approach was put forward in the same year by Walter Heitler and Fritz London. The Heitler–London method forms the basis of what is now called valence bond theory. In 1929, the linear combination of atomic orbitals molecular orbital method (LCAO) approximation was introduced by Sir John Lennard-Jones, who also suggested methods to derive electronic structures of molecules of F2 (fluorine) and O2 (oxygen) molecules, from basic quantum principles. This molecular orbital theory represented a covalent bond as an orbital formed by combining the quantum mechanical Schrödinger atomic orbitals which had been hypothesized for electrons in single atoms. The equations for bonding electrons in multi-electron atoms could not be solved to mathematical perfection (i.e., analytically), but approximations for them still gave many good qualitative predictions and results. Most quantitative calculations in modern quantum chemistry use either valence bond or molecular orbital theory as a starting point, although a third approach, density functional theory, has become increasingly popular in recent years. In 1933, H. H. James and A. S. Coolidge carried out a calculation on the dihydrogen molecule that, unlike all previous calculation which used functions only of the distance of the electron from the atomic nucleus, used functions which also explicitly added the distance between the two electrons. With up to 13 adjustable parameters they obtained a result very close to the experimental result for the dissociation energy. Later extensions have used up to 54 parameters and gave excellent agreement with experiments. This calculation convinced the scientific community that quantum theory could give agreement with experiment. However this approach has none of the physical pictures of the valence bond and molecular orbital theories and is difficult to extend to larger molecules. Bonds in chemical formulas Because atoms and molecules are three-dimensional, it is difficult to use a single method to indicate orbitals and bonds. In molecular formulas the chemical bonds (binding orbitals) between atoms are indicated in different ways depending on the type of discussion. Sometimes, some details are neglected. For example, in organic chemistry one is sometimes concerned only with the functional group of the molecule. Thus, the molecular formula of ethanol may be written in conformational form, three-dimensional form, full two-dimensional form (indicating every bond with no three-dimensional directions), compressed two-dimensional form (CH3–CH2–OH), by separating the functional group from another part of the molecule (C2H5OH), or by its atomic constituents (C2H6O), according to what is discussed. 
Sometimes, even the non-bonding valence shell electrons (with the two-dimensional approximate directions) are marked, e.g. for elemental carbon .'C'. Some chemists may also mark the respective orbitals, e.g. the hypothetical ethene−4 anion (\/C=C/\ −4) indicating the possibility of bond formation. Strong chemical bonds Strong chemical bonds are the intramolecular forces that hold atoms together in molecules. A strong chemical bond is formed from the transfer or sharing of electrons between atomic centers and relies on the electrostatic attraction between the protons in nuclei and the electrons in the orbitals. The types of strong bond differ due to the difference in electronegativity of the constituent elements. Electronegativity is the tendency for an atom of a given chemical element to attract shared electrons when forming a chemical bond, where the higher the associated electronegativity then the more it attracts electrons. Electronegativity serves as a simple way to quantitatively estimate the bond energy, which characterizes a bond along the continuous scale from covalent to ionic bonding. A large difference in electronegativity leads to more polar (ionic) character in the bond. Ionic bond Ionic bonding is a type of electrostatic interaction between atoms that have a large electronegativity difference. There is no precise value that distinguishes ionic from covalent bonding, but an electronegativity difference of over 1.7 is likely to be ionic while a difference of less than 1.7 is likely to be covalent. Ionic bonding leads to separate positive and negative ions. Ionic charges are commonly between −3e to +3e. Ionic bonding commonly occurs in metal salts such as sodium chloride (table salt). A typical feature of ionic bonds is that the species form into ionic crystals, in which no ion is specifically paired with any single other ion in a specific directional bond. Rather, each species of ion is surrounded by ions of the opposite charge, and the spacing between it and each of the oppositely charged ions near it is the same for all surrounding atoms of the same type. It is thus no longer possible to associate an ion with any specific other single ionized atom near it. This is a situation unlike that in covalent crystals, where covalent bonds between specific atoms are still discernible from the shorter distances between them, as measured via such techniques as X-ray diffraction. Ionic crystals may contain a mixture of covalent and ionic species, as for example salts of complex acids such as sodium cyanide, NaCN. X-ray diffraction shows that in NaCN, for example, the bonds between sodium cations (Na+) and the cyanide anions (CN−) are ionic, with no sodium ion associated with any particular cyanide. However, the bonds between the carbon (C) and nitrogen (N) atoms in cyanide are of the covalent type, so that each carbon is strongly bound to just one nitrogen, to which it is physically much closer than it is to other carbons or nitrogens in a sodium cyanide crystal. When such crystals are melted into liquids, the ionic bonds are broken first because they are non-directional and allow the charged species to move freely. Similarly, when such salts dissolve into water, the ionic bonds are typically broken by the interaction with water but the covalent bonds continue to hold. For example, in solution, the cyanide ions, still bound together as single CN− ions, move independently through the solution, as do sodium ions, as Na+. 
In water, charged ions move apart because each of them are more strongly attracted to a number of water molecules than to each other. The attraction between ions and water molecules in such solutions is due to a type of weak dipole-dipole type chemical bond. In melted ionic compounds, the ions continue to be attracted to each other, but not in any ordered or crystalline way. Covalent bond Covalent bonding is a common type of bonding in which two or more atoms share valence electrons more or less equally. The simplest and most common type is a single bond in which two atoms share two electrons. Other types include the double bond, the triple bond, one- and three-electron bonds, the three-center two-electron bond and three-center four-electron bond. In non-polar covalent bonds, the electronegativity difference between the bonded atoms is small, typically 0 to 0.3. Bonds within most organic compounds are described as covalent. The figure shows methane (CH4), in which each hydrogen forms a covalent bond with the carbon. See sigma bonds and pi bonds for LCAO descriptions of such bonding. Molecules that are formed primarily from non-polar covalent bonds are often immiscible in water or other polar solvents, but much more soluble in non-polar solvents such as hexane. A polar covalent bond is a covalent bond with a significant ionic character. This means that the two shared electrons are closer to one of the atoms than the other, creating an imbalance of charge. Such bonds occur between two atoms with moderately different electronegativities and give rise to dipole–dipole interactions. The electronegativity difference between the two atoms in these bonds is 0.3 to 1.7. Single and multiple bonds A single bond between two atoms corresponds to the sharing of one pair of electrons. The Hydrogen (H) atom has one valence electron. Two Hydrogen atoms can then form a molecule, held together by the shared pair of electrons. Each H atom now has the noble gas electron configuration of helium (He). The pair of shared electrons forms a single covalent bond. The electron density of these two bonding electrons in the region between the two atoms increases from the density of two non-interacting H atoms. A double bond has two shared pairs of electrons, one in a sigma bond and one in a pi bond with electron density concentrated on two opposite sides of the internuclear axis. A triple bond consists of three shared electron pairs, forming one sigma and two pi bonds. An example is nitrogen. Quadruple and higher bonds are very rare and occur only between certain transition metal atoms. Coordinate covalent bond (dipolar bond) A coordinate covalent bond is a covalent bond in which the two shared bonding electrons are from the same one of the atoms involved in the bond. For example, boron trifluoride (BF3) and ammonia (NH3) form an adduct or coordination complex F3B←NH3 with a B–N bond in which a lone pair of electrons on N is shared with an empty atomic orbital on B. BF3 with an empty orbital is described as an electron pair acceptor or Lewis acid, while NH3 with a lone pair that can be shared is described as an electron-pair donor or Lewis base. The electrons are shared roughly equally between the atoms in contrast to ionic bonding. Such bonding is shown by an arrow pointing to the Lewis acid. (In the Figure, solid lines are bonds in the plane of the diagram, wedged bonds point towards the observer, and dashed bonds point away from the observer.) 
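Looking back at the electronegativity-difference ranges quoted above (roughly 0 to 0.3 for nonpolar covalent, 0.3 to 1.7 for polar covalent, and above about 1.7 for largely ionic bonds), the following sketch classifies a few familiar bonds. The Pauling electronegativities are standard approximate values, and the hard cutoffs are only the rough guide the text itself describes.

```python
# Hedged sketch: rough bond-type classification from the electronegativity
# difference, using the approximate cutoffs quoted in the text (0.3 and 1.7)
# and approximate Pauling electronegativities.
pauling = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44, "F": 3.98,
           "Na": 0.93, "Cl": 3.16}

def classify(a, b):
    diff = abs(pauling[a] - pauling[b])
    if diff < 0.3:
        kind = "nonpolar covalent"
    elif diff < 1.7:
        kind = "polar covalent"
    else:
        kind = "largely ionic"
    return round(diff, 2), kind

for pair in [("C", "H"), ("O", "H"), ("Na", "Cl")]:
    print(pair, classify(*pair))
# C-H ~0.35 (borderline polar covalent), O-H ~1.24 polar covalent, Na-Cl ~2.23 largely ionic
```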
Transition metal complexes are generally bound by coordinate covalent bonds. For example, the ion Ag+ reacts as a Lewis acid with two molecules of the Lewis base NH3 to form the complex ion Ag(NH3)2+, which has two Ag←N coordinate covalent bonds. Metallic bonding In metallic bonding, bonding electrons are delocalized over a lattice of atoms. By contrast, in ionic compounds, the locations of the binding electrons and their charges are static. The free movement or delocalization of bonding electrons leads to classical metallic properties such as luster (surface light reflectivity), electrical and thermal conductivity, ductility, and high tensile strength. Intermolecular bonding There are several types of weak bonds that can be formed between two or more molecules which are not covalently bound. Intermolecular forces cause molecules to attract or repel each other. Often, these forces influence physical characteristics (such as the melting point) of a substance. Van der Waals forces are interactions between closed-shell molecules. They include both Coulombic interactions between partial charges in polar molecules, and Pauli repulsions between closed electrons shells. Keesom forces are the forces between the permanent dipoles of two polar molecules. London dispersion forces are the forces between induced dipoles of different molecules. There can also be an interaction between a permanent dipole in one molecule and an induced dipole in another molecule. Hydrogen bonds of the form A--H•••B occur when A and B are two highly electronegative atoms (usually N, O or F) such that A forms a highly polar covalent bond with H so that H has a partial positive charge, and B has a lone pair of electrons which is attracted to this partial positive charge and forms a hydrogen bond. Hydrogen bonds are responsible for the high boiling points of water and ammonia with respect to their heavier analogues. In some cases a similar halogen bond can be formed by a halogen atom located between two electronegative atoms on different molecules. At short distances, repulsive forces between atoms also become important. Theories of chemical bonding In the (unrealistic) limit of "pure" ionic bonding, electrons are perfectly localized on one of the two atoms in the bond. Such bonds can be understood by classical physics. The force between the atoms depends on isotropic continuum electrostatic potentials. The magnitude of the force is in simple proportion to the product of the two ionic charges according to Coulomb's law. Covalent bonds are better understood by valence bond (VB) theory or molecular orbital (MO) theory. The properties of the atoms involved can be understood using concepts such as oxidation number, formal charge, and electronegativity. The electron density within a bond is not assigned to individual atoms, but is instead delocalized between atoms. In valence bond theory, bonding is conceptualized as being built up from electron pairs that are localized and shared by two atoms via the overlap of atomic orbitals. The concepts of orbital hybridization and resonance augment this basic notion of the electron pair bond. In molecular orbital theory, bonding is viewed as being delocalized and apportioned in orbitals that extend throughout the molecule and are adapted to its symmetry properties, typically by considering linear combinations of atomic orbitals (LCAO). 
Valence bond theory is more chemically intuitive by being spatially localized, allowing attention to be focused on the parts of the molecule undergoing chemical change. In contrast, molecular orbitals are more "natural" from a quantum mechanical point of view, with orbital energies being physically significant and directly linked to experimental ionization energies from photoelectron spectroscopy. Consequently, valence bond theory and molecular orbital theory are often viewed as competing but complementary frameworks that offer different insights into chemical systems. As approaches for electronic structure theory, both MO and VB methods can give approximations to any desired level of accuracy, at least in principle. However, at lower levels, the approximations differ, and one approach may be better suited for computations involving a particular system or property than the other. Unlike the spherically symmetrical Coulombic forces in pure ionic bonds, covalent bonds are generally directed and anisotropic. These are often classified based on their symmetry with respect to a molecular plane as sigma bonds and pi bonds. In the general case, atoms form bonds that are intermediate between ionic and covalent, depending on the relative electronegativity of the atoms involved. Bonds of this type are known as polar covalent bonds. References External links W. Locke (1997). Introduction to Molecular Orbital Theory. Retrieved May 18, 2005. Carl R. Nave (2005). HyperPhysics. Retrieved May 18, 2005. Linus Pauling and the Nature of the Chemical Bond: A Documentary History. Retrieved February 29, 2008. Quantum chemistry
Chemical bond
[ "Physics", "Chemistry", "Materials_science" ]
5,203
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", "Condensed matter physics", " molecular", "nan", "Atomic", "Chemical bonding", " and optical physics" ]
6,011
https://en.wikipedia.org/wiki/Chomsky%20hierarchy
The Chomsky hierarchy in the fields of formal language theory, computer science, and linguistics, is a containment hierarchy of classes of formal grammars. A formal grammar describes how to form strings from a language's vocabulary (or alphabet) that are valid according to the language's syntax. The linguist Noam Chomsky theorized that four different classes of formal grammars existed that could generate increasingly complex languages. Each class can also completely generate the language of all inferior classes (set inclusive). History The general idea of a hierarchy of grammars was first described by Noam Chomsky in "Three models for the description of language" during the formalization of transformational-generative grammar (TGG). Marcel-Paul Schützenberger also played a role in the development of the theory of formal languages; the paper "The algebraic theory of context free languages" describes the modern hierarchy, including context-free grammars. Independently, alongside linguists, mathematicians were developing models of computation (via automata). Parsing a sentence in a language is similar to computation, and the grammars described by Chomsky proved to both resemble and be equivalent in computational power to various machine models. The hierarchy The following table summarizes each of Chomsky's four types of grammars, the class of language it generates, the type of automaton that recognizes it, and the form its rules must have. The classes are defined by the constraints on the productions rules. Note that the set of grammars corresponding to recursive languages is not a member of this hierarchy; these would be properly between Type-0 and Type-1. Every regular language is context-free, every context-free language is context-sensitive, every context-sensitive language is recursive and every recursive language is recursively enumerable. These are all proper inclusions, meaning that there exist recursively enumerable languages that are not context-sensitive, context-sensitive languages that are not context-free and context-free languages that are not regular. Regular (Type-3) grammars Type-3 grammars generate the regular languages. Such a grammar restricts its rules to a single nonterminal on the left-hand side and a right-hand side consisting of a single terminal, possibly followed by a single nonterminal, in which case the grammar is right regular. Alternatively, all the rules can have their right-hand sides consist of a single terminal, possibly preceded by a single nonterminal (left regular). These generate the same languages. However, if left-regular rules and right-regular rules are combined, the language need no longer be regular. The rule is also allowed here if does not appear on the right side of any rule. These languages are exactly all languages that can be decided by a finite-state automaton. Additionally, this family of formal languages can be obtained by regular expressions. Regular languages are commonly used to define search patterns and the lexical structure of programming languages. For example, the regular language is generated by the Type-3 grammar with the productions being the following. Context-free (Type-2) grammars Type-2 grammars generate the context-free languages. These are defined by rules of the form with being a nonterminal and being a string of terminals and/or nonterminals. These languages are exactly all languages that can be recognized by a non-deterministic pushdown automaton. 
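To make the machine correspondences concrete, the sketch below recognizes two standard textbook languages (illustrative choices, not necessarily the examples elided from the text above): a*b*, which a two-state finite automaton handles, and {aⁿbⁿ}, which needs the single counter/stack of a pushdown automaton.

```python
def accepts_regular(s: str) -> bool:
    """Finite-state recognizer for the regular language a*b* (two states, no memory)."""
    state = "A"                       # A: still reading a's; B: reading b's
    for ch in s:
        if state == "A" and ch == "a":
            continue                  # stay in A
        if ch == "b":
            state = "B"
            continue
        return False                  # an 'a' after state B, or any other symbol
    return True

def accepts_anbn(s: str) -> bool:
    """Pushdown-style recognizer for {a^n b^n}: the stack degenerates to a counter."""
    count, seen_b = 0, False
    for ch in s:
        if ch == "a":
            if seen_b:
                return False          # a's may not follow b's
            count += 1                # push
        elif ch == "b":
            seen_b = True
            if count == 0:
                return False
            count -= 1                # pop
        else:
            return False
    return count == 0

assert accepts_regular("aaabb") and not accepts_regular("abab")
assert accepts_anbn("aabb") and not accepts_anbn("aab")
```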
Context-free languages—or rather their subset of deterministic context-free languages—are the theoretical basis for the phrase structure of most programming languages, though their syntax also includes context-sensitive name resolution due to declarations and scope. Often a subset of grammars is used to make parsing easier, such as by an LL parser. For example, the context-free language {aⁿbⁿ | n ≥ 1} is generated by the Type-2 grammar with the productions being the following. The language is context-free but not regular (by the pumping lemma for regular languages). Context-sensitive (Type-1) grammars Type-1 grammars generate context-sensitive languages. These grammars have rules of the form αAβ → αγβ with A a nonterminal and α, β and γ strings of terminals and/or nonterminals. The strings α and β may be empty, but γ must be nonempty. The rule S → ε is allowed if S does not appear on the right side of any rule. The languages described by these grammars are exactly all languages that can be recognized by a linear bounded automaton (a nondeterministic Turing machine whose tape is bounded by a constant times the length of the input). For example, the context-sensitive language {aⁿbⁿcⁿ | n ≥ 1} is generated by the Type-1 grammar with the productions being the following. The language is context-sensitive but not context-free (by the pumping lemma for context-free languages). A proof that this grammar generates {aⁿbⁿcⁿ | n ≥ 1} is sketched in the article on context-sensitive grammars. Recursively enumerable (Type-0) grammars Type-0 grammars include all formal grammars. There are no constraints on the production rules. They generate exactly all languages that can be recognized by a Turing machine, thus any language that can be generated at all can be generated by a Type-0 grammar. These languages are also known as the recursively enumerable or Turing-recognizable languages. Note that this is different from the recursive languages, which can be decided by an always-halting Turing machine. See also Chomsky normal form Citations References 1956 in computing Formal languages Generative linguistics Hierarchy, Chomsky
Chomsky hierarchy
[ "Mathematics" ]
1,154
[ "Formal languages", "Mathematical logic" ]
6,014
https://en.wikipedia.org/wiki/Cathode-ray%20tube
A cathode-ray tube (CRT) is a vacuum tube containing one or more electron guns, which emit electron beams that are manipulated to display images on a phosphorescent screen. The images may represent electrical waveforms on an oscilloscope, a frame of video on an analog television set (TV), digital raster graphics on a computer monitor, or other phenomena like radar targets. A CRT in a TV is commonly called a picture tube. CRTs have also been used as memory devices, in which case the screen is not intended to be visible to an observer. The term cathode ray was used to describe electron beams when they were first discovered, before it was understood that what was emitted from the cathode was a beam of electrons. In CRT TVs and computer monitors, the entire front area of the tube is scanned repeatedly and systematically in a fixed pattern called a raster. In color devices, an image is produced by controlling the intensity of each of three electron beams, one for each additive primary color (red, green, and blue) with a video signal as a reference. In modern CRT monitors and TVs the beams are bent by magnetic deflection, using a deflection yoke. Electrostatic deflection is commonly used in oscilloscopes. The tube is a glass envelope which is heavy, fragile, and long from front screen face to rear end. Its interior must be close to a vacuum to prevent the emitted electrons from colliding with air molecules and scattering before they hit the tube's face. Thus, the interior is evacuated to less than a millionth of atmospheric pressure. As such, handling a CRT carries the risk of violent implosion that can hurl glass at great velocity. The face is typically made of thick lead glass or special barium-strontium glass to be shatter-resistant and to block most X-ray emissions. This tube makes up most of the weight of CRT TVs and computer monitors. Since the early 2010s, CRTs have been superseded by flat-panel display technologies such as LCD, plasma display, and OLED displays which are cheaper to manufacture and run, as well as significantly lighter and thinner. Flat-panel displays can also be made in very large sizes whereas was about the largest size of a CRT. A CRT works by electrically heating a tungsten coil which in turn heats a cathode in the rear of the CRT, causing it to emit electrons which are modulated and focused by electrodes. The electrons are steered by deflection coils or plates, and an anode accelerates them towards the phosphor-coated screen, which generates light when hit by the electrons. History Discoveries Cathode rays were discovered by Julius Plücker and Johann Wilhelm Hittorf. Hittorf observed that some unknown rays were emitted from the cathode (negative electrode) which could cast shadows on the glowing wall of the tube, indicating the rays were travelling in straight lines. In 1890, Arthur Schuster demonstrated cathode rays could be deflected by electric fields, and William Crookes showed they could be deflected by magnetic fields. In 1897, J. J. Thomson succeeded in measuring the mass-to-charge ratio of cathode rays, showing that they consisted of negatively charged particles smaller than atoms, the first "subatomic particles", which had already been named electrons by Irish physicist George Johnstone Stoney in 1891. The earliest version of the CRT was known as the "Braun tube", invented by the German physicist Ferdinand Braun in 1897. It was a cold-cathode diode, a modification of the Crookes tube with a phosphor-coated screen. 
Braun was the first to conceive the use of a CRT as a display device. The Braun tube became the foundation of 20th century TV. In 1908, Alan Archibald Campbell-Swinton, fellow of the Royal Society (UK), published a letter in the scientific journal Nature, in which he described how "distant electric vision" could be achieved by using a cathode-ray tube (or "Braun" tube) as both a transmitting and receiving device. He expanded on his vision in a speech given in London in 1911 and reported in The Times and the Journal of the Röntgen Society. The first cathode-ray tube to use a hot cathode was developed by John Bertrand Johnson (who gave his name to the term Johnson noise) and Harry Weiner Weinhart of Western Electric, and became a commercial product in 1922. The introduction of hot cathodes allowed for lower acceleration anode voltages and higher electron beam currents, since the anode now only accelerated the electrons emitted by the hot cathode, and no longer had to have a very high voltage to induce electron emission from the cold cathode. Development In 1926, Kenjiro Takayanagi demonstrated a CRT TV receiver with a mechanical video camera that received images with a 40-line resolution. By 1927, he improved the resolution to 100 lines, which was unrivaled until 1931. By 1928, he was the first to transmit human faces in half-tones on a CRT display. In 1927, Philo Farnsworth created a TV prototype. The CRT was named in 1929 by inventor Vladimir K. Zworykin. He was subsequently hired by RCA, which was granted a trademark for the term "Kinescope", RCA's term for a CRT, in 1932; it voluntarily released the term to the public domain in 1950. In the 1930s, Allen B. DuMont made the first CRTs to last 1,000 hours of use, which was one of the factors that led to the widespread adoption of TV. The first commercially made electronic TV sets with cathode-ray tubes were manufactured by Telefunken in Germany in 1934. In 1947, the cathode-ray tube amusement device, the earliest known interactive electronic game as well as the first to incorporate a cathode-ray tube screen, was created. From 1949 to the early 1960s, there was a shift from circular CRTs to rectangular CRTs, although the first rectangular CRTs were made in 1938 by Telefunken. While circular CRTs were the norm, European TV sets often blocked portions of the screen to make it appear somewhat rectangular while American sets often left the entire front of the CRT exposed or only blocked the upper and lower portions of the CRT. In 1954, RCA produced some of the first color CRTs, the 15GP22 CRTs used in the CT-100, the first color TV set to be mass produced. The first rectangular color CRTs were also made in 1954. However, the first rectangular color CRTs to be offered to the public were made in 1963. One of the challenges that had to be solved to produce the rectangular color CRT was convergence at the corners of the CRT. In 1965, brighter rare earth phosphors began replacing dimmer and cadmium-containing red and green phosphors. Eventually blue phosphors were replaced as well. The size of CRTs increased over time, from 20 inches in 1938, to 21 inches in 1955, 25 inches by 1974, 30 inches by 1980, 35 inches by 1985, and 43 inches by 1989. However, experimental 31 inch CRTs were made as far back as 1938. In 1960, the Aiken tube was invented. It was a CRT in a flat-panel display format with a single electron gun. Deflection was electrostatic and magnetic, but due to patent problems, it was never put into production. 
It was also envisioned as a head-up display in aircraft. By the time the patent issues were solved, RCA had already invested heavily in conventional CRTs. 1968 marked the release of the Sony Trinitron brand with the model KV-1310, which was based on aperture grille technology. It was acclaimed for its improved output brightness. The Trinitron screen was readily identifiable by its upright cylindrical shape, a consequence of its unique triple-cathode, single-gun construction. In 1987, flat-screen CRTs were developed by Zenith for computer monitors, reducing reflections and helping increase image contrast and brightness. Such CRTs were expensive, which limited their use to computer monitors. Attempts were made to produce flat-screen CRTs using inexpensive and widely available float glass. In 1990, the first CRT with HD resolution, the Sony KW-3600HD, was released to the market; it is considered to be "historical material" by Japan's national museum. The Sony KWP-5500HD, an HD CRT projection TV, was released in 1992. In the mid-1990s, some 160 million CRTs were made per year. In the mid-2000s, Canon and Sony presented the surface-conduction electron-emitter display (SED) and field-emission display (FED), respectively. Both were flat-panel displays that had one (SED) or several (FED) electron emitters per subpixel in place of electron guns. The electron emitters were placed on a sheet of glass and the electrons were accelerated to a nearby sheet of glass coated with phosphors using an anode voltage. The electrons were not focused, making each subpixel essentially a flood-beam CRT. They were never put into mass production, as LCD technology was significantly cheaper, eliminating the market for such displays. The last large-scale manufacturer of (in this case, recycled) CRTs, Videocon, ceased production in 2015. CRT TVs stopped being made around the same time. In 2012, Samsung SDI and several other major companies were fined by the European Commission for price fixing of TV cathode-ray tubes; the same occurred in 2015 in the US and in 2018 in Canada. Worldwide sales of CRT computer monitors peaked in 2000, at 90 million units, while those of CRT TVs peaked in 2005 at 130 million units. Decline Beginning in the late 1990s to the early 2000s, CRTs began to be replaced with LCDs, starting with computer monitors smaller than 15 inches, largely because of their lower bulk. Among the first manufacturers to stop CRT production was Hitachi in 2001, followed by Sony in Japan in 2004. Flat-panel displays dropped in price and started significantly displacing cathode-ray tubes in the 2000s: LCD monitor sales began exceeding those of CRTs in 2003–2004, and LCD TV sales started exceeding those of CRTs in some markets in 2005. Samsung SDI stopped CRT production in 2012. Despite being a mainstay of display technology for decades, CRT-based computer monitors and TVs are now obsolete. Demand for CRT screens dropped in the late 2000s. Despite efforts from Samsung and LG to make CRTs competitive with their LCD and plasma counterparts, offering slimmer and cheaper models to compete with similarly sized and more expensive LCDs, CRTs eventually became obsolete and were relegated to developing markets and vintage enthusiasts once LCDs fell in price, with their lower bulk, weight and ability to be wall mounted coming as advantages. Some industries still use CRTs because it is too much effort, downtime, or cost to replace them, or because no substitute is available; a notable example is the airline industry. 
Planes such as the Boeing 747-400 and the Airbus A320 used CRT instruments in their glass cockpits instead of mechanical instruments. Airlines such as Lufthansa still use CRT technology, which also uses floppy disks for navigation updates. They are also used in some military equipment for similar reasons. , at least one company manufactures new CRTs for these markets. A popular consumer usage of CRTs is for retrogaming. Some games are impossible to play without CRT display hardware. Light guns only work on CRTs because they depend on the progressive timing properties of CRTs. Another reason people use CRTs due to the natural blending of these displays. Some games designed for CRT displays exploit this, which allows them to look more aesthetically pleasing on these displays. Constructions Body The body of a CRT is usually made up of three parts: A screen/faceplate/panel, a cone/funnel, and a neck. The joined screen, funnel and neck are known as the bulb or envelope. The neck is made from a glass tube while the funnel and screen are made by pouring and then pressing glass into a mold. The glass, known as CRT glass or TV glass, needs special properties to shield against x-rays while providing adequate light transmission in the screen or being very electrically insulating in the funnel and neck. The formulation that gives the glass its properties is also known as the melt. The glass is of very high quality, being almost contaminant and defect free. Most of the costs associated with glass production come from the energy used to melt the raw materials into glass. Glass furnaces for CRT glass production have several taps to allow molds to be replaced without stopping the furnace, to allow production of CRTs of several sizes. Only the glass used on the screen needs to have precise optical properties. The optical properties of the glass used on the screen affect color reproduction and purity in color CRTs. Transmittance, or how transparent the glass is, may be adjusted to be more transparent to certain colors (wavelengths) of light. Transmittance is measured at the center of the screen with a 546 nm wavelength light, and a 10.16mm thick screen. Transmittance goes down with increasing thickness. Standard transmittances for Color CRT screens are 86%, 73%, 57%, 46%, 42% and 30%. Lower transmittances are used to improve image contrast but they put more stress on the electron gun, requiring more power on the electron gun for a higher electron beam power to light the phosphors more brightly to compensate for the reduced transmittance. The transmittance must be uniform across the screen to ensure color purity. The radius (curvature) of screens has increased (grown less curved) over time, from 30 to 68 inches, ultimately evolving into completely flat screens, reducing reflections. The thickness of both curved and flat screens gradually increases from the center outwards, and with it, transmittance is gradually reduced. This means that flat-screen CRTs may not be completely flat on the inside. The glass used in CRTs arrives from the glass factory to the CRT factory as either separate screens and funnels with fused necks, for Color CRTs, or as bulbs made up of a fused screen, funnel and neck. There were several glass formulations for different types of CRTs, that were classified using codes specific to each glass manufacturer. The compositions of the melts were also specific to each manufacturer. 
Those optimized for high color purity and contrast were doped with neodymium, while those for monochrome CRTs were tinted to differing levels, depending on the formulation used, and had transmittances of 42% or 30%. Purity means ensuring that the correct colors are activated (for example, that red is displayed uniformly across the screen), while convergence ensures that images are not distorted. Convergence may be adjusted using a crosshatch pattern. CRT glass used to be made by dedicated companies such as AGC Inc., O-I Glass, Samsung Corning Precision Materials, Corning Inc., and Nippon Electric Glass; others such as Videocon, Sony for the US market, and Thomson made their own glass. The funnel and the neck are made of a leaded potash-soda glass or lead silicate glass formulation to shield against x-rays generated by high voltage electrons as they decelerate after striking a target, such as the phosphor screen or shadow mask of a color CRT. The velocity of the electrons depends on the anode voltage of the CRT; the higher the voltage, the higher the speed. The amount of x-rays emitted by a CRT can also be lowered by reducing the brightness of the image. Leaded glass is used because it is inexpensive while also shielding heavily against x-rays, although some funnels may also contain barium. The screen is usually instead made of a special lead-free silicate glass formulation with barium and strontium to shield against x-rays, as it does not brown with use, unlike glass containing lead. Another glass formulation uses 2–3% lead on the screen. Alternatively, zirconium can be used on the screen in combination with barium, instead of lead. Monochrome CRTs may have a tinted barium-lead glass formulation in both the screen and funnel, with a potash-soda lead glass in the neck; the potash-soda and barium-lead formulations have different thermal expansion coefficients. The glass used in the neck must be an excellent electrical insulator to contain the voltages used in the electron optics of the electron gun, such as focusing lenses. The lead in the glass causes it to brown (darken) with use due to x-rays, though usually the CRT cathode wears out from cathode poisoning before browning becomes apparent. The glass formulation determines the highest possible anode voltage and hence the maximum possible CRT screen size. For color, maximum voltages are often 24–32 kV, while for monochrome it is usually 21 or 24.5 kV, limiting the size of monochrome CRTs to 21 inches, or roughly 1 kV per inch. The voltage needed depends on the size and type of CRT. Since the formulations are different, they must be compatible with one another, having similar thermal expansion coefficients. The screen may also have an anti-glare or anti-reflective coating, or be ground to prevent reflections. CRTs may also have an anti-static coating. The leaded glass in the funnels of CRTs may contain 21–25% lead oxide (PbO), the neck may contain 30–40% lead oxide, and the screen may contain 12% barium oxide and 12% strontium oxide. A typical CRT contains several kilograms of lead as lead oxide in the glass, depending on its size; 12 inch CRTs contain 0.5 kg of lead in total while 32 inch CRTs contain up to 3 kg. Strontium oxide began being used in CRTs, its major application, in the 1970s; before this, CRTs used lead on the faceplate. Some early CRTs used a metal funnel insulated with polyethylene instead of glass with conductive material. Others had ceramic or blown Pyrex instead of pressed glass funnels. 
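As a rough, illustrative calculation of the statement above that electron velocity rises with anode voltage (the reason faster electrons produce harder x-rays on impact), the sketch below computes the beam-electron speed relativistically; the voltages used are the typical figures quoted in this section.

```python
# Illustrative calculation: speed of a beam electron accelerated through the anode voltage.
# Relativistic form, since the kinetic energy is a few percent of the electron rest energy.
import math

ELECTRON_REST_ENERGY_EV = 510_998.95   # m_e * c^2 in electronvolts
C = 299_792_458.0                      # speed of light, m/s

def beam_speed(anode_voltage_volts: float) -> float:
    """Speed (m/s) of an electron accelerated through the given potential difference."""
    gamma = 1.0 + anode_voltage_volts / ELECTRON_REST_ENERGY_EV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return beta * C

for kv in (21, 25, 32):                # typical monochrome / color anode voltages from the text
    v = beam_speed(kv * 1000)
    print(f"{kv} kV -> {v:.2e} m/s ({v / C:.0%} of c)")
```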
Early CRTs did not have a dedicated anode cap connection; the funnel was the anode connection, so it was live during operation. The funnel is coated on the inside and outside with a conductive coating, making the funnel a capacitor, helping stabilize and filter the anode voltage of the CRT, and significantly reducing the amount of time needed to turn on a CRT. The stability provided by the coating solved problems inherent to early power supply designs, as they used vacuum tubes. Because the funnel is used as a capacitor, the glass used in the funnel must be an excellent electrical insulator (dielectric). The inner coating has a positive voltage (the anode voltage that can be several kV) while the outer coating is connected to ground. CRTs powered by more modern power supplies do not need to be connected to ground, due to the more robust design of modern power supplies. The value of the capacitor formed by the funnel is 5–10 nF, although at the voltage the anode is normally supplied with. The capacitor formed by the funnel can also suffer from dielectric absorption, similarly to other types of capacitors. Because of this CRTs have to be discharged before handling to prevent injury. The depth of a CRT is related to its screen size. Usual deflection angles were 90° for computer monitor CRTs and small CRTs and 110° which was the standard in larger TV CRTs, with 120 or 125° being used in slim CRTs made since 2001–2005 in an attempt to compete with LCD TVs. Over time, deflection angles increased as they became practical, from 50° in 1938 to 110° in 1959, and 125° in the 2000s. 140° deflection CRTs were researched but never commercialized, as convergence problems were never resolved. Size and weight The size of a CRT can be measured by the screen's entire area (or face diagonal) or alternatively by only its viewable area (or diagonal) that is coated by phosphor and surrounded by black edges. While the viewable area may be rectangular, the edges of the CRT may have a curvature (e.g. black stripe CRTs, first made by Toshiba in 1972) or the edges may be black and truly flat (e.g. Flatron CRTs), or the viewable area may follow the curvature of the edges of the CRT (with or without black edges or curved edges). Small CRTs below 3 inches were made for handheld TVs such as the MTV-1 and viewfinders in camcorders. In these, there may be no black edges, that are however truly flat. Most of the weight of a CRT comes from the thick glass screen, which comprises 65% of the total weight of a CRT and limits its practical size (see ). The funnel and neck glass comprise the remaining 30% and 5% respectively. The glass in the funnel can vary in thickness, to join the thin neck with the thick screen. Chemically or thermally tempered glass may be used to reduce the weight of the CRT glass. Anode The outer conductive coating is connected to ground while the inner conductive coating is connected using the anode button/cap through a series of capacitors and diodes (a Cockcroft–Walton generator) to the high voltage flyback transformer; the inner coating is the anode of the CRT, which, together with an electrode in the electron gun, is also known as the final anode. The inner coating is connected to the electrode using springs. The electrode forms part of a bipotential lens. The capacitors and diodes serve as a voltage multiplier for the current delivered by the flyback. For the inner funnel coating, monochrome CRTs use aluminum while color CRTs use aquadag; Some CRTs may use iron oxide on the inside. 
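A back-of-the-envelope sketch of why the funnel capacitance matters for handling: with the 5–10 nF figure quoted above and an example anode voltage of 25 kV (illustrative, within the ranges given in this section), the stored energy E = ½CV² comes to a few joules, which is why a CRT should be discharged before handling.

```python
# Illustrative only: energy stored in the capacitor formed by the funnel coatings.
def stored_energy_joules(capacitance_farads: float, voltage_volts: float) -> float:
    return 0.5 * capacitance_farads * voltage_volts ** 2

for c_nf in (5, 10):
    e = stored_energy_joules(c_nf * 1e-9, 25_000)   # 25 kV is an example anode voltage
    print(f"{c_nf} nF at 25 kV -> {e:.2f} J")
# 5 nF -> ~1.6 J, 10 nF -> ~3.1 J: enough for a painful shock even with the set unplugged.
```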
On the outside, most CRTs (but not all) use aquadag. Aquadag is an electrically conductive graphite-based paint. In color CRTs, the aquadag is sprayed onto the interior of the funnel whereas historically aquadag was painted into the interior of monochrome CRTs. The anode is used to accelerate the electrons towards the screen and also collects the secondary electrons that are emitted by the phosphor particles in the vacuum of the CRT. The anode cap connection in modern CRTs must be able to handle up to 55–60kV depending on the size and brightness of the CRT. Higher voltages allow for larger CRTs, higher image brightness, or a tradeoff between the two. It consists of a metal clip that expands on the inside of an anode button that is embedded on the funnel glass of the CRT. The connection is insulated by a silicone suction cup, possibly also using silicone grease to prevent corona discharge. The anode button must be specially shaped to establish a hermetic seal between the button and funnel. X-rays may leak through the anode button, although that may not be the case in newer CRTs starting from the late 1970s to early 1980s, thanks to a new button and clip design. The button may consist of a set of 3 nested cups, with the outermost cup being made of a Nickel–Chromium–Iron alloy containing 40–49% of Nickel and 3–6% of Chromium to make the button easy to fuse to the funnel glass, with a first inner cup made of thick inexpensive iron to shield against x-rays, and with the second innermost cup also being made of iron or any other electrically conductive metal to connect to the clip. The cups must be heat resistant enough and have similar thermal expansion coefficients similar to that of the funnel glass to withstand being fused to the funnel glass. The inner side of the button is connected to the inner conductive coating of the CRT. The anode button may be attached to the funnel while its being pressed into shape in a mold. Alternatively, the x-ray shielding may instead be built into the clip. The flyback transformer is also known as an IHVT (Integrated High Voltage Transformer) if it includes a voltage multiplier. The flyback uses a ceramic or powdered iron core to enable efficient operation at high frequencies. The flyback contains one primary and many secondary windings that provide several different voltages. The main secondary winding supplies the voltage multiplier with voltage pulses to ultimately supply the CRT with the high anode voltage it uses, while the remaining windings supply the CRT's filament voltage, keying pulses, focus voltage and voltages derived from the scan raster. When the transformer is turned off, the flyback's magnetic field quickly collapses which induces high voltage in its windings. The speed at which the magnetic field collapses determines the voltage that is induced, so the voltage increases alongside its speed. A capacitor (Retrace Timing Capacitor) or series of capacitors (to provide redundancy) is used to slow the collapse of the magnetic field. The design of the high voltage power supply in a product using a CRT has an influence in the amount of x-rays emitted by the CRT. The amount of emitted x-rays increases with both higher voltages and currents. 
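A minimal sketch of the flyback principle described above, using the induced-voltage relation V = L·dI/dt: the faster the magnetic field collapses, the higher the pulse handed to the voltage multiplier. The inductance, current and switching time below are illustrative round numbers, not values for any particular transformer.

```python
# Illustrative only: induced voltage when a current is interrupted in an inductor.
def induced_voltage(inductance_henries: float, delta_current_amps: float, delta_time_seconds: float) -> float:
    return inductance_henries * delta_current_amps / delta_time_seconds

# e.g. a 5 mH primary with 2 A interrupted over 1 microsecond:
print(induced_voltage(5e-3, 2.0, 1e-6))   # -> 10,000 V, before any multiplication
# Slowing the collapse (as the retrace timing capacitor does) lowers the peak voltage.
```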
If the product, such as a TV set, uses an unregulated high voltage power supply, meaning that the anode and focus voltages drop as electron beam current rises when displaying a bright image, the amount of emitted x-rays is at its highest when the CRT is displaying a moderately bright image, since for dark images the higher anode voltage is offset by the low beam current, and for bright images the higher beam current is offset by the reduced anode voltage. The high voltage regulator and rectifier vacuum tubes in some old CRT TV sets may also emit x-rays. Electron gun The electron gun emits the electrons that ultimately hit the phosphors on the screen of the CRT. The electron gun contains a heater, which heats a cathode, which generates electrons that, using grids, are focused and ultimately accelerated into the screen of the CRT. The acceleration occurs in conjunction with the inner aluminum or aquadag coating of the CRT. The electron gun is positioned so that it aims at the center of the screen. It is inside the neck of the CRT, and it is held together and mounted to the neck using glass beads or glass support rods, which are the glass strips on the electron gun. The electron gun is made separately and then placed inside the neck through a process called "winding", or sealing. The electron gun has a glass wafer that is fused to the neck of the CRT, and the connections to the electron gun penetrate this wafer. Once the electron gun is inside the neck, its metal parts (grids) are arced against each other using high voltage to smooth any rough edges, in a process called spot knocking, to prevent rough edges in the grids from generating secondary electrons. Construction and method of operation The electron gun has an indirectly heated hot cathode that is heated by a tungsten filament heating element; the heater may draw 0.5–2 A of current depending on the CRT. The voltage applied to the heater can affect the life of the CRT. Heating the cathode energizes the electrons in it, aiding electron emission, while at the same time current is supplied to the cathode; typically anywhere from 140 mA at 1.5 V to 600 mA at 6.3 V. The cathode creates an electron cloud (emits electrons) whose electrons are extracted, accelerated and focused into an electron beam. Color CRTs have three cathodes: one each for red, green and blue. The heater sits inside the cathode but does not touch it; the cathode has its own separate electrical connection. The cathode material is coated onto a piece of nickel, which provides the electrical connection and structural support. There are several short circuits that can occur in a CRT electron gun. One is a heater-to-cathode short, which causes the cathode to permanently emit electrons and may produce an image with a bright red, green or blue tint and retrace lines, depending on the cathode(s) affected. Alternatively, the cathode may short to the control grid, possibly causing similar effects, or the control grid and screen grid (G2) can short, causing a very dark image or no image at all. The cathode may be surrounded by a shield to prevent sputtering. The cathode is a layer of barium oxide coated on the nickel support for electrical and mechanical support. The barium oxide must be activated by heating to enable it to release electrons. Activation is necessary because barium oxide is not stable in air, so it is applied to the cathode as barium carbonate, which cannot emit electrons. 
Activation heats the barium carbonate to decompose it into barium oxide and carbon dioxide while forming a thin layer of metallic barium on the cathode. Activation is done when forming the vacuum (described in ). After activation, the oxide can become damaged by several common gases such as water vapor, carbon dioxide, and oxygen. Alternatively, barium strontium calcium carbonate may be used instead of barium carbonate, yielding barium, strontium and calcium oxides after activation. During operation, the barium oxide is heated to 800–1000°C, at which point it starts shedding electrons. Since it is a hot cathode, it is prone to cathode poisoning, which is the formation of a positive ion layer that prevents the cathode from emitting electrons, reducing image brightness significantly or completely and causing focus and intensity to be affected by the frequency of the video signal preventing detailed images from being displayed by the CRT. The positive ions come from leftover air molecules inside the CRT or from the cathode itself that react over time with the surface of the hot cathode. Reducing metals such as manganese, zirconium, magnesium, aluminum or titanium may be added to the piece of nickel to lengthen the life of the cathode, as during activation, the reducing metals diffuse into the barium oxide, improving its lifespan, especially at high electron beam currents. In color CRTs with red, green and blue cathodes, one or more cathodes may be affected independently of the others, causing total or partial loss of one or more colors. CRTs can wear or burn out due to cathode poisoning. Cathode poisoning is accelerated by increased cathode current (overdriving). In color CRTs, since there are three cathodes, one for red, green and blue, a single or more poisoned cathode may cause the partial or complete loss of one or more colors, tinting the image. The layer may also act as a capacitor in series with the cathode, inducing thermal lag. The cathode may instead be made of scandium oxide or incorporate it as a dopant, to delay cathode poisoning, extending the life of the cathode by up to 15%. The amount of electrons generated by the cathodes is related to their surface area. A cathode with more surface area creates more electrons, in a larger electron cloud, which makes focusing the electron cloud into an electron beam more difficult. Normally, only a part of the cathode emits electrons unless the CRT displays images with parts that are at full image brightness; only the parts at full brightness cause all of the cathode to emit electrons. The area of the cathode that emits electrons grows from the center outwards as brightness increases, so cathode wear may be uneven. When only the center of the cathode is worn, the CRT may light brightly those parts of images that have full image brightness but not show darker parts of images at all, in such a case the CRT displays a poor gamma characteristic. A negative current is applied to the first (control) grid (G1) to converge the electrons from the hot cathode, creating an electron beam. G1 in practice is a Wehnelt cylinder. The brightness of the screen is not controlled by varying the anode voltage nor the electron beam current (they are never varied) despite them having an influence on image brightness, rather image brightness is controlled by varying the difference in voltage between the cathode and the G1 control grid. The second (screen) grid of the gun (G2) then accelerates the electrons towards the screen using several hundred DC volts. 
Then a third grid (G3) electrostatically focuses the electron beam before it is deflected and later accelerated by the anode voltage onto the screen. Electrostatic focusing of the electron beam may be accomplished using an einzel lens energized at up to 600 volts. Before electrostatic focusing, focusing the electron beam required a large, heavy and complex mechanical focusing system placed outside the electron gun. However, electrostatic focusing cannot be accomplished near the final anode of the CRT due to its high voltage in the dozens of Kilovolts, so a high voltage (≈600–8000 V) electrode, together with an electrode at the final anode voltage of the CRT, may be used for focusing instead. Such an arrangement is called a bipotential lens, which also offers higher performance than an einzel lens, or, focusing may be accomplished using a magnetic focusing coil together with a high anode voltage of dozens of kilovolts. However, magnetic focusing is expensive to implement, so it is rarely used in practice. Some CRTs may use two grids and lenses to focus the electron beam. The focus voltage is generated in the flyback using a subset of the flyback's high voltage winding in conjunction with a resistive voltage divider. The focus electrode is connected alongside the other connections that are in the neck of the CRT. There is a voltage called cutoff voltage which is the voltage that creates black on the screen since it causes the image on the screen created by the electron beam to disappear, the voltage is applied to G1. In a color CRT with three guns, the guns have different cutoff voltages. Many CRTs share grid G1 and G2 across all three guns, increasing image brightness and simplifying adjustment since on such CRTs there is a single cutoff voltage for all three guns (since G1 is shared across all guns). but placing additional stress on the video amplifier used to feed video into the electron gun's cathodes, since the cutoff voltage becomes higher. Monochrome CRTs do not suffer from this problem. In monochrome CRTs video is fed to the gun by varying the voltage on the first control grid. During retracing of the electron beam, the preamplifier that feeds the video amplifier is disabled and the video amplifier is biased to a voltage higher than the cutoff voltage to prevent retrace lines from showing, or G1 can have a large negative voltage applied to it to prevent electrons from getting out of the cathode. This is known as blanking. (see Vertical blanking interval and Horizontal blanking interval.) Incorrect biasing can lead to visible retrace lines on one or more colors, creating retrace lines that are tinted or white (for example, tinted red if the red color is affected, tinted magenta if the red and blue colors are affected, and white if all colors are affected). Alternatively, the amplifier may be driven by a video processor that also introduces an OSD (On Screen Display) into the video stream that is fed into the amplifier, using a fast blanking signal. TV sets and computer monitors that incorporate CRTs need a DC restoration circuit to provide a video signal to the CRT with a DC component, restoring the original brightness of different parts of the image. The electron beam may be affected by the Earth's magnetic field, causing it to normally enter the focusing lens off-center; this can be corrected using astigmation controls. Astigmation controls are both magnetic and electronic (dynamic); magnetic does most of the work while electronic is used for fine adjustments. 
One of the ends of the electron gun has a glass disk, the edges of which are fused with the edge of the neck of the CRT, possibly using frit; the metal leads that connect the electron gun to the outside pass through the disk. Some electron guns have a quadrupole lens with dynamic focus to alter the shape and adjust the focus of the electron beam, varying the focus voltage depending on the position of the electron beam to maintain image sharpness across the entire screen, specially at the corners. They may also have a bleeder resistor to derive voltages for the grids from the final anode voltage. After the CRTs were manufactured, they were aged to allow cathode emission to stabilize. The electron guns in color CRTs are driven by a video amplifier which takes a signal per color channel and amplifies it to 40–170 V per channel, to be fed into the electron gun's cathodes; each electron gun has its own channel (one per color) and all channels may be driven by the same amplifier, which internally has three separate channels. The amplifier's capabilities limit the resolution, refresh rate and contrast ratio of the CRT, as the amplifier needs to provide high bandwidth and voltage variations at the same time; higher resolutions and refresh rates need higher bandwidths (speed at which voltage can be varied and thus switching between black and white) and higher contrast ratios need higher voltage variations or amplitude for lower black and higher white levels. 30 MHz of bandwidth can usually provide 720p or 1080i resolution, while 20 MHz usually provides around 600 (horizontal, from top to bottom) lines of resolution, for example. The difference in voltage between the cathode and the control grid is what modulates the electron beam, modulating its current and thus creating shades of colors which create the image line by line and this can also affect the brightness of the image. The phosphors used in color CRTs produce different amounts of light for a given amount of energy, so to produce white on a color CRT, all three guns must output differing amounts of energy. The gun that outputs the most energy is the red gun since the red phosphor emits the least amount of light. Gamma CRTs have a pronounced triode characteristic, which results in significant gamma (a nonlinear relationship in an electron gun between applied video voltage and beam intensity). Deflection There are two types of deflection: magnetic and electrostatic. Magnetic is usually used in TVs and monitors as it allows for higher deflection angles (and hence shallower CRTs) and deflection power (which allows for higher electron beam current and hence brighter images) while avoiding the need for high voltages for deflection of up to 2 kV, while oscilloscopes often use electrostatic deflection since the raw waveforms captured by the oscilloscope can be applied directly (after amplification) to the vertical electrostatic deflection plates inside the CRT. Magnetic deflection Those that use magnetic deflection may use a yoke that has two pairs of deflection coils; one pair for vertical, and another for horizontal deflection. The yoke can be bonded (be integral) or removable. Those that were bonded used glue or a plastic to bond the yoke to the area between the neck and the funnel of the CRT while those with removable yokes are clamped. The yoke generates heat whose removal is essential since the conductivity of glass goes up with increasing temperature, the glass needs to be insulating for the CRT to remain usable as a capacitor. 
The temperature of the glass below the yoke is thus checked during the design of a new yoke. The yoke contains the deflection and convergence coils with a ferrite core to reduce loss of magnetic force as well as the magnetized rings used to align or adjust the electron beams in color CRTs (The color purity and convergence rings, for example) and monochrome CRTs. The yoke may be connected using a connector, the order in which the deflection coils of the yoke are connected determines the orientation of the image displayed by the CRT. The deflection coils may be held in place using polyurethane glue. The deflection coils are driven by sawtooth signals that may be delivered through VGA as horizontal and vertical sync signals. A CRT needs two deflection circuits: a horizontal and a vertical circuit, which are similar except that the horizontal circuit runs at a much higher frequency (a Horizontal scan rate) of 15–240 kHz depending on the refresh rate of the CRT and the number of horizontal lines to be drawn (the vertical resolution of the CRT). The higher frequency makes it more susceptible to interference, so an automatic frequency control (AFC) circuit may be used to lock the phase of the horizontal deflection signal to that of a sync signal, to prevent the image from becoming distorted diagonally. The vertical frequency varies according to the refresh rate of the CRT. So a CRT with a 60 Hz refresh rate has a vertical deflection circuit running at 60 Hz. The horizontal and vertical deflection signals may be generated using two circuits that work differently; the horizontal deflection signal may be generated using a voltage controlled oscillator (VCO) while the vertical signal may be generated using a triggered relaxation oscillator. In many TVs, the frequencies at which the deflection coils run is in part determined by the inductance value of the coils. CRTs had differing deflection angles; the higher the deflection angle, the shallower the CRT for a given screen size, but at the cost of more deflection power and lower optical performance. Higher deflection power means more current is sent to the deflection coils to bend the electron beam at a higher angle, which in turn may generate more heat or require electronics that can handle the increased power. Heat is generated due to resistive and core losses. The deflection power is measured in mA per inch. The vertical deflection coils may require ~24 volts while the horizontal deflection coils require ~120 volts to operate. The deflection coils are driven by deflection amplifiers. The horizontal deflection coils may also be driven in part by the horizontal output stage of a TV set. The stage contains a capacitor that is in series with the horizontal deflection coils that performs several functions, among them are: shaping the sawtooth deflection signal to match the curvature of the CRT and centering the image by preventing a DC bias from developing on the coil. At the beginning of retrace, the magnetic field of the coil collapses, causing the electron beam to return to the center of the screen, while at the same time the coil returns energy into capacitors, the energy of which is then used to force the electron beam to go to the left of the screen. Due to the high frequency at which the horizontal deflection coils operate, the energy in the deflection coils must be recycled to reduce heat dissipation. Recycling is done by transferring the energy in the deflection coils' magnetic field to a set of capacitors. 
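To a first approximation, the horizontal scan rate quoted above is the vertical refresh rate multiplied by the total number of scan lines per frame (visible lines plus vertical blanking). The sketch below illustrates this with rough, illustrative line totals rather than exact timing-standard values.

```python
# Illustrative only: approximate horizontal scan rate from refresh rate and line count.
def horizontal_rate_khz(refresh_hz: float, total_lines_per_frame: int) -> float:
    """Horizontal scan rate in kHz: refresh rate times total lines (visible + blanking)."""
    return refresh_hz * total_lines_per_frame / 1000.0

print(horizontal_rate_khz(29.97, 525))   # analog TV frame (interlaced): ~15.7 kHz
print(horizontal_rate_khz(85, 806))      # roughly a 1024x768 computer monitor at 85 Hz: ~68.5 kHz
```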
The voltage on the horizontal deflection coils is negative when the electron beam is on the left side of the screen and positive when the electron beam is on the right side of the screen. The energy required for deflection is dependent on the energy of the electrons. Higher energy (voltage and/or current) electron beams need more energy to be deflected, and are used to achieve higher image brightness. Electrostatic deflection Mostly used in oscilloscopes. Deflection is carried out by applying a voltage across two pairs of plates, one for horizontal, and the other for vertical deflection. The electron beam is steered by varying the voltage difference across plates in a pair; For example, applying a voltage to the upper plate of the vertical deflection pair, while keeping the voltage in the bottom plate at 0 volts, will cause the electron beam to be deflected towards the upper part of the screen; increasing the voltage in the upper plate while keeping the bottom plate at 0 will cause the electron beam to be deflected to a higher point in the screen (will cause the beam to be deflected at a higher deflection angle). The same applies with the horizontal deflection plates. Increasing the length and proximity between plates in a pair can also increase the deflection angle. Burn-in Burn-in is when images are physically "burned" into the screen of the CRT; this occurs due to degradation of the phosphors due to prolonged electron bombardment of the phosphors, and happens when a fixed image or logo is left for too long on the screen, causing it to appear as a "ghost" image or, in severe cases, also when the CRT is off. To counter this, screensavers were used in computers to minimize burn-in. Burn-in is not exclusive to CRTs, as it also happens to plasma displays and OLED displays. Evacuation The CRT's partial vacuum of to or less is evacuated or exhausted in a ~375–475 °C oven in a process called baking or bake-out. The evacuation process also outgasses any materials inside the CRT, while decomposing others such as the polyvinyl alcohol used to apply the phosphors. The heating and cooling are done gradually to avoid inducing stress, stiffening and possibly cracking the glass; the oven heats the gases inside the CRT, increasing the speed of the gas molecules which increases the chances of them getting drawn out by the vacuum pump. The temperature of the CRT is kept to below that of the oven, and the oven starts to cool just after the CRT reaches 400 °C, or, the CRT was kept at a temperature higher than 400 °C for up to 15–55 minutes. The CRT was heated during or after evacuation, and the heat may have been used simultaneously to melt the frit in the CRT, joining the screen and funnel. The pump used is a turbomolecular pump or a diffusion pump. Formerly mercury vacuum pumps were also used. After baking, the CRT is disconnected ("sealed or tipped off") from the vacuum pump. The getter is then fired using an RF (induction) coil. The getter is usually in the funnel or in the neck of the CRT. The getter material which is often barium-based, catches any remaining gas particles as it evaporates due to heating induced by the RF coil (that may be combined with exothermic heating within the material); the vapor fills the CRT, trapping any gas molecules that it encounters and condenses on the inside of the CRT forming a layer that contains trapped gas molecules. Hydrogen may be present in the material to help distribute the barium vapor. 
The material is heated to temperatures above 1000 °C, causing it to evaporate. Partial loss of vacuum in a CRT can result in a hazy image, blue glowing in the neck of the CRT, flashovers, loss of cathode emission or focusing problems. Rebuilding CRTs used to be rebuilt; repaired or refurbished. The rebuilding process included the disassembly of the CRT, the disassembly and repair or replacement of the electron gun(s), the removal and redeposition of phosphors and aquadag, etc. Rebuilding was popular until the 1960s because CRTs were expensive and wore out quickly, making repair worth it. The last CRT rebuilder in the US closed in 2010, and the last in Europe, RACS, which was located in France, closed in 2013. Reactivation Also known as rejuvenation, the goal is to temporarily restore the brightness of a worn CRT. This is often done by carefully increasing the voltage on the cathode heater and the current and voltage on the control grids of the electron gun manually. Some rejuvenators can also fix heater-to-cathode shorts by running a capacitive discharge through the short. Phosphors Phosphors in CRTs emit secondary electrons due to them being inside the vacuum of the CRT. The secondary electrons are collected by the anode of the CRT. Secondary electrons generated by phosphors need to be collected to prevent charges from developing in the screen, which would lead to reduced image brightness since the charge would repel the electron beam. The phosphors used in CRTs often contain rare earth metals, replacing earlier dimmer phosphors. Early red and green phosphors contained Cadmium, and some black and white CRT phosphors also contained beryllium in the form of Zinc beryllium silicate, although white phosphors containing cadmium, zinc and magnesium with silver, copper or manganese as dopants were also used. The rare earth phosphors used in CRTs are more efficient (produce more light) than earlier phosphors. The phosphors adhere to the screen because of Van der Waals and electrostatic forces. Phosphors composed of smaller particles adhere more strongly to the screen. The phosphors together with the carbon used to prevent light bleeding (in color CRTs) can be easily removed by scratching. Several dozen types of phosphors were available for CRTs. Phosphors were classified according to color, persistence, luminance rise and fall curves, color depending on anode voltage (for phosphors used in penetration CRTs), Intended use, chemical composition, safety, sensitivity to burn-in, and secondary emission properties. Examples of rare earth phosphors are yttrium oxide for red and yttrium silicide for blue in beam index tubes, while examples of earlier phosphors are copper cadmium sulfide for red, SMPTE-C phosphors have properties defined by the SMPTE-C standard, which defines a color space of the same name. The standard prioritizes accurate color reproduction, which was made difficult by the different phosphors and color spaces used in the NTSC and PAL color systems. PAL TV sets have subjectively better color reproduction due to the use of saturated green phosphors, which have relatively long decay times that are tolerated in PAL since there is more time in PAL for phosphors to decay, due to its lower framerate. SMPTE-C phosphors were used in professional video monitors. 
The phosphor coating on monochrome and color CRTs may have an aluminum coating on its rear side used to reflect light forward, provide protection against ions to prevent ion burn by negative ions on the phosphor, manage heat generated by electrons colliding against the phosphor, prevent static build up that could repel electrons from the screen, form part of the anode and collect the secondary electrons generated by the phosphors in the screen after being hit by the electron beam, providing the electrons with a return path. The electron beam passes through the aluminum coating before hitting the phosphors on the screen; the aluminum attenuates the electron beam voltage by about 1 kV. A film or lacquer may be applied to the phosphors to reduce the surface roughness of the surface formed by the phosphors to allow the aluminum coating to have a uniform surface and prevent it from touching the glass of the screen. This is known as filming. The lacquer contains solvents that are later evaporated; the lacquer may be chemically roughened to cause an aluminum coating with holes to be created to allow the solvents to escape. Phosphor persistence Various phosphors are available depending upon the needs of the measurement or display application. The brightness, color, and persistence of the illumination depends upon the type of phosphor used on the CRT screen. Phosphors are available with persistences ranging from less than one microsecond to several seconds. For visual observation of brief transient events, a long persistence phosphor may be desirable. For events which are fast and repetitive, or high frequency, a short-persistence phosphor is generally preferable. The phosphor persistence must be low enough to avoid smearing or ghosting artifacts at high refresh rates. Limitations and workarounds Blooming Variations in anode voltage can lead to variations in brightness in parts or all of the image, in addition to blooming, shrinkage or the image getting zoomed in or out. Lower voltages lead to blooming and zooming in, while higher voltages do the opposite. Some blooming is unavoidable, which can be seen as bright areas of an image that expand, distorting or pushing aside surrounding darker areas of the same image. Blooming occurs because bright areas have a higher electron beam current from the electron gun, making the beam wider and harder to focus. Poor voltage regulation causes focus and anode voltage to go down with increasing electron beam current. Doming Doming is a phenomenon found on some CRT TVs in which parts of the shadow mask become heated. In TVs that exhibit this behavior, it tends to occur in high-contrast scenes in which there is a largely dark scene with one or more localized bright spots. As the electron beam hits the shadow mask in these areas it heats unevenly. The shadow mask warps due to the heat differences, which causes the electron gun to hit the wrong colored phosphors and incorrect colors to be displayed in the affected area. Thermal expansion causes the shadow mask to expand by around 100 microns. During normal operation, the shadow mask is heated to around 80–90 °C. Bright areas of images heat the shadow mask more than dark areas, leading to uneven heating of the shadow mask and warping (blooming) due to thermal expansion caused by heating by increased electron beam current. 
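A back-of-the-envelope check on the roughly 100 micron expansion figure above, using the linear-expansion relation ΔL = αLΔT. The mask span and local temperature rise are illustrative assumptions, and the expansion coefficients are typical handbook values for ordinary steel and for the low-expansion Invar alloy discussed below.

```python
# Illustrative only: shadow mask growth for an assumed span and temperature rise.
ALPHA_STEEL = 12e-6    # linear expansion coefficient of steel, per kelvin (typical value)
ALPHA_INVAR = 1.2e-6   # Invar: roughly an order of magnitude lower

def expansion_microns(alpha_per_k: float, span_m: float, delta_t_k: float) -> float:
    return alpha_per_k * span_m * delta_t_k * 1e6

print(expansion_microns(ALPHA_STEEL, 0.3, 30))   # ~108 um: same order as the figure above
print(expansion_microns(ALPHA_INVAR, 0.3, 30))   # ~11 um: why Invar masks resist doming
```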
The shadow mask is usually made of steel but it can be made of Invar (a low-thermal-expansion nickel-iron alloy) as it withstands two to three times more current than conventional masks without noticeable warping, while making higher resolution CRTs easier to achieve. Coatings that dissipate heat may be applied on the shadow mask to limit blooming in a process called blackening. Bimetal springs may be used in CRTs used in TVs to compensate for warping that occurs as the electron beam heats the shadow mask, causing thermal expansion. The shadow mask is installed to the screen using metal pieces or a rail or frame that is fused to the funnel or the screen glass respectively, holding the shadow mask in tension to minimize warping (if the mask is flat, as used in flat-screen CRT computer monitors) and allowing for higher image brightness and contrast. Aperture grille screens are brighter since they allow more electrons through, but they require support wires. They are also more resistant to warping. Color CRTs need higher anode voltages than monochrome CRTs to achieve the same brightness since the shadow mask blocks most of the electron beam. Slot masks and especially aperture grilles do not block as many electrons, resulting in a brighter image for a given anode voltage, but aperture grille CRTs are heavier. Shadow masks block 80–85% of the electron beam while aperture grilles allow more electrons to pass through. High voltage Image brightness is related to the anode voltage and to the CRT's size, so higher voltages are needed for both larger screens and higher image brightness. Image brightness is also controlled by the current of the electron beam. Higher anode voltages and electron beam currents also mean higher amounts of x-rays and heat generation since the electrons have a higher speed and energy. Leaded glass and special barium-strontium glass are used to block most x-ray emissions. Size A practical limit on the size of a CRT is the weight of the thick glass needed to safely sustain its vacuum, since a CRT's exterior is exposed to the full atmospheric pressure, which for instance totals roughly 5,900 pounds-force (about 26 kN) on a 27-inch (400 in2) screen. For example, the large 43-inch Sony PVM-4300 weighs , much heavier than 32-inch CRTs (up to ) and 19-inch CRTs (up to ). Much lighter flat panel TVs are only ~ for 32-inch and for 19-inch. Size is also limited by anode voltage, as a larger CRT would require a higher dielectric strength to prevent arcing and the electrical losses and ozone generation it causes, without sacrificing image brightness. Shadow masks also become more difficult to make with increasing resolution and size. Limits imposed by deflection At high deflection angles, resolutions and refresh rates (since higher resolutions and refresh rates require significantly higher frequencies to be applied to the horizontal deflection coils), the deflection yoke starts to produce large amounts of heat, due to the need to move the electron beam at a higher angle, which in turn requires exponentially larger amounts of power. As an example, to increase the deflection angle from 90 to 120°, power consumption of the yoke must also go up from 40 watts to 80 watts, and to increase it further from 120 to 150°, deflection power must again go up from 80 to 160 watts. 
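As a rough check of the yoke-power figures just quoted, they double for every additional 30° of deflection, i.e. they follow an exponential curve. The short sketch below (Python, purely illustrative; only the three cited data points come from the text, and the functional form is simply an assumption fitted to them, not a physical model of yoke losses) reproduces the numbers.

    # Exponential fit through the three deflection-power figures cited above:
    # 40 W at 90 deg, 80 W at 120 deg, 160 W at 150 deg (doubling every 30 deg).
    def yoke_power_watts(deflection_deg: float) -> float:
        return 40.0 * 2.0 ** ((deflection_deg - 90.0) / 30.0)

    for angle in (90, 120, 150):
        print(f"{angle} deg: ~{yoke_power_watts(angle):.0f} W")
    # prints: 90 deg: ~40 W, 120 deg: ~80 W, 150 deg: ~160 W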
This normally makes CRTs that go beyond certain deflection angles, resolutions and refresh rates impractical, since the coils would generate too much heat due to resistance caused by the skin effect, surface and eddy current losses, and/or possibly causing the glass underneath the coil to become conductive (as the electrical conductivity of glass increases with increasing temperature). Some deflection yokes are designed to dissipate the heat that comes from their operation. Higher deflection angles in color CRTs directly affect convergence at the corners of the screen which requires additional compensation circuitry to handle electron beam power and shape, leading to higher costs and power consumption. Higher deflection angles allow a CRT of a given size to be slimmer, however they also impose more stress on the CRT envelope, specially on the panel, the seal between the panel and funnel and on the funnel. The funnel needs to be long enough to minimize stress, as a longer funnel can be better shaped to have lower stress. Comparison with other technologies LCD advantages over CRT: Lower bulk, power consumption and heat generation, higher refresh rates (up to 360 Hz) CRT advantages over LCD: Better color reproduction, no motion blur, multisyncing available in many monitors, no input lag OLED advantages over CRT: Lower bulk, similar color reproduction, higher contrast ratios, similar refresh rates (over 60 Hz, up to 120 Hz) except for computer monitors. On CRTs, refresh rate depends on resolution, both of which are ultimately limited by the maximum horizontal scanning frequency of the CRT. Motion blur also depends on the decay time of the phosphors. Phosphors that decay too slowly for a given refresh rate may cause smearing or motion blur on the image. In practice, CRTs are limited to a refresh rate of 160 Hz. LCDs that can compete with OLED (Dual Layer, and mini-LED LCDs) are not available in high refresh rates, although quantum dot LCDs (QLEDs) are available in high refresh rates (up to 144 Hz) and are competitive in color reproduction with OLEDs. CRT monitors can still outperform LCD and OLED monitors in input lag, as there is no signal processing between the CRT and the display connector of the monitor, since CRT monitors often use VGA which provides an analog signal that can be fed to a CRT directly. Video cards designed for use with CRTs may have a RAMDAC to generate the analog signals needed by the CRT. Also, CRT monitors are often capable of displaying sharp images at several resolutions, an ability known as multisyncing. Due to these reasons, CRTs are often preferred for playing video games made in the early 2000s and prior in spite of their bulk, weight and heat generation, with some pieces of technology requiring a CRT to function due to not being built with the functionality of modern displays in mind. CRTs tend to be more durable than their flat panel counterparts, though specialised LCDs that have similar durability also exist. Types CRTs were produced in two major categories, picture tubes and display tubes. Picture tubes were used in TVs while display tubes were used in computer monitors. Display tubes were of higher resolution and when used in computer monitors sometimes had adjustable overscan, or sometimes underscan. Picture tube CRTs have overscan, meaning the actual edges of the image are not shown; this is deliberate to allow for adjustment variations between CRT TVs, preventing the ragged edges (due to blooming) of the image from being shown on screen. 
The shadow mask may have grooves that reflect away the electrons that do not hit the screen due to overscan. Color picture tubes used in TVs were also known as CPTs. CRTs are also sometimes called Braun tubes. Monochrome CRTs If the CRT is a black and white (B&W or monochrome) CRT, there is a single electron gun in the neck and the funnel is coated on the inside with aluminum that has been applied by evaporation; the aluminum is evaporated in a vacuum and allowed to condense on the inside of the CRT. Aluminum eliminates the need for ion traps, necessary to prevent ion burn on the phosphor, while also reflecting light generated by the phosphor towards the screen, managing heat and absorbing electrons, providing a return path for them; previously funnels were coated on the inside with aquadag, used because it can be applied like paint; the phosphors were left uncoated. Aluminum started being applied to CRTs in the 1950s, coating the inside of the CRT including the phosphors, which also increased image brightness since the aluminum reflected light (that would otherwise be lost inside the CRT) towards the outside of the CRT. In aluminized monochrome CRTs, aquadag is used on the outside. There is a single aluminum coating covering the funnel and the screen. The screen, funnel and neck are fused together into a single envelope, possibly using lead enamel seals; a hole is made in the funnel onto which the anode cap is installed, and the phosphor, aquadag and aluminum are applied afterwards. Previously monochrome CRTs used ion traps that required magnets; the magnet was used to deflect the electrons away from the more difficult to deflect ions, letting the electrons through while letting the ions collide into a sheet of metal inside the electron gun. Ion burn results in premature wear of the phosphor. Since ions are harder to deflect than electrons, ion burn leaves a black dot in the center of the screen. The interior aquadag or aluminum coating was the anode and served to accelerate the electrons towards the screen and collect them after they hit the screen, while serving as a capacitor together with the outer aquadag coating. The screen has a single uniform phosphor coating and no shadow mask, technically having no resolution limit. Monochrome CRTs may use ring magnets to adjust the centering of the electron beam and magnets around the deflection yoke to adjust the geometry of the image. When a monochrome CRT is shut off, the image collapses to a small, white dot in the center of the screen, produced by the still-emitting electron gun as the deflection fields collapse; the dot sometimes takes a while to fade away. Color CRTs Color CRTs use three different phosphors which emit red, green, and blue light respectively. They are packed together in stripes (as in aperture grille designs) or clusters called "triads" (as in shadow mask CRTs). Color CRTs have three electron guns, one for each primary color (red, green and blue), arranged either in a straight line (in-line) or in an equilateral triangular configuration (the guns are usually constructed as a single unit). The triangular configuration is often called delta-gun, based on its relation to the shape of the Greek letter delta (Δ). The arrangement of the phosphors is the same as that of the electron guns. A grille or mask absorbs the electrons that would otherwise hit the wrong phosphor. 
A shadow mask tube uses a metal plate with tiny holes, typically in a delta configuration, placed so that the electron beam only illuminates the correct phosphors on the face of the tube; blocking all other electrons. Shadow masks that use slots instead of holes are known as slot masks. The holes or slots are tapered so that the electrons that strike the inside of any hole will be reflected back, if they are not absorbed (e.g. due to local charge accumulation), instead of bouncing through the hole to strike a random (wrong) spot on the screen. Another type of color CRT (Trinitron) uses an aperture grille of tensioned vertical wires to achieve the same result. The shadow mask has a single hole for each triad. The shadow mask is usually  inch behind the screen. Trinitron CRTs were different from other color CRTs in that they had a single electron gun with three cathodes, an aperture grille which lets more electrons through, increasing image brightness (since the aperture grille does not block as many electrons), and a vertically cylindrical screen, rather than a curved screen. The three electron guns are in the neck (except for Trinitrons) and the red, green and blue phosphors on the screen may be separated by a black grid or matrix (called black stripe by Toshiba). The funnel is coated with aquadag on both sides while the screen has a separate aluminum coating applied in a vacuum, deposited after the phosphor coating is applied, facing the electron gun. The aluminum coating protects the phosphor from ions, absorbs secondary electrons, providing them with a return path, preventing them from electrostatically charging the screen which would then repel electrons and reduce image brightness, reflects the light from the phosphors forwards and helps manage heat. It also serves as the anode of the CRT together with the inner aquadag coating. The inner coating is electrically connected to an electrode of the electron gun using springs, forming the final anode. The outer aquadag coating is connected to ground, possibly using a series of springs or a harness that makes contact with the aquadag. Shadow mask The shadow mask absorbs or reflects electrons that would otherwise strike the wrong phosphor dots, causing color purity issues (discoloration of images); in other words, when set up correctly, the shadow mask helps ensure color purity. When the electrons strike the shadow mask, they release their energy as heat and x-rays. If the electrons have too much energy due to an anode voltage that is too high for example, the shadow mask can warp due to the heat, which can also happen during the Lehr baking at ~435 °C of the frit seal between the faceplate and the funnel of the CRT. Shadow masks were replaced in TVs by slot masks in the 1970s, since slot masks let more electrons through, increasing image brightness. Shadow masks may be connected electrically to the anode of the CRT. Trinitron used a single electron gun with three cathodes instead of three complete guns. CRT PC monitors usually use shadow masks, except for Sony's Trinitron, Mitsubishi's Diamondtron and NEC's Cromaclear; Trinitron and Diamondtron use aperture grilles while Cromaclear uses a slot mask. Some shadow mask CRTs have color phosphors that are smaller in diameter than the electron beams used to light them, with the intention being to cover the entire phosphor, increasing image brightness. Shadow masks may be pressed into a curved shape. 
Screen manufacture Early color CRTs did not have a black matrix, which was introduced by Zenith in 1969, and Panasonic in 1970. The black matrix eliminates light leaking from one phosphor to another since the black matrix isolates the phosphor dots from one another, so part of the electron beam touches the black matrix. This is also made necessary by warping of the shadow mask. Light bleeding may still occur due to stray electrons striking the wrong phosphor dots. At high resolutions and refresh rates, phosphors only receive a very small amount of energy, limiting image brightness. Several methods were used to create the black matrix. One method coated the screen in photoresist such as dichromate-sensitized polyvinyl alcohol photoresist which was then dried and exposed; the unexposed areas were removed and the entire screen was coated in colloidal graphite to create a carbon film, and then hydrogen peroxide was used to remove the remaining photoresist alongside the carbon that was on top of it, creating holes that in turn created the black matrix. The photoresist had to be of the correct thickness to ensure sufficient adhesion to the screen, while the exposure step had to be controlled to avoid holes that were too small or large with ragged edges caused by light diffraction, ultimately limiting the maximum resolution of large color CRTs. The holes were then filled with phosphor using the method described above. Another method used phosphors suspended in an aromatic diazonium salt that adhered to the screen when exposed to light; the phosphors were applied, then exposed to cause them to adhere to the screen, repeating the process once for each color. Then carbon was applied to the remaining areas of the screen while exposing the entire screen to light to create the black matrix, and a fixing process using an aqueous polymer solution was applied to the screen to make the phosphors and black matrix resistant to water. Black chromium may be used instead of carbon in the black matrix. Other methods were also used. The phosphors are applied using photolithography. The inner side of the screen is coated with phosphor particles suspended in PVA photoresist slurry, which is then dried using infrared light, exposed, and developed. The exposure is done using a "lighthouse" that uses an ultraviolet light source with a corrector lens to allow the CRT to achieve color purity. Removable shadow masks with spring-loaded clips are used as photomasks. The process is repeated with all colors. Usually the green phosphor is the first to be applied. After phosphor application, the screen is baked to eliminate any organic chemicals (such as the PVA that was used to deposit the phosphor) that may remain on the screen. Alternatively, the phosphors may be applied in a vacuum chamber by evaporating them and allowing them to condense on the screen, creating a very uniform coating. Early color CRTs had their phosphors deposited using silkscreen printing. Phosphors may have color filters over them (facing the viewer), contain pigment of the color emitted by the phosphor, or be encapsulated in color filters to improve color purity and reproduction while reducing glare. Such technology was sold by Toshiba under the Microfilter brand name. Poor exposure due to insufficient light leads to poor phosphor adhesion to the screen, which limits the maximum resolution of a CRT, as the smaller phosphor dots required for higher resolutions cannot receive as much light due to their smaller size. 
After the screen is coated with phosphor and aluminum and the shadow mask is installed onto it, the screen is bonded to the funnel using a glass frit that may contain 65–88% of lead oxide by weight. The lead oxide is necessary for the glass frit to have a low melting temperature. Boron(III) oxide may also be present to stabilize the frit, with alumina powder as a filler to control the thermal expansion of the frit. The frit may be applied as a paste consisting of frit particles suspended in amyl acetate or in a polymer with an alkyl methacrylate monomer together with an organic solvent to dissolve the polymer and monomer. The CRT is then baked in an oven in what is called a Lehr bake, to cure the frit, sealing the funnel and screen together. The frit contains a large quantity of lead, causing color CRTs to contain more lead than their monochrome counterparts. Monochrome CRTs on the other hand do not require frit; the funnel can be fused directly to the glass by melting and joining the edges of the funnel and screen using gas flames. Frit is used in color CRTs to prevent deformation of the shadow mask and screen during the fusing process. The edges of the screen and the edges of the funnel that mate with the screen are never melted. A primer may be applied on the edges of the funnel and screen before the frit paste is applied to improve adhesion. The Lehr bake consists of several successive steps that heat and then cool the CRT gradually until it reaches a temperature of 435–475 °C (other sources may state different temperatures, such as 440 °C). After the Lehr bake, the CRT is flushed with air or nitrogen to remove contaminants, the electron gun is inserted and sealed into the neck of the CRT, and a vacuum is formed in the CRT. Convergence and purity in color CRTs Due to limitations in the dimensional precision with which CRTs can be manufactured economically, it has not been practically possible to build color CRTs in which three electron beams could be aligned to hit phosphors of respective color in acceptable coordination, solely on the basis of the geometric configuration of the electron gun axes and gun aperture positions, shadow mask apertures, etc. The shadow mask ensures that one beam will only hit spots of certain colors of phosphors, but minute variations in physical alignment of the internal parts among individual CRTs will cause variations in the exact alignment of the beams through the shadow mask, allowing some electrons from, for example, the red beam to hit, say, blue phosphors, unless some individual compensation is made for the variance among individual tubes. Color convergence and color purity are two aspects of this single problem. Firstly, for correct color rendering it is necessary that regardless of where the beams are deflected on the screen, all three hit the same spot (and nominally pass through the same hole or slot) on the shadow mask. This is called convergence. More specifically, the convergence at the center of the screen (with no deflection field applied by the yoke) is called static convergence, and the convergence over the rest of the screen area (especially at the edges and corners) is called dynamic convergence. The beams may converge at the center of the screen and yet stray from each other as they are deflected toward the edges; such a CRT would be said to have good static convergence but poor dynamic convergence. Secondly, each beam must only strike the phosphors of the color it is intended to strike and no others. This is called purity. 
Like convergence, there is static purity and dynamic purity, with the same meanings of "static" and "dynamic" as for convergence. Convergence and purity are distinct parameters; a CRT could have good purity but poor convergence, or vice versa. Poor convergence causes color "shadows" or "ghosts" along displayed edges and contours, as if the image on the screen were intaglio printed with poor registration. Poor purity causes objects on the screen to appear off-color while their edges remain sharp. Purity and convergence problems can occur at the same time, in the same or different areas of the screen or both over the whole screen, and either uniformly or to greater or lesser degrees over different parts of the screen. The solution to the static convergence and purity problems is a set of color alignment ring magnets installed around the neck of the CRT. These movable weak permanent magnets are usually mounted on the back end of the deflection yoke assembly and are set at the factory to compensate for any static purity and convergence errors that are intrinsic to the unadjusted tube. Typically there are two or three pairs of two magnets in the form of rings made of plastic impregnated with a magnetic material, with their magnetic fields parallel to the planes of the magnets, which are perpendicular to the electron gun axes. Often, one pair of rings has 2 poles, another has 4, and the remaining ring has 6 poles. Each pair of magnetic rings forms a single effective magnet whose field vector can be fully and freely adjusted (in both direction and magnitude). By rotating a pair of magnets relative to each other, their relative field alignment can be varied, adjusting the effective field strength of the pair. (As they rotate relative to each other, each magnet's field can be considered to have two opposing components at right angles, and these four components [two each for two magnets] form two pairs, one pair reinforcing each other and the other pair opposing and canceling each other. Rotating away from alignment, the magnets' mutually reinforcing field components decrease as they are traded for increasing opposed, mutually cancelling components.) By rotating a pair of magnets together, preserving the relative angle between them, the direction of their collective magnetic field can be varied. Overall, adjusting all of the convergence/purity magnets allows a finely tuned slight electron beam deflection or lateral offset to be applied, which compensates for minor static convergence and purity errors intrinsic to the uncalibrated tube. Once set, these magnets are usually glued in place, but normally they can be freed and readjusted in the field (e.g. by a TV repair shop) if necessary. On some CRTs, additional fixed adjustable magnets are added for dynamic convergence or dynamic purity at specific points on the screen, typically near the corners or edges. Further adjustment of dynamic convergence and purity typically cannot be done passively, but requires active compensation circuits, one to correct convergence horizontally and another to correct it vertically. In this case the deflection yoke contains convergence coils, a set of two per color, wound on the same core, to which the convergence signals are applied. That means 6 convergence coils in groups of 3, with 2 coils per group, with one coil for horizontal convergence correction and another for vertical convergence correction, with each group sharing a core. The groups are separated 120° from one another. 
Dynamic convergence is necessary because the front of the CRT and the shadow mask are not spherical, compensating for electron beam defocusing and astigmatism. The fact that the CRT screen is not spherical leads to geometry problems which may be corrected using a circuit. The signals used for convergence are parabolic waveforms derived from three signals coming from a vertical output circuit. The parabolic signal is fed into the convergence coils, while the other two are sawtooth signals that, when mixed with the parabolic signals, create the necessary signal for convergence. A resistor and diode are used to lock the convergence signal to the center of the screen to prevent it from being affected by the static convergence. The horizontal and vertical convergence circuits are similar. Each circuit has two resonators, one usually tuned to 15,625 Hz and the other to 31,250 Hz, which set the frequency of the signal sent to the convergence coils. Dynamic convergence may be accomplished using electrostatic quadrupole fields in the electron gun. Dynamic convergence means that the electron beam does not travel in a perfectly straight line between the deflection coils and the screen, since the convergence coils cause it to become curved to conform to the screen. The convergence signal may instead be a sawtooth signal with a slight sine wave appearance; the sine wave part is created using a capacitor in series with each deflection coil. In this case, the convergence signal is used to drive the deflection coils. The sine wave part of the signal causes the electron beam to move more slowly near the edges of the screen. The capacitors used to create the convergence signal are known as the s-capacitors. This type of convergence is necessary due to the high deflection angles and flat screens of many CRT computer monitors. The value of the s-capacitors must be chosen based on the scan rate of the CRT, so multi-syncing monitors must have different sets of s-capacitors, one for each refresh rate. Dynamic convergence may instead be accomplished in some CRTs using only the ring magnets, magnets glued to the CRT, and by varying the position of the deflection yoke, whose position may be maintained using set screws, a clamp and rubber wedges. 90° deflection angle CRTs may use "self-convergence" without dynamic convergence, which, together with the in-line triad arrangement, eliminates the need for separate convergence coils and related circuitry, reducing costs, complexity and CRT depth by 10 millimeters. Self-convergence works by means of "nonuniform" magnetic fields. Dynamic convergence is necessary in 110° deflection angle CRTs, and quadrupole windings on the deflection yoke at a certain frequency may also be used for dynamic convergence. Dynamic color convergence and purity are among the main reasons why, until late in their history, CRTs were long-necked (deep) and had biaxially curved faces; these geometric design characteristics are necessary for intrinsic passive dynamic color convergence and purity. Only starting around the 1990s did sophisticated active dynamic convergence compensation circuits become available that made short-necked and flat-faced CRTs workable. These active compensation circuits use the deflection yoke to finely adjust beam deflection according to the beam target location. The same techniques (and major circuit components) also make possible the adjustment of display image rotation, skew, and other complex raster geometry parameters through electronics under user control. 
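To make the shape of the correction waveforms mentioned above more concrete, the following sketch (Python, illustrative only; the 15,625 Hz PAL line rate is the figure quoted in the text, everything else is an assumption) shows one mathematical relationship between a sawtooth scan signal and a parabolic convergence waveform: integrating a linear ramp over one scan line yields a parabola that is zero at mid-scan and largest near the edges, which is the general shape a dynamic-convergence correction needs. Real convergence circuits derive these waveforms from the deflection stages as described above; this is only the underlying arithmetic.

    import numpy as np

    # One scan line at the 15,625 Hz PAL line rate lasts 64 microseconds.
    n = 1_000                                # samples per scan line
    ramp = np.linspace(-1.0, 1.0, n)         # sawtooth over one line (arbitrary units)

    parabola = np.cumsum(ramp) / n           # integral of a ramp is a parabola
    parabola -= parabola.min()               # offset: zero correction at mid-scan

    print(round(parabola[0], 2), round(parabola[n // 2], 2), round(parabola[-1], 2))
    # prints roughly 0.25 0.0 0.25 - maximal at the line edges, zero at the centre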
Alternatively, the guns can be aligned with one another (converged) using convergence rings placed right outside the neck; with one ring per gun. The rings can have north and south poles. There can be 4 sets of rings, one to adjust RGB convergence, a second to adjust Red and Blue convergence, a third to adjust vertical raster shift, and a fourth to adjust purity. The vertical raster shift adjusts the straightness of the scan line. CRTs may also employ dynamic convergence circuits, which ensure correct convergence at the edges of the CRT. Permalloy magnets may also be used to correct the convergence at the edges. Convergence is carried out with the help of a crosshatch (grid) pattern. Other CRTs may instead use magnets that are pushed in and out instead of rings. In early color CRTs, the holes in the shadow mask became progressively smaller as they extended outwards from the center of the screen, to aid in convergence. Magnetic shielding and degaussing If the shadow mask or aperture grille becomes magnetized, its magnetic field alters the paths of the electron beams. This causes errors of "color purity" as the electrons no longer follow only their intended paths, and some will hit some phosphors of colors other than the one intended. For example, some electrons from the red beam may hit blue or green phosphors, imposing a magenta or yellow tint to parts of the image that are supposed to be pure red. (This effect is localized to a specific area of the screen if the magnetization is localized.) Therefore, it is important that the shadow mask or aperture grille not be magnetized. The earth's magnetic field may have an effect on the color purity of the CRT. Because of this, some CRTs have external magnetic shields over their funnels. The magnetic shield may be made of soft iron or mild steel and contain a degaussing coil. The magnetic shield and shadow mask may be permanently magnetized by the earth's magnetic field, adversely affecting color purity when the CRT is moved. This problem is solved with a built-in degaussing coil, found in many TVs and computer monitors. Degaussing may be automatic, occurring whenever the CRT is turned on. The magnetic shield may also be internal, being on the inside of the funnel of the CRT. Color CRT displays in TV sets and computer monitors often have a built-in degaussing (demagnetizing) coil mounted around the perimeter of the CRT face. Upon power-up of the CRT display, the degaussing circuit produces a brief, alternating current through the coil which fades to zero over a few seconds, producing a decaying alternating magnetic field from the coil. This degaussing field is strong enough to remove shadow mask magnetization in most cases, maintaining color purity. In unusual cases of strong magnetization where the internal degaussing field is not sufficient, the shadow mask may be degaussed externally with a stronger portable degausser or demagnetizer. However, an excessively strong magnetic field, whether alternating or constant, may mechanically deform (bend) the shadow mask, causing a permanent color distortion on the display which looks very similar to a magnetization effect. Resolution Dot pitch defines the maximum resolution of the display, assuming delta-gun CRTs. In these, as the scanned resolution approaches the dot pitch resolution, moiré appears, as the detail being displayed is finer than what the shadow mask can render. Aperture grille monitors do not suffer from vertical moiré, however, because their phosphor stripes have no vertical detail. 
In smaller CRTs, these strips maintain position by themselves, but larger aperture-grille CRTs require one or two crosswise (horizontal) support strips; one for smaller CRTs, and two for larger ones. The support wires block electrons, causing the wires to be visible. In aperture grille CRTs, dot pitch is replaced by stripe pitch. Hitachi developed the Enhanced Dot Pitch (EDP) shadow mask, which uses oval holes instead of circular ones, with respective oval phosphor dots. Moiré is reduced in shadow mask CRTs by arranging the holes in the shadow mask in a honeycomb-like pattern. Projection CRTs Projection CRTs were used in CRT projectors and CRT rear-projection TVs, and are usually small (being 7–9 inches across); have a phosphor that generates either red, green or blue light, thus making them monochrome CRTs; and are similar in construction to other monochrome CRTs. Larger projection CRTs in general lasted longer, and were able to provide higher brightness levels and resolution, but were also more expensive. Projection CRTs have an unusually high anode voltage for their size (such as 27 or 25 kV for a 5 or 7-inch projection CRT respectively), and a specially made tungsten/barium cathode (instead of the pure barium oxide normally used) that consists of barium atoms embedded in 20% porous tungsten or barium and calcium aluminates or of barium, calcium and aluminum oxides coated on porous tungsten; the barium diffuses through the tungsten to emit electrons. The special cathode can deliver 2 mA of current instead of the 0.3mA of normal cathodes, which makes them bright enough to be used as light sources for projection. The high anode voltage and the specially made cathode increase the voltage and current, respectively, of the electron beam, which increases the light emitted by the phosphors, and also the amount of heat generated during operation; this means that projector CRTs need cooling. The screen is usually cooled using a container (the screen forms part of the container) with glycol; the glycol may itself be dyed, or colorless glycol may be used inside a container which may be colored (forming a lens known as a c-element). Colored lenses or glycol are used for improving color reproduction at the cost of brightness, and are only used on red and green CRTs. Each CRT has its own glycol, which has access to an air bubble to allow the glycol to shrink and expand as it cools and warms. Projector CRTs may have adjustment rings just like color CRTs to adjust astigmatism, which is flaring of the electron beam (stray light similar to shadows). They have three adjustment rings; one with two poles, one with four poles, and another with 6 poles. When correctly adjusted, the projector can display perfectly round dots without flaring. The screens used in projection CRTs were more transparent than usual, with 90% transmittance. The first projection CRTs were made in 1933. Projector CRTs were available with electrostatic and electromagnetic focusing, the latter being more expensive. Electrostatic focusing used electronics to focus the electron beam, together with focusing magnets around the neck of the CRT for fine focusing adjustments. This type of focusing degraded over time. Electromagnetic focusing was introduced in the early 1990s and included an electromagnetic focusing coil in addition to the already existing focusing magnets. Electromagnetic focusing was much more stable over the lifetime of the CRT, retaining 95% of its sharpness by the end of life of the CRT. 
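A back-of-the-envelope calculation using only the figures quoted earlier in this section shows why projection CRTs need liquid cooling while direct-view tubes do not (Python, illustrative; the 27 kV and 2 mA figures are those cited for projection CRTs, and 0.3 mA is the ordinary-cathode current also cited in the text).

    # Beam power deposited in the faceplate = anode voltage x beam current.
    anode_voltage_v = 27_000        # 27 kV, cited for a 5-inch projection CRT
    projection_current_a = 2e-3     # 2 mA from the special projection cathode
    ordinary_current_a = 0.3e-3     # 0.3 mA from an ordinary oxide cathode

    print(f"projection CRT: ~{anode_voltage_v * projection_current_a:.0f} W")      # ~54 W
    print(f"ordinary cathode, same voltage: ~{anode_voltage_v * ordinary_current_a:.0f} W")  # ~8 W
    # Tens of watts concentrated on a ~5-inch faceplate explains the glycol cooling.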
Beam-index tube The beam-index tube, also known as Uniray, Apple CRT or Indextron, was an attempt in the 1950s by Philco to create a color CRT without a shadow mask, eliminating convergence and purity problems, and allowing for shallower CRTs with higher deflection angles. It also required a lower voltage power supply for the final anode since it did not use a shadow mask, which normally blocks around 80% of the electrons generated by the electron gun. The lack of a shadow mask also made it immune to the earth's magnetic field while also making degaussing unnecessary and increasing image brightness. It was constructed similarly to a monochrome CRT, with an aquadag outer coating, an aluminum inner coating, and a single electron gun, but with a screen with an alternating pattern of red, green, blue and UV (index) phosphor stripes (similarly to a Trinitron) and with a side-mounted photomultiplier tube or photodiode pointed towards the rear of the screen and mounted on the funnel of the CRT, to track the electron beam so that the phosphors could be activated separately from one another by the same electron beam. Only the index phosphor stripe was used for tracking, and it was the only phosphor that was not covered by an aluminum layer. It was shelved because of the precision required to produce it. It was revived by Sony in the 1980s as the Indextron but its adoption was limited, at least in part due to the development of LCD displays. Beam-index CRTs also suffered from poor contrast ratios of only around 50:1 since some light emission by the phosphors was required at all times by the photodiodes to track the electron beam. It allowed for single-CRT color projectors due to the lack of a shadow mask; normally CRT projectors use three CRTs, one for each color, since a lot of heat is generated due to the high anode voltage and beam current, making a shadow mask impractical and inefficient since it would warp under the heat produced (shadow masks absorb most of the electron beam, and, hence, most of the energy carried by the relativistic electrons); the three CRTs meant that an involved calibration and adjustment procedure had to be carried out during installation of the projector, and moving the projector would require it to be recalibrated. A single CRT meant the need for calibration was eliminated, but brightness was decreased since the CRT screen had to be used for three colors instead of each color having its own CRT screen. A stripe pattern also imposes a horizontal resolution limit; in contrast, three-screen CRT projectors have no theoretical resolution limit, since each has a single, uniform phosphor coating. Flat CRTs Flat CRTs are those with a flat screen. Despite having a flat screen, they may not be completely flat, especially on the inside, instead having a greatly increased radius of curvature. A notable exception is the LG Flatron (made by LG.Philips Displays, later LP Displays) which is truly flat on the outside and inside, but has a bonded glass pane on the screen with a tensioned rim band to provide implosion protection. Such completely flat CRTs were first introduced by Zenith in 1986, and used flat tensioned shadow masks, where the shadow mask is held under tension, providing increased resistance to blooming. LG's Flatron technology is based on this technology developed by Zenith, now a subsidiary of LG. Flat CRTs have a number of challenges, like deflection. 
Vertical deflection boosters are required to increase the amount of current that is sent to the vertical deflection coils to compensate for the reduced curvature. The CRTs used in the Sinclair TV80, and in many Sony Watchmans were flat in that they were not deep and their front screens were flat, but their electron guns were put to a side of the screen. The TV80 used electrostatic deflection while the Watchman used magnetic deflection with a phosphor screen that was curved inwards. Similar CRTs were used in video door bells. Radar CRTs Radar CRTs such as the 7JP4 had a circular screen and scanned the beam from the center outwards. The deflection yoke rotated, causing the beam to rotate in a circular fashion. The screen often had two colors, often a bright short persistence color that only appeared as the beam scanned the display and a long persistence phosphor afterglow. When the beam strikes the phosphor, the phosphor brightly illuminates, and when the beam leaves, the dimmer long persistence afterglow would remain lit where the beam struck the phosphor, alongside the radar targets that were "written" by the beam, until the beam re-struck the phosphor. Oscilloscope CRTs In oscilloscope CRTs, electrostatic deflection is used, rather than the magnetic deflection commonly used with TV and other large CRTs. The beam is deflected horizontally by applying an electric field between a pair of plates to its left and right, and vertically by applying an electric field to plates above and below. TVs use magnetic rather than electrostatic deflection because the deflection plates obstruct the beam when the deflection angle is as large as is required for tubes that are relatively short for their size. Some Oscilloscope CRTs incorporate post deflection anodes (PDAs) that are spiral-shaped to ensure even anode potential across the CRT and operate at up to 15 kV. In PDA CRTs the electron beam is deflected before it is accelerated, improving sensitivity and legibility, specially when analyzing voltage pulses with short duty cycles. Microchannel plate When displaying fast one-shot events, the electron beam must deflect very quickly, with few electrons impinging on the screen, leading to a faint or invisible image on the display. Oscilloscope CRTs designed for very fast signals can give a brighter display by passing the electron beam through a micro-channel plate just before it reaches the screen. Through the phenomenon of secondary emission, this plate multiplies the number of electrons reaching the phosphor screen, giving a significant improvement in writing rate (brightness) and improved sensitivity and spot size as well. Graticules Most oscilloscopes have a graticule as part of the visual display, to facilitate measurements. The graticule may be permanently marked inside the face of the CRT, or it may be a transparent external plate made of glass or acrylic plastic. An internal graticule eliminates parallax error, but cannot be changed to accommodate different types of measurements. Oscilloscopes commonly provide a means for the graticule to be illuminated from the side, which improves its visibility. Image storage tubes These are found in analog phosphor storage oscilloscopes. These are distinct from digital storage oscilloscopes which rely on solid state digital memory to store the image. Where a single brief event is monitored by an oscilloscope, such an event will be displayed by a conventional tube only while it actually occurs. 
The use of a long persistence phosphor may allow the image to be observed after the event, but only for a few seconds at best. This limitation can be overcome by the use of a direct view storage cathode-ray tube (storage tube). A storage tube will continue to display the event after it has occurred until such time as it is erased. A storage tube is similar to a conventional tube except that it is equipped with a metal grid coated with a dielectric layer located immediately behind the phosphor screen. An externally applied voltage to the mesh initially ensures that the whole mesh is at a constant potential. This mesh is constantly exposed to a low velocity electron beam from a 'flood gun' which operates independently of the main gun. This flood gun is not deflected like the main gun but constantly 'illuminates' the whole of the storage mesh. The initial charge on the storage mesh is such as to repel the electrons from the flood gun which are prevented from striking the phosphor screen. When the main electron gun writes an image to the screen, the energy in the main beam is sufficient to create a 'potential relief' on the storage mesh. The areas where this relief is created no longer repel the electrons from the flood gun which now pass through the mesh and illuminate the phosphor screen. Consequently, the image that was briefly traced out by the main gun continues to be displayed after it has occurred. The image can be 'erased' by resupplying the external voltage to the mesh restoring its constant potential. The time for which the image can be displayed was limited because, in practice, the flood gun slowly neutralises the charge on the storage mesh. One way of allowing the image to be retained for longer is temporarily to turn off the flood gun. It is then possible for the image to be retained for several days. The majority of storage tubes allow for a lower voltage to be applied to the storage mesh which slowly restores the initial charge state. By varying this voltage a variable persistence is obtained. Turning off the flood gun and the voltage supply to the storage mesh allows such a tube to operate as a conventional oscilloscope tube. Vector monitors Vector monitors were used in early computer aided design systems and are in some late-1970s to mid-1980s arcade games such as Asteroids. They draw graphics point-to-point, rather than scanning a raster. Either monochrome or color CRTs can be used in vector displays, and the essential principles of CRT design and operation are the same for either type of display; the main difference is in the beam deflection patterns and circuits. Data storage tubes The Williams tube or Williams-Kilburn tube was a cathode-ray tube used to electronically store binary data. It was used in computers of the 1940s as a random-access digital storage device. In contrast to other CRTs in this article, the Williams tube was not a display device, and in fact could not be viewed since a metal plate covered its screen. Cat's eye In some vacuum tube radio sets, a "Magic Eye" or "Tuning Eye" tube was provided to assist in tuning the receiver. Tuning would be adjusted until the width of a radial shadow was minimized. This was used instead of a more expensive electromechanical meter, which later came to be used on higher-end tuners when transistor sets lacked the high voltage required to drive the device. The same type of device was used with tape recorders as a recording level meter, and for various other applications including electrical test equipment. 
Charactrons Some displays for early computers (those that needed to display more text than was practical using vectors, or that required high speed for photographic output) used Charactron CRTs. These incorporate a perforated metal character mask (stencil), which shapes a wide electron beam to form a character on the screen. The system selects a character on the mask using one set of deflection circuits, but that causes the extruded beam to be aimed off-axis, so a second set of deflection plates has to re-aim the beam so it is headed toward the center of the screen. A third set of plates places the character wherever required. The beam is unblanked (turned on) briefly to draw the character at that position. Graphics could be drawn by selecting the position on the mask corresponding to the code for a space (in practice, they were simply not drawn), which had a small round hole in the center; this effectively disabled the character mask, and the system reverted to regular vector behavior. Charactrons had exceptionally long necks, because of the need for three deflection systems. Nimo Nimo was the trademark of a family of small specialised CRTs manufactured by Industrial Electronic Engineers. These had 10 electron guns which produced electron beams in the form of digits in a manner similar to that of the charactron. The tubes were either simple single-digit displays or more complex 4- or 6- digit displays produced by means of a suitable magnetic deflection system. Having little of the complexities of a standard CRT, the tube required a relatively simple driving circuit, and as the image was projected on the glass face, it provided a much wider viewing angle than competitive types (e.g., nixie tubes). However, their requirement for several voltages and their high voltage made them uncommon. Flood-beam CRT Flood-beam CRTs are small tubes that are arranged as pixels for large video walls like Jumbotrons. The first screen using this technology (called Diamond Vision by Mitsubishi Electric) was introduced by Mitsubishi Electric for the 1980 Major League Baseball All-Star Game. It differs from a normal CRT in that the electron gun within does not produce a focused controllable beam. Instead, electrons are sprayed in a wide cone across the entire front of the phosphor screen, basically making each unit act as a single light bulb. Each one is coated with a red, green or blue phosphor, to make up the color sub-pixels. This technology has largely been replaced with light-emitting diode displays. Unfocused and undeflected CRTs were used as grid-controlled stroboscope lamps since 1958. Electron-stimulated luminescence (ESL) lamps, which use the same operating principle, were released in 2011. Print-head CRT CRTs with an unphosphored front glass but with fine wires embedded in it were used as electrostatic print heads in the 1960s. The wires would pass the electron beam current through the glass onto a sheet of paper where the desired content was therefore deposited as an electrical charge pattern. The paper was then passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image. Zeus – thin CRT display In the late 1990s and early 2000s Philips Research Laboratories experimented with a type of thin CRT known as the Zeus display, which contained CRT-like functionality in a flat-panel display. 
The cathode of this display was mounted under the front of the display, and the electrons from the cathode would be directed to the back of the display where they would stay until extracted by electrodes near the front of the display, and directed to the front of the display which had phosphor dots. The devices were demonstrated but never marketed. Slimmer CRT Some CRT manufacturers, both LG.Philips Displays (later LP Displays) and Samsung SDI, innovated CRT technology by creating a slimmer tube. Slimmer CRTs had the trade names Superslim, Ultraslim, Vixlim (by Samsung) and Cybertube and Cybertube+ (both by LG.Philips Displays). A flat CRT has a depth. The depth of Superslim was and Ultraslim was . Health concerns Ionizing radiation CRTs can emit a small amount of X-ray radiation; this is a result of the electron beam's bombardment of the shadow mask/aperture grille and phosphors, which produces bremsstrahlung (braking radiation) as the high-energy electrons are decelerated. The amount of radiation escaping the front of the monitor is widely considered to be not harmful. Food and Drug Administration regulations are used to strictly limit, for instance, TV receivers to 0.5 milliroentgens per hour at a specified distance from any external surface; since 2007, most CRTs have emissions that fall well below this limit. Note that the roentgen is an outdated unit and does not account for dose absorption. The conversion factor is about 0.877 rem per roentgen. Assuming that the viewer absorbed the entire dose (which is unlikely), and that they watched TV for 2 hours a day, a 0.5 milliroentgen hourly exposure would increase the viewer's yearly dose by about 320 millirem. For comparison, the average background radiation in the United States is 310 millirem a year. Negative effects of chronic radiation are not generally noticeable until doses over 20,000 millirem. The density of the x-rays that would be generated by a CRT is low because the raster scan of a typical CRT distributes the energy of the electron beam across the entire screen. Voltages above 15,000 volts are enough to generate "soft" x-rays. However, since CRTs may stay on for several hours at a time, the amount of x-rays generated by the CRT may become significant, hence the importance of using materials to shield against x-rays, such as the thick leaded glass and barium-strontium glass used in CRTs. Concerns about x-rays emitted by CRTs began in 1967 when it was found that TV sets made by General Electric were emitting "X-radiation in excess of desirable levels". It was later found that TV sets from all manufacturers were also emitting radiation. This caused TV industry representatives to be brought before a U.S. congressional committee, which later proposed a federal radiation regulation bill, which became the 1968 Radiation Control for Health and Safety Act. TV set owners were advised to always keep a distance of at least 6 feet from the screen of the TV set, and to avoid "prolonged exposure" at the sides, rear or underneath a TV set. It was discovered that most of the radiation was directed downwards. Owners were also told not to modify their set's internals, to avoid exposure to radiation. Headlines about "radioactive" TV sets continued until the end of the 1960s. There once was a proposal by two New York congressmen that would have forced TV set manufacturers to "go into homes to test all of the nation's 15 million color sets and to install radiation devices in them". 
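The yearly-dose figure quoted above follows directly from the cited numbers; a short check is shown below (Python, illustrative, assuming as the text does the worst case that the viewer absorbs the entire surface exposure).

    # Worked version of the worst-case dose estimate in the text.
    surface_exposure_mR_per_h = 0.5     # regulatory limit cited above
    viewing_h_per_day = 2
    days_per_year = 365
    rem_per_roentgen = 0.877            # approximate conversion used in the text

    yearly_exposure_mR = surface_exposure_mR_per_h * viewing_h_per_day * days_per_year
    yearly_dose_mrem = yearly_exposure_mR * rem_per_roentgen
    print(f"~{yearly_exposure_mR:.0f} mR/year, ~{yearly_dose_mrem:.0f} mrem/year")
    # ~365 mR/year, ~320 mrem/year - comparable to the ~310 mrem/year US background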
The FDA eventually began regulating radiation emissions from all electronic products in the US. Toxicity Older color and monochrome CRTs may have been manufactured with toxic substances, such as cadmium, in the phosphors. The rear glass tube of modern CRTs may be made from leaded glass, which represent an environmental hazard if disposed of improperly. Since 1970, glass in the front panel (the viewable portion of the CRT) used strontium oxide rather than lead, though the rear of the CRT was still produced from leaded glass. Monochrome CRTs typically do not contain enough leaded glass to fail EPA TCLP tests. While the TCLP process grinds the glass into fine particles in order to expose them to weak acids to test for leachate, intact CRT glass does not leach (The lead is vitrified, contained inside the glass itself, similar to leaded glass crystalware). Flicker At low refresh rates (60 Hz and below), the periodic scanning of the display may produce a flicker that some people perceive more easily than others, especially when viewed with peripheral vision. Flicker is commonly associated with CRT as most TVs run at 50 Hz (PAL) or 60 Hz (NTSC), although there are some 100 Hz PAL TVs that are flicker-free. Typically only low-end monitors run at such low frequencies, with most computer monitors supporting at least 75 Hz and high-end monitors capable of 100 Hz or more to eliminate any perception of flicker. Though the 100 Hz PAL was often achieved using interleaved scanning, dividing the circuit and scan into two beams of 50 Hz. Non-computer CRTs or CRT for sonar or radar may have long persistence phosphor and are thus flicker free. If the persistence is too long on a video display, moving images will be blurred. High-frequency audible noise 50 Hz/60 Hz CRTs used for TV operate with horizontal scanning frequencies of 15,750 and 15,734.27 Hz (for NTSC systems) or 15,625 Hz (for PAL systems). These frequencies are at the upper range of human hearing and are inaudible to many people; however, some people (especially children) will perceive a high-pitched tone near an operating CRT TV. The sound is due to magnetostriction in the magnetic core and periodic movement of windings of the flyback transformer but the sound can also be created by movement of the deflection coils, yoke or ferrite beads. This problem does not occur on 100/120 Hz TVs and on non-CGA (Color Graphics Adapter) computer displays, because they use much higher horizontal scanning frequencies that produce sound which is inaudible to humans (22 kHz to over 100 kHz). Implosion If the glass wall is damaged, atmospheric pressure can implode the vacuum tube into dangerous fragments which accelerate inward and then spray at high speed in all directions. Although modern cathode-ray tubes used in TVs and computer displays have epoxy-bonded face-plates or other measures to prevent shattering of the envelope, CRTs must be handled carefully to avoid injury. Implosion protection Early CRTs had a glass plate over the screen that was bonded to it using glue, creating a laminated glass screen: initially the glue was polyvinyl acetate (PVA), while later versions such as the LG Flatron used a resin, perhaps a UV-curable resin. The PVA degrades over time creating a "cataract", a ring of degraded glue around the edges of the CRT that does not allow light from the screen to pass through. Later CRTs instead use a tensioned metal rim band mounted around the perimeter that also provides mounting points for the CRT to be mounted to a housing. 
In a 19-inch CRT, the tensile stress in the rim band is 70 kg/cm2. Older CRTs were mounted to the TV set using a frame. The band is tensioned by heating it, then mounting it on the CRT; the band cools afterwards, shrinking in size and putting the glass under compression, which strengthens the glass and reduces the necessary thickness (and hence weight) of the glass. This makes the band an integral component that should never be removed from an intact CRT that still has a vacuum; attempting to remove it may cause the CRT to implode. The rim band prevents the CRT from imploding should the screen be broken. The rim band may be glued to the perimeter of the CRT using epoxy, preventing cracks from spreading beyond the screen and into the funnel. Alternatively the compression caused by the rim band may be used to cause any cracks in the screen to propagate laterally at a high speed so that they reach the funnel and fully penetrate it before they fully penetrate the screen. This is possible because the funnel has walls that are thinner than the screen. Fully penetrating the funnel first allows air to enter the CRT from a short distance behind the screen, and prevent an implosion by ensuring the screen is fully penetrated by the cracks and breaks only when the CRT already has air. Electric shock To accelerate the electrons from the cathode to the screen with enough energy to achieve sufficient image brightness, a very high voltage (EHT or extra-high tension) is required, from a few thousand volts for a small oscilloscope CRT to tens of thousands for a larger screen color TV. This is many times greater than household power supply voltage. Even after the power supply is turned off, some associated capacitors and the CRT itself may retain a charge for some time and therefore dissipate that charge suddenly through a ground such as an inattentive human grounding a capacitor discharge lead. An average monochrome CRT may use 1–1.5 kV of anode voltage per inch. Security concerns Under some circumstances, the signal radiated from the electron guns, scanning circuitry, and associated wiring of a CRT can be captured remotely and used to reconstruct what is shown on the CRT using a process called Van Eck phreaking. Special TEMPEST shielding can mitigate this effect. Such radiation of a potentially exploitable signal, however, occurs also with other display technologies and with electronics in general. Recycling Due to the toxins contained in CRT monitors the United States Environmental Protection Agency created rules (in October 2001) stating that CRTs must be brought to special e-waste recycling facilities. In November 2002, the EPA began fining companies that disposed of CRTs through landfills or incineration. Regulatory agencies, local and statewide, monitor the disposal of CRTs and other computer equipment. As electronic waste, CRTs are considered one of the hardest types to recycle. CRTs have relatively high concentration of lead and , both of which are necessary for the display. There are several companies in the United States that charge a small fee to collect CRTs, then subsidize their labor by selling the harvested copper, wire, and printed circuit boards. The United States Environmental Protection Agency (EPA) includes discarded CRT monitors in its category of "hazardous household waste" but considers CRTs that have been set aside for testing to be commodities if they are not discarded, speculatively accumulated, or left unprotected from weather and other damage. 
Various states participate in the recycling of CRTs, each with its own reporting requirements for collectors and recycling facilities. For example, in California the recycling of CRTs is governed by CalRecycle, the California Department of Resources Recycling and Recovery, through its Payment System. Recycling facilities that accept CRT devices from the business and residential sectors must obtain contact information, such as an address and phone number, to ensure the CRTs come from a California source in order to participate in the CRT Recycling Payment System. In Europe, disposal of CRT TVs and monitors is covered by the WEEE Directive. Multiple methods have been proposed for the recycling of CRT glass. The methods involve thermal, mechanical and chemical processes. All proposed methods remove the lead oxide content from the glass. Some companies operated furnaces to separate the lead from the glass. A coalition called the Recytube project was once formed by several European companies to devise a method to recycle CRTs. The phosphors used in CRTs often contain rare earth metals. A CRT contains about 7 grams of phosphor. The funnel can be separated from the screen of the CRT using laser cutting, diamond saws or wires, or a resistively heated nichrome wire. Leaded CRT glass was sold to be remelted into other CRTs, or even broken down and used in road construction, tiles, concrete, concrete and cement bricks, or fiberglass insulation, or used as flux in metals smelting. A considerable portion of CRT glass is landfilled, where it can pollute the surrounding environment. It is more common for CRT glass to be disposed of than recycled. See also Cathodoluminescence Crookes tube Scintillation (physics) Laser-powered phosphor display, similar to a CRT, replaces the electron beam with a laser beam Applications of CRTs for different display purposes: Analog television Image displaying Comparison of CRT, LCD, plasma, and OLED displays Overscan Raster scan Scan line Historical aspects: Direct-view bistable storage tube Flat-panel display Geer tube History of display technology Image dissector LCD television, LED-backlit LCD, LED display Penetron Surface-conduction electron-emitter display Trinitron Safety and precautions: Monitor filter Photosensitive epilepsy TCO Certification References Selected patents: Zworykin Television System External links Consumer electronics Display technology Television technology Vacuum tube displays Audiovisual introductions in 1897 Telecommunications-related introductions in 1897 Articles containing video clips Legacy hardware
Cathode-ray tube
[ "Technology", "Engineering" ]
24,545
[ "Information and communications technology", "Electronic engineering", "Television technology", "Display technology" ]
6,019
https://en.wikipedia.org/wiki/Computational%20chemistry
Computational chemistry is a branch of chemistry that uses computer simulations to assist in solving chemical problems. It uses methods of theoretical chemistry incorporated into computer programs to calculate the structures and properties of molecules, groups of molecules, and solids. The importance of this subject stems from the fact that, with the exception of some relatively recent findings related to the hydrogen molecular ion (dihydrogen cation), achieving an accurate quantum mechanical depiction of chemical systems analytically, or in a closed form, is not feasible. The complexity inherent in the many-body problem exacerbates the challenge of providing detailed descriptions of quantum mechanical systems. While computational results normally complement information obtained by chemical experiments, it can occasionally predict unobserved chemical phenomena. Overview Computational chemistry differs from theoretical chemistry, which involves a mathematical description of chemistry. However, computational chemistry involves the usage of computer programs and additional mathematical skills in order to accurately model various chemical problems. In theoretical chemistry, chemists, physicists, and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions. Historically, computational chemistry has had two different aspects: Computational studies, used to find a starting point for a laboratory synthesis or to assist in understanding experimental data, such as the position and source of spectroscopic peaks. Computational studies, used to predict the possibility of so far entirely unknown molecules or to explore reaction mechanisms not readily studied via experiments. These aspects, along with computational chemistry's purpose, have resulted in a whole host of algorithms. History Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927, using valence bond theory. The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 Introduction to Quantum Mechanics – with Applications to Chemistry, Eyring, Walter and Kimball's 1944 Quantum Chemistry, Heitler's 1945 Elementary Wave Mechanics – with Applications to Quantum Chemistry, and later Coulson's 1952 textbook Valence, each of which served as primary references for chemists in the decades to follow. With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were performed. Theoretical chemists became extensive users of the early digital computers. One significant advancement was marked by Clemens C. J. Roothaan's 1951 paper in the Reviews of Modern Physics. This paper focused largely on the "LCAO MO" approach (Linear Combination of Atomic Orbitals Molecular Orbitals). For many years, it was the second-most cited paper in that journal. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe. 
The first ab initio Hartree–Fock method calculations on diatomic molecules were performed in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960. The first polyatomic calculations using Gaussian orbitals were performed in the late 1950s. The first configuration interaction calculations were performed in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers. By 1971, when a bibliography of ab initio calculations was published, the largest molecules included were naphthalene and azulene. Abstracts of many earlier developments in ab initio theory have been published by Schaefer. In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method to determine electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) of molecules, ranging in complexity from butadiene and benzene to ovalene, were generated on computers at Berkeley and Oxford. These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO. In the early 1970s, efficient ab initio computer programs such as ATMOL, Gaussian, IBMOL, and POLYAYTOM, began to be used to speed ab initio calculations of molecular orbitals. Of these four programs, only Gaussian, now vastly expanded, is still in use, but many other programs are now in use. At the same time, the methods of molecular mechanics, such as MM2 force field, were developed, primarily by Norman Allinger. One of the first mentions of the term computational chemistry can be found in the 1970 book Computers and Their Role in the Physical Sciences by Sidney Fernbach and Abraham Haskell Taub, where they state "It seems, therefore, that 'computational chemistry' can finally be more and more of a reality." During the 1970s, widely different methods began to be seen as part of a new emerging discipline of computational chemistry. The Journal of Computational Chemistry was first published in 1980. Computational chemistry has featured in several Nobel Prize awards, most notably in 1998 and 2013. Walter Kohn, "for his development of the density-functional theory", and John Pople, "for his development of computational methods in quantum chemistry", received the 1998 Nobel Prize in Chemistry. Martin Karplus, Michael Levitt and Arieh Warshel received the 2013 Nobel Prize in Chemistry for "the development of multiscale models for complex chemical systems". Applications There are several fields within computational chemistry. The prediction of the molecular structure of molecules by the use of the simulation of forces, or more accurate quantum chemical methods, to find stationary points on the energy surface as the position of the nuclei is varied. Storing and searching for data on chemical entities (see chemical databases). Identifying correlations between chemical structures and properties (see quantitative structure–property relationship (QSPR) and quantitative structure–activity relationship (QSAR)). Computational approaches to help in the efficient synthesis of compounds. Computational approaches to design molecules that interact in specific ways with other molecules (e.g. drug design and catalysis). These fields can give rise to several applications as shown below. Catalysis Computational chemistry is a tool for analyzing catalytic systems without doing experiments. 
Modern electronic structure theory and density functional theory have allowed researchers to discover and understand catalysts. Computational studies apply theoretical chemistry to catalysis research. Density functional theory methods calculate the energies and orbitals of molecules to give models of those structures. Using these methods, researchers can predict values like activation energy, site reactivity and other thermodynamic properties. Data that is difficult to obtain experimentally can be found using computational methods to model the mechanisms of catalytic cycles. Skilled computational chemists provide predictions that are close to experimental data with proper consideration of methods and basis sets. With good computational data, researchers can predict how catalysts can be improved to lower the cost and increase the efficiency of these reactions. Drug development Computational chemistry is used in drug development to model potentially useful drug molecules, helping companies save time and cost. The drug discovery process involves analyzing data, finding ways to improve current molecules, finding synthetic routes, and testing those molecules. Computational chemistry helps with this process by predicting which experiments would be best to do, without having to conduct them all. Computational methods can also find values that are difficult to measure experimentally, like the pKa values of compounds. Methods like density functional theory can be used to model drug molecules and find their properties, like their HOMO and LUMO energies and molecular orbitals. Computational chemists also help companies with developing informatics, infrastructure and designs of drugs. Aside from drug synthesis, drug carriers based on nanomaterials are also researched by computational chemists. Simulation allows researchers to model environments in order to test the effectiveness and stability of drug carriers. Understanding how water interacts with these nanomaterials ensures stability of the material in human bodies. These computational simulations help researchers optimize the material and find the best way to structure these nanomaterials before making them. Computational chemistry databases Databases are useful for both computational and non-computational chemists in research and in verifying the validity of computational methods. Empirical data is used to analyze the error of computational methods against experimental data. Empirical data helps researchers validate their methods and basis sets, giving greater confidence in the results. Computational chemistry databases are also used in testing software or hardware for computational chemistry. Databases can also use purely calculated data. Purely calculated data uses calculated values over experimental values for databases. Purely calculated data avoids having to adjust for different experimental conditions, such as zero-point energy corrections. These calculations can also avoid experimental errors for difficult-to-test molecules. Though purely calculated data is often not perfect, identifying issues is often easier for calculated data than for experimental data. Databases also give public access to information for researchers to use. They contain data that other researchers have found and uploaded to these databases so that anyone can search for them. Researchers use these databases to find information on molecules of interest and learn what can be done with those molecules. Some publicly available chemistry databases include the following.
BindingDB: Contains experimental information about protein-small molecule interactions. RCSB: Stores publicly available 3D models of macromolecules (proteins, nucleic acids) and small molecules (drugs, inhibitors) ChEMBL: Contains data from research on drug development such as assay results. DrugBank: Data about mechanisms of drugs can be found here. Methods Ab initio method The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations – being derived directly from theory, with no inclusion of experimental data – are called ab initio methods. A theoretical approximation is rigorously defined on first principles and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods must be used, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made). Ab initio methods need to define a level of theory (the method) and a basis set. A basis set consists of functions centered on the molecule's atoms. These sets are then used to describe molecular orbitals via the linear combination of atomic orbitals (LCAO) molecular orbital method ansatz. A common type of ab initio electronic structure calculation is the Hartree–Fock method (HF), an extension of molecular orbital theory, where electron-electron repulsions in the molecule are not specifically taken into account; only the electrons' average effect is included in the calculation. As the basis set size increases, the energy and wave function tend towards a limit called the Hartree–Fock limit. Many types of calculations begin with a Hartree–Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. These types of calculations are termed post-Hartree–Fock methods. By continually improving these methods, scientists can get increasingly closer to perfectly predicting the behavior of atomic and molecular systems under the framework of quantum mechanics, as defined by the Schrödinger equation. To obtain exact agreement with the experiment, it is necessary to include specific terms, some of which are far more important for heavy atoms than lighter ones. In most cases, the Hartree–Fock wave function occupies a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is inadequate, and several configurations must be used. The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without full knowledge of the complete surface. Computational thermochemistry A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol. 
To reach that accuracy in an economic way, it is necessary to use a series of post-Hartree–Fock methods and combine the results. These methods are called quantum chemistry composite methods. Chemical dynamics After the electronic and nuclear variables are separated within the Born–Oppenheimer representation, the wave packet corresponding to the nuclear degrees of freedom is propagated via the time evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms. The most popular methods for propagating the wave packet associated with the molecular geometry are: the Chebyshev (real) polynomial, the multi-configuration time-dependent Hartree method (MCTDH), the semiclassical method and the split operator technique explained below. Split operator technique How a computational method solves quantum equations impacts the accuracy and efficiency of the method. The split operator technique is one of these methods for solving differential equations. In computational chemistry, the split operator technique reduces the computational cost of simulating chemical systems. Computational cost refers to how much time it takes for computers to calculate these chemical systems, which can be days for more complex systems. Quantum systems are difficult and time-consuming to solve for humans. Split operator methods help computers calculate these systems quickly by solving the sub-problems in a quantum differential equation. The method does this by separating the differential equation into two different equations (or more, when there are more than two operators). Once solved, the split equations are combined into one equation again to give an easily calculable solution. This method is used in many fields that require solving differential equations, such as biology. However, the technique comes with a splitting error. For example, a formal solution of a differential equation may take the form exp(h(A + B)) for two operators A and B. This exponential can be split into the product exp(hA) exp(hB), but when A and B do not commute the split expression is not exact, only similar. This is an example of first-order splitting. There are ways to reduce this error, which include taking an average of two split equations. Another way to increase accuracy is to use higher-order splitting. Usually, second-order splitting is the most that is done, because higher-order splitting requires much more time to calculate and is not worth the cost. Higher-order methods become too difficult to implement and are not useful for solving differential equations despite the higher accuracy. Computational chemists spend much time making systems calculated with the split operator technique more accurate while minimizing the computational cost. Choosing and calibrating calculation methods is a massive challenge for many chemists trying to simulate molecules or chemical environments. Density functional methods Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function.
In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. Some methods combine the density functional exchange functional with the Hartree–Fock exchange term and are termed hybrid functional methods. Semi-empirical methods Semi-empirical quantum chemistry methods are based on the Hartree–Fock method formalism, but make many approximations and obtain some parameters from empirical data. They were very important in computational chemistry from the 1960s to the 1990s, especially for treating large molecules where the full Hartree–Fock method without the approximations was too costly. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods. Primitive semi-empirical methods were designed even earlier, in which the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the extended Hückel method proposed by Roald Hoffmann. Sometimes, Hückel methods are referred to as "completely empirical" because they do not derive from a Hamiltonian. Yet, the terms "empirical methods" or "empirical force fields" are usually used to describe molecular mechanics. Molecular mechanics In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use a single classical expression for the energy of a compound, for instance the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations. The database of compounds used for parameterization (the resulting set of parameters and functions is called the force field) is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance proteins, would be expected to be relevant only when describing other molecules of the same class. These methods can be applied to proteins and other large biological molecules, and allow studies of the approach and interaction (docking) of potential drug molecules. Molecular dynamics Molecular dynamics (MD) uses either quantum mechanics, molecular mechanics or a mixture of both to calculate forces, which are then used to solve Newton's laws of motion to examine the time-dependent behavior of systems. The result of a molecular dynamics simulation is a trajectory that describes how the position and velocity of particles vary with time. The phase point of a system, described by the positions and momenta of all its particles at a previous time point, determines the next phase point in time by integration of Newton's laws of motion. Monte Carlo Monte Carlo (MC) generates configurations of a system by making random changes to the positions of its particles, together with their orientations and conformations where appropriate. It is a random sampling method, which makes use of so-called importance sampling. Importance sampling methods preferentially generate low-energy states, which enables properties to be calculated accurately. The potential energy of each configuration of the system can be calculated, together with the values of other properties, from the positions of the atoms.
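A minimal sketch of the Metropolis scheme may help make the importance sampling described above concrete. The example below is an illustration added here rather than a method taken from any cited source; it assumes Python with NumPy and samples a single coordinate in a harmonic potential, comparing the average potential energy with the classical equipartition value kT/2.

```python
# Illustrative sketch (not from the article): Metropolis Monte Carlo sampling of a
# single particle in a 1D harmonic potential U(x) = 0.5*k*x^2, with kB*T = 1.
# The acceptance rule implements importance sampling: configurations are visited
# with Boltzmann probability proportional to exp(-U/kT).
import numpy as np

rng = np.random.default_rng(0)
k, kT = 1.0, 1.0        # force constant and thermal energy (arbitrary units)
step = 1.0              # maximum trial displacement
n_steps = 200_000

def energy(x):
    return 0.5 * k * x**2

x = 0.0
e = energy(x)
samples = []
for _ in range(n_steps):
    x_new = x + rng.uniform(-step, step)   # random change to the "configuration"
    e_new = energy(x_new)
    # Metropolis criterion: always accept downhill moves; accept uphill moves
    # with probability exp(-dE/kT).
    if e_new <= e or rng.random() < np.exp(-(e_new - e) / kT):
        x, e = x_new, e_new
    samples.append(e)

print("estimated <U> =", np.mean(samples[10_000:]))  # discard equilibration steps
print("equipartition value kT/2 =", 0.5 * kT)
```

Production Monte Carlo codes apply the same acceptance rule to trial moves of many atomic positions, orientations and conformations, with the energy supplied by a force field rather than a single harmonic term.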
Quantum mechanics/molecular mechanics (QM/MM) QM/MM is a hybrid method that attempts to combine the accuracy of quantum mechanics with the speed of molecular mechanics. It is useful for simulating very large molecules such as enzymes. Quantum Computational Chemistry Quantum computational chemistry aims to exploit quantum computing to simulate chemical systems, distinguishing itself from the QM/MM (Quantum Mechanics/Molecular Mechanics) approach. While QM/MM uses a hybrid approach, combining quantum mechanics for a portion of the system with classical mechanics for the remainder, quantum computational chemistry exclusively uses quantum computing methods to represent and process information, such as Hamiltonian operators. Conventional computational chemistry methods often struggle with the complex quantum mechanical equations, particularly due to the exponential growth of the size of a quantum system's wave function. Quantum computational chemistry addresses these challenges using quantum computing methods, such as qubitization and quantum phase estimation, which are believed to offer scalable solutions. Qubitization involves adapting the Hamiltonian operator for more efficient processing on quantum computers, enhancing the simulation's efficiency. Quantum phase estimation, on the other hand, assists in accurately determining energy eigenstates, which are critical for understanding the quantum system's behavior. While these techniques have advanced the field of computational chemistry, especially in the simulation of chemical systems, their practical application is currently limited mainly to smaller systems due to technological constraints. Nevertheless, these developments may lead to significant progress towards achieving more precise and resource-efficient quantum chemistry simulations. Computational costs in chemistry algorithms The computational cost and algorithmic complexity in chemistry are used to help understand and predict chemical phenomena. They help determine which algorithms/computational methods to use when solving chemical problems. This section focuses on the scaling of computational complexity with molecule size and details the algorithms commonly used in both domains (quantum chemistry and molecular dynamics). In quantum chemistry, particularly, the complexity can grow exponentially with the number of electrons involved in the system. This exponential growth is a significant barrier to simulating large or complex systems accurately. Advanced algorithms in both fields strive to balance accuracy with computational efficiency. For instance, in MD, methods like Verlet integration or Beeman's algorithm are employed for their computational efficiency. In quantum chemistry, hybrid methods combining different computational approaches (like QM/MM) are increasingly used to tackle large biomolecular systems. Algorithmic complexity examples The following list illustrates the impact of computational complexity on algorithms used in chemical computations. It is important to note that while this list provides key examples, it is not comprehensive and serves as a guide to understanding how computational demands influence the selection of specific computational methods in chemistry. Molecular dynamics Algorithm Solves Newton's equations of motion for atoms and molecules. Complexity The standard pairwise interaction calculation in MD leads to an O(N^2) complexity for N particles. This is because each particle interacts with every other particle, resulting in N(N-1)/2 interactions.
Advanced algorithms, such as the Ewald summation or Fast Multipole Method, reduce this to O(N log N) or even O(N) by grouping distant particles and treating them as a single entity or using clever mathematical approximations. Quantum mechanics/molecular mechanics (QM/MM) Algorithm Combines quantum mechanical calculations for a small region with molecular mechanics for the larger environment. Complexity The complexity of QM/MM methods depends on both the size of the quantum region and the method used for quantum calculations. For example, if a Hartree-Fock method is used for the quantum part, the complexity can be approximated as O(M^3), where M is the number of basis functions in the quantum region. This complexity arises from the need to solve a set of coupled equations iteratively until self-consistency is achieved. Hartree-Fock method Algorithm Finds a single Fock state that minimizes the energy. Complexity NP-hard or NP-complete as demonstrated by embedding instances of the Ising model into Hartree-Fock calculations. The Hartree-Fock method involves solving the Roothaan-Hall equations, which scale as O(M^3) to O(M^4) depending on implementation, with M being the number of basis functions. The computational cost mainly comes from evaluating and transforming the two-electron integrals. This proof of NP-hardness or NP-completeness comes from embedding problems like the Ising model into the Hartree-Fock formalism. Density functional theory Algorithm Investigates the electronic structure or nuclear structure of many-body systems such as atoms, molecules, and the condensed phases. Complexity Traditional implementations of DFT typically scale as O(M^3), mainly due to the need to diagonalize the Kohn-Sham matrix. The diagonalization step, which finds the eigenvalues and eigenvectors of the matrix, contributes most to this scaling. Recent advances in DFT aim to reduce this complexity through various approximations and algorithmic improvements. Standard CCSD and CCSD(T) method Algorithm CCSD and CCSD(T) methods are advanced electronic structure techniques involving single, double, and in the case of CCSD(T), perturbative triple excitations for calculating electronic correlation effects. Complexity CCSD Scales as O(M^6), where M is the number of basis functions. This intense computational demand arises from the inclusion of single and double excitations in the electron correlation calculation. CCSD(T) With the addition of perturbative triples, the complexity increases to O(M^7). This elevated complexity restricts practical usage to smaller systems, typically up to 20-25 atoms in conventional implementations. Linear-scaling CCSD(T) method Algorithm An adaptation of the standard CCSD(T) method using local natural orbitals (NOs) to significantly reduce the computational burden and enable application to larger systems. Complexity Achieves linear scaling with the system size, a major improvement over the steep polynomial scaling of conventional CCSD(T). This advancement allows for practical applications to molecules of up to 100 atoms with reasonable basis sets, marking a significant step forward in computational chemistry's capability to handle larger systems with high accuracy. Proving the complexity classes for algorithms involves a combination of mathematical proof and computational experiments. For example, in the case of the Hartree-Fock method, the proof of NP-hardness is a theoretical result derived from complexity theory, specifically through reductions from known NP-hard problems.
For other methods like MD or DFT, the computational complexity is often empirically observed and supported by algorithm analysis. In these cases, the proof of correctness is less about formal mathematical proofs and more about consistently observing the computational behaviour across various systems and implementations. Accuracy Computational chemistry is not an exact description of real-life chemistry, as the mathematical and physical models of nature can only provide an approximation. However, the majority of chemical phenomena can be described to a certain degree in a qualitative or approximate quantitative computational scheme. Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost. Accuracy can always be improved with greater computational cost. Significant errors can present themselves in ab initio models comprising many electrons, due to the computational cost of full relativistic-inclusive methods. This complicates the study of molecules interacting with high atomic mass unit atoms, such as transitional metals and their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of small molecules that contain up to about 40 electrons with errors for energies less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometers and bond angles within 0.5 degrees. The treatment of larger molecules that contain a few dozen atoms is computationally tractable by more approximate methods such as density functional theory (DFT). There is some dispute within the field whether or not the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods that use what are called molecular mechanics (MM).In QM-MM methods, small parts of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM). Software packages Many self-sufficient computational chemistry software packages exist. Some include many methods covering a wide range, while others concentrate on a very specific range or even on one method. Details of most of them can be found in: Biomolecular modelling programs: proteins, nucleic acid. Molecular mechanics programs. Quantum chemistry and solid state-physics software supporting several methods. Molecular design software Semi-empirical programs. Valence bond programs. 
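As an illustration of how packages of the kind listed above are typically driven, the following sketch runs a restricted Hartree–Fock calculation on the hydrogen molecule. The choice of the open-source PySCF library, the STO-3G basis set and the 0.74 Å bond length are assumptions made for this example; the article itself does not single out any particular package.

```python
# Minimal sketch (assumptions: PySCF installed, STO-3G basis, 0.74 Angstrom H-H bond).
# It defines a molecule, picks a basis set, and solves the Hartree-Fock equations,
# i.e. the "level of theory plus basis set" combination described in the Methods section.
from pyscf import gto, scf

mol = gto.M(
    atom="H 0 0 0; H 0 0 0.74",  # geometry in Angstrom
    basis="sto-3g",              # minimal Gaussian basis set
)
mf = scf.RHF(mol)                # restricted Hartree-Fock: mean-field electron-electron repulsion
e_hf = mf.kernel()               # iterate to self-consistency
print("RHF total energy (Hartree):", e_hf)
```

Switching to a larger basis set, or adding a post-Hartree–Fock correction such as MP2 or coupled cluster, would move the result toward the limits discussed in the Methods and Accuracy sections, at correspondingly higher computational cost.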
Specialized journals on computational chemistry Annual Reports in Computational Chemistry Computational and Theoretical Chemistry Computational and Theoretical Polymer Science Computers & Chemical Engineering Journal of Chemical Information and Modeling Journal of Chemical Software Journal of Chemical Theory and Computation Journal of Cheminformatics Journal of Computational Chemistry Journal of Computer Aided Chemistry Journal of Computer Chemistry Japan Journal of Computer-aided Molecular Design Journal of Theoretical and Computational Chemistry Molecular Informatics Theoretical Chemistry Accounts External links NIST Computational Chemistry Comparison and Benchmark DataBase – Contains a database of thousands of computational and experimental results for hundreds of systems American Chemical Society Division of Computers in Chemistry – American Chemical Society Computers in Chemistry Division, resources for grants, awards, contacts and meetings. CSTB report Mathematical Research in Materials Science: Opportunities and Perspectives – CSTB Report 3.320 Atomistic Computer Modeling of Materials (SMA 5107) Free MIT Course Chem 4021/8021 Computational Chemistry Free University of Minnesota Course Technology Roadmap for Computational Chemistry Applications of molecular and materials modelling. Impact of Advances in Computing and Communications Technologies on Chemical Science and Technology CSTB Report MD and Computational Chemistry applications on GPUs Susi Lehtola, Antti J. Karttunen:"Free and open source software for computational chemistry education", First published: 23 March 2022, https://doi.org/10.1002/wcms.1610 (Open Access) CCL.NET: Computational Chemistry List, Ltd. See also References Computational fields of study Theoretical chemistry Physical chemistry Chemical physics Computational physics
Computational chemistry
[ "Physics", "Chemistry", "Technology" ]
5,910
[ "Computational fields of study", "Applied and interdisciplinary physics", "Computational physics", "Theoretical chemistry", "Computational chemistry", "Computing and society", "nan", "Physical chemistry", "Chemical physics" ]
6,111
https://en.wikipedia.org/wiki/Chemical%20vapor%20deposition
Chemical vapor deposition (CVD) is a vacuum deposition method used to produce high-quality, and high-performance, solid materials. The process is often used in the semiconductor industry to produce thin films. In typical CVD, the wafer (substrate) is exposed to one or more volatile precursors, which react and/or decompose on the substrate surface to produce the desired deposit. Frequently, volatile by-products are also produced, which are removed by gas flow through the reaction chamber. Microfabrication processes widely use CVD to deposit materials in various forms, including: monocrystalline, polycrystalline, amorphous, and epitaxial. These materials include: silicon (dioxide, carbide, nitride, oxynitride), carbon (fiber, nanofibers, nanotubes, diamond and graphene), fluorocarbons, filaments, tungsten, titanium nitride and various high-κ dielectrics. The term chemical vapour deposition was coined in 1960 by John M. Blocher, Jr. who intended to differentiate chemical from physical vapour deposition (PVD). Types CVD is practiced in a variety of formats. These processes generally differ in the means by which chemical reactions are initiated. Classified by operating conditions: Atmospheric pressure CVD (APCVD) – CVD at atmospheric pressure. Low-pressure CVD (LPCVD) – CVD at sub-atmospheric pressures. Many journal articles and commercial tools use the term reduced pressure CVD (RPCVD) especially for single wafer tools in place of LPCVD which dominates for multi-wafer furnace tube tools. Reduced pressures tend to reduce unwanted gas-phase reactions and improve film uniformity across the wafer. Ultrahigh vacuum CVD (UHVCVD) – CVD at very low pressure, typically below 10−6 Pa (≈ 10−8 torr). Note that in other fields, a lower division between high and ultra-high vacuum is common, often 10−7 Pa. Sub-atmospheric CVD (SACVD) – CVD at sub-atmospheric pressures. Uses tetraethyl orthosilicate (TEOS) and ozone to fill high aspect ratio Si structures with silicon dioxide (SiO2). Most modern CVD is either LPCVD or UHVCVD. Classified by physical characteristics of vapor: Aerosol assisted CVD (AACVD) – CVD in which the precursors are transported to the substrate by means of a liquid/gas aerosol, which can be generated ultrasonically. This technique is suitable for use with non-volatile precursors. Direct liquid injection CVD (DLICVD) – CVD in which the precursors are in liquid form (liquid or solid dissolved in a convenient solvent). Liquid solutions are injected in a vaporization chamber towards injectors (typically car injectors). The precursor vapors are then transported to the substrate as in classical CVD. This technique is suitable for use on liquid or solid precursors. High growth rates can be reached using this technique. Classified by type of substrate heating: Hot wall CVD – CVD in which the chamber is heated by an external power source and the substrate is heated by radiation from the heated chamber walls. Cold wall CVD – CVD in which only the substrate is directly heated either by induction or by passing current through the substrate itself or a heater in contact with the substrate. The chamber walls are at room temperature. Plasma methods (see also Plasma processing): Microwave plasma-assisted CVD (MPCVD) Plasma-enhanced CVD (PECVD) – CVD that utilizes plasma to enhance chemical reaction rates of the precursors. PECVD processing allows deposition at lower temperatures, which is often critical in the manufacture of semiconductors. 
The lower temperatures also allow for the deposition of organic coatings, such as plasma polymers, that have been used for nanoparticle surface functionalization. Remote plasma-enhanced CVD (RPECVD) – Similar to PECVD except that the wafer substrate is not directly in the plasma discharge region. Removing the wafer from the plasma region allows processing temperatures down to room temperature. Low-energy plasma-enhanced chemical vapor deposition (LEPECVD) - CVD employing a high density, low energy plasma to obtain epitaxial deposition of semiconductor materials at high rates and low temperatures. Atomic-layer CVD (ALCVD) – Deposits successive layers of different substances to produce layered, crystalline films. See Atomic layer epitaxy. Combustion chemical vapor deposition (CCVD) – Combustion Chemical Vapor Deposition or flame pyrolysis is an open-atmosphere, flame-based technique for depositing high-quality thin films and nanomaterials. Hot filament CVD (HFCVD) – also known as catalytic CVD (Cat-CVD) or more commonly, initiated CVD, this process uses a hot filament to chemically decompose the source gases. The filament temperature and substrate temperature thus are independently controlled, allowing colder temperatures for better absorption rates at the substrate and higher temperatures necessary for decomposition of precursors to free radicals at the filament. Hybrid physical-chemical vapor deposition (HPCVD) – This process involves both chemical decomposition of precursor gas and vaporization of a solid source. Metalorganic chemical vapor deposition (MOCVD) – This CVD process is based on metalorganic precursors. Rapid thermal CVD (RTCVD) – This CVD process uses heating lamps or other methods to rapidly heat the wafer substrate. Heating only the substrate rather than the gas or chamber walls helps reduce unwanted gas-phase reactions that can lead to particle formation. Vapor-phase epitaxy (VPE) Photo-initiated CVD (PICVD) – This process uses UV light to stimulate chemical reactions. It is similar to plasma processing, given that plasmas are strong emitters of UV radiation. Under certain conditions, PICVD can be operated at or near atmospheric pressure. Laser chemical vapor deposition (LCVD) - This CVD process uses lasers to heat spots or lines on a substrate in semiconductor applications. In MEMS and in fiber production the lasers are used rapidly to break down the precursor gas—process temperature can exceed 2000 °C—to build up a solid structure in much the same way as laser sintering based 3-D printers build up solids from powders. Uses CVD is commonly used to deposit conformal films and augment substrate surfaces in ways that more traditional surface modification techniques are not capable of. CVD is extremely useful in the process of atomic layer deposition at depositing extremely thin layers of material. A variety of applications for such films exist. Gallium arsenide is used in some integrated circuits (ICs) and photovoltaic devices. Amorphous polysilicon is used in photovoltaic devices. Certain carbides and nitrides confer wear-resistance. Polymerization by CVD, perhaps the most versatile of all applications, allows for super-thin coatings which possess some very desirable qualities, such as lubricity, hydrophobicity and weather-resistance to name a few. The CVD of metal-organic frameworks, a class of crystalline nanoporous materials, has recently been demonstrated. 
Recently scaled up as an integrated cleanroom process depositing large-area substrates, the applications for these films are anticipated in gas sensing and low-κ dielectrics. CVD techniques are advantageous for membrane coatings as well, such as those in desalination or water treatment, as these coatings can be sufficiently uniform (conformal) and thin that they do not clog membrane pores. Commercially important materials prepared by CVD Polysilicon Polycrystalline silicon is deposited from trichlorosilane (SiHCl3) or silane (SiH4), using the following reactions: SiHCl3 → Si + Cl2 + HCl SiH4 → Si + 2 H2 This reaction is usually performed in LPCVD systems, with either pure silane feedstock, or a solution of silane with 70–80% nitrogen. Temperatures between 600 and 650 °C and pressures between 25 and 150 Pa yield a growth rate between 10 and 20 nm per minute. An alternative process uses a hydrogen-based solution. The hydrogen reduces the growth rate, but the temperature is raised to 850 or even 1050 °C to compensate. Polysilicon may be grown directly with doping, if gases such as phosphine, arsine or diborane are added to the CVD chamber. Diborane increases the growth rate, but arsine and phosphine decrease it. Silicon dioxide Silicon dioxide (usually called simply "oxide" in the semiconductor industry) may be deposited by several different processes. Common source gases include silane and oxygen, dichlorosilane (SiCl2H2) and nitrous oxide (N2O), or tetraethylorthosilicate (TEOS; Si(OC2H5)4). The reactions are as follows: SiH4 + O2 → SiO2 + 2 H2 SiCl2H2 + 2 N2O → SiO2 + 2 N2 + 2 HCl Si(OC2H5)4 → SiO2 + byproducts The choice of source gas depends on the thermal stability of the substrate; for instance, aluminium is sensitive to high temperature. Silane deposits between 300 and 500 °C, dichlorosilane at around 900 °C, and TEOS between 650 and 750 °C, resulting in a layer of low- temperature oxide (LTO). However, silane produces a lower-quality oxide than the other methods (lower dielectric strength, for instance), and it deposits nonconformally. Any of these reactions may be used in LPCVD, but the silane reaction is also done in APCVD. CVD oxide invariably has lower quality than thermal oxide, but thermal oxidation can only be used in the earliest stages of IC manufacturing. Oxide may also be grown with impurities (alloying or "doping"). This may have two purposes. During further process steps that occur at high temperature, the impurities may diffuse from the oxide into adjacent layers (most notably silicon) and dope them. Oxides containing 5–15% impurities by mass are often used for this purpose. In addition, silicon dioxide alloyed with phosphorus pentoxide ("P-glass") can be used to smooth out uneven surfaces. P-glass softens and reflows at temperatures above 1000 °C. This process requires a phosphorus concentration of at least 6%, but concentrations above 8% can corrode aluminium. Phosphorus is deposited from phosphine gas and oxygen: 4 PH3 + 5 O2 → 2 P2O5 + 6 H2 Glasses containing both boron and phosphorus (borophosphosilicate glass, BPSG) undergo viscous flow at lower temperatures; around 850 °C is achievable with glasses containing around 5 weight % of both constituents, but stability in air can be difficult to achieve. Phosphorus oxide in high concentrations interacts with ambient moisture to produce phosphoric acid. 
Crystals of BPO4 can also precipitate from the flowing glass on cooling; these crystals are not readily etched in the standard reactive plasmas used to pattern oxides, and will result in circuit defects in integrated circuit manufacturing. Besides these intentional impurities, CVD oxide may contain byproducts of the deposition. TEOS produces a relatively pure oxide, whereas silane introduces hydrogen impurities, and dichlorosilane introduces chlorine. Lower temperature deposition of silicon dioxide and doped glasses from TEOS using ozone rather than oxygen has also been explored (350 to 500 °C). Ozone glasses have excellent conformality but tend to be hygroscopic – that is, they absorb water from the air due to the incorporation of silanol (Si-OH) in the glass. Infrared spectroscopy and mechanical strain as a function of temperature are valuable tools for diagnosing such problems. Silicon nitride Silicon nitride is often used as an insulator and chemical barrier in manufacturing ICs. The following two reactions deposit silicon nitride from the gas phase: 3 SiH4 + 4 NH3 → Si3N4 + 12 H2 3 SiCl2H2 + 4 NH3 → Si3N4 + 6 HCl + 6 H2 Silicon nitride deposited by LPCVD contains up to 8% hydrogen. It also experiences strong tensile stress, which may crack films thicker than 200 nm. However, it has higher resistivity and dielectric strength than most insulators commonly available in microfabrication (10^16 Ω·cm and 10 MV/cm, respectively). Another two reactions may be used in plasma to deposit SiNH: 2 SiH4 + N2 → 2 SiNH + 3 H2 SiH4 + NH3 → SiNH + 3 H2 These films have much less tensile stress, but worse electrical properties (resistivity 10^6 to 10^15 Ω·cm, and dielectric strength 1 to 5 MV/cm). Metals Tungsten CVD, used for forming conductive contacts, vias, and plugs on a semiconductor device, is achieved from tungsten hexafluoride (WF6), which may be deposited in two ways: WF6 → W + 3 F2 WF6 + 3 H2 → W + 6 HF Other metals, notably aluminium and copper, can be deposited by CVD. Commercially cost-effective CVD for copper has not been available, although volatile sources exist, such as Cu(hfac)2. Copper is typically deposited by electroplating. Aluminium can be deposited from triisobutylaluminium (TIBAL) and related organoaluminium compounds. CVD for molybdenum, tantalum, titanium, and nickel is widely used. These metals can form useful silicides when deposited onto silicon. Mo, Ta and Ti are deposited by LPCVD, from their pentachlorides. Nickel, molybdenum, and tungsten can be deposited at low temperatures from their carbonyl precursors. In general, for an arbitrary metal M, the chloride deposition reaction is as follows: 2 MCl5 + 5 H2 → 2 M + 10 HCl whereas the carbonyl decomposition reaction can happen spontaneously under thermal treatment or acoustic cavitation and is as follows: M(CO)n → M + n CO The decomposition of metal carbonyls is often violently precipitated by moisture or air, where oxygen reacts with the metal precursor to form metal or metal oxide along with carbon dioxide. Niobium(V) oxide layers can be produced by the thermal decomposition of niobium(V) ethoxide with the loss of diethyl ether according to the equation: 2 Nb(OC2H5)5 → Nb2O5 + 5 C2H5OC2H5 Graphene Many variations of CVD can be utilized to synthesize graphene. Although many advancements have been made, the processes listed below are not commercially viable yet. Carbon source The most popular carbon source that is used to produce graphene is methane gas.
One of the less popular choices is petroleum asphalt, notable for being inexpensive but more difficult to work with. Although methane is the most popular carbon source, hydrogen is required during the preparation process to promote carbon deposition on the substrate. If the flow ratio of methane and hydrogen are not appropriate, it will cause undesirable results. During the growth of graphene, the role of methane is to provide a carbon source, the role of hydrogen is to provide H atoms to corrode amorphous C, and improve the quality of graphene. But excessive H atoms can also corrode graphene. As a result, the integrity of the crystal lattice is destroyed, and the quality of graphene is deteriorated. Therefore, by optimizing the flow rate of methane and hydrogen gases in the growth process, the quality of graphene can be improved. Use of catalyst The use of catalyst is viable in changing the physical process of graphene production. Notable examples include iron nanoparticles, nickel foam, and gallium vapor. These catalysts can either be used in situ during graphene buildup, or situated at some distance away at the deposition area. Some catalysts require another step to remove them from the sample material. The direct growth of high-quality, large single-crystalline domains of graphene on a dielectric substrate is of vital importance for applications in electronics and optoelectronics. Combining the advantages of both catalytic CVD and the ultra-flat dielectric substrate, gaseous catalyst-assisted CVD paves the way for synthesizing high-quality graphene for device applications while avoiding the transfer process. Physical conditions Physical conditions such as surrounding pressure, temperature, carrier gas, and chamber material play a big role in production of graphene. Most systems use LPCVD with pressures ranging from 1 to 1500 Pa. However, some still use APCVD. Low pressures are used more commonly as they help prevent unwanted reactions and produce more uniform thickness of deposition on the substrate. On the other hand, temperatures used range from 800 to 1050 °C. High temperatures translate to an increase of the rate of reaction. Caution has to be exercised as high temperatures do pose higher danger levels in addition to greater energy costs. Carrier gas Hydrogen gas and inert gases such as argon are flowed into the system. These gases act as a carrier, enhancing surface reaction and improving reaction rate, thereby increasing deposition of graphene onto the substrate. Chamber material Standard quartz tubing and chambers are used in CVD of graphene. Quartz is chosen because it has a very high melting point and is chemically inert. In other words, quartz does not interfere with any physical or chemical reactions regardless of the conditions. Methods of analysis of results Raman spectroscopy, X-ray spectroscopy, transmission electron microscopy (TEM), and scanning electron microscopy (SEM) are used to examine and characterize the graphene samples. Raman spectroscopy is used to characterize and identify the graphene particles; X-ray spectroscopy is used to characterize chemical states; TEM is used to provide fine details regarding the internal composition of graphene; SEM is used to examine the surface and topography. Sometimes, atomic force microscopy (AFM) is used to measure local properties such as friction and magnetism. 
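The influence of pressure noted above can be estimated with the standard kinetic-theory expression for the mean free path, λ = kB·T / (√2·π·d²·p). The sketch below is an illustration added here, not a calculation from the article; the molecular diameter (roughly that of methane), the temperature and the assumption of ideal-gas behaviour are all assumptions made for the example.

```python
# Illustrative estimate of the gas mean free path at atmospheric-pressure CVD
# versus low-pressure CVD conditions.
# Assumptions: ideal gas, hard-sphere diameter d ~ 0.38 nm (roughly methane), T = 1300 K.
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 1300.0          # of the order of a graphene CVD growth temperature, K (assumed)
d = 0.38e-9         # effective molecular diameter, m (assumed)

def mean_free_path(p_pa):
    # lambda = kB*T / (sqrt(2) * pi * d^2 * p)
    return kB * T / (math.sqrt(2) * math.pi * d**2 * p_pa)

for label, p in [("APCVD, ~101 kPa", 101_325.0), ("LPCVD, ~100 Pa", 100.0)]:
    print(f"{label}: mean free path ~ {mean_free_path(p) * 1e6:.1f} micrometres")
```

The roughly thousand-fold longer mean free path at LPCVD pressures means far fewer gas-phase collisions above the substrate, which is consistent with the reduced gas-phase reactions and more uniform films described above.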
Cold wall CVD technique can be used to study the underlying surface science involved in graphene nucleation and growth as it allows unprecedented control of process parameters like gas flow rates, temperature and pressure as demonstrated in a recent study. The study was carried out in a home-built vertical cold wall system utilizing resistive heating by passing direct current through the substrate. It provided conclusive insight into a typical surface-mediated nucleation and growth mechanism involved in two-dimensional materials grown using catalytic CVD under conditions sought out in the semiconductor industry. Graphene nanoribbon In spite of graphene's exciting electronic and thermal properties, it is unsuitable as a transistor for future digital devices, due to the absence of a bandgap between the conduction and valence bands. This makes it impossible to switch between on and off states with respect to electron flow. Scaling things down, graphene nanoribbons of less than 10 nm in width do exhibit electronic bandgaps and are therefore potential candidates for digital devices. Precise control over their dimensions, and hence electronic properties, however, represents a challenging goal, and the ribbons typically possess rough edges that are detrimental to their performance. Diamond CVD can be used to produce a synthetic diamond by creating the circumstances necessary for carbon atoms in a gas to settle on a substrate in crystalline form. CVD of diamonds has received much attention in the materials sciences because it allows many new applications that had previously been considered too expensive. CVD diamond growth typically occurs under low pressure (1–27 kPa; 0.145–3.926 psi; 7.5–203 Torr) and involves feeding varying amounts of gases into a chamber, energizing them and providing conditions for diamond growth on the substrate. The gases always include a carbon source, and typically include hydrogen as well, though the amounts used vary greatly depending on the type of diamond being grown. Energy sources include hot filament, microwave power, and arc discharges, among others. The energy source is intended to generate a plasma in which the gases are broken down and more complex chemistries occur. The actual chemical process for diamond growth is still under study and is complicated by the very wide variety of diamond growth processes used. Using CVD, films of diamond can be grown over large areas of substrate with control over the properties of the diamond produced. In the past, when high pressure high temperature (HPHT) techniques were used to produce a diamond, the result was typically very small free-standing diamonds of varying sizes. With CVD diamond, growth areas of greater than fifteen centimeters (six inches) in diameter have been achieved, and much larger areas are likely to be successfully coated with diamond in the future. Improving this process is key to enabling several important applications. The growth of diamond directly on a substrate allows the addition of many of diamond's important qualities to other materials. Since diamond has the highest thermal conductivity of any bulk material, layering diamond onto high heat-producing electronics (such as optics and transistors) allows the diamond to be used as a heat sink. Diamond films are being grown on valve rings, cutting tools, and other objects that benefit from diamond's hardness and exceedingly low wear rate. In each case the diamond growth must be carefully done to achieve the necessary adhesion onto the substrate. 
Diamond's very high scratch resistance and thermal conductivity, combined with a lower coefficient of thermal expansion than Pyrex glass, a coefficient of friction close to that of Teflon (polytetrafluoroethylene) and strong lipophilicity would make it a nearly ideal non-stick coating for cookware if large substrate areas could be coated economically. CVD growth allows one to control the properties of the diamond produced. In the area of diamond growth, the word "diamond" is used as a description of any material primarily made up of sp3-bonded carbon, and there are many different types of diamond included in this. By regulating the processing parameters—especially the gases introduced, but also including the pressure the system is operated under, the temperature of the diamond, and the method of generating plasma—many different materials that can be considered diamond can be made. Single-crystal diamond can be made containing various dopants. Polycrystalline diamond consisting of grain sizes from several nanometers to several micrometers can be grown. Some polycrystalline diamond grains are surrounded by thin, non-diamond carbon, while others are not. These different factors affect the diamond's hardness, smoothness, conductivity, optical properties and more. Chalcogenides Commercially, mercury cadmium telluride is of continuing interest for detection of infrared radiation. Consisting of an alloy of CdTe and HgTe, this material can be prepared from the dimethyl derivatives of the respective elements. See also Apollo Diamond Bubbler cylinder Carbonyl metallurgy Electrostatic spray assisted vapour deposition Element Six Ion plating Metalorganic vapour phase epitaxy Virtual metrology Lisa McElwee-White List of metal-organic chemical vapour deposition precursors List of synthetic diamond manufacturers References Further reading Okada K. (2007). "Plasma-enhanced chemical vapor deposition of nanocrystalline diamond" Sci. Technol. Adv. Mater. 8, 624 free-download review Liu T., Raabe D. and Zaefferer S. (2008). "A 3D tomographic EBSD analysis of a CVD diamond thin film" Sci. Technol. Adv. Mater. 9 (2008) 035013 free-download Wild, Christoph (2008). "CVD Diamond Properties and Useful Formula" CVD Diamond Booklet PDF free-download Hess, Dennis W. (1988). Chemical vapor deposition of dielectric and metal films . Free-download from Electronic Materials and Processing: Proceedings of the First Electronic Materials and Processing Congress held in conjunction with the 1988 World Materials Congress Chicago, Illinois, USA, 24–30 September 1988, Edited by Prabjit Singh (Sponsored by the Electronic Materials and Processing Division of ASM International). Chemical processes Coatings Glass coating and surface modification Industrial processes Plasma processing Semiconductor device fabrication Synthetic diamond Thin film deposition Vacuum Forming processes
Chemical vapor deposition
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
5,125
[ "Glass chemistry", "Thin film deposition", "Microtechnology", "Coatings", "Thin films", "Vacuum", "Chemical processes", "Semiconductor device fabrication", "nan", "Chemical process engineering", "Chemical vapor deposition", "Planes (geometry)", "Solid state engineering", "Glass coating and...
6,115
https://en.wikipedia.org/wiki/P%20versus%20NP%20problem
The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved. Here, "quickly" means an algorithm that solves the task and runs in polynomial time (as opposed to, say, exponential time) exists, meaning the task completion time is bounded above by a polynomial function of the size of the input to the algorithm. The general class of questions that some algorithm can answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if provided with an answer, it can be verified quickly. The class of questions where an answer can be verified in polynomial time is "NP", standing for "nondeterministic polynomial time". An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time. The problem has been called the most important open problem in computer science. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution. 
Example 
Consider the following yes/no problem: given an incomplete Sudoku grid of size n² × n², is there at least one legal solution where every row, column, and n × n square contains the integers 1 through n²? It is straightforward to verify "yes" instances of this generalized Sudoku problem given a candidate solution (see the verification sketch below). However, it is not known whether there is a polynomial-time algorithm that can correctly answer "yes" or "no" to all instances of this problem. Therefore, generalized Sudoku is in NP (quickly verifiable), but may or may not be in P (quickly solvable). (It is necessary to consider a generalized version of Sudoku, as any fixed-size Sudoku has only a finite number of possible grids. In this case the problem is in P, as the answer can be found by table lookup.) 
History 
The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" (and independently by Leonid Levin in 1973). Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the NSA, speculating that cracking a sufficiently complex code would require time exponential in the length of the key. If proved (and Nash was suitably skeptical), this would imply what is now called P ≠ NP, since a proposed key can be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time, and pointed out one of the most important consequences: that if so, the discovery of mathematical proofs could be automated. 
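As a concrete illustration of the "quickly verifiable" half of the generalized Sudoku example above, the following sketch checks a candidate solution in time polynomial in the grid size. It is illustrative only; the function name, the 0-for-empty encoding, and the list-of-lists representation are assumptions chosen for this example, not part of the original problem statement.

def verify_generalized_sudoku(puzzle, solution, n):
    # Polynomial-time verifier: checks a candidate solution for an
    # n^2 x n^2 generalized Sudoku puzzle (0 marks an empty cell).
    size = n * n
    symbols = set(range(1, size + 1))
    # The solution must agree with every given clue.
    for r in range(size):
        for c in range(size):
            if puzzle[r][c] != 0 and puzzle[r][c] != solution[r][c]:
                return False
    # Every row and every column must contain 1..n^2 exactly once.
    for i in range(size):
        if set(solution[i]) != symbols:
            return False
        if {solution[r][i] for r in range(size)} != symbols:
            return False
    # Every n x n box must contain 1..n^2 exactly once.
    for br in range(0, size, n):
        for bc in range(0, size, n):
            box = {solution[br + dr][bc + dc] for dr in range(n) for dc in range(n)}
            if box != symbols:
                return False
    return True

Each check visits every cell a constant number of times, so the whole verification takes on the order of n⁴ steps for an n² × n² grid; no comparably fast method is known for producing the solution in the first place.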
Context The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem). In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is deterministic (given the computer's present state and any inputs, there is only one possible action that the computer might take) and sequential (it performs actions one after the other). In this theory, the class P consists of all decision problems (defined below) solvable on a deterministic sequential machine in a duration polynomial in the size of the input; the class NP consists of all decision problems whose positive solutions are verifiable in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine. Clearly, P ⊆ NP. Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes: Is P equal to NP? Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions. Confidence that P ≠ NP has been increasing – in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, the 2019 answers became 99% believed P ≠ NP. These polls do not imply whether P = NP, Gasarch himself stated: "This does not bring us any closer to solving P=?NP or to knowing when it will be solved, but it attempts to be an objective report on the subjective opinion of this era." NP-completeness To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are problems that any other NP problem is reducible to in polynomial time and whose solution is still verifiable in polynomial time. That is, any NP problem can be transformed into any NP-complete problem. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP. NP-hard problems are those at least as hard as NP problems; i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP; i.e., they need not have solutions verifiable in polynomial time. For instance, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, so any instance of any problem in NP can be transformed mechanically into a Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems are NP-complete, and no fast algorithm for any of them is known. From the definition alone it is unintuitive that NP-complete problems exist; however, a trivial NP-complete problem can be formulated as follows: given a Turing machine M guaranteed to halt in polynomial time, does a polynomial-size input that M will accept exist? It is in NP because (given an input) it is simple to check whether M accepts the input by simulating M; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine M that takes the solution to be verified as input. 
Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists. The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete, proof by reduction provided a simpler way to show that many other problems are also NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to complete Latin squares in polynomial time. This in turn gives a solution to the problem of partitioning tri-partite graphs into triangles, which could then be used to find solutions for the special case of SAT known as 3-SAT, which then provides a solution for general Boolean satisfiability. So a polynomial-time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial-time solution of satisfiability, which in turn can be used to solve any other NP problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem". 
Harder problems 
Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have exponential running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in O(2^p(n)) time, where p(n) is a polynomial function of n. A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an N × N board and similar problems for other board games. The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974 that every algorithm that decides the truth of Presburger statements of length n has a runtime of at least 2^(2^(cn)) for some constant c. Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem. They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all. It is also possible to consider questions other than decision problems. One such class, consisting of counting problems, is called #P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?". Clearly, a #P problem must be at least as hard as the corresponding NP problem, since a count of solutions immediately tells whether at least one solution exists, namely whenever the count is greater than zero. 
Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial time solution to any of them would allow a polynomial time solution to all other #P problems. 
Problems in NP not known to be in P or NP-complete 
In 1975, Richard E. Ladner showed that if P ≠ NP, then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem, and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete. The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best known algorithm for this problem, due to László Babai, runs in quasi-polynomial time. The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding, given integers n and k, whether n has a nontrivial factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The most efficient known algorithm for integer factorization is the general number field sieve, which runs in expected time that is sub-exponential but super-polynomial in the number of bits of the integer to be factored. The best known quantum algorithm for this problem, Shor's algorithm, runs in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes. 
Does P mean "easy"? 
All of the above discussion has assumed that P means "easy" and "not in P" means "difficult", an assumption known as Cobham's thesis. It is a common assumption in complexity theory, but there are caveats. First, it can be false in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, rendering it impractical. For example, the problem of deciding whether a graph G contains H as a minor, where H is fixed, can be solved in a running time of O(n²), where n is the number of vertices in G. However, the big O notation hides a constant that depends superexponentially on H; the constant is greater than 2↑↑(2↑↑(2↑↑(h/2))) (using Knuth's up-arrow notation), where h is the number of vertices in H. On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to the problem in practice. 
There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem, and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms. Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms. Reasons to believe P ≠ NP or P = NP Cook provides a restatement of the problem in The P Versus NP Problem as "Does P = NP?" According to polls, most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3,000 important known NP-complete problems (see List of NP-complete problems). These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP = co-NP and P = PH. It is also intuitively argued that the existence of problems that are hard to solve but whose solutions are easy to verify matches real-world experience. On the other hand, some researchers believe that it is overconfident to believe P ≠ NP and that researchers should also explore proofs of P = NP. For example, in 2002 these statements were made: DLIN vs NLIN When one substitutes "linear time on a multitape Turing machine" for "polynomial time" in the definitions of P and NP, one obtains the classes DLIN and NLIN. It is known that DLIN ≠ NLIN. Consequences of solution One of the reasons the problem attracts so much attention is the consequences of the possible answers. Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well. P = NP A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. The potential consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields. It is also very possible that a proof would not lead to practical algorithms for NP-complete problems. The formulation of the problem does not require that the bounding polynomial be small or even specifically known. A non-constructive proof might show a solution exists without specifying either an algorithm to obtain it or a specific bound. Even if the proof is constructive, showing an explicit bounding polynomial and algorithmic details, if the polynomial is not very low-order the algorithm might not be sufficiently efficient in practice. In this case the initial proof would be mainly of interest to theoreticians, but the knowledge that polynomial time solutions are possible would surely spur research into better (and possibly practical) methods to achieve them. A solution showing P = NP could upend the field of cryptography, which relies on certain problems being difficult. 
A constructive and efficient solution to an NP-complete problem such as 3-SAT would break most existing cryptosystems including: Existing implementations of public-key cryptography, a foundation for many modern security applications such as secure financial transactions over the Internet. Symmetric ciphers such as AES or 3DES, used for the encryption of communications data. Cryptographic hashing, which underlies blockchain cryptocurrencies such as Bitcoin, and is used to authenticate software updates. For these applications, finding a pre-image that hashes to a given value must be difficult, ideally taking exponential time. If P = NP, then this can take polynomial time, through reduction to SAT. These would need modification or replacement with information-theoretically secure solutions that do not assume P ≠ NP. There are also enormous benefits that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete; making these problems efficiently solvable could considerably advance life sciences and biotechnology. These changes could be insignificant compared to the revolution that efficiently solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics: Similarly, Stephen Cook (assuming not only a proof, but a practically efficient algorithm) says: Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance, Fermat's Last Theorem took over three centuries to prove. A method guaranteed to find a proof if a "reasonable" size proof exists, would essentially end this struggle. Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof: P ≠ NP A proof of P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would represent a great advance in computational complexity theory and guide future research. It would demonstrate that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place. P ≠ NP still leaves open the average-case complexity of hard problems in NP. For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question. These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. 
The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds. Results about difficulty of proof Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are insufficient for answering the question, suggesting novel technical approaches are required. As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into one of the following classifications, all insufficient to prove P ≠ NP: These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results. These barriers lead some computer scientists to suggest the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). An independence result could imply that either P ≠ NP and this is unprovable in (e.g.) ZFC, or that P = NP but it is unprovable in ZFC that any polynomial-time algorithms are correct. However, if the problem is undecidable even with much weaker assumptions extending the Peano axioms for integer arithmetic, then nearly polynomial-time algorithms exist for all NP problems. Therefore, assuming (as most complexity theorists do) some NP problems don't have efficient algorithms, proofs of independence with those techniques are impossible. This also implies proving independence from PA or ZFC with current techniques is no easier than proving all NP problems have efficient algorithms. Logical characterizations The P = NP problem can be restated as certain classes of logical statements, as a result of work in descriptive complexity. Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P are expressible in first-order logic with the addition of a suitable least fixed-point combinator. Recursive functions can be defined with this and the order relation. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P. Similarly, NP is the set of languages expressible in existential second-order logic—that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic. Thus, the question "is P a proper subset of NP" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?". The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH). Polynomial-time algorithms No known algorithm for a NP-complete problem runs in polynomial time. 
However, there are algorithms known for NP-complete problems with the property that if P = NP, then the algorithm runs in polynomial time on accepting instances (although with enormous constants, making the algorithm impractical). These algorithms do not, however, qualify as polynomial time, because their running time on rejecting instances is not polynomial. The following algorithm, due to Levin (without any citation), is such an example. It correctly accepts the NP-complete language SUBSET-SUM. It runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP: 

// Algorithm that accepts the NP-complete language SUBSET-SUM.
//
// This is a polynomial-time algorithm if and only if P = NP.
//
// "Polynomial-time" means it returns "yes" in polynomial time when
// the answer should be "yes", and runs forever when it is "no".
//
// Input: S = a finite set of integers
// Output: "yes" if any subset of S adds up to 0.
//         Runs forever with no output otherwise.
// Note: "Program number M" is the program obtained by
//       writing the integer M in binary, then
//       considering that string of bits to be a
//       program. Every possible program can be
//       generated this way, though most do nothing
//       because of syntax errors.
FOR K = 1...∞
    FOR M = 1...K
        Run program number M for K steps with input S
        IF the program outputs a list of distinct integers
            AND the integers are all in S
            AND the integers sum to 0
        THEN
            OUTPUT "yes" and HALT

This is a polynomial-time algorithm accepting an NP-complete language only if P = NP. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as a semi-algorithm). This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time is b bits long, the above algorithm will try at least 2^b − 1 other programs first. 
Formal definitions 
P and NP 
A decision problem is a problem that takes as input some string w over an alphabet Σ, and outputs "yes" or "no". If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that produces the correct answer for any input string of length n in at most cn^k steps, where k and c are constants independent of the input string, then we say that the problem can be solved in polynomial time and we place it in the class P. Formally, P is the set of languages that can be decided by a deterministic polynomial-time Turing machine. That is, P = { L : L = L(M) for some deterministic polynomial-time Turing machine M }, where L(M) = { w ∈ Σ* : M accepts w }, and a deterministic polynomial-time Turing machine is a deterministic Turing machine M that satisfies two conditions: M halts on all inputs w, and there exists k such that T_M(n) ∈ O(n^k), where O refers to the big O notation, T_M(n) = max{ t_M(w) : w ∈ Σ^n }, and t_M(w) is the number of steps M takes to halt on input w. NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach uses the concept of certificate and verifier. Formally, NP is the set of languages over a finite alphabet that have a verifier that runs in polynomial time. The following defines a "verifier": Let L be a language over a finite alphabet Σ. L ∈ NP if, and only if, there exists a binary relation R ⊆ Σ* × Σ* and a positive integer k such that the following two conditions are satisfied: For all x ∈ Σ*, x ∈ L if and only if there exists y ∈ Σ* such that (x, y) ∈ R and |y| ∈ O(|x|^k); and the language L_R = { x#y : (x, y) ∈ R } over Σ ∪ {#} is decidable by a deterministic Turing machine in polynomial time. A Turing machine that decides L_R is called a verifier for L, and a y such that (x, y) ∈ R is called a certificate of membership of x in L. Not all verifiers must be polynomial-time. However, for L to be in NP, there must be a verifier that runs in polynomial time. 
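To make the verifier definition above concrete, here is a minimal sketch of a polynomial-time verifier for the SUBSET-SUM language used earlier; the certificate y is a proposed subset of the instance. The function and variable names are assumptions chosen for illustration, not notation from the article.

def verify_subset_sum(s, certificate):
    # Polynomial-time check that `certificate` is a non-empty subset of the
    # instance `s` whose elements sum to 0.
    cert = set(certificate)
    return bool(cert) and cert <= set(s) and sum(cert) == 0

# For example, the instance {-7, -3, -2, 5, 8} is a "yes" instance of
# SUBSET-SUM, as witnessed by the certificate {-3, -2, 5}.
assert verify_subset_sum({-7, -3, -2, 5, 8}, {-3, -2, 5})

The check runs in time polynomial in the combined length of instance and certificate; what is unknown is whether such a certificate can always be found in polynomial time.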
Example 
Let COMPOSITE = { x ∈ N : x = pq for integers p, q > 1 }. Whether a value of x is composite is equivalent to whether x is a member of COMPOSITE. It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations). COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test. 
NP-completeness 
There are many equivalent ways of describing NP-completeness. Let L be a language over a finite alphabet Σ. L is NP-complete if, and only if, the following two conditions are satisfied: L ∈ NP; and any L′ in NP is polynomial-time-reducible to L (written as L′ ≤p L), where L′ ≤p L holds if, and only if, the following two conditions are satisfied: There exists f : Σ* → Σ* such that for all w in Σ* we have: w ∈ L′ if and only if f(w) ∈ L; and there exists a polynomial-time Turing machine that halts with f(w) on its tape on any input w. Alternatively, if L ∈ NP, and there is another NP-complete problem that can be polynomial-time reduced to L, then L is NP-complete. This is a common way of proving some new problem is NP-complete. 
Claimed solutions 
While the P versus NP problem is generally considered unsolved, many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger compiled a list of 116 purported proofs from 1986 to 2016, of which 61 were proofs of P = NP, 49 were proofs of P ≠ NP, and 6 proved other results, e.g. that the problem is undecidable. Some attempts at resolving P versus NP have received brief media attention, though these attempts have been refuted. 
Popular culture 
The film Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem. In "Treehouse of Horror VI", the sixth episode of the seventh season of The Simpsons, the equation P = NP is seen shortly after Homer accidentally stumbles into the "third dimension". In "Solve for X", the second episode of season 2 of Elementary, Sherlock and Watson investigate the murders of mathematicians who were attempting to solve P versus NP. 
Similar problems 
The R vs. RE problem, where R is the analog of the class P and RE is the analog of the class NP. These classes are not equal, because undecidable but verifiable problems do exist, for example, Hilbert's tenth problem, which is RE-complete. A similar problem exists in the theory of algebraic complexity: the VP vs. VNP problem. This problem has not been solved yet. 
See also 
Game complexity List of unsolved problems in mathematics Unique games conjecture Unsolved problems in computer science 
Notes References Sources Further reading Online drafts External links 
Aviad Rubinstein's Hardness of Approximation Between P and NP, winner of the ACM's 2017 Doctoral Dissertation Award. 
1956 in computing Computer-related introductions in 1956 Conjectures Mathematical optimization Millennium Prize Problems Structural complexity theory Unsolved problems in computer science Unsolved problems in mathematics
P versus NP problem
[ "Mathematics" ]
6,469
[ "Mathematical analysis", "Unsolved problems in mathematics", "Unsolved problems in computer science", "Conjectures", "Millennium Prize Problems", "Mathematical optimization", "Mathematical problems" ]
6,122
https://en.wikipedia.org/wiki/Continuous%20function
In mathematics, a continuous function is a function such that a small variation of the argument induces a small variation of the value of the function. This implies there are no abrupt changes in value, known as discontinuities. More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity. Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology. A stronger form of continuity is uniform continuity. In order theory, especially in domain theory, a related concept of continuity is Scott continuity. As an example, the function denoting the height of a growing flower at time t would be considered continuous. In contrast, the function denoting the amount of money in a bank account at time t would be considered discontinuous since it "jumps" at each point in time when money is deposited or withdrawn. 
History 
A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy defined continuity of y = f(x) as follows: an infinitely small increment of the independent variable x always produces an infinitely small change of the dependent variable y (see e.g. Cours d'Analyse, p. 34). Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today (see microcontinuity). The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work wasn't published until the 1930s. Like Bolzano, Karl Weierstrass denied continuity of a function at a point c unless it was defined at and on both sides of c, but Édouard Goursat allowed the function to be defined only at and on one side of c, and Camille Jordan allowed it even if the function was defined only at c. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854. 
Real functions 
Definition 
A real function, that is, a function from real numbers to real numbers, can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve whose domain is the entire real line. A more mathematically rigorous definition is given below. Continuity of real functions is usually defined in terms of limits. A function f with variable x is continuous at the real number c if the limit of f(x), as x tends to c, exists and is equal to f(c). There are several different definitions of the (global) continuity of a function, which depend on the nature of its domain. A function is continuous on an open interval if the interval is contained in the function's domain and the function is continuous at every interval point. 
A function that is continuous on the interval (the whole real line) is often called simply a continuous function; one also says that such a function is continuous everywhere. For example, all polynomial functions are continuous everywhere. A function is continuous on a semi-open or a closed interval; if the interval is contained in the domain of the function, the function is continuous at every interior point of the interval, and the value of the function at each endpoint that belongs to the interval is the limit of the values of the function when the variable tends to the endpoint from the interior of the interval. For example, the function is continuous on its whole domain, which is the closed interval Many commonly encountered functions are partial functions that have a domain formed by all real numbers, except some isolated points. Examples include the reciprocal function and the tangent function When they are continuous on their domain, one says, in some contexts, that they are continuous, although they are not continuous everywhere. In other contexts, mainly when one is interested in their behavior near the exceptional points, one says they are discontinuous. A partial function is discontinuous at a point if the point belongs to the topological closure of its domain, and either the point does not belong to the domain of the function or the function is not continuous at the point. For example, the functions and are discontinuous at , and remain discontinuous whichever value is chosen for defining them at . A point where a function is discontinuous is called a discontinuity. Using mathematical notation, several ways exist to define continuous functions in the three senses mentioned above. Let be a function defined on a subset of the set of real numbers. This subset is the domain of . Some possible choices include : i.e., is the whole set of real numbers. or, for and real numbers, : is a closed interval, or : is an open interval. In the case of the domain being defined as an open interval, and do not belong to , and the values of and do not matter for continuity on . Definition in terms of limits of functions The function is continuous at some point of its domain if the limit of as x approaches c through the domain of f, exists and is equal to In mathematical notation, this is written as In detail this means three conditions: first, has to be defined at (guaranteed by the requirement that is in the domain of ). Second, the limit of that equation has to exist. Third, the value of this limit must equal (Here, we have assumed that the domain of f does not have any isolated points.) Definition in terms of neighborhoods A neighborhood of a point c is a set that contains, at least, all points within some fixed distance of c. Intuitively, a function is continuous at a point c if the range of f over the neighborhood of c shrinks to a single point as the width of the neighborhood around c shrinks to zero. More precisely, a function f is continuous at a point c of its domain if, for any neighborhood there is a neighborhood in its domain such that whenever As neighborhoods are defined in any topological space, this definition of a continuous function applies not only for real functions but also when the domain and the codomain are topological spaces and is thus the most general definition. It follows that a function is automatically continuous at every isolated point of its domain. For example, every real-valued function on the integers is continuous. 
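The limit-based definition can be probed numerically. The sketch below is purely illustrative (sampling finitely many points can suggest but never prove continuity, and the function and names are assumptions chosen for the example): for f(x) = x² at c = 2, shrinking the neighbourhood around c shrinks the observed variation of f, as the definition requires.

def max_deviation(f, c, delta, samples=1000):
    # Largest observed |f(x) - f(c)| over sample points with 0 < |x - c| < delta.
    worst = 0.0
    for i in range(1, samples + 1):
        offset = delta * i / (samples + 1)
        for x in (c - offset, c + offset):
            worst = max(worst, abs(f(x) - f(c)))
    return worst

for delta in (1.0, 0.1, 0.01):
    print(delta, max_deviation(lambda x: x * x, 2.0, delta))
# The observed deviation shrinks roughly in proportion to delta (about 4*delta
# near c = 2), consistent with continuity of x -> x^2 at that point.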
Definition in terms of limits of sequences One can instead require that for any sequence of points in the domain which converges to c, the corresponding sequence converges to In mathematical notation, Weierstrass and Jordan definitions (epsilon–delta) of continuous functions Explicitly including the definition of the limit of a function, we obtain a self-contained definition: Given a function as above and an element of the domain , is said to be continuous at the point when the following holds: For any positive real number however small, there exists some positive real number such that for all in the domain of with the value of satisfies Alternatively written, continuity of at means that for every there exists a such that for all : More intuitively, we can say that if we want to get all the values to stay in some small neighborhood around we need to choose a small enough neighborhood for the values around If we can do that no matter how small the neighborhood is, then is continuous at In modern terms, this is generalized by the definition of continuity of a function with respect to a basis for the topology, here the metric topology. Weierstrass had required that the interval be entirely within the domain , but Jordan removed that restriction. Definition in terms of control of the remainder In proofs and numerical analysis, we often need to know how fast limits are converging, or in other words, control of the remainder. We can formalize this to a definition of continuity. A function is called a control function if C is non-decreasing A function is C-continuous at if there exists such a neighbourhood that A function is continuous in if it is C-continuous for some control function C. This approach leads naturally to refining the notion of continuity by restricting the set of admissible control functions. For a given set of control functions a function is if it is for some For example, the Lipschitz and Hölder continuous functions of exponent below are defined by the set of control functions respectively Definition using oscillation Continuity can also be defined in terms of oscillation: a function f is continuous at a point if and only if its oscillation at that point is zero; in symbols, A benefit of this definition is that it discontinuity: the oscillation gives how the function is discontinuous at a point. This definition is helpful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than (hence a set) – and gives a rapid proof of one direction of the Lebesgue integrability condition. The oscillation is equivalent to the definition by a simple re-arrangement and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given there is no that satisfies the definition, then the oscillation is at least and conversely if for every there is a desired the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space. Definition using the hyperreals Cauchy defined the continuity of a function in the following intuitive terms: an infinitesimal change in the independent variable corresponds to an infinitesimal change of the dependent variable (see Cours d'analyse, page 34). Non-standard analysis is a way of making this mathematically rigorous. The real line is augmented by adding infinite and infinitesimal numbers to form the hyperreal numbers. 
In nonstandard analysis, continuity can be defined as follows. (see microcontinuity). In other words, an infinitesimal increment of the independent variable always produces an infinitesimal change of the dependent variable, giving a modern expression to Augustin-Louis Cauchy's definition of continuity. Construction of continuous functions Checking the continuity of a given function can be simplified by checking one of the above defining properties for the building blocks of the given function. It is straightforward to show that the sum of two functions, continuous on some domain, is also continuous on this domain. Given then the (defined by for all ) is continuous in The same holds for the , (defined by for all ) is continuous in Combining the above preservations of continuity and the continuity of constant functions and of the identity function one arrives at the continuity of all polynomial functions such as (pictured on the right). In the same way, it can be shown that the (defined by for all such that ) is continuous in This implies that, excluding the roots of the (defined by for all , such that ) is also continuous on . For example, the function (pictured) is defined for all real numbers and is continuous at every such point. Thus, it is a continuous function. The question of continuity at does not arise since is not in the domain of There is no continuous function that agrees with for all Since the function sine is continuous on all reals, the sinc function is defined and continuous for all real However, unlike the previous example, G be extended to a continuous function on real numbers, by the value to be 1, which is the limit of when x approaches 0, i.e., Thus, by setting the sinc-function becomes a continuous function on all real numbers. The term is used in such cases when (re)defining values of a function to coincide with the appropriate limits make a function continuous at specific points. A more involved construction of continuous functions is the function composition. Given two continuous functions their composition, denoted as and defined by is continuous. This construction allows stating, for example, that is continuous for all Examples of discontinuous functions An example of a discontinuous function is the Heaviside step function , defined by Pick for instance . Then there is no around , i.e. no open interval with that will force all the values to be within the of , i.e. within . Intuitively, we can think of this type of discontinuity as a sudden jump in function values. Similarly, the signum or sign function is discontinuous at but continuous everywhere else. Yet another example: the function is continuous everywhere apart from . Besides plausible continuities and discontinuities like above, there are also functions with a behavior, often coined pathological, for example, Thomae's function, is continuous at all irrational numbers and discontinuous at all rational numbers. In a similar vein, Dirichlet's function, the indicator function for the set of rational numbers, is nowhere continuous. 
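The removable discontinuity of sin(x)/x at 0 and the jump of the Heaviside step function described above can be contrasted with a short numerical sketch. It is illustrative only; the step function is taken with one common convention, H(0) = 1, which the article leaves unspecified here.

import math

def sinc(x):
    # sin(x)/x extended by its limit, 1, at x = 0 (the "removable" case).
    return math.sin(x) / x if x != 0 else 1.0

def heaviside(x):
    # Step function, using one common convention at 0.
    return 1.0 if x >= 0 else 0.0

for h in (0.1, 0.001, 0.00001):
    print(h, sinc(-h), sinc(h), heaviside(-h), heaviside(h))
# The values of sinc on both sides approach 1, the value assigned at 0, so the
# extended function is continuous there; the step function keeps a gap of size 1
# no matter how small h becomes.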
Properties A useful lemma Let be a function that is continuous at a point and be a value such Then throughout some neighbourhood of Proof: By the definition of continuity, take , then there exists such that Suppose there is a point in the neighbourhood for which then we have the contradiction Intermediate value theorem The intermediate value theorem is an existence theorem, based on the real number property of completeness, and states: If the real-valued function f is continuous on the closed interval and k is some number between and then there is some number such that For example, if a child grows from 1 m to 1.5 m between the ages of two and six years, then, at some time between two and six years of age, the child's height must have been 1.25 m. As a consequence, if f is continuous on and and differ in sign, then, at some point must equal zero. Extreme value theorem The extreme value theorem states that if a function f is defined on a closed interval (or any closed and bounded set) and is continuous there, then the function attains its maximum, i.e. there exists with for all The same is true of the minimum of f. These statements are not, in general, true if the function is defined on an open interval (or any set that is not both closed and bounded), as, for example, the continuous function defined on the open interval (0,1), does not attain a maximum, being unbounded above. Relation to differentiability and integrability Every differentiable function is continuous, as can be shown. The converse does not hold: for example, the absolute value function is everywhere continuous. However, it is not differentiable at (but is so everywhere else). Weierstrass's function is also everywhere continuous but nowhere differentiable. The derivative f′(x) of a differentiable function f(x) need not be continuous. If f′(x) is continuous, f(x) is said to be continuously differentiable. The set of such functions is denoted More generally, the set of functions (from an open interval (or open subset of ) to the reals) such that f is times differentiable and such that the -th derivative of f is continuous is denoted See differentiability class. In the field of computer graphics, properties related (but not identical) to are sometimes called (continuity of position), (continuity of tangency), and (continuity of curvature); see Smoothness of curves and surfaces. Every continuous function is integrable (for example in the sense of the Riemann integral). The converse does not hold, as the (integrable but discontinuous) sign function shows. Pointwise and uniform limits Given a sequence of functions such that the limit exists for all , the resulting function is referred to as the pointwise limit of the sequence of functions The pointwise limit function need not be continuous, even if all functions are continuous, as the animation at the right shows. However, f is continuous if all functions are continuous and the sequence converges uniformly, by the uniform convergence theorem. This theorem can be used to show that the exponential functions, logarithms, square root function, and trigonometric functions are continuous. Directional Continuity Discontinuous functions may be discontinuous in a restricted way, giving rise to the concept of directional continuity (or right and left continuous functions) and semi-continuity. Roughly speaking, a function is if no jump occurs when the limit point is approached from the right. 
Formally, f is said to be right-continuous at the point c if the following holds: For any number however small, there exists some number such that for all x in the domain with the value of will satisfy This is the same condition as continuous functions, except it is required to hold for x strictly larger than c only. Requiring it instead for all x with yields the notion of functions. A function is continuous if and only if it is both right-continuous and left-continuous. Semicontinuity A function f is if, roughly, any jumps that might occur only go down, but not up. That is, for any there exists some number such that for all x in the domain with the value of satisfies The reverse condition is . Continuous functions between metric spaces The concept of continuous real-valued functions can be generalized to functions between metric spaces. A metric space is a set equipped with a function (called metric) that can be thought of as a measurement of the distance of any two elements in X. Formally, the metric is a function that satisfies a number of requirements, notably the triangle inequality. Given two metric spaces and and a function then is continuous at the point (with respect to the given metrics) if for any positive real number there exists a positive real number such that all satisfying will also satisfy As in the case of real functions above, this is equivalent to the condition that for every sequence in with limit we have The latter condition can be weakened as follows: is continuous at the point if and only if for every convergent sequence in with limit , the sequence is a Cauchy sequence, and is in the domain of . The set of points at which a function between metric spaces is continuous is a set – this follows from the definition of continuity. This notion of continuity is applied, for example, in functional analysis. A key statement in this area says that a linear operator between normed vector spaces and (which are vector spaces equipped with a compatible norm, denoted ) is continuous if and only if it is bounded, that is, there is a constant such that for all Uniform, Hölder and Lipschitz continuity The concept of continuity for functions between metric spaces can be strengthened in various ways by limiting the way depends on and c in the definition above. Intuitively, a function f as above is uniformly continuous if the does not depend on the point c. More precisely, it is required that for every real number there exists such that for every with we have that Thus, any uniformly continuous function is continuous. The converse does not generally hold but holds when the domain space X is compact. Uniformly continuous maps can be defined in the more general situation of uniform spaces. A function is Hölder continuous with exponent α (a real number) if there is a constant K such that for all the inequality holds. Any Hölder continuous function is uniformly continuous. The particular case is referred to as Lipschitz continuity. That is, a function is Lipschitz continuous if there is a constant K such that the inequality holds for any The Lipschitz condition occurs, for example, in the Picard–Lindelöf theorem concerning the solutions of ordinary differential equations. Continuous functions between topological spaces Another, more abstract, notion of continuity is the continuity of functions between topological spaces in which there generally is no formal notion of distance, as there is in the case of metric spaces. 
A topological space is a set X together with a topology on X, which is a set of subsets of X satisfying a few requirements with respect to their unions and intersections that generalize the properties of the open balls in metric spaces while still allowing one to talk about the neighborhoods of a given point. The elements of a topology are called open subsets of X (with respect to the topology). A function between two topological spaces X and Y is continuous if for every open set the inverse image is an open subset of X. That is, f is a function between the sets X and Y (not on the elements of the topology ), but the continuity of f depends on the topologies used on X and Y. This is equivalent to the condition that the preimages of the closed sets (which are the complements of the open subsets) in Y are closed in X. An extreme example: if a set X is given the discrete topology (in which every subset is open), all functions to any topological space T are continuous. On the other hand, if X is equipped with the indiscrete topology (in which the only open subsets are the empty set and X) and the space T set is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose codomain is indiscrete is continuous. Continuity at a point The translation in the language of neighborhoods of the -definition of continuity leads to the following definition of the continuity at a point: This definition is equivalent to the same statement with neighborhoods restricted to open neighborhoods and can be restated in several ways by using preimages rather than images. Also, as every set that contains a neighborhood is also a neighborhood, and is the largest subset of such that this definition may be simplified into: As an open set is a set that is a neighborhood of all its points, a function is continuous at every point of if and only if it is a continuous function. If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above definition of continuity in the context of metric spaces. In general topological spaces, there is no notion of nearness or distance. If, however, the target space is a Hausdorff space, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous. Given a map is continuous at if and only if whenever is a filter on that converges to in which is expressed by writing then necessarily in If denotes the neighborhood filter at then is continuous at if and only if in Moreover, this happens if and only if the prefilter is a filter base for the neighborhood filter of in Alternative definitions Several equivalent definitions for a topological structure exist; thus, several equivalent ways exist to define a continuous function. Sequences and nets In several contexts, the topology of a space is conveniently specified in terms of limit points. This is often accomplished by specifying when a point is the limit of a sequence. Still, for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is (Heine-)continuous only if it takes limits of sequences to limits of sequences. 
In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition. In detail, a function is sequentially continuous if whenever a sequence in converges to a limit the sequence converges to Thus, sequentially continuous functions "preserve sequential limits." Every continuous function is sequentially continuous. If is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve the limits of nets, and this property characterizes continuous functions. For instance, consider the case of real-valued functions of one real variable: Proof. Assume that is continuous at (in the sense of continuity). Let be a sequence converging at (such a sequence always exists, for example, ); since is continuous at For any such we can find a natural number such that for all since converges at ; combining this with we obtain Assume on the contrary that is sequentially continuous and proceed by contradiction: suppose is not continuous at then we can take and call the corresponding point : in this way we have defined a sequence such that by construction but , which contradicts the hypothesis of sequential continuity. Closure operator and interior operator definitions In terms of the interior operator, a function between topological spaces is continuous if and only if for every subset In terms of the closure operator, is continuous if and only if for every subset That is to say, given any element that belongs to the closure of a subset necessarily belongs to the closure of in If we declare that a point is a subset if then this terminology allows for a plain English description of continuity: is continuous if and only if for every subset maps points that are close to to points that are close to Similarly, is continuous at a fixed given point if and only if whenever is close to a subset then is close to Instead of specifying topological spaces by their open subsets, any topology on can alternatively be determined by a closure operator or by an interior operator. Specifically, the map that sends a subset of a topological space to its topological closure satisfies the Kuratowski closure axioms. Conversely, for any closure operator there exists a unique topology on (specifically, ) such that for every subset is equal to the topological closure of in If the sets and are each associated with closure operators (both denoted by ) then a map is continuous if and only if for every subset Similarly, the map that sends a subset of to its topological interior defines an interior operator. Conversely, any interior operator induces a unique topology on (specifically, ) such that for every is equal to the topological interior of in If the sets and are each associated with interior operators (both denoted by ) then a map is continuous if and only if for every subset Filters and prefilters Continuity can also be characterized in terms of filters. 
A function f : X → Y is continuous if and only if whenever B is a filter on X that converges in X to a point x ∈ X, then the prefilter f(B) converges in Y to f(x). This characterization remains true if the word "filter" is replaced by "prefilter." Properties If f : X → Y and g : Y → Z are continuous, then so is the composition g ∘ f : X → Z. If f : X → Y is continuous and X is compact, then f(X) is compact. X is connected, then f(X) is connected. X is path-connected, then f(X) is path-connected. X is Lindelöf, then f(X) is Lindelöf. X is separable, then f(X) is separable. The possible topologies on a fixed set X are partially ordered: a topology τ₁ is said to be coarser than another topology τ₂ (notation: τ₁ ⊆ τ₂) if every open subset with respect to τ₁ is also open with respect to τ₂. Then, the identity map id_X : (X, τ₂) → (X, τ₁) is continuous if and only if τ₁ ⊆ τ₂ (see also comparison of topologies). More generally, a continuous function (X, τ_X) → (Y, τ_Y) stays continuous if the topology τ_Y is replaced by a coarser topology and/or τ_X is replaced by a finer topology. Homeomorphisms Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. If an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function f⁻¹ need not be continuous. A bijective continuous function with a continuous inverse function is called a homeomorphism. If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism. Defining topologies via continuous functions Given a function f : X → S, where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which f⁻¹(A) is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus, the final topology is the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f. Dually, for a function f from a set S to a topological space X, the initial topology on S is defined by designating as an open set every subset A of S such that A = f⁻¹(U) for some open subset U of X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus, the initial topology is the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X. A topology on a set S is uniquely determined by the class of all continuous functions S → X into all topological spaces X. Dually, a similar idea can be applied to maps X → S. Related notions If f : S → Y is a continuous function from some subset S of a topological space X, then a continuous extension of f to X is any continuous function F : X → Y such that F(s) = f(s) for every s ∈ S, which is a condition that is often written as f = F|S. In words, it is any continuous function F : X → Y that restricts to f on S. This notion is used, for example, in the Tietze extension theorem and the Hahn–Banach theorem. If f : S → Y is not continuous, then it could not possibly have a continuous extension. If Y is a Hausdorff space and S is a dense subset of X, then a continuous extension of f to X, if one exists, will be unique. 
The Blumberg theorem states that if f : ℝ → ℝ is an arbitrary function then there exists a dense subset D of ℝ such that the restriction f|D : D → ℝ is continuous; in other words, every function ℝ → ℝ can be restricted to some dense subset on which it is continuous. Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function f : X → Y between particular types of partially ordered sets X and Y is continuous if for each directed subset A of X, we have sup f(A) = f(sup A). Here sup is the supremum with respect to the orderings in X and Y, respectively. This notion of continuity is the same as topological continuity when the partially ordered sets are given the Scott topology. In category theory, a functor F : C → D between two categories is called continuous if it commutes with small limits. That is to say, lim F(C_i) ≅ F(lim C_i), with the limits taken over a small (that is, indexed by a set I as opposed to a class) diagram of objects C_i in C. A continuity space is a generalization of metric spaces and posets, which uses the concept of quantales, and that can be used to unify the notions of metric spaces and domains. In measure theory, a function f defined on a Lebesgue measurable set E ⊆ ℝ is called approximately continuous at a point x₀ if the approximate limit of f at x₀ exists and equals f(x₀). This generalizes the notion of continuity by replacing the ordinary limit with the approximate limit. A fundamental result known as the Stepanov–Denjoy theorem states that a function is measurable if and only if it is approximately continuous almost everywhere. See also Continuity (mathematics) Absolute continuity Approximate continuity Dini continuity Equicontinuity Geometric continuity Parametric continuity Classification of discontinuities Coarse function Continuous function (set theory) Continuous stochastic process Normal function Open and closed maps Piecewise Symmetrically continuous function Direction-preserving function - an analog of a continuous function in discrete spaces. References Bibliography Calculus Types of functions
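The open-set characterization of continuity described earlier in this article can be checked mechanically on finite spaces. The following Python sketch is purely illustrative and is not part of the article; the example topologies and the maps f and g are invented for demonstration. It encodes a topology as a set of frozensets and tests whether the preimage of every open set of the codomain is open in the domain, and it also illustrates the discrete and indiscrete extremes mentioned above.

```python
from itertools import product

def preimage(f, subset):
    """Return the preimage f^{-1}(subset) for a map given as a dict."""
    return frozenset(x for x in f if f[x] in subset)

def is_topology(points, opens):
    """Finite-space check of the open-set axioms: contains empty set and the whole set,
    and is closed under (pairwise, hence all finite) unions and intersections."""
    opens = set(opens)
    if frozenset() not in opens or frozenset(points) not in opens:
        return False
    return all(a | b in opens and a & b in opens for a, b in product(opens, repeat=2))

def is_continuous(f, opens_X, opens_Y):
    """f is continuous iff the preimage of every open set of Y is open in X."""
    return all(preimage(f, V) in opens_X for V in opens_Y)

# X = {1, 2, 3} with a non-discrete topology, Y = {"a", "b"} with the Sierpinski topology.
X = {1, 2, 3}
opens_X = {frozenset(), frozenset({1}), frozenset({1, 2}), frozenset(X)}
Y = {"a", "b"}
opens_Y = {frozenset(), frozenset({"a"}), frozenset(Y)}   # {"a"} open, {"b"} not

assert is_topology(X, opens_X) and is_topology(Y, opens_Y)

f = {1: "a", 2: "a", 3: "b"}   # preimage of {"a"} is {1, 2}: open, so f is continuous
g = {1: "b", 2: "a", 3: "a"}   # preimage of {"a"} is {2, 3}: not open, so g is not

print(is_continuous(f, opens_X, opens_Y))   # True
print(is_continuous(g, opens_X, opens_Y))   # False

# Extremes from the text: with the discrete topology on X every map is continuous,
# while from an indiscrete X into the (T0) Sierpinski space only constant maps are.
discrete_X = {frozenset(s) for s in (set(), {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, X)}
indiscrete_X = {frozenset(), frozenset(X)}
print(is_continuous(g, discrete_X, opens_Y))    # True
print(is_continuous(g, indiscrete_X, opens_Y))  # False
```

The same helper works for any finite example; for infinite spaces the quantification over open sets cannot, of course, be checked by enumeration.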
Continuous function
[ "Mathematics" ]
6,612
[ "Functions and mappings", "Calculus", "Theory of continuous functions", "Mathematical objects", "Topology", "Mathematical relations", "Types of functions" ]
6,246
https://en.wikipedia.org/wiki/Covalent%20bond
A covalent bond is a chemical bond that involves the sharing of electrons to form electron pairs between atoms. These electron pairs are known as shared pairs or bonding pairs. The stable balance of attractive and repulsive forces between atoms, when they share electrons, is known as covalent bonding. For many molecules, the sharing of electrons allows each atom to attain the equivalent of a full valence shell, corresponding to a stable electronic configuration. In organic chemistry, covalent bonding is much more common than ionic bonding. Covalent bonding also includes many kinds of interactions, including σ-bonding, π-bonding, metal-to-metal bonding, agostic interactions, bent bonds, three-center two-electron bonds and three-center four-electron bonds. The term covalent bond dates from 1939. The prefix co- means jointly, associated in action, partnered to a lesser degree, etc.; thus a "co-valent bond", in essence, means that the atoms share "valence", such as is discussed in valence bond theory. In the molecule , the hydrogen atoms share the two electrons via covalent bonding. Covalency is greatest between atoms of similar electronegativities. Thus, covalent bonding does not necessarily require that the two atoms be of the same elements, only that they be of comparable electronegativity. Covalent bonding that entails the sharing of electrons over more than two atoms is said to be delocalized. History The term covalence in regard to bonding was first used in 1919 by Irving Langmuir in a Journal of the American Chemical Society article entitled "The Arrangement of Electrons in Atoms and Molecules". Langmuir wrote that "we shall denote by the term covalence the number of pairs of electrons that a given atom shares with its neighbors." The idea of covalent bonding can be traced several years before 1919 to Gilbert N. Lewis, who in 1916 described the sharing of electron pairs between atoms (and in 1926 he also coined the term "photon" for the smallest unit of radiant energy). He introduced the Lewis notation or electron dot notation or Lewis dot structure, in which valence electrons (those in the outer shell) are represented as dots around the atomic symbols. Pairs of electrons located between atoms represent covalent bonds. Multiple pairs represent multiple bonds, such as double bonds and triple bonds. An alternative form of representation, not shown here, has bond-forming electron pairs represented as solid lines. Lewis proposed that an atom forms enough covalent bonds to form a full (or closed) outer electron shell. In the diagram of methane shown here, the carbon atom has a valence of four and is, therefore, surrounded by eight electrons (the octet rule), four from the carbon itself and four from the hydrogens bonded to it. Each hydrogen has a valence of one and is surrounded by two electrons (a duet rule) – its own one electron plus one from the carbon. The numbers of electrons correspond to full shells in the quantum theory of the atom; the outer shell of a carbon atom is the n = 2 shell, which can hold eight electrons, whereas the outer (and only) shell of a hydrogen atom is the n = 1 shell, which can hold only two. While the idea of shared electron pairs provides an effective qualitative picture of covalent bonding, quantum mechanics is needed to understand the nature of these bonds and predict the structures and properties of simple molecules. 
Walter Heitler and Fritz London are credited with the first successful quantum mechanical explanation of a chemical bond (molecular hydrogen) in 1927. Their work was based on the valence bond model, which assumes that a chemical bond is formed when there is good overlap between the atomic orbitals of participating atoms. Types of covalent bonds Atomic orbitals (except for s orbitals) have specific directional properties leading to different types of covalent bonds. Sigma (σ) bonds are the strongest covalent bonds and are due to head-on overlapping of orbitals on two different atoms. A single bond is usually a σ bond. Pi (π) bonds are weaker and are due to lateral overlap between p (or d) orbitals. A double bond between two given atoms consists of one σ and one π bond, and a triple bond is one σ and two π bonds. Covalent bonds are also affected by the electronegativity of the connected atoms which determines the chemical polarity of the bond. Two atoms with equal electronegativity will make nonpolar covalent bonds such as H–H. An unequal relationship creates a polar covalent bond such as with H−Cl. However polarity also requires geometric asymmetry, or else dipoles may cancel out, resulting in a non-polar molecule. Covalent structures There are several types of structures for covalent substances, including individual molecules, molecular structures, macromolecular structures and giant covalent structures. Individual molecules have strong bonds that hold the atoms together, but generally, there are negligible forces of attraction between molecules. Such covalent substances are usually gases, for example, HCl, SO2, CO2, and CH4. In molecular structures, there are weak forces of attraction. Such covalent substances are low-boiling-temperature liquids (such as ethanol), and low-melting-temperature solids (such as iodine and solid CO2). Macromolecular structures have large numbers of atoms linked by covalent bonds in chains, including synthetic polymers such as polyethylene and nylon, and biopolymers such as proteins and starch. Network covalent structures (or giant covalent structures) contain large numbers of atoms linked in sheets (such as graphite), or 3-dimensional structures (such as diamond and quartz). These substances have high melting and boiling points, are frequently brittle, and tend to have high electrical resistivity. Elements that have high electronegativity, and the ability to form three or four electron pair bonds, often form such large macromolecular structures. One- and three-electron bonds Bonds with one or three electrons can be found in radical species, which have an odd number of electrons. The simplest example of a 1-electron bond is found in the dihydrogen cation, . One-electron bonds often have about half the bond energy of a 2-electron bond, and are therefore called "half bonds". However, there are exceptions: in the case of dilithium, the bond is actually stronger for the 1-electron than for the 2-electron Li2. This exception can be explained in terms of hybridization and inner-shell effects. The simplest example of three-electron bonding can be found in the helium dimer cation, . It is considered a "half bond" because it consists of only one shared electron (rather than two); in molecular orbital terms, the third electron is in an anti-bonding orbital which cancels out half of the bond formed by the other two electrons. Another example of a molecule containing a 3-electron bond, in addition to two 2-electron bonds, is nitric oxide, NO. 
The oxygen molecule, O2 can also be regarded as having two 3-electron bonds and one 2-electron bond, which accounts for its paramagnetism and its formal bond order of 2. Chlorine dioxide and its heavier analogues bromine dioxide and iodine dioxide also contain three-electron bonds. Molecules with odd-electron bonds are usually highly reactive. These types of bond are only stable between atoms with similar electronegativities. Resonance There are situations whereby a single Lewis structure is insufficient to explain the electron configuration in a molecule and its resulting experimentally-determined properties, hence a superposition of structures is needed. The same two atoms in such molecules can be bonded differently in different Lewis structures (a single bond in one, a double bond in another, or even none at all), resulting in a non-integer bond order. The nitrate ion is one such example with three equivalent structures. The bond between the nitrogen and each oxygen is a double bond in one structure and a single bond in the other two, so that the average bond order for each N–O interaction is = . Aromaticity In organic chemistry, when a molecule with a planar ring obeys Hückel's rule, where the number of π electrons fit the formula 4n + 2 (where n is an integer), it attains extra stability and symmetry. In benzene, the prototypical aromatic compound, there are 6 π bonding electrons (n = 1, 4n + 2 = 6). These occupy three delocalized π molecular orbitals (molecular orbital theory) or form conjugate π bonds in two resonance structures that linearly combine (valence bond theory), creating a regular hexagon exhibiting a greater stabilization than the hypothetical 1,3,5-cyclohexatriene. In the case of heterocyclic aromatics and substituted benzenes, the electronegativity differences between different parts of the ring may dominate the chemical behavior of aromatic ring bonds, which otherwise are equivalent. Hypervalence Certain molecules such as xenon difluoride and sulfur hexafluoride have higher coordination numbers than would be possible due to strictly covalent bonding according to the octet rule. This is explained by the three-center four-electron bond ("3c–4e") model which interprets the molecular wavefunction in terms of non-bonding highest occupied molecular orbitals in molecular orbital theory and resonance of sigma bonds in valence bond theory. Electron deficiency In three-center two-electron bonds ("3c–2e") three atoms share two electrons in bonding. This type of bonding occurs in boron hydrides such as diborane (B2H6), which are often described as electron deficient because there are not enough valence electrons to form localized (2-centre 2-electron) bonds joining all the atoms. However, the more modern description using 3c–2e bonds does provide enough bonding orbitals to connect all the atoms so that the molecules can instead be classified as electron-precise. Each such bond (2 per molecule in diborane) contains a pair of electrons which connect the boron atoms to each other in a banana shape, with a proton (the nucleus of a hydrogen atom) in the middle of the bond, sharing electrons with both boron atoms. In certain cluster compounds, so-called four-center two-electron bonds also have been postulated. Quantum mechanical description After the development of quantum mechanics, two basic theories were proposed to provide a quantum description of chemical bonding: valence bond (VB) theory and molecular orbital (MO) theory. 
A more recent quantum description is given in terms of atomic contributions to the electronic density of states. Comparison of VB and MO theories The two theories represent two ways to build up the electron configuration of the molecule. For valence bond theory, the atomic hybrid orbitals are filled with electrons first to produce a fully bonded valence configuration, followed by performing a linear combination of contributing structures (resonance) if there are several of them. In contrast, for molecular orbital theory, a linear combination of atomic orbitals is performed first, followed by filling of the resulting molecular orbitals with electrons. The two approaches are regarded as complementary, and each provides its own insights into the problem of chemical bonding. As valence bond theory builds the molecular wavefunction out of localized bonds, it is more suited for the calculation of bond energies and the understanding of reaction mechanisms. As molecular orbital theory builds the molecular wavefunction out of delocalized orbitals, it is more suited for the calculation of ionization energies and the understanding of spectral absorption bands. At the qualitative level, both theories contain incorrect predictions. Simple (Heitler–London) valence bond theory correctly predicts the dissociation of homonuclear diatomic molecules into separate atoms, while simple (Hartree–Fock) molecular orbital theory incorrectly predicts dissociation into a mixture of atoms and ions. On the other hand, simple molecular orbital theory correctly predicts Hückel's rule of aromaticity, while simple valence bond theory incorrectly predicts that cyclobutadiene has larger resonance energy than benzene. Although the wavefunctions generated by both theories at the qualitative level do not agree and do not match the stabilization energy by experiment, they can be corrected by configuration interaction. This is done by combining the valence bond covalent function with the functions describing all possible ionic structures or by combining the molecular orbital ground state function with the functions describing all possible excited states using unoccupied orbitals. It can then be seen that the simple molecular orbital approach overestimates the weight of the ionic structures while the simple valence bond approach neglects them. This can also be described as saying that the simple molecular orbital approach neglects electron correlation while the simple valence bond approach overestimates it. Modern calculations in quantum chemistry usually start from (but ultimately go far beyond) a molecular orbital rather than a valence bond approach, not because of any intrinsic superiority in the former but rather because the MO approach is more readily adapted to numerical computations. Molecular orbitals are orthogonal, which significantly increases the feasibility and speed of computer calculations compared to nonorthogonal valence bond orbitals. Covalency from atomic contribution to the electronic density of states Evaluation of bond covalency is dependent on the basis set for approximate quantum-chemical methods such as COOP (crystal orbital overlap population), COHP (Crystal orbital Hamilton population), and BCOOP (Balanced crystal orbital overlap population). To overcome this issue, an alternative formulation of the bond covalency can be provided in this way. 
The mass center of an atomic orbital with quantum numbers for atom A is defined as where is the contribution of the atomic orbital of the atom A to the total electronic density of states of the solid where the outer sum runs over all atoms A of the unit cell. The energy window is chosen in such a way that it encompasses all of the relevant bands participating in the bond. If the range to select is unclear, it can be identified in practice by examining the molecular orbitals that describe the electron density along with the considered bond. The relative position of the mass center of levels of atom A with respect to the mass center of levels of atom B is given as where the contributions of the magnetic and spin quantum numbers are summed. According to this definition, the relative position of the A levels with respect to the B levels is where, for simplicity, we may omit the dependence from the principal quantum number in the notation referring to In this formalism, the greater the value of the higher the overlap of the selected atomic bands, and thus the electron density described by those orbitals gives a more covalent bond. The quantity is denoted as the covalency of the bond, which is specified in the same units of the energy . Analogous effect in nuclear systems An analogous effect to covalent binding is believed to occur in some nuclear systems, with the difference that the shared fermions are quarks rather than electrons. High energy proton-proton scattering cross-section indicates that quark interchange of either u or d quarks is the dominant process of the nuclear force at short distance. In particular, it dominates over the Yukawa interaction where a meson is exchanged. Therefore, covalent binding by quark interchange is expected to be the dominating mechanism of nuclear binding at small distance when the bound hadrons have covalence quarks in common. See also Bonding in solids Bond order Coordinate covalent bond, also known as a dipolar bond or a dative covalent bond Covalent bond classification (or LXZ notation) Covalent radius Disulfide bond Hybridization Hydrogen bond Ionic bond Linear combination of atomic orbitals Metallic bonding Noncovalent bonding Resonance (chemistry) References Sources External links Covalent Bonds and Molecular Structure Structure and Bonding in Chemistry—Covalent Bonds Chemical bonding
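The covalency measure just described rests on energy-weighted centroids ("mass centers") of atom-projected densities of states taken over a chosen energy window. The following Python/NumPy sketch is only a toy illustration of that idea on synthetic Gaussian model curves: the projected-DOS shapes, the window, and the use of the plain centroid separation as a proxy for the relative position are assumptions made here for illustration, and the full formalism, which also sums over magnetic and spin quantum numbers, is not reproduced.

```python
import numpy as np

def mass_center(E, g, window):
    """Energy centroid of a projected DOS g(E) over the window (E_lo, E_hi)."""
    lo, hi = window
    m = (E >= lo) & (E <= hi)
    return np.trapz(E[m] * g[m], E[m]) / np.trapz(g[m], E[m])

# Synthetic atom-projected DOS curves for two atoms A and B (illustrative Gaussians).
E = np.linspace(-10.0, 5.0, 2000)            # energy grid, eV
def gauss(mu, sigma):
    return np.exp(-((E - mu) ** 2) / (2 * sigma ** 2))

g_A = gauss(-6.0, 1.0)                       # A-projected band
g_B = gauss(-5.2, 1.2)                       # B-projected band

window = (-10.0, -2.0)                       # window covering the bands forming the bond
cA = mass_center(E, g_A, window)
cB = mass_center(E, g_B, window)

# In this simplified picture, the closer the two centroids lie, the larger the overlap
# of the selected atomic bands, and hence the more covalent the A-B interaction.
print(f"mass center A: {cA:.2f} eV, mass center B: {cB:.2f} eV, "
      f"separation: {abs(cA - cB):.2f} eV")
```

In practice the projected DOS would come from an electronic-structure calculation rather than from model Gaussians, and the exact definition and sign convention of the covalency quantity should be taken from the primary literature cited by the article.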
Covalent bond
[ "Physics", "Chemistry", "Materials_science" ]
3,287
[ "Chemical bonding", "Condensed matter physics", "nan" ]
6,247
https://en.wikipedia.org/wiki/Condensation%20polymer
In polymer chemistry, condensation polymers are any kind of polymers whose process of polymerization involves a condensation reaction (i.e. a small molecule, such as water or methanol, is produced as a byproduct). Natural proteins as well as some common plastics such as nylon and PETE are formed in this way. Condensation polymers are formed by polycondensation, when the polymer is formed by condensation reactions between species of all degrees of polymerization, or by condensative chain polymerization, when the polymer is formed by sequential addition of monomers to an active site in a chain reaction. The main alternative forms of polymerization are chain polymerization and polyaddition, both of which give addition polymers. Condensation polymerization is a form of step-growth polymerization. Linear polymers are produced from bifunctional monomers, i.e. compounds with two reactive end-groups. Common condensation polymers include polyesters, polyamides such as nylon, polyacetals, and proteins. Polyamides One important class of condensation polymers are polyamides. They arise from the reaction of carboxylic acid and an amine. Examples include nylons and proteins. When prepared from amino-carboxylic acids, e.g. amino acids, the stoichiometry of the polymerization includes co-formation of water: n H2N-X-CO2H → [HN-X-C(O)]n + (n-1) H2O When prepared from diamines and dicarboxylic acids, e.g. the production of nylon 66, the polymerization produces two molecules of water per repeat unit: n H2N-X-NH2 + n HO2C-Y-CO2H → [HN-X-NHC(O)-Y-C(O)]n + (2n-1) H2O Polyesters Another important class of condensation polymers are polyesters. They arise from the reaction of a carboxylic acid and an alcohol. An example is polyethyleneterephthalate, the common plastic PETE (recycling #1 in the USA): n HO-X-OH + n HO2C-Y-CO2H → [O-X-O2C-Y-C(O)]n + (2n-1) H2O Safety and environmental considerations Condensation polymers tend to be more biodegradable than addition polymers. The peptide or ester bonds between monomers can be hydrolysed, especially in the presence of catalysts or bacterial enzymes. See also Biopolymer Epoxy resins Polyamide Polyester References External links Polymers (and condensation polymers) - Virtual Text of Organic Chemistry, William Reusch Polymer chemistry Polymerization reactions
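Referring back to the polyamide stoichiometry given above (n H2N-X-NH2 + n HO2C-Y-CO2H → polymer + (2n-1) H2O), a short Python sketch can estimate how much of the monomer feed leaves as water when nylon 6,6 is formed. The monomer identities and the approximate molar masses used below are supplied here for illustration and are not taken from the article.

```python
# Approximate molar masses in g/mol (illustrative values).
M_DIAMINE = 116.21   # hexamethylenediamine, H2N-(CH2)6-NH2
M_DIACID = 146.14    # adipic acid, HO2C-(CH2)4-CO2H
M_WATER = 18.02

def nylon66_mass_balance(n_moles):
    """Mass balance for: n diamine + n diacid -> polymer + (2n - 1) H2O."""
    feed = n_moles * (M_DIAMINE + M_DIACID)
    water = (2 * n_moles - 1) * M_WATER
    polymer = feed - water            # conservation of mass
    return feed, polymer, water

feed, polymer, water = nylon66_mass_balance(n_moles=100.0)
print(f"monomer feed : {feed:9.1f} g")
print(f"polymer      : {polymer:9.1f} g")
print(f"water lost   : {water:9.1f} g ({100 * water / feed:.1f}% of the feed mass)")
# For large n each pair of monomers contributes one repeat unit of about 226 g/mol,
# so roughly 14% of the feed mass leaves as the condensation byproduct.
```

The same balance applies to the polyester equation above; only the monomer molar masses change.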
Condensation polymer
[ "Chemistry", "Materials_science", "Engineering" ]
603
[ "Polymerization reactions", "Polymer chemistry", "Materials science" ]
6,313
https://en.wikipedia.org/wiki/Classical%20element
The classical elements typically refer to earth, water, air, fire, and (later) aether which were proposed to explain the nature and complexity of all matter in terms of simpler substances. Ancient cultures in Greece, Angola, Tibet, India, and Mali had similar lists which sometimes referred, in local languages, to "air" as "wind", and to "aether" as "space". These different cultures and even individual philosophers had widely varying explanations concerning their attributes and how they related to observable phenomena as well as cosmology. Sometimes these theories overlapped with mythology and were personified in deities. Some of these interpretations included atomism (the idea of very small, indivisible portions of matter), but other interpretations considered the elements to be divisible into infinitely small pieces without changing their nature. While the classification of the material world in ancient India, Hellenistic Egypt, and ancient Greece into air, earth, fire, and water was more philosophical, during the Middle Ages medieval scientists used practical, experimental observation to classify materials. In Europe, the ancient Greek concept, devised by Empedocles, evolved into the systematic classifications of Aristotle and Hippocrates. This evolved slightly into the medieval system, and eventually became the object of experimental verification in the 17th century, at the start of the Scientific Revolution. Modern science does not support the classical elements to classify types of substances. Atomic theory classifies atoms into more than a hundred chemical elements such as oxygen, iron, and mercury, which may form chemical compounds and mixtures. The modern categories roughly corresponding to the classical elements are the states of matter produced under different temperatures and pressures. Solid, liquid, gas, and plasma share many attributes with the corresponding classical elements of earth, water, air, and fire, but these states describe the similar behavior of different types of atoms at similar energy levels, not the characteristic behavior of certain atoms or substances. Hellenistic philosophy The ancient Greek concept of four basic elements, these being earth ( ), water ( ), air ( ), and fire ( ), dates from pre-Socratic times and persisted throughout the Middle Ages and into the Early modern period, deeply influencing European thought and culture. Pre-Socratic elements Water, air, or fire? The classical elements were first proposed independently by several early Pre-Socratic philosophers. Greek philosophers had debated which substance was the arche ("first principle"), or primordial element from which everything else was made. Thales () believed that water was this principle. Anaximander () argued that the primordial substance was not any of the known substances, but could be transformed into them, and they into each other. Anaximenes () favored air, and Heraclitus (fl. ) championed fire. Fire, earth, air, and water The Greek philosopher Empedocles () was the first to propose the four classical elements as a set: fire, earth, air, and water. He called them the four "roots" (, ). Empedocles also proved (at least to his own satisfaction) that air was a separate substance by observing that a bucket inverted in water did not become filled with water, a pocket of air remaining trapped inside. Fire, earth, air, and water have become the most popular set of classical elements in modern interpretations. 
One such version was provided by Robert Boyle in The Sceptical Chymist, which was published in 1661 in the form of a dialogue between five characters. Themistius, the Aristotelian of the party, says: Humorism (Hippocrates) According to Galen, these elements were used by Hippocrates () in describing the human body with an association with the four humours: yellow bile (fire), black bile (earth), blood (air), and phlegm (water). Medical care was primarily about helping the patient stay in or return to their own personal natural balanced state. Plato Plato (428/423 – 348/347 BC) seems to have been the first to use the term "element (, )" in reference to air, fire, earth, and water. The ancient Greek word for element, (from , "to line up") meant "smallest division (of a sun-dial), a syllable", as the composing unit of an alphabet it could denote a letter and the smallest unit from which a word is formed. Aristotle In On the Heavens (350 BC), Aristotle defines "element" in general: In his On Generation and Corruption, Aristotle related each of the four elements to two of the four sensible qualities: Fire is both hot and dry. Air is both hot and wet (for air is like vapor, ). Water is both cold and wet. Earth is both cold and dry. A classic diagram has one square inscribed in the other, with the corners of one being the classical elements, and the corners of the other being the properties. The opposite corner is the opposite of these properties, "hot – cold" and "dry – wet". Aether Aristotle added a fifth element, aether ( ), as the quintessence, reasoning that whereas fire, earth, air, and water were earthly and corruptible, since no changes had been perceived in the heavenly regions, the stars cannot be made out of any of the four elements but must be made of a different, unchangeable, heavenly substance. It had previously been believed by pre-Socratics such as Empedocles and Anaxagoras that aether, the name applied to the material of heavenly bodies, was a form of fire. Aristotle himself did not use the term aether for the fifth element, and strongly criticised the pre-Socratics for associating the term with fire. He preferred a number of other terms indicating eternal movement, thus emphasising the evidence for his discovery of a new element. These five elements have been associated since Plato's Timaeus with the five platonic solids. Neo-Platonism The Neoplatonic philosopher Proclus rejected Aristotle's theory relating the elements to the sensible qualities hot, cold, wet, and dry. He maintained that each of the elements has three properties. Fire is sharp (ὀξυτητα), subtle (λεπτομερειαν), and mobile (εὐκινησιαν) while its opposite, earth, is blunt (αμβλυτητα), dense (παχυμερειαν), and immobile (ακινησιαν); they are joined by the intermediate elements, air and water, in the following fashion: Hermeticism A text written in Egypt in Hellenistic or Roman times called the Kore Kosmou ("Virgin of the World") ascribed to Hermes Trismegistus (associated with the Egyptian god Thoth), names the four elements fire, water, air, and earth. As described in this book: Ancient Indian philosophy Hinduism The system of five elements are found in Vedas, especially Ayurveda, the pancha mahabhuta, or "five great elements", of Hinduism are: bhūmi or pṛthvī (earth), āpas or jala (water), agní or tejas (fire), vāyu, vyāna, or vāta (air or wind) ākāśa, vyom, or śūnya (space or zero) or (aether or void). 
They further suggest that all of creation, including the human body, is made of these five essential elements and that upon death, the human body dissolves into these five elements of nature, thereby balancing the cycle of nature. The five elements are associated with the five senses, and act as the gross medium for the experience of sensations. The basest element, earth, created using all the other elements, can be perceived by all five senses — (i) hearing, (ii) touch, (iii) sight, (iv) taste, and (v) smell. The next higher element, water, has no odor but can be heard, felt, seen and tasted. Next comes fire, which can be heard, felt and seen. Air can be heard and felt. "Akasha" (aether) is beyond the senses of smell, taste, sight, and touch; it being accessible to the sense of hearing alone. Buddhism Buddhism has had a variety of thought about the five elements and their existence and relevance, some of which continue to this day. In the Pali literature, the mahabhuta ("great elements") or catudhatu ("four elements") are earth, water, fire and air. In early Buddhism, the four elements are a basis for understanding suffering and for liberating oneself from suffering. The earliest Buddhist texts explain that the four primary material elements are solidity, fluidity, temperature, and mobility, characterized as earth, water, fire, and air, respectively. The Buddha's teaching regarding the four elements is to be understood as the base of all observation of real sensations rather than as a philosophy. The four properties are cohesion (water), solidity or inertia (earth), expansion or vibration (air) and heat or energy content (fire). He promulgated a categorization of mind and matter as composed of eight types of "kalapas" of which the four elements are primary and a secondary group of four are colour, smell, taste, and nutriment which are derivative from the four primaries. Thanissaro Bhikkhu (1997) renders an extract of Shakyamuni Buddha's from Pali into English thus: Tibetan Buddhist medical literature speaks of the (five elements) or "elemental properties": earth, water, fire, wind, and space. The concept was extensively used in traditional Tibetan medicine. Tibetan Buddhist theology, tantra traditions, and "astrological texts" also spoke of them making up the "environment, [human] bodies," and at the smallest or "subtlest" level of existence, parts of thought and the mind. Also at the subtlest level of existence, the elements exist as "pure natures represented by the five female buddhas", Ākāśadhātviśvarī, Buddhalocanā, Mamakī, Pāṇḍarāvasinī, and Samayatārā, and these pure natures "manifest as the physical properties of earth (solidity), water (fluidity), fire (heat and light), wind (movement and energy), and" the expanse of space. These natures exist as all "qualities" that are in the physical world and take forms in it. Ancient African philosophy Angola In traditional Bakongo religion, the five elements are incorporated into the Kongo cosmogram. This sacred symbol also depicts the physical world (Nseke), the spiritual world of the ancestors (Mpémba), the Kalûnga line that runs between the two worlds, the circular void that originally formed the two worlds (mbûngi), and the path of the sun. Each element correlates to a period in the life cycle, which the Bakongo people also equate to the four cardinal directions. According to their cosmology, all living things go through this cycle. Aether represents mbûngi, the circular void that begot the universe. 
Air (South) represents musoni, the period of conception that takes place during spring. Fire (East) represent kala, the period of birth that takes place during summer. Earth (North) represents tukula, the period of maturity that takes place during fall. Water (West) represents luvemba, the period of death that takes place during winter Mali In traditional Bambara spirituality, the Supreme God created four additional essences of himself during creation. Together, these five essences of the deity correlate with the five classical elements. Koni is the thought and void (aether). Bemba (also called Pemba) is the god of the sky and air. Nyale (also called Koroni Koundyé) is the goddess of fire. Faro is the androgynous god of water. Ndomadyiri is the god and master of the earth. Post-classical history Alchemy The elemental system used in medieval alchemy was developed primarily by the anonymous authors of the Arabic works attributed to Pseudo Apollonius of Tyana. This system consisted of the four classical elements of air, earth, fire, and water, in addition to a new theory called the sulphur-mercury theory of metals, which was based on two elements: sulphur, characterizing the principle of combustibility, "the stone which burns"; and mercury, characterizing the principle of metallic properties. They were seen by early alchemists as idealized expressions of irreducible components of the universe and are of larger consideration within philosophical alchemy. The three metallic principles—sulphur to flammability or combustion, mercury to volatility and stability, and salt to solidity—became the tria prima of the Swiss alchemist Paracelsus. He reasoned that Aristotle's four element theory appeared in bodies as three principles. Paracelsus saw these principles as fundamental and justified them by recourse to the description of how wood burns in fire. Mercury included the cohesive principle, so that when it left in smoke the wood fell apart. Smoke described the volatility (the mercurial principle), the heat-giving flames described flammability (sulphur), and the remnant ash described solidity (salt). Japan Japanese traditions use a set of elements called the (godai, literally "five great"). These five are earth, water, fire, wind/air, and void. These came from Indian Vastu shastra philosophy and Buddhist beliefs; in addition, the classical Chinese elements (, wu xing) are also prominent in Japanese culture, especially to the influential Neo-Confucianists during the medieval Edo period. Earth represented rocks and stability. Water represented fluidity and adaptability. Fire represented life and energy. Wind represented movement and expansion. Void or Sky/Heaven represented spirit and creative energy. Medieval Aristotelian philosophy The Islamic philosophers al-Kindi, Avicenna and Fakhr al-Din al-Razi followed Aristotle in connecting the four elements with the four natures heat and cold (the active force), and dryness and moisture (the recipients). Medicine Wheel The medicine wheel symbol is a modern invention attributed to Native American peoples dating to approximately 1972, with the following descriptions and associations being a later addition. The associations with the classical elements are not grounded in traditional Indigenous teachings and the symbol has not been adopted by all Indigenous American nations. Earth (South) represents the youth cycle, summer, the Indigenous race, and cedar medicine. Fire (East) represents the birth cycle, spring, the Asian race, and tobacco medicine. 
Wind/Air (North) represents the elder cycle, winter, the European race, and sweetgrass medicine. Water (West) represents the adulthood cycle, autumn, the African race, and sage medicine. Modern history Chemical element The Aristotelian tradition and medieval alchemy eventually gave rise to modern chemistry, scientific theories and new taxonomies. By the time of Antoine Lavoisier, for example, a list of elements would no longer refer to classical elements. Some modern scientists see a parallel between the classical elements and the four states of matter: solid, liquid, gas and weakly ionized plasma. Modern science recognizes classes of elementary particles which have no substructure (or rather, particles that are not made of other particles) and composite particles having substructure (particles made of other particles). Western astrology Western astrology uses the four classical elements in connection with astrological charts and horoscopes. The twelve signs of the zodiac are divided into the four elements: Fire signs are Aries, Leo and Sagittarius, Earth signs are Taurus, Virgo and Capricorn, Air signs are Gemini, Libra and Aquarius, and Water signs are Cancer, Scorpio, and Pisces. Criticism The Dutch historian of science Eduard Jan Dijksterhuis writes that the theory of the classical elements "was bound to exercise a really harmful influence. As is now clear, Aristotle, by adopting this theory as the basis of his interpretation of nature and by never losing faith in it, took a course which promised few opportunities and many dangers for science." Bertrand Russell says that Aristotle's thinking became imbued with almost biblical authority in later centuries. So much so that "Ever since the beginning of the seventeenth century, almost every serious intellectual advance has had to begin with an attack on some Aristotelian doctrine". See also – Early Islamic alchemy Notes References Bibliography External links Section on 4 elements in Buddhism Natural philosophy History of astrology Technical factors of astrology Concepts in Chinese philosophy Theories in ancient Greek philosophy Indian philosophy Hindu cosmology Buddhist cosmology Taoist cosmology Esoteric cosmology
Classical element
[ "Astronomy" ]
3,543
[ "History of astrology", "History of astronomy" ]
6,316
https://en.wikipedia.org/wiki/Water%20%28classical%20element%29
Water is one of the classical elements in ancient Greek philosophy along with air, earth and fire, in the Asian Indian system Panchamahabhuta, and in the Chinese cosmological and physiological system Wu Xing. In contemporary esoteric traditions, it is commonly associated with the qualities of emotion and intuition. Greek and Roman tradition Water was one of many archai proposed by the Pre-socratics, most of whom tried to reduce all things to a single substance. However, Empedocles of Acragas (c. 495 – c. 435 BC) selected four archai for his four roots: air, fire, water and earth. Empedocles roots became the four classical elements of Greek philosophy. Plato (427–347 BC) took over the four elements of Empedocles. In the Timaeus, his major cosmological dialogue, the Platonic solid associated with water is the icosahedron which is formed from twenty equilateral triangles. This makes water the element with the greatest number of sides, which Plato regarded as appropriate because water flows out of one's hand when picked up, as if it is made of tiny little balls. Plato's student Aristotle (384–322 BC) developed a different explanation for the elements based on pairs of qualities. The four elements were arranged concentrically around the center of the Universe to form the sublunary sphere. According to Aristotle, water is both cold and wet and occupies a place between air and earth among the elemental spheres. In ancient Greek medicine, each of the four humours became associated with an element. Phlegm was the humor identified with water, since both were cold and wet. Other things associated with water and phlegm in ancient and medieval medicine included the season of Winter, since it increased the qualities of cold and moisture, the phlegmatic temperament, the feminine and the western point of the compass. In alchemy, the chemical element of mercury was often associated with water and its alchemical symbol was a downward-pointing triangle. Indian tradition Ap () is the Vedic Sanskrit term for water, in Classical Sanskrit occurring only in the plural is not an element.v, (sometimes re-analysed as a thematic singular, ), whence Hindi . The term is from PIE hxap water. In Hindu philosophy, the term refers to water as an element, one of the Panchamahabhuta, or "five great elements". In Hinduism, it is also the name of the deva, a personification of water, (one of the Vasus in most later Puranic lists). The element water is also associated with Chandra or the moon and Shukra, who represent feelings, intuition and imagination. According to Jain tradition, water itself is inhabited by spiritual Jīvas called apakāya ekendriya. Ceremonial magic Water and the other Greek classical elements were incorporated into the Golden Dawn system. The elemental weapon of water is the cup. Each of the elements has several associated spiritual beings. The archangel of water is Gabriel, the angel is Taliahad, the ruler is Tharsis, the king is Nichsa and the water elementals are called Ondines. It is referred to the upper right point of the pentagram in the Supreme Invoking Ritual of the Pentagram. Many of these associations have since spread throughout the occult community. Modern witchcraft Water is one of the five elements that appear in most Wiccan traditions. Wicca in particular was influenced by the Golden Dawn system of magic and Aleister Crowley's mysticism, which was in turn inspired by the Golden Dawn. 
See also Water Sea and river deity Notes External links Different versions of the classical elements Classical elements Water in culture Esoteric cosmology History of astrology Technical factors of astrology Concepts in ancient Greek metaphysics
Water (classical element)
[ "Astronomy" ]
787
[ "History of astrology", "History of astronomy" ]
6,329
https://en.wikipedia.org/wiki/Chromatography
In chemical analysis, chromatography is a laboratory technique for the separation of a mixture into its components. The mixture is dissolved in a fluid solvent (gas or liquid) called the mobile phase, which carries it through a system (a column, a capillary tube, a plate, or a sheet) on which a material called the stationary phase is fixed. Because the different constituents of the mixture tend to have different affinities for the stationary phase and are retained for different lengths of time depending on their interactions with its surface sites, the constituents travel at different apparent velocities in the mobile fluid, causing them to separate. The separation is based on the differential partitioning between the mobile and the stationary phases. Subtle differences in a compound's partition coefficient result in differential retention on the stationary phase and thus affect the separation. Chromatography may be preparative or analytical. The purpose of preparative chromatography is to separate the components of a mixture for later use, and is thus a form of purification. This process is associated with higher costs due to its mode of production. Analytical chromatography is done normally with smaller amounts of material and is for establishing the presence or measuring the relative proportions of analytes in a mixture. The two types are not mutually exclusive. Etymology and pronunciation Chromatography, pronounced , is derived from Greek χρῶμα chrōma, which means "color", and γράφειν gráphein, which means "to write". The combination of these two terms was directly inherited from the invention of the technique first used to separate biological pigments. History The method was developed by botanist Mikhail Tsvet in 1901–1905 in universities of Kazan and Warsaw. He developed the technique and coined the term chromatography in the first decade of the 20th century, primarily for the separation of plant pigments such as chlorophyll, carotenes, and xanthophylls. Since these components separate in bands of different colors (green, orange, and yellow, respectively) they directly inspired the name of the technique. New types of chromatography developed during the 1930s and 1940s made the technique useful for many separation processes. Chromatography technique developed substantially as a result of the work of Archer John Porter Martin and Richard Laurence Millington Synge during the 1940s and 1950s, for which they won the 1952 Nobel Prize in Chemistry. They established the principles and basic techniques of partition chromatography, and their work encouraged the rapid development of several chromatographic methods: paper chromatography, gas chromatography, and what would become known as high-performance liquid chromatography. Since then, the technology has advanced rapidly. Researchers found that the main principles of Tsvet's chromatography could be applied in many different ways, resulting in the different varieties of chromatography described below. Advances are continually improving the technical performance of chromatography, allowing the separation of increasingly similar molecules. Terms Analyte – the substance to be separated during chromatography. It is also normally what is needed from the mixture. Analytical chromatography – the use of chromatography to determine the existence and possibly also the concentration of analyte(s) in a sample. Bonded phase – a stationary phase that is covalently bonded to the support particles or to the inside wall of the column tubing. 
Chromatogram – the visual output of the chromatograph. In the case of an optimal separation, different peaks or patterns on the chromatogram correspond to different components of the separated mixture. Plotted on the x-axis is the retention time and plotted on the y-axis a signal (for example obtained by a spectrophotometer, mass spectrometer or a variety of other detectors) corresponding to the response created by the analytes exiting the system. In the case of an optimal system the signal is proportional to the concentration of the specific analyte separated. Chromatograph – an instrument that enables a sophisticated separation, e.g. gas chromatographic or liquid chromatographic separation. Chromatography – a physical method of separation that distributes components to separate between two phases, one stationary (stationary phase), the other (the mobile phase) moving in a definite direction. Eluent (sometimes spelled eluant) – the solvent or solvent fixure used in elution chromatography and is synonymous with mobile phase. Eluate – the mixture of solute (see Eluite) and solvent (see Eluent) exiting the column. Effluent – the stream flowing out of a chromatographic column. In practise, it is used synonymously with eluate, but the term more precisely refers to the stream independent of separation taking place. Eluite – a more precise term for solute or analyte. It is a sample component leaving the chromatographic column. Eluotropic series – a list of solvents ranked according to their eluting power. Immobilized phase – a stationary phase that is immobilized on the support particles, or on the inner wall of the column tubing. Mobile phase – the phase that moves in a definite direction. It may be a liquid (LC and capillary electrochromatography, CEC), a gas (GC), or a supercritical fluid (supercritical-fluid chromatography, SFC). The mobile phase consists of the sample being separated/analyzed and the solvent that moves the sample through the column. In the case of HPLC the mobile phase consists of a non-polar solvent(s) such as hexane in normal phase or a polar solvent such as methanol in reverse phase chromatography and the sample being separated. The mobile phase moves through the chromatography column (the stationary phase) where the sample interacts with the stationary phase and is separated. Preparative chromatography – the use of chromatography to purify sufficient quantities of a substance for further use, rather than analysis. Retention time – the characteristic time it takes for a particular analyte to pass through the system (from the column inlet to the detector) under set conditions. See also: Kovats' retention index Sample – the matter analyzed in chromatography. It may consist of a single component or it may be a mixture of components. When the sample is treated in the course of an analysis, the phase or the phases containing the analytes of interest is/are referred to as the sample whereas everything out of interest separated from the sample before or in the course of the analysis is referred to as waste. Solute – the sample components in partition chromatography. Solvent – any substance capable of solubilizing another substance, and especially the liquid mobile phase in liquid chromatography. Stationary phase – the substance fixed in place for the chromatography procedure. Examples include the silica layer in thin-layer chromatography Detector – the instrument used for qualitative and quantitative detection of analytes after separation. 
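A chromatogram, as defined in the glossary above, is simply a detector signal plotted against retention time, so retention times can be read off programmatically as peak positions. The Python sketch below is a minimal illustration on a synthetic signal; the peak positions, widths, heights and noise level are invented, and SciPy's general-purpose peak finder is used only as an example of how such a signal might be processed.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic chromatogram: detector response vs. time for two well-separated analytes.
t = np.linspace(0.0, 10.0, 5000)                       # time, minutes
def peak(t_r, width, height):
    return height * np.exp(-((t - t_r) ** 2) / (2 * width ** 2))

signal = peak(3.2, 0.10, 1.0) + peak(6.8, 0.15, 0.6)   # two Gaussian peaks
signal += 0.01 * np.random.default_rng(0).normal(size=t.size)  # detector noise

# Retention times are the positions of the local maxima in the signal.
idx, _ = find_peaks(signal, height=0.1, distance=200)
for i in idx:
    print(f"peak at retention time {t[i]:.2f} min, height {signal[i]:.2f}")
```

Real data-processing software additionally integrates peak areas for quantitation, but the retention times recovered here are the quantity compared against standards for identification.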
Chromatography is based on the concept of partition coefficient. Any solute partitions between two immiscible solvents. When one make one solvent immobile (by adsorption on a solid support matrix) and another mobile it results in most common applications of chromatography. If the matrix support, or stationary phase, is polar (e.g., cellulose, silica etc.) it is forward phase chromatography. Otherwise this technique is known as reversed phase, where a non-polar stationary phase (e.g., non-polar derivative of C-18) is used. Techniques by chromatographic bed shape Column chromatography Column chromatography is a separation technique in which the stationary bed is within a tube. The particles of the solid stationary phase or the support coated with a liquid stationary phase may fill the whole inside volume of the tube (packed column) or be concentrated on or along the inside tube wall leaving an open, unrestricted path for the mobile phase in the middle part of the tube (open tubular column). Differences in rates of movement through the medium are calculated to different retention times of the sample. In 1978, W. Clark Still introduced a modified version of column chromatography called flash column chromatography (flash). The technique is very similar to the traditional column chromatography, except that the solvent is driven through the column by applying positive pressure. This allowed most separations to be performed in less than 20 minutes, with improved separations compared to the old method. Modern flash chromatography systems are sold as pre-packed plastic cartridges, and the solvent is pumped through the cartridge. Systems may also be linked with detectors and fraction collectors providing automation. The introduction of gradient pumps resulted in quicker separations and less solvent usage. In expanded bed adsorption, a fluidized bed is used, rather than a solid phase made by a packed bed. This allows omission of initial clearing steps such as centrifugation and filtration, for culture broths or slurries of broken cells. Phosphocellulose chromatography utilizes the binding affinity of many DNA-binding proteins for phosphocellulose. The stronger a protein's interaction with DNA, the higher the salt concentration needed to elute that protein. Planar chromatography Planar chromatography is a separation technique in which the stationary phase is present as or on a plane. The plane can be a paper, serving as such or impregnated by a substance as the stationary bed (paper chromatography) or a layer of solid particles spread on a support such as a glass plate (thin-layer chromatography). Different compounds in the sample mixture travel different distances according to how strongly they interact with the stationary phase as compared to the mobile phase. The specific Retention factor (Rf) of each chemical can be used to aid in the identification of an unknown substance. Paper chromatography Paper chromatography is a technique that involves placing a small dot or line of sample solution onto a strip of chromatography paper. The paper is placed in a container with a shallow layer of solvent and sealed. As the solvent rises through the paper, it meets the sample mixture, which starts to travel up the paper with the solvent. This paper is made of cellulose, a polar substance, and the compounds within the mixture travel further if they are less polar. More polar substances bond with the cellulose paper more quickly, and therefore do not travel as far. 
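The retention factor (Rf) mentioned above for planar chromatography is the distance travelled by a spot divided by the distance travelled by the solvent front. A minimal Python sketch follows; the measured distances are made-up example values used only for illustration.

```python
def retention_factor(spot_distance_cm, solvent_front_cm):
    """Rf = distance moved by the compound / distance moved by the solvent front (0 < Rf <= 1)."""
    if not 0 < spot_distance_cm <= solvent_front_cm:
        raise ValueError("spot cannot travel farther than the solvent front")
    return spot_distance_cm / solvent_front_cm

# Example: solvent front at 8.0 cm, two spots on the same paper or TLC plate.
front = 8.0
for name, distance in [("compound A", 6.0), ("compound B", 2.4)]:
    print(f"{name}: Rf = {retention_factor(distance, front):.2f}")
# On a polar stationary phase (cellulose or silica), the less polar compound travels
# farther and therefore shows the larger Rf, consistent with the description above.
```

Because Rf depends on the stationary phase, solvent system and temperature, values are only comparable between runs performed under the same conditions.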
Thin-layer chromatography (TLC) Thin-layer chromatography (TLC) is a widely employed laboratory technique used to separate different biochemicals on the basis of their relative attractions to the stationary and mobile phases. It is similar to paper chromatography. However, instead of using a stationary phase of paper, it involves a stationary phase of a thin layer of adsorbent like silica gel, alumina, or cellulose on a flat, inert substrate. TLC is very versatile; multiple samples can be separated simultaneously on the same layer, making it very useful for screening applications such as testing drug levels and water purity. Possibility of cross-contamination is low since each separation is performed on a new layer. Compared to paper, it has the advantage of faster runs, better separations, better quantitative analysis, and the choice between different adsorbents. For even better resolution and faster separation that utilizes less solvent, high-performance TLC can be used. An older popular use had been to differentiate chromosomes by observing distance in gel (separation of was a separate step). Displacement chromatography The basic principle of displacement chromatography is: A molecule with a high affinity for the chromatography matrix (the displacer) competes effectively for binding sites, and thus displaces all molecules with lesser affinities. There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired for maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors. But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix. Operating parameters are adjusted to maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. Thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentrations. Techniques by physical state of mobile phase Gas chromatography Gas chromatography (GC), also sometimes known as gas-liquid chromatography, (GLC), is a separation technique in which the mobile phase is a gas. Gas chromatographic separation is always carried out in a column, which is typically "packed" or "capillary". Packed columns are the routine workhorses of gas chromatography, being cheaper and easier to use and often giving adequate performance. Capillary columns generally give far superior resolution and although more expensive are becoming widely used, especially for complex mixtures. Further, capillary columns can be split into three classes: porous layer open tubular (PLOT), wall-coated open tubular (WCOT) and support-coated open tubular (SCOT) columns. 
PLOT columns are unique in that the stationary phase is adsorbed to the column walls, while WCOT columns have a stationary phase that is chemically bonded to the walls. SCOT columns are a combination of the two types: they have support particles adhered to the column walls, but those particles have a liquid phase chemically bonded onto them. Both packed and capillary columns are made from non-adsorbent and chemically inert materials. Stainless steel and glass are the usual materials for packed columns and quartz or fused silica for capillary columns. Gas chromatography is based on a partition equilibrium of analyte between a solid or viscous liquid stationary phase (often a liquid silicone-based material) and a mobile gas (most often helium). The stationary phase is adhered to the inside of a small-diameter (commonly 0.18–0.53 mm inside diameter) glass or fused-silica tube (a capillary column) or a solid matrix inside a larger metal tube (a packed column). It is widely used in analytical chemistry; though the high temperatures used in GC make it unsuitable for high molecular weight biopolymers or proteins (heat denatures them), frequently encountered in biochemistry, it is well suited for use in the petrochemical, environmental monitoring and remediation, and industrial chemical fields. It is also used extensively in chemistry research.

Liquid chromatography
Liquid chromatography (LC) is a separation technique in which the mobile phase is a liquid. It can be carried out either in a column or a plane. Present-day liquid chromatography that generally utilizes very small packing particles and a relatively high pressure is referred to as high-performance liquid chromatography (HPLC). In HPLC the sample is forced by a liquid at high pressure (the mobile phase) through a column that is packed with a stationary phase composed of irregularly or spherically shaped particles, a porous monolithic layer, or a porous membrane. Monoliths are "sponge-like chromatographic media" made up of a continuous block of organic or inorganic material. HPLC is historically divided into two different sub-classes based on the polarity of the mobile and stationary phases. Methods in which the stationary phase is more polar than the mobile phase (e.g., toluene as the mobile phase, silica as the stationary phase) are termed normal phase liquid chromatography (NPLC), and the opposite (e.g., a water-methanol mixture as the mobile phase and C18 as the stationary phase) is termed reversed phase liquid chromatography (RPLC).

Supercritical fluid chromatography
Supercritical fluid chromatography is a separation technique in which the mobile phase is a fluid above and relatively close to its critical temperature and pressure.

Techniques by separation mechanism

Affinity chromatography
Affinity chromatography is based on selective non-covalent interaction between an analyte and specific molecules. It is very specific, but not very robust. It is often used in biochemistry in the purification of proteins bound to tags. These fusion proteins are labeled with compounds such as His-tags, biotin or antigens, which bind to the stationary phase specifically. After purification, these tags are usually removed and the pure protein is obtained. Affinity chromatography often utilizes a biomolecule's affinity for the cations of a metal (Zn, Cu, Fe, etc.). Columns are often manually prepared and can be designed specifically for the proteins of interest.
Traditional affinity columns are used as a preparative step to flush out unwanted biomolecules, or as a primary step in analyzing a protein with unknown physical properties. However, liquid chromatography techniques exist that do utilize affinity chromatography properties. Immobilized metal affinity chromatography (IMAC) is useful to separate the aforementioned molecules based on their relative affinity for the metal. Often these columns can be loaded with different metals to create a column with a targeted affinity.

Ion exchange chromatography
Ion exchange chromatography (usually referred to as ion chromatography) uses an ion exchange mechanism to separate analytes based on their respective charges. It is usually performed in columns but can also be useful in planar mode. Ion exchange chromatography uses a charged stationary phase to separate charged compounds including anions, cations, amino acids, peptides, and proteins. In conventional methods the stationary phase is an ion-exchange resin that carries charged functional groups that interact with oppositely charged groups of the compound to be retained. There are two types of ion exchange chromatography: cation-exchange and anion-exchange. In cation-exchange chromatography the stationary phase has a negative charge and the exchangeable ion is a cation, whereas in anion-exchange chromatography the stationary phase has a positive charge and the exchangeable ion is an anion. Ion exchange chromatography is commonly used to purify proteins using FPLC.

Size-exclusion chromatography
Size-exclusion chromatography (SEC) is also known as gel permeation chromatography (GPC) or gel filtration chromatography and separates molecules according to their size (or more accurately according to their hydrodynamic diameter or hydrodynamic volume). Smaller molecules are able to enter the pores of the media and are therefore temporarily trapped and removed from the flow of the mobile phase. The average residence time in the pores depends upon the effective size of the analyte molecules. However, molecules that are larger than the average pore size of the packing are excluded and thus suffer essentially no retention; such species are the first to be eluted. It is generally a low-resolution chromatography technique and thus it is often reserved for the final, "polishing" step of a purification. It is also useful for determining the tertiary structure and quaternary structure of purified proteins, especially since it can be carried out under native solution conditions.

Expanded bed adsorption chromatographic separation
An expanded bed chromatographic adsorption (EBA) column for a biochemical separation process comprises a pressure-equalization liquid distributor with a self-cleaning function below a porous blocking sieve plate at the bottom of the expanded bed, and an upper nozzle assembly with a backflush cleaning function at the top of the expanded bed. This arrangement gives a better distribution of the feedstock liquor added to the expanded bed and ensures that the fluid passing through the expanded bed layer displays a state of piston flow, which increases the separation efficiency of the expanded bed. Expanded-bed adsorption (EBA) chromatography is a convenient and effective technique for the capture of proteins directly from an unclarified crude sample. In EBA chromatography, the settled bed is first expanded by upward flow of equilibration buffer.
The crude feed, which is a mixture of soluble proteins, contaminants, cells, and cell debris, is then passed upward through the expanded bed. Target proteins are captured on the adsorbent, while particulates and contaminants pass through. A change to elution buffer while maintaining upward flow results in desorption of the target protein in expanded-bed mode. Alternatively, if the flow is reversed, the adsorbed particles will quickly settle and the proteins can be desorbed by an elution buffer. The mode used for elution (expanded-bed versus settled-bed) depends on the characteristics of the feed. After elution, the adsorbent is cleaned with a predefined cleaning-in-place (CIP) solution, with cleaning followed by either column regeneration (for further use) or storage.

Special techniques

Reversed-phase chromatography
Reversed-phase chromatography (RPC) is any liquid chromatography procedure in which the mobile phase is significantly more polar than the stationary phase. It is so named because in normal-phase liquid chromatography, the mobile phase is significantly less polar than the stationary phase. Hydrophobic molecules in the mobile phase tend to adsorb to the relatively hydrophobic stationary phase. Hydrophilic molecules in the mobile phase will tend to elute first. Separating columns typically comprise a C8 or C18 carbon-chain bonded to a silica particle substrate.

Hydrophobic interaction chromatography
Hydrophobic interaction chromatography (HIC) is a purification and analytical technique that separates analytes, such as proteins, based on hydrophobic interactions between the analyte and the chromatographic matrix. It can provide a non-denaturing orthogonal approach to reversed phase separation, preserving native structures and potentially protein activity. In hydrophobic interaction chromatography, the matrix material is lightly substituted with hydrophobic groups. These groups can range from methyl, ethyl, propyl, or butyl to octyl or phenyl groups. At high salt concentrations, non-polar sidechains on the surface of proteins "interact" with the hydrophobic groups; that is, both types of groups are excluded by the polar solvent (hydrophobic effects are augmented by increased ionic strength). Thus, the sample is applied to the column in a buffer which is highly polar, which drives an association of hydrophobic patches on the analyte with the stationary phase. The eluent is typically an aqueous buffer with decreasing salt concentrations, increasing concentrations of detergent (which disrupts hydrophobic interactions), or changes in pH. Of critical importance is the type of salt used, with more kosmotropic salts, as defined by the Hofmeister series, providing the most water structuring around the molecule and the resulting hydrophobic pressure. Ammonium sulfate is frequently used for this purpose. The addition of organic solvents or other less polar constituents may assist in improving resolution. In general, hydrophobic interaction chromatography (HIC) is advantageous if the sample is sensitive to pH change or to the harsh solvents typically used in other types of chromatography, but not to high salt concentrations. Commonly, it is the amount of salt in the buffer which is varied. In 2012, Müller and Franzreb described the effects of temperature on HIC using bovine serum albumin (BSA) with four different types of hydrophobic resin. The study altered temperature so as to affect the binding affinity of BSA to the matrix.
It was concluded that cycling temperature from 40 to 10 degrees Celsius would not be adequate to effectively wash all BSA from the matrix, but could be very effective if the column were to be used only a few times. Using temperature to effect elution allows laboratories to cut down on salt purchases and reduces costs. If high salt concentrations and temperature fluctuations are to be avoided, a more hydrophobic competitor can be used to displace the sample of interest and elute it. This so-called salt-independent method of HIC showed a direct isolation of human immunoglobulin G (IgG) from serum with satisfactory yield and used β-cyclodextrin as a competitor to displace IgG from the matrix. This largely opens up the possibility of using HIC with samples which are salt sensitive, since high salt concentrations can precipitate proteins.

Hydrodynamic chromatography
Hydrodynamic chromatography (HDC) is derived from the observed phenomenon that large droplets move faster than small ones. In a column, this happens because the center of mass of larger droplets is prevented from being as close to the sides of the column as smaller droplets because of their larger overall size. Larger droplets will elute first from the middle of the column while smaller droplets stick to the sides of the column and elute last. This form of chromatography is useful for separating analytes by molar mass (or molecular mass), size, shape, and structure when used in conjunction with light scattering detectors, viscometers, and refractometers. The two main types of HDC are open tube and packed column. Open tube offers rapid separation times for small particles, whereas packed column HDC can increase resolution and is better suited for particles with larger average molecular masses. HDC differs from other types of chromatography because the separation only takes place in the interstitial volume, which is the volume surrounding and in between particles in a packed column. HDC shares the same order of elution as size-exclusion chromatography (SEC), but the two processes still vary in many ways. In a study comparing the two types of separation, Isenberg, Brewer, Côté, and Striegel use both methods for polysaccharide characterization and conclude that HDC coupled with multiangle light scattering (MALS) achieves a more accurate molar mass distribution than SEC with off-line MALS, in significantly less time. This is largely due to SEC being a more destructive technique because of the pores in the column degrading the analyte during separation, which tends to impact the mass distribution. However, the main disadvantage of HDC is low resolution of analyte peaks, which makes SEC a more viable option when used with chemicals that are not easily degradable and where rapid elution is not important. HDC plays an especially important role in the field of microfluidics. The first successful apparatus for an HDC-on-a-chip system was proposed by Chmela et al. in 2002. Their design was able to achieve separations using an 80 mm long channel on the timescale of 3 minutes for particles with diameters ranging from 26 to 110 nm, but the authors expressed a need to improve the retention and dispersion parameters. In a 2010 publication by Jellema, Markesteijn, Westerweel, and Verpoorte, implementing HDC with a recirculating bidirectional flow resulted in high-resolution, size-based separation with only a 3 mm long channel.
Having such a short channel and high resolution was viewed as especially impressive considering that previous studies used channels that were 80 mm in length. For a biological application, in 2007, Huh et al. proposed a microfluidic sorting device based on HDC and gravity, which was useful for preventing potentially dangerous particles with diameters larger than 6 microns from entering the bloodstream when injecting contrast agents in ultrasounds. This study also made advances for environmental sustainability in microfluidics due to the lack of outside electronics driving the flow, which came as an advantage of using a gravity-based device.

Two-dimensional chromatography
In some cases, the selectivity provided by the use of one column can be insufficient to provide resolution of analytes in complex samples. Two-dimensional chromatography aims to increase the resolution of these peaks by using a second column with different physico-chemical (chemical classification) properties. Since the mechanism of retention on this new solid support is different from the first-dimension separation, it can be possible to separate compounds by two-dimensional chromatography that are indistinguishable by one-dimensional chromatography. Furthermore, the separation in the second dimension occurs faster than in the first dimension. An example of a two-dimensional separation is one in which the sample is spotted at one corner of a square plate, developed, air-dried, then rotated by 90° and usually redeveloped in a second solvent system. Two-dimensional chromatography can be applied to GC or LC separations. The heart-cutting approach selects a specific region of interest on the first dimension for separation, and the comprehensive approach uses all analytes in the second-dimension separation.

Simulated moving-bed chromatography
The simulated moving bed (SMB) technique is a variant of high performance liquid chromatography; it is used to separate particles and/or chemical compounds that would be difficult or impossible to resolve otherwise. This increased separation is brought about by a valve-and-column arrangement that is used to lengthen the stationary phase indefinitely. In the moving bed technique of preparative chromatography the feed entry and the analyte recovery are simultaneous and continuous, but because of practical difficulties with a continuously moving bed, the simulated moving bed technique was proposed. In the simulated moving bed technique, instead of moving the bed, the sample inlet and the analyte exit positions are moved continuously, giving the impression of a moving bed. True moving bed chromatography (TMBC) is only a theoretical concept. Its simulation, SMBC, is achieved by the use of a multiplicity of columns in series and a complex valve arrangement. This valve arrangement provides for sample and solvent feed, and analyte and waste takeoff, at appropriate locations of any column; it allows switching the sample entry position at regular intervals in one direction and the solvent entry in the opposite direction, whilst changing the analyte and waste takeoff positions appropriately as well.

Pyrolysis gas chromatography
Pyrolysis–gas chromatography–mass spectrometry is a method of chemical analysis in which the sample is heated to decomposition to produce smaller molecules that are separated by gas chromatography and detected using mass spectrometry. Pyrolysis is the thermal decomposition of materials in an inert atmosphere or a vacuum.
The sample is put into direct contact with a platinum wire, or placed in a quartz sample tube, and rapidly heated to 600–1000 °C. Depending on the application even higher temperatures are used. Three different heating techniques are used in actual pyrolyzers: Isothermal furnace, inductive heating (Curie point filament), and resistive heating using platinum filaments. Large molecules cleave at their weakest points and produce smaller, more volatile fragments. These fragments can be separated by gas chromatography. Pyrolysis GC chromatograms are typically complex because a wide range of different decomposition products is formed. The data can either be used as fingerprints to prove material identity or the GC/MS data is used to identify individual fragments to obtain structural information. To increase the volatility of polar fragments, various methylating reagents can be added to a sample before pyrolysis. Besides the usage of dedicated pyrolyzers, pyrolysis GC of solid and liquid samples can be performed directly inside Programmable Temperature Vaporizer (PTV) injectors that provide quick heating (up to 30 °C/s) and high maximum temperatures of 600–650 °C. This is sufficient for some pyrolysis applications. The main advantage is that no dedicated instrument has to be purchased and pyrolysis can be performed as part of routine GC analysis. In this case, quartz GC inlet liners have to be used. Quantitative data can be acquired, and good results of derivatization inside the PTV injector are published as well. Fast protein liquid chromatography Fast protein liquid chromatography (FPLC), is a form of liquid chromatography that is often used to analyze or purify mixtures of proteins. As in other forms of chromatography, separation is possible because the different components of a mixture have different affinities for two materials, a moving fluid (the "mobile phase") and a porous solid (the stationary phase). In FPLC the mobile phase is an aqueous solution, or "buffer". The buffer flow rate is controlled by a positive-displacement pump and is normally kept constant, while the composition of the buffer can be varied by drawing fluids in different proportions from two or more external reservoirs. The stationary phase is a resin composed of beads, usually of cross-linked agarose, packed into a cylindrical glass or plastic column. FPLC resins are available in a wide range of bead sizes and surface ligands depending on the application. Countercurrent chromatography Countercurrent chromatography (CCC) is a type of liquid-liquid chromatography, where both the stationary and mobile phases are liquids and the liquid stationary phase is held stagnant by a strong centrifugal force. Hydrodynamic countercurrent chromatography (CCC) The operating principle of CCC instrument requires a column consisting of an open tube coiled around a bobbin. The bobbin is rotated in a double-axis gyratory motion (a cardioid), which causes a variable gravity (G) field to act on the column during each rotation. This motion causes the column to see one partitioning step per revolution and components of the sample separate in the column due to their partitioning coefficient between the two immiscible liquid phases used. There are many types of CCC available today. These include HSCCC (High Speed CCC) and HPCCC (High Performance CCC). HPCCC is the latest and best-performing version of the instrumentation available currently. 
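Since the separation in countercurrent chromatography is driven by each solute's partitioning coefficient between the two immiscible liquid phases, a small numerical sketch may help before turning to centrifugal partition chromatography below. It uses the general liquid-liquid retention relationship VR = VM + KD * VS, where VM and VS are the mobile- and stationary-phase volumes held in the column; solutes with a larger KD spend more time in the stationary phase and elute later. The column volumes, solute names, and KD values are hypothetical, chosen only for illustration.

def retention_volume(k_d, v_mobile_ml, v_stationary_ml):
    """Liquid-liquid retention volume: VR = VM + KD * VS."""
    return v_mobile_ml + k_d * v_stationary_ml

# Hypothetical column: 100 mL total volume, of which 60 mL of stationary phase is retained.
v_stationary, v_mobile = 60.0, 40.0
solutes = {"solute X": 0.5, "solute Y": 1.0, "solute Z": 2.0}   # hypothetical KD values

# Solutes elute in order of increasing KD.
for name, k_d in sorted(solutes.items(), key=lambda item: item[1]):
    print(name, "elutes at about", retention_volume(k_d, v_mobile, v_stationary), "mL")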
Centrifugal partition chromatography (CPC)
In the CPC (centrifugal partition chromatography or hydrostatic countercurrent chromatography) instrument, the column consists of a series of cells interconnected by ducts attached to a rotor. This rotor rotates on its central axis, creating the centrifugal field necessary to hold the stationary phase in place. The separation process in CPC is governed solely by the partitioning of solutes between the stationary and mobile phases, a mechanism that can be described using the partition coefficients (KD) of the solutes. CPC instruments are commercially available for laboratory, pilot, and industrial-scale separations, with column sizes ranging from some 10 milliliters to 10 liters in volume.

Periodic counter-current chromatography
In contrast to countercurrent chromatography (see above), periodic counter-current chromatography (PCC) uses a solid stationary phase and only a liquid mobile phase. It is thus much more similar to conventional affinity chromatography than to countercurrent chromatography. PCC uses multiple columns, which during the loading phase are connected in line. This mode allows for overloading the first column in this series without losing product that breaks through the column before the resin is fully saturated; the breakthrough product is captured on the subsequent column(s). In the next step the columns are disconnected from one another. The first column is washed and eluted, while the other column(s) are still being loaded. Once the (initially) first column is re-equilibrated, it is re-introduced to the loading stream, but as the last column. The process then continues in a cyclic fashion.

Chiral chromatography
Chiral chromatography involves the separation of stereoisomers. In the case of enantiomers, these have no chemical or physical differences apart from being three-dimensional mirror images. To enable chiral separations to take place, either the mobile phase or the stationary phase must itself be made chiral, giving differing affinities between the analytes. Chiral chromatography HPLC columns (with a chiral stationary phase) in both normal and reversed phase are commercially available. Conventional chromatography is incapable of separating racemic mixtures of enantiomers. However, in some cases nonracemic mixtures of enantiomers may be separated unexpectedly by conventional liquid chromatography (e.g. HPLC without a chiral mobile phase or stationary phase).

Aqueous normal-phase chromatography
Aqueous normal-phase (ANP) chromatography is characterized by the elution behavior of classical normal phase mode (i.e. where the mobile phase is significantly less polar than the stationary phase) in which water is one of the mobile phase solvent system components. It is distinguished from hydrophilic interaction liquid chromatography (HILIC) in that the retention mechanism is due to adsorption rather than partitioning.

Applications
Chromatography is used in many fields including the pharmaceutical industry, the food and beverage industry, the chemical industry, forensic science, environmental analysis, and hospitals.
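Gradient elution appears at several points above: gradient pumps in flash chromatography, gradient elution in elution-mode separations, and the FPLC pump drawing buffer from two reservoirs in varying proportions. As a closing worked illustration, the following minimal sketch computes the fraction drawn from buffer B, and the resulting salt concentration, at a few time points of a linear gradient; the gradient length and salt concentrations are hypothetical values, not tied to any particular instrument or protocol.

def linear_gradient_fraction_b(t_min, start_min=0.0, end_min=20.0):
    """Fraction drawn from reservoir B during a linear gradient, clamped to [0, 1]."""
    fraction = (t_min - start_min) / (end_min - start_min)
    return max(0.0, min(1.0, fraction))

# Hypothetical buffers: A = 0 M NaCl, B = 1.0 M NaCl; 20-minute linear gradient.
salt_a, salt_b = 0.0, 1.0
for t in (0, 5, 10, 20):
    frac_b = linear_gradient_fraction_b(t)
    concentration = (1 - frac_b) * salt_a + frac_b * salt_b
    print(t, "min:", round(frac_b * 100), "% B,", round(concentration, 2), "M NaCl")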
See also
Affinity chromatography
Aqueous normal-phase chromatography
Binding selectivity
Chiral analysis
Chromatofocusing
Chromatography in blood processing
Chromatography software
Glowmatography
Multicolumn countercurrent solvent gradient purification (MCSGP)
Purnell equation
Van Deemter equation

References

External links
IUPAC Nomenclature for Chromatography
Overlapping Peaks Program – Learning by Simulations
Chromatography Videos – MIT OCW – Digital Lab Techniques Manual
Chromatography Equations Calculators – MicroSolv Technology Corporation
https://en.wikipedia.org/wiki/Cell%20biology
Cell biology (also cellular biology or cytology) is a branch of biology that studies the structure, function, and behavior of cells. All living organisms are made of cells. A cell is the basic unit of life that is responsible for the living and functioning of organisms. Cell biology is the study of the structural and functional units of cells. Cell biology encompasses both prokaryotic and eukaryotic cells and has many subtopics which may include the study of cell metabolism, cell communication, cell cycle, biochemistry, and cell composition. The study of cells is performed using several microscopy techniques, cell culture, and cell fractionation. These have allowed for and are currently being used for discoveries and research pertaining to how cells function, ultimately giving insight into understanding larger organisms. Knowing the components of cells and how cells work is fundamental to all biological sciences while also being essential for research in biomedical fields such as cancer, and other diseases. Research in cell biology is interconnected to other fields such as genetics, molecular genetics, molecular biology, medical microbiology, immunology, and cytochemistry. History Cells were first seen in 17th-century Europe with the invention of the compound microscope. In 1665, Robert Hooke referred to the building blocks of all living organisms as "cells" (published in Micrographia) after looking at a piece of cork and observing a structure reminiscent of a monastic cell; however, the cells were dead. They gave no indication to the actual overall components of a cell. A few years later, in 1674, Anton Van Leeuwenhoek was the first to analyze live cells in his examination of algae. Many years later, in 1831, Robert Brown discovered the nucleus. All of this preceded the cell theory which states that all living things are made up of cells and that cells are organisms' functional and structural units. This was ultimately concluded by plant scientist Matthias Schleiden and animal scientist Theodor Schwann in 1838, who viewed live cells in plant and animal tissue, respectively. 19 years later, Rudolf Virchow further contributed to the cell theory, adding that all cells come from the division of pre-existing cells. Viruses are not considered in cell biology – they lack the characteristics of a living cell and instead are studied in the microbiology subclass of virology. Techniques Cell biology research looks at different ways to culture and manipulate cells outside of a living body to further research in human anatomy and physiology, and to derive medications. The techniques by which cells are studied have evolved. Due to advancements in microscopy, techniques and technology have allowed scientists to hold a better understanding of the structure and function of cells. Many techniques commonly used to study cell biology are listed below: Cell culture: Utilizes rapidly growing cells on media which allows for a large amount of a specific cell type and an efficient way to study cells. Cell culture is one of the major tools used in cellular and molecular biology, providing excellent model systems for studying the normal physiology and biochemistry of cells (e.g., metabolic studies, aging), the effects of drugs and toxic compounds on the cells, and mutagenesis and carcinogenesis. It is also used in drug screening and development, and large scale manufacturing of biological compounds (e.g., vaccines, therapeutic proteins). 
Fluorescence microscopy: Fluorescent markers such as GFP are used to label a specific component of the cell. Afterwards, a certain light wavelength is used to excite the fluorescent marker, which can then be visualized.
Phase-contrast microscopy: Uses the phase shifts of light passing through a transparent specimen and represents them as brightness differences in the image.
Confocal microscopy: Combines fluorescence microscopy with imaging by focusing light and taking snapshots at successive focal planes to form a 3-D image.
Transmission electron microscopy: Involves metal staining and the passing of electrons through the cells, which are deflected upon interaction with metal. This ultimately forms an image of the components being studied.
Cytometry: The cells are placed in the machine, which uses a beam of light that is scattered by the cells according to different properties, and can therefore separate them based on size and content. Cells may also be tagged with GFP fluorescence and can be separated that way as well.
Cell fractionation: This process requires breaking up the cell using high temperature or sonication, followed by centrifugation to separate the parts of the cell, allowing them to be studied separately.

Cell types
There are two fundamental classifications of cells: prokaryotic and eukaryotic. Prokaryotic cells are distinguished from eukaryotic cells by the absence of a cell nucleus or other membrane-bound organelle. Prokaryotic cells are much smaller than eukaryotic cells, making them the smallest form of life. Prokaryotic cells include Bacteria and Archaea, and lack an enclosed cell nucleus. Eukaryotic cells are found in plants, animals, fungi, and protists. They range from 10 to 100 μm in diameter, and their DNA is contained within a membrane-bound nucleus. Eukaryotes are organisms containing eukaryotic cells. The four eukaryotic kingdoms are Animalia, Plantae, Fungi, and Protista. Prokaryotes (both Bacteria and Archaea) reproduce through binary fission. Bacteria, the most prominent type, have several different shapes, although most are spherical or rod-shaped. Bacteria can be classed as either gram-positive or gram-negative depending on the cell wall composition. Gram-positive bacteria have a thicker peptidoglycan layer than gram-negative bacteria. Bacterial structural features include a flagellum that helps the cell to move, ribosomes for the translation of RNA to protein, and a nucleoid that holds all the genetic material in a circular structure. There are many processes that occur in prokaryotic cells that allow them to survive. In prokaryotes, mRNA synthesis is initiated at a promoter sequence on the DNA template comprising two consensus sequences that recruit RNA polymerase. The prokaryotic polymerase consists of a core enzyme of four protein subunits and a σ protein that assists only with initiation. For instance, in a process termed conjugation, the fertility factor allows a bacterium to possess a pilus through which it can transmit DNA to another bacterium that lacks the F factor, permitting the transfer of resistance genes and allowing the recipient to survive in certain environments.

Structure and function

Structure of eukaryotic cells
Eukaryotic cells are composed of the following organelles:
Nucleus: The nucleus of the cell functions as the genome and genetic information storage for the cell, containing all the DNA organized in the form of chromosomes. It is surrounded by a nuclear envelope, which includes nuclear pores allowing for the transportation of proteins between the inside and outside of the nucleus.
This is also the site for replication of DNA as well as transcription of DNA to RNA. Afterwards, the RNA is modified and transported out to the cytosol to be translated into protein.
Nucleolus: This structure is within the nucleus, usually dense and spherical. It is the site of ribosomal RNA (rRNA) synthesis, which is needed for ribosomal assembly.
Endoplasmic reticulum (ER): This functions to synthesize, store, and secrete proteins to the Golgi apparatus. Structurally, the endoplasmic reticulum is a network of membranes found throughout the cell and connected to the nucleus. The membranes are slightly different from cell to cell, and a cell's function determines the size and structure of the ER.
Mitochondria: Commonly known as the powerhouse of the cell, the mitochondrion is a double-membrane-bound organelle. It functions in the production of energy, or ATP, within the cell. Specifically, this is the place where the Krebs cycle or TCA cycle for the production of NADH and FADH2 occurs. Afterwards, these products are used within the electron transport chain (ETC) and oxidative phosphorylation for the final production of ATP.
Golgi apparatus: This functions to further process, package, and secrete the proteins to their destination. The proteins contain a signal sequence that allows the Golgi apparatus to recognize and direct them to the correct place. The Golgi apparatus also produces glycoproteins and glycolipids.
Lysosome: The lysosome functions to degrade material brought in from outside the cell or old organelles. It contains many acid hydrolases, proteases, nucleases, and lipases, which break down the various molecules. Autophagy is the process of degradation through lysosomes, which occurs when a vesicle buds off from the ER and engulfs the material, then attaches to and fuses with the lysosome to allow the material to be degraded.
Ribosomes: Function to translate RNA into protein; they serve as the site of protein synthesis.
Cytoskeleton: The cytoskeleton is a structure that helps to maintain the shape and general organization of the cytoplasm. It anchors organelles within the cell and makes up the structure and stability of the cell. The cytoskeleton is composed of three principal types of protein filaments: actin filaments, intermediate filaments, and microtubules, which are held together and linked to subcellular organelles and the plasma membrane by a variety of accessory proteins.
Cell membrane: The cell membrane can be described as a phospholipid bilayer and also consists of lipids and proteins. Because the inside of the bilayer is hydrophobic, molecules that need to participate in reactions within the cell must be able to cross this membrane layer to get into the cell, via osmotic pressure, diffusion, concentration gradients, and membrane channels.
Centrioles: Function to produce spindle fibers, which are used to separate chromosomes during cell division.
Eukaryotic cells may also be composed of the following molecular components:
Chromatin: This makes up chromosomes and is a mixture of DNA with various proteins.
Cilia: They help to propel substances and can also be used for sensory purposes.

Cell metabolism
Cell metabolism is necessary for the production of energy for the cell and therefore its survival; it includes many pathways and also sustains the main cell organelles such as the nucleus, the mitochondria, and the cell membrane. For cellular respiration, once glucose is available, glycolysis occurs within the cytosol of the cell to produce pyruvate.
Pyruvate undergoes decarboxylation by the pyruvate dehydrogenase multi-enzyme complex to form acetyl-CoA, which can readily be used in the TCA cycle to produce NADH and FADH2. These products are involved in the electron transport chain to ultimately form a proton gradient across the inner mitochondrial membrane. This gradient can then drive the production of ATP during oxidative phosphorylation. Metabolism in plant cells also includes photosynthesis, which is essentially the reverse of respiration, as it ultimately produces molecules of glucose.

Cell signaling
Cell signaling or cell communication is important for cell regulation and for cells to process information from the environment and respond accordingly. Signaling can occur through direct cell contact or through endocrine, paracrine, and autocrine signaling. Direct cell-cell contact is when a receptor on a cell binds a molecule that is attached to the membrane of another cell. Endocrine signaling occurs through molecules secreted into the bloodstream. Paracrine signaling uses molecules diffusing between two cells to communicate. Autocrine signaling is a cell sending a signal to itself by secreting a molecule that binds to a receptor on its surface. Forms of communication can be through:
Ion channels: Can be of different types, such as voltage-gated or ligand-gated ion channels. They allow for the outflow and inflow of molecules and ions.
G-protein coupled receptor (GPCR): Is widely recognized to contain seven transmembrane domains. The ligand binds on the extracellular domain, and once the ligand binds, this signals a guanine nucleotide exchange factor to convert GDP to GTP and activate the G-α subunit. G-α can target other proteins such as adenylyl cyclase or phospholipase C, which ultimately produce secondary messengers such as cAMP, IP3, DAG, and calcium. These secondary messengers function to amplify signals and can target ion channels or other enzymes. One example of amplification of a signal is cAMP binding to and activating PKA by removing the regulatory subunits and releasing the catalytic subunit. The catalytic subunit has a nuclear localization sequence which prompts it to go into the nucleus and phosphorylate other proteins to either repress or activate gene activity.
Receptor tyrosine kinases: Bind growth factors, further promoting the tyrosines on the intracellular portion of the protein to cross-phosphorylate. The phosphorylated tyrosines become a landing pad for proteins containing an SH2 domain, allowing for the activation of Ras and the involvement of the MAP kinase pathway.

Growth and development

Eukaryotic cell cycle
Cells are the foundation of all organisms and are the fundamental units of life. The growth and development of cells are essential for the maintenance of the host and survival of the organism. For this process, the cell goes through the steps of the cell cycle and development, which involve cell growth, DNA replication, cell division, regeneration, and cell death. The cell cycle is divided into four distinct phases: G1, S, G2, and M. The G phases, in which the cell grows, together with the S phase (interphase) make up approximately 95% of the cycle. The proliferation of cells is instigated by progenitors. All cells start out in an identical form and can essentially become any type of cell. Cell signaling such as induction can influence nearby cells to determine the type of cell they will become. Moreover, this allows cells of the same type to aggregate and form tissues, then organs, and ultimately systems.
The G1, G2, and S phases (DNA replication, damage and repair) are considered to be the interphase portion of the cycle, while the M phase (mitosis) is the cell division portion of the cycle. Mitosis is composed of many stages, which include prophase, metaphase, anaphase, telophase, and cytokinesis, in that order. The ultimate result of mitosis is the formation of two identical daughter cells. The cell cycle is regulated at cell cycle checkpoints by a series of signaling factors and complexes such as cyclins, cyclin-dependent kinases, and p53. When the cell has completed its growth process, and if it is found to be damaged or altered, it undergoes cell death, either by apoptosis or necrosis, to eliminate the threat it could cause to the organism's survival.

Cell mortality, cell lineage immortality
The ancestry of each present-day cell presumably traces back, in an unbroken lineage, for over 3 billion years to the origin of life. It is not actually cells that are immortal but multi-generational cell lineages. The immortality of a cell lineage depends on the maintenance of cell division potential. This potential may be lost in any particular lineage because of cell damage, terminal differentiation as occurs in nerve cells, or programmed cell death (apoptosis) during development. Maintenance of cell division potential over successive generations depends on the avoidance and the accurate repair of cellular damage, particularly DNA damage. In sexual organisms, continuity of the germline depends on the effectiveness of processes for avoiding DNA damage and repairing those DNA damages that do occur. Sexual processes in eukaryotes, as well as in prokaryotes, provide an opportunity for effective repair of DNA damages in the germ line by homologous recombination.

Cell cycle phases
The cell cycle is a four-stage process that a cell goes through as it develops and divides. It includes Gap 1 (G1), synthesis (S), Gap 2 (G2), and mitosis (M). The cell either restarts the cycle from G1 or leaves the cycle through G0 after completing the cycle. The cell can progress from G0 through terminal differentiation. Finally, interphase refers to the phases of the cell cycle that occur between one mitosis and the next, and includes G1, S, and G2. Thus, the phases are:
G1 phase: the cell grows in size and its contents are replicated.
S phase: the cell replicates each of its chromosomes (46 in human cells).
G2 phase: in preparation for cell division, new organelles and proteins form.
M phase: mitosis and cytokinesis occur, resulting in two identical daughter cells.
G0 phase: the two cells enter a resting stage where they do their job without actively preparing to divide.

Pathology
The scientific branch that studies and diagnoses diseases on the cellular level is called cytopathology. Cytopathology is generally used on samples of free cells or tissue fragments, in contrast to the pathology branch of histopathology, which studies whole tissues. Cytopathology is commonly used to investigate diseases involving a wide range of body sites, often to aid in the diagnosis of cancer but also in the diagnosis of some infectious diseases and other inflammatory conditions. For example, a common application of cytopathology is the Pap smear, a screening test used to detect cervical cancer and precancerous cervical lesions that may lead to cervical cancer.

Cell cycle checkpoints and DNA damage repair system
The cell cycle is composed of a number of well-ordered, consecutive stages that result in cellular division.
The fact that cells do not begin the next stage until the last one is finished is a significant element of cell cycle regulation. Cell cycle checkpoints are characteristics that constitute an excellent monitoring strategy for accurate cell cycle progression and division. Cdks, their associated cyclin counterparts, protein kinases, and phosphatases regulate cell growth and division from one stage to another. The cell cycle is controlled by the temporal activation of Cdks, which is governed by cyclin partner interaction, phosphorylation by particular protein kinases, and de-phosphorylation by Cdc25 family phosphatases. In response to DNA damage, a cell's DNA repair reaction is a cascade of signaling pathways that leads to checkpoint engagement, regulates the repair mechanisms of DNA, alters the cell cycle, and can trigger apoptosis. Among the numerous biochemical structures and processes that detect DNA damage are ATM and ATR, which induce the DNA repair checkpoints. The cell cycle is a sequence of activities in which cell organelles are duplicated and subsequently separated into daughter cells with precision. There are major events that happen during a cell cycle. The processes that happen in the cell cycle include cell development and the replication and segregation of chromosomes. The cell cycle checkpoints are surveillance systems that keep track of the cell cycle's integrity, accuracy, and chronology. Each checkpoint serves as an alternative cell cycle endpoint, wherein the cell's parameters are examined and only when desirable characteristics are fulfilled does the cell cycle advance through the distinct steps. The cell cycle's goal is to precisely copy each organism's DNA and afterwards equally split the cell and its components between the two new cells. Four main stages occur in eukaryotes. In G1, the cell is usually active and continues to grow rapidly, while in G2, cell growth continues while protein molecules become ready for separation. These are not dormant times; they are when cells gain mass, integrate growth factor receptors, establish a replicated genome, and prepare for chromosome segregation. DNA replication is restricted to a separate synthesis phase in eukaryotes, which is also known as the S-phase. During mitosis, which is also known as the M-phase, the segregation of the chromosomes occurs. DNA, like every other molecule, is capable of undergoing a wide range of chemical reactions. Modifications in DNA's sequence, on the other hand, have a considerably bigger impact than modifications in other cellular constituents like RNAs or proteins, because DNA acts as a permanent copy of the cell genome. When erroneous nucleotides are incorporated during DNA replication, mutations can occur. The majority of DNA damage is fixed by removing the defective bases and then re-synthesizing the excised area. On the other hand, some DNA lesions can be mended by reversing the damage, which may be a more effective method of coping with common types of DNA damage. Only a few forms of DNA damage are mended in this fashion, including pyrimidine dimers caused by ultraviolet (UV) light and bases changed by the insertion of methyl or ethyl groups at the purine ring's O6 position.

Mitochondrial membrane dynamics
Mitochondria are commonly referred to as the cell's "powerhouses" because of their capacity to effectively produce ATP, which is essential to maintain cellular homeostasis and metabolism.
Moreover, researchers have gained a better knowledge of mitochondria's significance in cell biology because of the discovery of cell signaling pathways involving mitochondria, which are crucial platforms for the regulation of cell functions such as apoptosis. Their physiological adaptability is strongly linked to the ongoing reconfiguration of the cell's mitochondrial network through a range of mechanisms known as mitochondrial membrane dynamics, including endomembrane fusion and fragmentation (separation) and ultrastructural membrane remodeling. As a result, mitochondrial dynamics regulate and frequently choreograph not only metabolic but also complicated cell signaling processes such as pluripotency, proliferation, maturation, aging, and death. Conversely, post-translational alterations of the mitochondrial apparatus and the development of transmembrane contact sites between mitochondria and other structures both have the potential to link signals from diverse routes that affect mitochondrial membrane dynamics substantially. Mitochondria are wrapped by two membranes: an inner mitochondrial membrane (IMM) and an outer mitochondrial membrane (OMM), each with a distinctive function and structure, which parallels their dual role as cellular powerhouses and signaling organelles. The inner mitochondrial membrane divides the mitochondrial lumen into two parts: the inner boundary membrane, which runs parallel to the OMM, and the cristae, which are deeply twisted invaginations that give room for surface area enlargement and house the mitochondrial respiration apparatus. The outer mitochondrial membrane, on the other hand, is soft and permeable. It, therefore, acts as a foundation for cell signaling pathways to congregate, be deciphered, and be transported into mitochondria. Furthermore, the OMM connects to other cellular organelles, such as the endoplasmic reticulum (ER), lysosomes, endosomes, and the plasma membrane. Mitochondria play a wide range of roles in cell biology, which is reflected in their morphological diversity. Ever since the beginning of mitochondrial research, it has been well documented that mitochondria can have a variety of forms, with both their general and ultra-structural morphology varying greatly among cells, during the cell cycle, and in response to metabolic or cellular cues. Mitochondria can exist as independent organelles or as part of larger systems; they can also be unequally distributed in the cytosol through regulated mitochondrial transport and placement to meet the cell's localized energy requirements. Mitochondrial dynamics refers to the adaptive and variable aspect of mitochondria, including their shape and subcellular distribution.

Autophagy
Autophagy is a self-degradative mechanism that regulates energy sources during growth and reaction to dietary stress. Autophagy also cleans up after itself, clearing aggregated proteins, cleaning damaged structures including mitochondria and endoplasmic reticulum, and eradicating intracellular infections. Additionally, autophagy has antiviral and antibacterial roles within the cell, and it is involved at the beginning of innate and adaptive immune responses to viral and bacterial contamination. Some viruses include virulence proteins that prevent autophagy, while others utilize autophagy elements for intracellular development or cellular splitting. Macro autophagy, micro autophagy, and chaperone-mediated autophagy are the three basic types of autophagy.
When macro autophagy is triggered, an isolation membrane incorporates a section of the cytoplasm, generating the autophagosome, a distinctive double-membraned organelle. The autophagosome then joins the lysosome to create an autolysosome, with lysosomal enzymes degrading the components. In micro autophagy, the lysosome or vacuole engulfs a piece of the cytoplasm by invaginating or protruding the lysosomal membrane to enclose the cytosol or organelles. Chaperone-mediated autophagy (CMA) ensures protein quality by digesting oxidized and altered proteins under stressful circumstances and supplying amino acids through protein degradation. Autophagy is the primary intrinsic degradative system for peptides, fats, carbohydrates, and other cellular structures. In both physiologic and stressful situations, this cellular progression is vital for upholding the correct cellular balance. Autophagy instability leads to a variety of illness symptoms, including inflammation, biochemical disturbances, aging, and neurodegeneration, owing to its involvement in controlling cell integrity. The modification of the autophagy-lysosomal networks is a typical hallmark of many neurological and muscular illnesses. As a result, autophagy has been identified as a potential strategy for the prevention and treatment of various disorders. Many of these disorders are prevented or improved by consuming polyphenols in the diet. As a result, natural compounds with the ability to modify the autophagy mechanism are seen as a potential therapeutic option. The creation of the double membrane (phagophore), known as nucleation, is the first step in macro-autophagy. The phagophore then engulfs dysregulated polypeptides or defective organelles, with membrane contributions from the cell membrane, Golgi apparatus, endoplasmic reticulum, and mitochondria. With the completion of the autophagosome, the phagophore's enlargement comes to an end. The autophagosome combines with the lysosomal vesicles to form an autolysosome that degrades the encapsulated substances.

Notable cell biologists
Jean Baptiste Carnoy
Peter Agre
Günter Blobel
Robert Brown
Geoffrey M. Cooper
Christian de Duve
Henri Dutrochet
Robert Hooke
H. Robert Horvitz
Marc Kirschner
Anton van Leeuwenhoek
Ira Mellman
Marta Miączyńska
Peter D. Mitchell
Rudolf Virchow
Paul Nurse
George Emil Palade
Keith R. Porter
Ray Rappaport
Michael Swann
Roger Tsien
Edmund Beecher Wilson
Kenneth R. Miller
Matthias Jakob Schleiden
Theodor Schwann
Yoshinori Ohsumi
Jan Evangelista Purkyně

See also
The American Society for Cell Biology
Cell biophysics
Cell disruption
Cell physiology
Cellular adaptation
Cellular microbiology
Institute of Molecular and Cell Biology (disambiguation)
Meiomitosis
Organoid
Outline of cell biology

Notes

References
Cell and Molecular Biology by Karp, 5th Ed.

External links
Aging Cell
"Francis Harry Compton Crick (1916–2004)" by A. Andrei at the Embryo Project Encyclopedia
"Biology Resource By Professor Lin."
https://en.wikipedia.org/wiki/Chloroplast
A chloroplast is a type of organelle known as a plastid that conducts photosynthesis mostly in plant and algal cells. Chloroplasts have a high concentration of chlorophyll pigments, which capture the energy from sunlight, convert it to chemical energy, and release oxygen. The chemical energy created is then used to make sugar and other organic molecules from carbon dioxide in a process called the Calvin cycle. Chloroplasts carry out a number of other functions, including fatty acid synthesis, amino acid synthesis, and the immune response in plants. The number of chloroplasts per cell varies from one, in some unicellular algae, up to 100 in plants like Arabidopsis and wheat. Chloroplasts are highly dynamic—they circulate and are moved around within cells. Their behavior is strongly influenced by environmental factors like light color and intensity. Chloroplasts cannot be made anew by the plant cell and must be inherited by each daughter cell during cell division; this property is thought to be inherited from their ancestor, a photosynthetic cyanobacterium that was engulfed by an early eukaryotic cell. Because of their endosymbiotic origins, chloroplasts, like mitochondria, contain their own DNA, separate from the cell nucleus. With one exception (the amoeboid Paulinella chromatophora), all chloroplasts can be traced back to a single endosymbiotic event. Despite this, chloroplasts can be found in extremely diverse organisms that are not directly related to each other—a consequence of many secondary and even tertiary endosymbiotic events.

Discovery and etymology
The first definitive description of a chloroplast (Chlorophyllkörnen, "grain of chlorophyll") was given by Hugo von Mohl in 1837 as discrete bodies within the green plant cell. In 1883, Andreas Franz Wilhelm Schimper named these bodies "chloroplastids" (Chloroplastiden). In 1884, Eduard Strasburger adopted the term "chloroplasts" (Chloroplasten). The word chloroplast is derived from the Greek words chloros (χλωρός), which means green, and plastes (πλάστης), which means "the one who forms".

Endosymbiotic origin of chloroplasts
Chloroplasts are one of many types of organelles in photosynthetic eukaryotic cells. They evolved from cyanobacteria through a process called organellogenesis. Cyanobacteria are a diverse phylum of gram-negative bacteria capable of carrying out oxygenic photosynthesis. Like chloroplasts, they have thylakoids. The thylakoid membranes contain photosynthetic pigments, including chlorophyll a. This origin of chloroplasts was first suggested by the Russian biologist Konstantin Mereschkowski in 1905, after Andreas Franz Wilhelm Schimper observed in 1883 that chloroplasts closely resemble cyanobacteria. Chloroplasts are only found in plants, algae, and some species of the amoeboid Paulinella. Mitochondria are thought to have come from a similar endosymbiosis event, in which an aerobic prokaryote was engulfed.

Primary endosymbiosis
Approximately two billion years ago, a free-living cyanobacterium entered an early eukaryotic cell, either as food or as an internal parasite, but managed to escape the phagocytic vacuole it was contained in and persist inside the cell. This event is called endosymbiosis, or "cell living inside another cell with a mutual benefit for both". The external cell is commonly referred to as the host while the internal cell is called the endosymbiont.
The engulfed cyanobacterium provided an advantage to the host by providing sugar from photosynthesis. Over time, the cyanobacterium was assimilated, and many of its genes were lost or transferred to the nucleus of the host. Some of the cyanobacterial proteins were then synthesized by the host cell and imported back into the chloroplast (formerly the cyanobacterium), allowing the host to control the chloroplast. Chloroplasts which can be traced back directly to a cyanobacterial ancestor (i.e. without a subsequent endosymbiotic event) are known as primary plastids ("plastid" in this context means almost the same thing as chloroplast). Chloroplasts that can be traced back to another photosynthetic eukaryotic endosymbiont are called secondary plastids or tertiary plastids (discussed below). Whether primary chloroplasts came from a single endosymbiotic event or multiple independent engulfments across various eukaryotic lineages was long debated. It is now generally held that, with one exception (the amoeboid Paulinella chromatophora), chloroplasts arose from a single endosymbiotic event around two billion years ago, and that these chloroplasts all share a single ancestor. It has been proposed that the closest living relative of the ancestral engulfed cyanobacterium is Gloeomargarita lithophora. Separately, somewhere about 90–140 million years ago, this process happened again in the amoeboid Paulinella with a cyanobacterium in the genus Prochlorococcus. This independently evolved chloroplast is often called a chromatophore instead of a chloroplast. Chloroplasts are believed to have arisen after mitochondria, since all eukaryotes contain mitochondria, but not all have chloroplasts. This is called serial endosymbiosis—where an early eukaryote engulfed the mitochondrion ancestor, and descendants of it then engulfed the chloroplast ancestor, creating a cell with both chloroplasts and mitochondria.

Secondary and tertiary endosymbiosis
Many other organisms obtained chloroplasts from the primary chloroplast lineages through secondary endosymbiosis—engulfing a red or green alga with a primary chloroplast. These chloroplasts are known as secondary plastids. As a result of the secondary endosymbiotic event, secondary chloroplasts have additional membranes outside of the original two in primary chloroplasts. In secondary plastids, typically only the chloroplast, and sometimes its cell membrane and nucleus, remain, forming a chloroplast with three or four membranes—the two cyanobacterial membranes, sometimes the eaten alga's cell membrane, and the phagosomal vacuole from the host's cell membrane. The genes in the phagocytosed eukaryote's nucleus are often transferred to the secondary host's nucleus. Cryptomonads and chlorarachniophytes retain the phagocytosed eukaryote's nucleus, an object called a nucleomorph, located between the second and third membranes of the chloroplast. All secondary chloroplasts come from green and red algae. No secondary chloroplasts from glaucophytes have been observed, probably because glaucophytes are relatively rare in nature, making them less likely to have been taken up by another eukaryote. Still other organisms, including the dinoflagellates Karlodinium and Karenia, obtained chloroplasts by engulfing an organism with a secondary plastid. These are called tertiary plastids.
Primary chloroplast lineages

All primary chloroplasts belong to one of four chloroplast lineages—the glaucophyte chloroplast lineage, the rhodophyte ("red") chloroplast lineage, the chloroplastidan ("green") chloroplast lineage, and the amoeboid Paulinella chromatophora lineage. The glaucophyte, rhodophyte, and chloroplastidan lineages are all descended from the same ancestral endosymbiotic event and are all within the group Archaeplastida.

Glaucophyte chloroplasts

The glaucophyte chloroplast group is the smallest of the three Archaeplastida chloroplast lineages, as there are only 25 described glaucophyte species. Glaucophytes diverged first, before the red and green chloroplast lineages diverged. Because of this, they are sometimes considered intermediates between cyanobacteria and the red and green chloroplasts. This early divergence is supported by both phylogenetic studies and physical features present in glaucophyte chloroplasts and cyanobacteria, but not in the red and green chloroplasts. First, glaucophyte chloroplasts have a peptidoglycan wall, a type of cell wall otherwise found only in bacteria (including cyanobacteria). Second, glaucophyte chloroplasts contain concentric unstacked thylakoids which surround a carboxysome—an icosahedral structure that contains the enzyme RuBisCO, responsible for carbon fixation. Third, starch created by the chloroplast is collected outside the chloroplast. Additionally, like cyanobacteria, both glaucophyte and rhodophyte thylakoids are studded with light-collecting structures called phycobilisomes.

Rhodophyta (red chloroplasts)

The rhodophyte, or red algae, group is a large and diverse lineage. Rhodophyte chloroplasts are also called rhodoplasts, literally "red chloroplasts". Rhodoplasts have a double membrane with an intermembrane space and phycobilin pigments organized into phycobilisomes on the thylakoid membranes, preventing their thylakoids from stacking. Some contain pyrenoids. Rhodoplasts have chlorophyll a and phycobilins for photosynthetic pigments; the phycobilin phycoerythrin is responsible for giving many red algae their distinctive red color. However, since they also contain the blue-green chlorophyll a and other pigments, many are reddish to purple from the combination. The red phycoerythrin pigment is an adaptation to help red algae catch more sunlight in deep water—as such, some red algae that live in shallow water have less phycoerythrin in their rhodoplasts, and can appear more greenish. Rhodoplasts synthesize a form of starch called floridean starch, which collects into granules outside the rhodoplast, in the cytoplasm of the red alga.

Chloroplastida (green chloroplasts)

The chloroplastida group is another large, highly diverse lineage that includes both green algae and land plants. This group is also called Viridiplantae, which includes two core clades—Chlorophyta and Streptophyta. Most green chloroplasts are green in color, though some are not, due to accessory pigments that override the green from chlorophylls, such as in the resting cells of Haematococcus pluvialis. Green chloroplasts differ from glaucophyte and red algal chloroplasts in that they have lost their phycobilisomes, and contain chlorophyll b. They have also lost the peptidoglycan wall between their double membrane, leaving an intermembrane space. Some plants have kept some genes required for the synthesis of peptidoglycan, but have repurposed them for use in chloroplast division instead. Chloroplastida lineages also keep their starch inside their chloroplasts.
In plants and some algae, the chloroplast thylakoids are arranged in grana stacks. Some green algal chloroplasts, as well as those of hornworts, contain a structure called a pyrenoid, which concentrates RuBisCO and CO2 in the chloroplast, functionally similar to the glaucophyte carboxysome. There are some lineages of non-photosynthetic parasitic green algae that have lost their chloroplasts entirely, such as Prototheca, or have no chloroplast while retaining the separate chloroplast genome, as in Helicosporidium. Morphological and physiological similarities, as well as phylogenetics, confirm that these are lineages that ancestrally had chloroplasts but have since lost them.

Paulinella chromatophora

The photosynthetic amoeboids in the genus Paulinella—P. chromatophora, P. micropora, and the marine P. longichromatophora—have the only known independently evolved chloroplast, often called a chromatophore. While all other chloroplasts originate from a single ancient endosymbiotic event, Paulinella independently acquired an endosymbiotic cyanobacterium from the genus Synechococcus around 90–140 million years ago. Each Paulinella cell contains one or two sausage-shaped chloroplasts; they were first described in 1894 by the German biologist Robert Lauterborn. The chromatophore is highly reduced compared to its free-living cyanobacterial relatives and has limited functions. For example, it has a genome of about 1 million base pairs, one third the size of Synechococcus genomes, and only encodes around 850 proteins. However, this is still much larger than other chloroplast genomes, which are typically around 150,000 base pairs. Chromatophores have also transferred much less of their DNA to the nucleus of their hosts. About 0.3–0.8% of the nuclear DNA in Paulinella is from the chromatophore, compared with 11–14% from the chloroplast in plants. Similar to other chloroplasts, Paulinella provides proteins to the chromatophore using a specific targeting sequence. Because chromatophores are much younger than canonical chloroplasts, Paulinella chromatophora is studied to understand how early chloroplasts evolved.

Secondary and tertiary chloroplast lineages

Green algal derived chloroplasts

Green algae have been taken up by other groups in three or four separate events. Secondary chloroplasts derived from green algae are found primarily in the euglenids and chlorarachniophytes. They are also found in one lineage of dinoflagellates, and possibly in the ancestor of the CASH lineage (cryptomonads, alveolates, stramenopiles, and haptophytes). Many green algal derived chloroplasts contain pyrenoids, but unlike the chloroplasts in their green algal ancestors, storage product collects in granules outside the chloroplast.

Euglenophytes

The euglenophytes are a group of common flagellated protists that contain chloroplasts derived from a green alga. Euglenophytes are the only group outside Diaphoretickes that have chloroplasts without performing kleptoplasty. Euglenophyte chloroplasts have three membranes. It is thought that the membrane of the primary endosymbiont host (i.e. the green alga's cell membrane) was lost, leaving the two cyanobacterial membranes and the secondary host's phagosomal membrane. Euglenophyte chloroplasts have a pyrenoid and thylakoids stacked in groups of three. The carbon fixed through photosynthesis is stored in the form of paramylon, which is contained in membrane-bound granules in the cytoplasm of the euglenophyte.
Chlorarachniophytes

Chlorarachniophytes are a rare group of organisms that also contain chloroplasts derived from green algae, though their story is more complicated than that of the euglenophytes. The ancestor of chlorarachniophytes is thought to have been a eukaryote with a red algal derived chloroplast. It is then thought to have lost its first red algal chloroplast, and later engulfed a green alga, giving it its second, green algal derived chloroplast. Chlorarachniophyte chloroplasts are bounded by four membranes, except near the cell membrane, where the chloroplast membranes fuse into a double membrane. Their thylakoids are arranged in loose stacks of three. Chlorarachniophytes have a form of polysaccharide called chrysolaminarin, which they store in the cytoplasm, often collected around the chloroplast pyrenoid, which bulges into the cytoplasm. Chlorarachniophyte chloroplasts are notable because the green alga they are derived from has not been completely broken down—its nucleus still persists as a nucleomorph found between the second and third chloroplast membranes—the periplastid space, which corresponds to the green alga's cytoplasm.

Prasinophyte-derived chloroplast

Dinoflagellates in the genus Lepidodinium have lost their original peridinin chloroplast and replaced it with a green algal derived chloroplast (more specifically, a prasinophyte). Lepidodinium is the only dinoflagellate that has a chloroplast that is not from the rhodoplast lineage. The chloroplast is surrounded by two membranes and has no nucleomorph—all the nucleomorph genes have been transferred to the dinophyte nucleus. The endosymbiotic event that led to this chloroplast was serial secondary endosymbiosis rather than tertiary endosymbiosis—the endosymbiont was a green alga containing a primary chloroplast (making a secondary chloroplast).

Red algal derived chloroplasts

Secondary chloroplasts derived from red algae appear to have been taken up only once, after which they diversified into a large group called chromalveolates. Today they are found in the haptophytes, cryptomonads, heterokonts, dinoflagellates, and apicomplexans (the CASH lineage). Red algal secondary chloroplasts usually contain chlorophyll c and are surrounded by four membranes.

Cryptophytes

Cryptophytes, or cryptomonads, are a group of algae that contain a red algal derived chloroplast. Cryptophyte chloroplasts contain a nucleomorph that superficially resembles that of the chlorarachniophytes. Cryptophyte chloroplasts have four membranes. The outermost membrane is continuous with the rough endoplasmic reticulum. They synthesize ordinary starch, which is stored in granules found in the periplastid space—outside the original double membrane, in the place that corresponds to the ancestral red alga's cytoplasm. Inside cryptophyte chloroplasts is a pyrenoid and thylakoids in stacks of two. Cryptophyte chloroplasts do not have phycobilisomes, but they do have phycobilin pigments, which they keep in the thylakoid space rather than anchored on the outside of their thylakoid membranes. Cryptophytes may have played a key role in the spreading of red algal based chloroplasts.

Haptophytes

Haptophytes are similar and closely related to cryptophytes and heterokontophytes. Their chloroplasts lack a nucleomorph, their thylakoids are in stacks of three, and they synthesize chrysolaminarin sugar, which is stored in granules completely outside of the chloroplast, in the cytoplasm of the haptophyte.
Stramenopiles (heterokontophytes)

The stramenopiles, also known as heterokontophytes, are a very large and diverse group of eukaryotes. The group includes Ochrophyta—which includes diatoms, brown algae (seaweeds), and golden algae (chrysophytes)—and Xanthophyceae (also called yellow-green algae). Heterokont chloroplasts are very similar to haptophyte chloroplasts. They have a pyrenoid, triplet thylakoids, and, with some exceptions, a four-layered plastid envelope whose outermost membrane is connected to the endoplasmic reticulum. Like haptophytes, stramenopiles store sugar in chrysolaminarin granules in the cytoplasm. Stramenopile chloroplasts contain chlorophyll a and, with a few exceptions, chlorophyll c. They also have carotenoids, which give them their many colors.

Apicomplexans, chromerids, and dinophytes

The alveolates are a major clade of unicellular eukaryotes with both autotrophic and heterotrophic members. Many members contain a red algal derived plastid. One notable characteristic of this diverse group is the frequent loss of photosynthesis. However, a majority of these heterotrophs continue to possess a non-photosynthetic plastid.

Apicomplexans

Apicomplexans are a group of alveolates. Like the helicosporidia, they are parasitic and have a nonphotosynthetic chloroplast. They were once thought to be related to the helicosporidia, but it is now known that the helicosporidia are green algae rather than part of the CASH lineage. The apicomplexans include Plasmodium, the malaria parasite. Many apicomplexans keep a vestigial red algal derived chloroplast called an apicoplast, which they inherited from their ancestors. Apicoplasts have lost all photosynthetic function, and contain no photosynthetic pigments or true thylakoids. They are bounded by four membranes, but the membranes are not connected to the endoplasmic reticulum. Other apicomplexans, like Cryptosporidium, have lost the chloroplast completely. Apicomplexans store their energy in amylopectin granules that are located in their cytoplasm, even though they are nonphotosynthetic. The fact that apicomplexans still keep their nonphotosynthetic chloroplast around demonstrates how the chloroplast carries out important functions other than photosynthesis. Plant chloroplasts provide plant cells with many important things besides sugar, and apicoplasts are no different—they synthesize fatty acids, isopentenyl pyrophosphate, and iron-sulfur clusters, and carry out part of the heme pathway. The most important apicoplast function is isopentenyl pyrophosphate synthesis—in fact, apicomplexans die when something interferes with this apicoplast function, and when apicomplexans are grown in an isopentenyl pyrophosphate-rich medium, they discard the organelle.

Chromerids

The Chromerida is a newly discovered group of algae from Australian corals which comprises some close photosynthetic relatives of the apicomplexans. The first member, Chromera velia, was discovered and first isolated in 2001. The discovery of Chromera velia, which has a structure similar to that of the apicomplexans, provides an important link in the evolutionary history of the apicomplexans and dinophytes. Their plastids have four membranes, lack chlorophyll c, and use the type II form of RuBisCO obtained from a horizontal transfer event.

Dinoflagellates

The dinoflagellates are yet another very large and diverse group, around half of which are at least partially photosynthetic (i.e. mixotrophic). Dinoflagellate chloroplasts have a relatively complex history.
Most dinoflagellate chloroplasts are secondary red algal derived chloroplasts. Many dinoflagellates have lost the chloroplast (becoming nonphotosynthetic), and some of these have replaced it through tertiary endosymbiosis. Others replaced their original chloroplast with a green algal derived chloroplast. The peridinin chloroplast is thought to be the dinophytes' "original" chloroplast, which has been lost, reduced, replaced, or has company in several other dinophyte lineages. The most common dinophyte chloroplast is the peridinin-type chloroplast, characterized by the carotenoid pigment peridinin, along with chlorophyll a and chlorophyll c2. Peridinin is not found in any other group of chloroplasts. The peridinin chloroplast is bounded by three membranes (occasionally two), having lost the red algal endosymbiont's original cell membrane. The outermost membrane is not connected to the endoplasmic reticulum. Peridinin chloroplasts contain a pyrenoid and have triplet-stacked thylakoids. Starch is found outside the chloroplast. Peridinin chloroplasts also have DNA that is highly reduced and fragmented into many small circles. Most of the genome has migrated to the nucleus, and only critical photosynthesis-related genes remain in the chloroplast. Most dinophyte chloroplasts contain form II RuBisCO, at least the photosynthetic pigments chlorophyll a, chlorophyll c2, and beta-carotene, and at least one dinophyte-unique xanthophyll (peridinin, dinoxanthin, or diadinoxanthin), giving many a golden-brown color. All dinophytes store starch in their cytoplasm, and most have chloroplasts with thylakoids arranged in stacks of three.

Tertiary chloroplasts (haptophyte-derived)

The fucoxanthin dinophyte lineages (including Karlodinium and Karenia) lost their original red algal derived chloroplast and replaced it with a new chloroplast derived from a haptophyte endosymbiont, making these tertiary plastids. Karlodinium and Karenia probably took up different haptophytes. Because the haptophyte chloroplast has four membranes, tertiary endosymbiosis would be expected to create a six-membraned chloroplast, adding the haptophyte's cell membrane and the dinophyte's phagosomal vacuole. However, the haptophyte was heavily reduced, stripped of a few membranes and its nucleus, leaving only its chloroplast (with its original double membrane), and possibly one or two additional membranes around it. Fucoxanthin-containing chloroplasts are characterized by having the pigment fucoxanthin (actually 19′-hexanoyloxy-fucoxanthin and/or 19′-butanoyloxy-fucoxanthin) and no peridinin. Fucoxanthin is also found in haptophyte chloroplasts, providing evidence of ancestry.

Dinotoms (diatom-derived dinophyte chloroplasts)

Some dinophytes, like Kryptoperidinium and Durinskia, have a diatom (heterokontophyte)-derived chloroplast. These chloroplasts are bounded by up to five membranes (depending on whether the entire diatom endosymbiont is counted as the chloroplast, or just the red algal derived chloroplast inside it). The diatom endosymbiont has been reduced relatively little—it still retains its original mitochondria, and has endoplasmic reticulum, ribosomes, a nucleus, and of course, red algal derived chloroplasts—practically a complete cell, all inside the host's endoplasmic reticulum lumen. However, the diatom endosymbiont cannot store its own food—its storage polysaccharide is found in granules in the dinophyte host's cytoplasm instead.
The diatom endosymbiont's nucleus is present, but it probably cannot be called a nucleomorph because it shows no sign of genome reduction, and might even have been expanded. Diatoms have been engulfed by dinoflagellates at least three times. The diatom endosymbiont is bounded by a single membrane; inside it are chloroplasts with four membranes. Like the diatom endosymbiont's diatom ancestor, the chloroplasts have triplet thylakoids and pyrenoids. In some of these genera, the diatom endosymbiont's chloroplasts are not the only chloroplasts in the dinophyte. The original three-membraned peridinin chloroplast is still present, converted into an eyespot.

Kleptoplasty

In some groups of mixotrophic protists, like some dinoflagellates (e.g. Dinophysis), chloroplasts are separated from a captured alga and used temporarily. These klepto chloroplasts may have a lifetime of only a few days and are then replaced.

Cryptophyte-derived dinophyte chloroplast

Members of the genus Dinophysis have a phycobilin-containing chloroplast taken from a cryptophyte. However, the cryptophyte is not an endosymbiont—only the chloroplast seems to have been taken, and the chloroplast has been stripped of its nucleomorph and outermost two membranes, leaving just a two-membraned chloroplast. Cryptophyte chloroplasts require their nucleomorph to maintain themselves, and Dinophysis species cannot survive when grown alone in cell culture, so it is possible (but not confirmed) that the Dinophysis chloroplast is a kleptoplast—if so, Dinophysis chloroplasts wear out and Dinophysis species must continually engulf cryptophytes to obtain new chloroplasts to replace the old ones.

Chloroplast DNA

Chloroplasts, like other endosymbiotic organelles, contain a genome separate from that in the cell nucleus. Chloroplast DNA (cpDNA) was first identified biochemically in 1959, and its existence was confirmed by electron microscopy in 1962. The discoveries that the chloroplast contains ribosomes and performs protein synthesis revealed that the chloroplast is genetically semi-autonomous. Chloroplast DNA was first sequenced in 1986. Since then, hundreds of chloroplast genomes from various species have been sequenced, but they are mostly those of land plants and green algae—glaucophytes, red algae, and other algal groups are extremely underrepresented, potentially introducing some bias in views of "typical" chloroplast DNA structure and content.

Molecular structure

With few exceptions, chloroplasts have their entire chloroplast genome combined into a single large circular DNA molecule, typically 120,000–170,000 base pairs long and with a mass of about 80–130 million daltons. While chloroplast genomes can almost always be assembled into a circular map, the physical DNA molecules inside cells take on a variety of linear and branching forms. New chloroplasts may contain up to 100 copies of their genome, though the number of copies decreases to about 15–20 as the chloroplasts age. Chloroplast DNA is usually condensed into nucleoids, which can contain multiple copies of the chloroplast genome. Many nucleoids can be found in each chloroplast. In primitive red algae, the chloroplast DNA nucleoids are clustered in the center of the chloroplast, while in green plants and green algae, the nucleoids are dispersed throughout the stroma. Chloroplast DNA is not associated with true histones, proteins that are used to pack DNA molecules tightly in eukaryote nuclei. In red algae, however, similar proteins tightly pack each chloroplast DNA ring into a nucleoid.
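As a rough consistency check (using the common approximation of about 650 daltons per base pair, a figure not given in this article), a mid-sized chloroplast genome of roughly 150,000 base pairs works out to a mass on the order of 100 million daltons, in line with the 80–130 million dalton range quoted above:

\[ 1.5 \times 10^{5}\ \text{bp} \times 650\ \tfrac{\text{Da}}{\text{bp}} \approx 1 \times 10^{8}\ \text{Da} \approx 100\ \text{million daltons} \]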
Many chloroplast genomes contain two inverted repeats, which separate a long single copy section (LSC) from a short single copy section (SSC). A given pair of inverted repeats is rarely identical, but the two repeats are always very similar to each other, apparently resulting from concerted evolution. The inverted repeats vary widely in length, ranging from 4,000 to 25,000 base pairs each and containing as few as four or as many as over 150 genes. The inverted repeat regions are highly conserved in land plants, and accumulate few mutations. Similar inverted repeats exist in the genomes of cyanobacteria and the other two chloroplast lineages (Glaucophyta and Rhodophyceae), suggesting that they predate the chloroplast. Some chloroplast genomes have since lost or flipped the inverted repeats (making them direct repeats). It is possible that the inverted repeats help stabilize the rest of the chloroplast genome, as chloroplast genomes which have lost some of the inverted repeat segments tend to become rearranged more often.

DNA repair and replication

In chloroplasts of the moss Physcomitrella patens, the DNA mismatch repair protein Msh1 interacts with the recombinational repair proteins RecA and RecG to maintain chloroplast genome stability. In chloroplasts of the plant Arabidopsis thaliana, the RecA protein maintains the integrity of the chloroplast's DNA by a process that likely involves the recombinational repair of DNA damage. The mechanism for chloroplast DNA (cpDNA) replication has not been conclusively determined, but two main models have been proposed. Scientists have attempted to observe chloroplast replication via electron microscopy since the 1970s. The results of the microscopy experiments led to the idea that chloroplast DNA replicates using a double displacement loop (D-loop). As the D-loop moves through the circular DNA, it adopts a theta intermediary form, also known as a Cairns replication intermediate, and completes replication with a rolling circle mechanism. Replication starts at specific points of origin. Multiple replication forks open up, allowing the replication machinery to copy the DNA. As replication continues, the forks grow and eventually converge. The new cpDNA structures separate, creating daughter cpDNA chromosomes. In addition to the early microscopy experiments, this model is also supported by the amounts of deamination seen in cpDNA. Deamination, the loss of an amino group from a base, is a mutation that often results in base changes. When adenine is deaminated, it becomes hypoxanthine. Hypoxanthine can bind to cytosine, and when that base pair is replicated, it becomes a G–C pair (thus, an A → G base change). In cpDNA, there are several A → G deamination gradients. DNA becomes susceptible to deamination events when it is single stranded. When replication forks form, the strand not being copied is single stranded, and thus at risk for A → G deamination. Therefore, gradients in deamination indicate that replication forks were most likely present, along with the direction in which they initially opened (the highest gradient is most likely nearest the start site because it was single stranded for the longest amount of time). This mechanism is still the leading theory today; however, a second theory suggests that most cpDNA is actually linear and replicates through homologous recombination. It further contends that only a minority of the genetic material is kept in circular chromosomes while the rest is in branched, linear, or other complex structures.
A competing model for cpDNA replication asserts that most cpDNA is linear and participates in homologous recombination and replication structures similar to the linear and circular DNA structures of bacteriophage T4. It has been established that some plants have linear cpDNA, such as maize, and that more species still contain complex structures that scientists do not yet understand. When the original experiments on cpDNA were performed, scientists did notice linear structures; however, they attributed these linear forms to broken circles. If the branched and complex structures seen in cpDNA experiments are real and not artifacts of concatenated circular DNA or broken circles, then a D-loop mechanism of replication is insufficient to explain how those structures would replicate. At the same time, homologous recombination does not explain the multiple A → G gradients seen in plastomes. Because of the failure to explain the deamination gradient, as well as the numerous plant species that have been shown to have circular cpDNA, the predominant theory continues to hold that most cpDNA is circular and most likely replicates via a D-loop mechanism.

Gene content and protein synthesis

The ancestral cyanobacterium that led to chloroplasts probably had a genome that contained over 3000 genes, but only approximately 100 genes remain in contemporary chloroplast genomes. These genes code for a variety of things, mostly related to the protein synthesis machinery and to photosynthesis. As in prokaryotes, genes in chloroplast DNA are organized into operons. Unlike prokaryotic DNA molecules, chloroplast DNA molecules contain introns (plant mitochondrial DNAs do too, but not human mtDNAs). Among land plants, the contents of the chloroplast genome are fairly similar.

Chloroplast genome reduction and gene transfer

Over time, many parts of the chloroplast genome were transferred to the nuclear genome of the host, a process called endosymbiotic gene transfer. As a result, the chloroplast genome is heavily reduced compared to that of free-living cyanobacteria. Chloroplasts may contain 60–100 genes, whereas cyanobacteria often have more than 1500 genes in their genome. Recently, a plastid without a genome was found, demonstrating that chloroplasts can lose their genome during the endosymbiotic gene transfer process. Endosymbiotic gene transfer is how we know about the lost chloroplasts in many CASH lineages. Even if a chloroplast is eventually lost, the genes it donated to the former host's nucleus persist, providing evidence for the lost chloroplast's existence. For example, while diatoms (a heterokontophyte) now have a red algal derived chloroplast, the presence of many green algal genes in the diatom nucleus provides evidence that the diatom ancestor had a green algal derived chloroplast at some point, which was subsequently replaced by the red chloroplast. In land plants, some 11–14% of the DNA in their nuclei can be traced back to the chloroplast, up to 18% in Arabidopsis, corresponding to about 4,500 protein-coding genes. There have been a few recent transfers of genes from the chloroplast DNA to the nuclear genome in land plants. Of the approximately 3000 proteins found in chloroplasts, some 95% are encoded by nuclear genes. Many of the chloroplast's protein complexes consist of subunits from both the chloroplast genome and the host's nuclear genome. As a result, protein synthesis must be coordinated between the chloroplast and the nucleus.
The chloroplast is mostly under nuclear control, though chloroplasts can also give out signals regulating gene expression in the nucleus, called retrograde signaling. Recent research indicates that parts of the retrograde signaling network once considered characteristic of land plants had already emerged in an algal progenitor, integrating into co-expressed cohorts of genes in the closest algal relatives of land plants.

Protein synthesis

Protein synthesis within chloroplasts relies on two RNA polymerases. One is encoded by the chloroplast DNA; the other is of nuclear origin. The two RNA polymerases may recognize and bind to different kinds of promoters within the chloroplast genome. The ribosomes in chloroplasts are similar to bacterial ribosomes.

Protein targeting and import

Because so many chloroplast genes have been moved to the nucleus, many proteins that would originally have been translated in the chloroplast are now synthesized in the cytoplasm of the plant cell. These proteins must be directed back to the chloroplast and imported through at least two chloroplast membranes. Curiously, around half of the protein products of transferred genes are not even targeted back to the chloroplast. Many became exaptations, taking on new functions like participating in cell division, protein routing, and even disease resistance. A few chloroplast genes found new homes in the mitochondrial genome—most became nonfunctional pseudogenes, though a few tRNA genes still work in the mitochondrion. Some transferred chloroplast DNA protein products are directed to the secretory pathway—many secondary plastids are bounded by an outermost membrane derived from the host's cell membrane and are therefore topologically outside of the cell, since reaching the chloroplast from the cytosol requires crossing the cell membrane, which amounts to entering the extracellular space. In those cases, chloroplast-targeted proteins do initially travel along the secretory pathway. Because the cell acquiring a chloroplast already had mitochondria (and peroxisomes, and a cell membrane for secretion), the new chloroplast host had to develop a unique protein targeting system to avoid having chloroplast proteins sent to the wrong organelle. In most, but not all cases, nuclear-encoded chloroplast proteins are translated with a cleavable transit peptide that is added to the N-terminus of the protein precursor. Sometimes the transit sequence is found on the C-terminus of the protein, or within the functional part of the protein.

Transport proteins and membrane translocons

After a chloroplast polypeptide is synthesized on a ribosome in the cytosol, an enzyme specific to chloroplast proteins phosphorylates, or adds a phosphate group to, many (but not all) of them within their transit sequences. Phosphorylation helps many proteins bind the polypeptide, keeping it from folding prematurely. This is important because it prevents chloroplast proteins from assuming their active form and carrying out their chloroplast functions in the wrong place—the cytosol. At the same time, they have to keep just enough shape so that they can be recognized by the chloroplast. These proteins also help the polypeptide get imported into the chloroplast. From here, chloroplast proteins bound for the stroma must pass through two protein complexes—the TOC complex, or translocon on the outer chloroplast membrane, and the TIC complex, or translocon on the inner chloroplast membrane.
Chloroplast polypeptide chains probably often travel through the two complexes at the same time, but the TIC complex can also retrieve preproteins lost in the intermembrane space.

Structure

In land plants, chloroplasts are generally lens-shaped, 3–10 μm in diameter and 1–3 μm thick. Corn seedling chloroplasts are about 20 μm³ in volume. Greater diversity in chloroplast shapes exists among the algae, which often contain a single chloroplast that can be shaped like a net (e.g., Oedogonium), a cup (e.g., Chlamydomonas), a ribbon-like spiral around the edges of the cell (e.g., Spirogyra), or slightly twisted bands at the cell edges (e.g., Sirogonium). Some algae have two chloroplasts in each cell; they are star-shaped in Zygnema, or may follow the shape of half the cell in the order Desmidiales. In some algae, the chloroplast takes up most of the cell, with pockets for the nucleus and other organelles; for example, some species of Chlorella have a cup-shaped chloroplast that occupies much of the cell. All chloroplasts have at least three membrane systems—the outer chloroplast membrane, the inner chloroplast membrane, and the thylakoid system. The two innermost lipid-bilayer membranes that surround all chloroplasts correspond to the outer and inner membranes of the ancestral cyanobacterium's gram-negative cell wall, and not to the phagosomal membrane from the host, which was probably lost. Chloroplasts that are the product of secondary endosymbiosis may have additional membranes surrounding these three. Inside the outer and inner chloroplast membranes is the chloroplast stroma, a semi-gel-like fluid that makes up much of a chloroplast's volume, and in which the thylakoid system floats.

There are some common misconceptions about the outer and inner chloroplast membranes. The fact that chloroplasts are surrounded by a double membrane is often cited as evidence that they are the descendants of endosymbiotic cyanobacteria. This is often interpreted as meaning that the outer chloroplast membrane is the product of the host's cell membrane infolding to form a vesicle to surround the ancestral cyanobacterium—which is not true; both chloroplast membranes are homologous to the cyanobacterium's original double membranes. The chloroplast double membrane is also often compared to the mitochondrial double membrane. This is not a valid comparison—the inner mitochondrial membrane is used to run proton pumps and carry out oxidative phosphorylation to generate ATP. The only chloroplast structure that can be considered analogous to it is the internal thylakoid system. Even so, in terms of "in-out", the direction of chloroplast H+ ion flow is opposite to that in oxidative phosphorylation in mitochondria. In addition, in terms of function, the inner chloroplast membrane, which regulates metabolite passage and synthesizes some materials, has no counterpart in the mitochondrion.

Outer chloroplast membrane

The outer chloroplast membrane is a semi-porous membrane that small molecules and ions can easily diffuse across. However, it is not permeable to larger proteins, so chloroplast polypeptides being synthesized in the cell cytoplasm must be transported across the outer chloroplast membrane by the TOC complex, or translocon on the outer chloroplast membrane. The chloroplast membranes sometimes protrude out into the cytoplasm, forming a stromule, or stroma-containing tubule.
Stromules are very rare in chloroplasts, and are much more common in other plastids such as chromoplasts and amyloplasts in petals and roots, respectively. They may exist to increase the chloroplast's surface area for cross-membrane transport, because they are often branched and tangled with the endoplasmic reticulum. When they were first observed in 1962, some plant biologists dismissed the structures as artifactual, claiming that stromules were just oddly shaped chloroplasts with constricted regions or dividing chloroplasts. However, there is a growing body of evidence that stromules are functional, integral features of plant cell plastids, not merely artifacts.

Intermembrane space and peptidoglycan wall

Usually, a thin intermembrane space about 10–20 nanometers thick exists between the outer and inner chloroplast membranes. Glaucophyte algal chloroplasts have a peptidoglycan layer between the chloroplast membranes. It corresponds to the peptidoglycan cell wall of their cyanobacterial ancestors, which is located between their two cell membranes. These chloroplasts are called muroplasts (from the Latin "murus", meaning "wall"). Other chloroplasts were long assumed to have lost the cyanobacterial wall, leaving an intermembrane space between the two chloroplast envelope membranes, but a peptidoglycan layer has since also been found in the chloroplasts of mosses, lycophytes, and ferns.

Inner chloroplast membrane

The inner chloroplast membrane borders the stroma and regulates passage of materials in and out of the chloroplast. After passing through the TOC complex in the outer chloroplast membrane, polypeptides must pass through the TIC complex (translocon on the inner chloroplast membrane), which is located in the inner chloroplast membrane. In addition to regulating the passage of materials, the inner chloroplast membrane is where fatty acids, lipids, and carotenoids are synthesized.

Peripheral reticulum

Some chloroplasts contain a structure called the chloroplast peripheral reticulum. It is often found in the chloroplasts of C4 plants, though it has also been found in some C3 angiosperms, and even some gymnosperms. The chloroplast peripheral reticulum consists of a maze of membranous tubes and vesicles continuous with the inner chloroplast membrane that extends into the internal stromal fluid of the chloroplast. Its purpose is thought to be to increase the chloroplast's surface area for cross-membrane transport between its stroma and the cell cytoplasm. The small vesicles sometimes observed may serve as transport vesicles to shuttle material between the thylakoids and the intermembrane space.

Stroma

The protein-rich, alkaline, aqueous fluid within the inner chloroplast membrane and outside of the thylakoid space is called the stroma, which corresponds to the cytosol of the original cyanobacterium. Nucleoids of chloroplast DNA, chloroplast ribosomes, the thylakoid system with plastoglobuli, starch granules, and many proteins can be found floating around in it. The Calvin cycle, which fixes CO2 into G3P, takes place in the stroma.

Chloroplast ribosomes

Chloroplasts have their own ribosomes, which they use to synthesize a small fraction of their proteins. Chloroplast ribosomes are about two-thirds the size of cytoplasmic ribosomes (around 17 nm vs 25 nm). They take mRNAs transcribed from the chloroplast DNA and translate them into protein. While similar to bacterial ribosomes, chloroplast translation is more complex than in bacteria, so chloroplast ribosomes include some chloroplast-unique features.
Small subunit ribosomal RNAs in several Chlorophyta and euglenid chloroplasts lack motifs for Shine-Dalgarno sequence recognition, which is considered essential for translation initiation in most chloroplasts and prokaryotes. Such loss is also rarely observed in other plastids and prokaryotes. An additional 4.5S rRNA with homology to the 3′ tail of 23S rRNA is found in "higher" plants.

Plastoglobuli

Plastoglobuli (singular plastoglobulus, sometimes spelled plastoglobule(s)) are spherical bubbles of lipids and proteins about 45–60 nanometers across. They are surrounded by a lipid monolayer. Plastoglobuli are found in all chloroplasts, but become more common when the chloroplast is under oxidative stress, or when it ages and transitions into a gerontoplast. Plastoglobuli also exhibit a greater size variation under these conditions. They are also common in etioplasts, but decrease in number as the etioplasts mature into chloroplasts. Plastoglobuli contain both structural proteins and enzymes involved in lipid synthesis and metabolism. They contain many types of lipids, including plastoquinone, vitamin E, carotenoids, and chlorophylls. Plastoglobuli were once thought to be free-floating in the stroma, but it is now thought that they are permanently attached either to a thylakoid or to another plastoglobulus attached to a thylakoid, a configuration that allows a plastoglobulus to exchange its contents with the thylakoid network. In normal green chloroplasts, the vast majority of plastoglobuli occur singly, attached directly to their parent thylakoid. In old or stressed chloroplasts, plastoglobuli tend to occur in linked groups or chains, still always anchored to a thylakoid. Plastoglobuli form when a bubble appears between the layers of the lipid bilayer of the thylakoid membrane, or bud from existing plastoglobuli—though they never detach and float off into the stroma. Practically all plastoglobuli form on or near the highly curved edges of the thylakoid disks or sheets. They are also more common on stromal thylakoids than on granal ones.

Starch granules

Starch granules are very common in chloroplasts, typically taking up 15% of the organelle's volume, though in some other plastids like amyloplasts, they can be big enough to distort the shape of the organelle. Starch granules are simply accumulations of starch in the stroma, and are not bounded by a membrane. Starch granules appear and grow throughout the day, as the chloroplast synthesizes sugars, and are consumed at night to fuel respiration and continue sugar export into the phloem, though in mature chloroplasts, it is rare for a starch granule to be completely consumed or for a new granule to accumulate. Starch granules vary in composition and location across different chloroplast lineages. In red algae, starch granules are found in the cytoplasm rather than in the chloroplast. In C4 plants, mesophyll chloroplasts, which do not synthesize sugars, lack starch granules.

RuBisCO

The chloroplast stroma contains many proteins, though the most common and important is RuBisCO, which is probably also the most abundant protein on the planet. RuBisCO is the enzyme that fixes CO2 into sugar molecules. In C3 plants, RuBisCO is abundant in all chloroplasts, though in C4 plants, it is confined to the bundle sheath chloroplasts, where the Calvin cycle is carried out.

Pyrenoids

The chloroplasts of some hornworts and algae contain structures called pyrenoids. They are not found in higher plants.
Pyrenoids are roughly spherical and highly refractive bodies which are a site of starch accumulation in plants that contain them. They consist of a matrix opaque to electrons, surrounded by two hemispherical starch plates. The starch is accumulated as the pyrenoids mature. In algae with carbon concentrating mechanisms, the enzyme RuBisCO is found in the pyrenoids. Starch can also accumulate around the pyrenoids when CO2 is scarce. Pyrenoids can divide to form new pyrenoids, or be produced "de novo".

Thylakoid system

Thylakoids (sometimes spelled thylakoïds) are small interconnected sacs whose membranes are the site of the light reactions of photosynthesis. The word thylakoid comes from the Greek word thylakos, which means "sack". Suspended within the chloroplast stroma is the thylakoid system, a highly dynamic collection of membranous sacs called thylakoids, where chlorophyll is found and the light reactions of photosynthesis happen. In most vascular plant chloroplasts, the thylakoids are arranged in stacks called grana, though in certain C4 plant chloroplasts and some algal chloroplasts, the thylakoids are free floating.

Thylakoid structure

Using a light microscope, it is just barely possible to see tiny green granules—which were named grana. With electron microscopy, it became possible to see the thylakoid system in more detail, revealing it to consist of stacks of flat thylakoids which make up the grana, and long interconnecting stromal thylakoids which link different grana. In the transmission electron microscope, thylakoid membranes appear as alternating light-and-dark bands, 8.5 nanometers thick. The three-dimensional structure of the thylakoid membrane system has been disputed. Many models have been proposed, the most prevalent being the helical model, in which granum stacks of thylakoids are wrapped by helical stromal thylakoids. Another model, known as the "bifurcation model", which was based on the first electron tomography study of plant thylakoid membranes, depicts the stromal membranes as wide lamellar sheets perpendicular to the grana columns which bifurcate into multiple parallel discs forming the granum-stroma assembly. The helical model was supported by several additional works, but ultimately it was determined in 2019 that features from both the helical and bifurcation models are consolidated by newly discovered left-handed helical membrane junctions. Likely for simplicity, the thylakoid system is still commonly depicted by older "hub and spoke" models where the grana are connected to each other by tubes of stromal thylakoids. Grana consist of stacks of flattened circular granal thylakoids that resemble pancakes. Each granum can contain anywhere from two to a hundred thylakoids, though grana with 10–20 thylakoids are most common. Wrapped around the grana are multiple parallel right-handed helical stromal thylakoids, also known as frets or lamellar thylakoids. The helices ascend at an angle of ~20°, connecting to each granal thylakoid at a bridge-like slit junction. The stroma lamellae extend as large sheets perpendicular to the grana columns. These sheets are connected to the right-handed helices either directly or through bifurcations that form left-handed helical membrane surfaces. The left-handed helical surfaces have a similar tilt angle to the right-handed helices (~20°), but ¼ the pitch.
Approximately four left-handed helical junctions are present per granum, resulting in a pitch-balanced array of right- and left-handed helical membrane surfaces of different radii and pitch that consolidates the network with minimal surface and bending energies. While different parts of the thylakoid system contain different membrane proteins, the thylakoid membranes are continuous and the thylakoid space they enclose forms a single continuous labyrinth.

Thylakoid composition

Embedded in the thylakoid membranes are important protein complexes which carry out the light reactions of photosynthesis. Photosystem II and photosystem I contain light-harvesting complexes with chlorophyll and carotenoids that absorb light energy and use it to energize electrons. Molecules in the thylakoid membrane use the energized electrons to pump hydrogen ions into the thylakoid space, decreasing the pH and turning it acidic. ATP synthase is a large protein complex that harnesses the concentration gradient of the hydrogen ions in the thylakoid space to generate ATP energy as the hydrogen ions flow back out into the stroma—much like a dam turbine. There are two types of thylakoids—granal thylakoids, which are arranged in grana, and stromal thylakoids, which are in contact with the stroma. Granal thylakoids are pancake-shaped circular disks about 300–600 nanometers in diameter. Stromal thylakoids are helicoid sheets that spiral around grana. The flat tops and bottoms of granal thylakoids contain only the relatively flat photosystem II protein complex. This allows them to stack tightly, forming grana with many layers of tightly appressed membrane, called granal membrane, increasing stability and surface area for light capture. In contrast, photosystem I and ATP synthase are large protein complexes which jut out into the stroma. They cannot fit in the appressed granal membranes, and so are found in the stromal thylakoid membrane—the edges of the granal thylakoid disks and the stromal thylakoids. These large protein complexes may act as spacers between the sheets of stromal thylakoids. The number of thylakoids and the total thylakoid area of a chloroplast is influenced by light exposure. Shaded chloroplasts contain larger and more numerous grana with more thylakoid membrane area than chloroplasts exposed to bright light, which have smaller and fewer grana and less thylakoid area. Thylakoid extent can change within minutes of light exposure or removal.

Pigments and chloroplast colors

Inside the photosystems embedded in chloroplast thylakoid membranes are various photosynthetic pigments, which absorb and transfer light energy. The types of pigments found differ among the various groups of chloroplasts, and are responsible for a wide variety of chloroplast colorations. Other plastid types, such as the leucoplast and the chromoplast, contain little chlorophyll and do not carry out photosynthesis. Paper chromatography of spinach leaf extract separates the various pigments present in its chloroplasts, such as xanthophylls, chlorophyll a, and chlorophyll b.

Chlorophylls

Chlorophyll a is found in all chloroplasts, as well as their cyanobacterial ancestors. Chlorophyll a is a blue-green pigment partially responsible for giving most cyanobacteria and chloroplasts their color. Other forms of chlorophyll exist, such as the accessory pigments chlorophyll b, chlorophyll c, chlorophyll d, and chlorophyll f.
Chlorophyll b is an olive green pigment found only in the chloroplasts of plants, green algae, any secondary chloroplasts obtained through the secondary endosymbiosis of a green alga, and a few cyanobacteria. It is the chlorophylls a and b together that make most plant and green algal chloroplasts green. Chlorophyll c is mainly found in secondary endosymbiotic chloroplasts that originated from a red alga, although it is not found in chloroplasts of red algae themselves. Chlorophyll c is also found in some green algae and cyanobacteria. Chlorophylls d and f are pigments found only in some cyanobacteria.

Carotenoids

In addition to chlorophylls, another group of yellow–orange pigments called carotenoids are also found in the photosystems. There are about thirty photosynthetic carotenoids. They help transfer and dissipate excess energy, and their bright colors sometimes override the chlorophyll green, as during the fall, when the leaves of some land plants change color. β-carotene is a bright red-orange carotenoid found in nearly all chloroplasts, like chlorophyll a. Xanthophylls, especially the orange-red zeaxanthin, are also common. Many other forms of carotenoids exist that are only found in certain groups of chloroplasts.

Phycobilins

Phycobilins are a third group of pigments found in cyanobacteria, and in glaucophyte, red algal, and cryptophyte chloroplasts. Phycobilins come in all colors, though phycoerythrin is one of the pigments that makes many red algae red. Phycobilins often organize into relatively large protein complexes about 40 nanometers across called phycobilisomes. Like photosystem I and ATP synthase, phycobilisomes jut into the stroma, preventing thylakoid stacking in red algal chloroplasts. Cryptophyte chloroplasts and some cyanobacteria do not have their phycobilin pigments organized into phycobilisomes, and keep them in their thylakoid space instead.

Specialized chloroplasts in C4 plants

To fix carbon dioxide into sugar molecules in the process of photosynthesis, chloroplasts use an enzyme called RuBisCO. RuBisCO has trouble distinguishing between carbon dioxide and oxygen, so at high oxygen concentrations, RuBisCO starts accidentally adding oxygen to sugar precursors. This has the result of ATP energy being wasted and CO2 being released, all with no sugar being produced. This is a big problem, since O2 is produced by the initial light reactions of photosynthesis, causing issues down the line in the Calvin cycle, which uses RuBisCO. C4 plants evolved a way to solve this—by spatially separating the light reactions and the Calvin cycle. The light reactions, which store light energy in ATP and NADPH, are done in the mesophyll cells of a leaf. The Calvin cycle, which uses the stored energy to make sugar using RuBisCO, is done in the bundle sheath cells, a layer of cells surrounding a vein in a leaf. As a result, chloroplasts in C4 mesophyll cells and bundle sheath cells are specialized for each stage of photosynthesis. In mesophyll cells, chloroplasts are specialized for the light reactions, so they lack RuBisCO, and have normal grana and thylakoids, which they use to make ATP and NADPH, as well as oxygen. They store CO2 in a four-carbon compound, which is why the process is called C4 photosynthesis. The four-carbon compound is then transported to the bundle sheath chloroplasts, where it drops off CO2 and returns to the mesophyll. Bundle sheath chloroplasts do not carry out the light reactions, preventing oxygen from building up in them and disrupting RuBisCO activity.
Because of this, they lack thylakoids organized into grana stacks—though bundle sheath chloroplasts still have free-floating thylakoids in the stroma, where they still carry out cyclic electron flow, a light-driven method of synthesizing ATP to power the Calvin cycle without generating oxygen. They lack photosystem II, and only have photosystem I—the only protein complex needed for cyclic electron flow. Because the job of bundle sheath chloroplasts is to carry out the Calvin cycle and make sugar, they often contain large starch grains. Both types of chloroplast contain large amounts of chloroplast peripheral reticulum, which they use to gain more surface area for transporting materials in and out of them. Mesophyll chloroplasts have a little more peripheral reticulum than bundle sheath chloroplasts.

Function and chemistry

Guard cell chloroplasts

Unlike most epidermal cells, the guard cells of plant stomata contain relatively well-developed chloroplasts. However, exactly what these chloroplasts do is controversial.

Plant innate immunity

Plants lack specialized immune cells—all plant cells participate in the plant immune response. Chloroplasts, along with the nucleus, cell membrane, and endoplasmic reticulum, are key players in pathogen defense. Because of the chloroplast's role in a plant cell's immune response, pathogens frequently target it. Plants have two main immune responses—the hypersensitive response, in which infected cells seal themselves off and undergo programmed cell death, and systemic acquired resistance, where infected cells release signals warning the rest of the plant of a pathogen's presence. Chloroplasts stimulate both responses by purposely damaging their photosynthetic system, producing reactive oxygen species. High levels of reactive oxygen species will cause the hypersensitive response. The reactive oxygen species also directly kill any pathogens within the cell. Lower levels of reactive oxygen species initiate systemic acquired resistance, triggering defense-molecule production in the rest of the plant. In some plants, chloroplasts are known to move closer to the infection site and the nucleus during an infection. Chloroplasts can serve as cellular sensors. After detecting stress in a cell, which might be due to a pathogen, chloroplasts begin producing molecules like salicylic acid, jasmonic acid, nitric oxide, and reactive oxygen species, which can serve as defense signals. As cellular signals, reactive oxygen species are unstable molecules, so they probably do not leave the chloroplast, but instead pass on their signal to an unknown second-messenger molecule. All these molecules initiate retrograde signaling—signals from the chloroplast that regulate gene expression in the nucleus. In addition to defense signaling, chloroplasts, with the help of the peroxisomes, help synthesize an important defense molecule, jasmonate. Chloroplasts synthesize all the fatty acids in a plant cell—linoleic acid, a fatty acid, is a precursor to jasmonate.

Photosynthesis

One of the main functions of the chloroplast is its role in photosynthesis, the process by which light is transformed into chemical energy, to subsequently produce food in the form of sugars. Water (H2O) and carbon dioxide (CO2) are used in photosynthesis, and sugar and oxygen (O2) are made, using light energy. Photosynthesis is divided into two stages—the light reactions, where water is split to produce oxygen, and the dark reactions, or Calvin cycle, which builds sugar molecules from carbon dioxide.
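For reference, the overall process described above is often summarized by the standard textbook equation for oxygenic photosynthesis (a general summary added here, not a detail specific to this article):

\[ 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\ \text{light energy}\ } \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \]

The light reactions supply the energy (as ATP and NADPH) and release the oxygen, while the Calvin cycle consumes the carbon dioxide to build the sugar.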
The two phases are linked by the energy carriers adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide phosphate (NADP+).

Light reactions

The light reactions take place on the thylakoid membranes. They take light energy and store it in NADPH, the reduced form of NADP+, and in ATP, to fuel the dark reactions.

Energy carriers

ATP is the phosphorylated version of adenosine diphosphate (ADP), which stores energy in a cell and powers most cellular activities. ATP is the energized form, while ADP is the (partially) depleted form. NADP+ is an electron carrier which ferries high-energy electrons. In the light reactions, it gets reduced, meaning it picks up electrons, becoming NADPH.

Photophosphorylation

Like mitochondria, chloroplasts use the potential energy stored in an H+, or hydrogen ion, gradient to generate ATP energy. The two photosystems capture light energy to energize electrons taken from water, and release them down an electron transport chain. The molecules between the photosystems harness the electrons' energy to pump hydrogen ions into the thylakoid space, creating a concentration gradient, with more hydrogen ions (up to a thousand times as many) inside the thylakoid system than in the stroma. The hydrogen ions in the thylakoid space then diffuse back down their concentration gradient, flowing back out into the stroma through ATP synthase. ATP synthase uses the energy from the flowing hydrogen ions to phosphorylate adenosine diphosphate into adenosine triphosphate, or ATP. Because chloroplast ATP synthase projects out into the stroma, the ATP is synthesized there, in position to be used in the dark reactions.

NADP+ reduction

Electrons are often removed from the electron transport chains to charge NADP+ with electrons, reducing it to NADPH. Like ATP synthase, ferredoxin-NADP+ reductase, the enzyme that reduces NADP+, releases the NADPH it makes into the stroma, right where it is needed for the dark reactions. Because NADP+ reduction removes electrons from the electron transport chains, they must be replaced—the job of photosystem II, which splits water molecules (H2O) to obtain the electrons from their hydrogen atoms.

Cyclic photophosphorylation

While photosystem II photolyzes water to obtain and energize new electrons, photosystem I simply reenergizes depleted electrons at the end of an electron transport chain. Normally, the reenergized electrons are taken by NADP+, though sometimes they can flow back down more H+-pumping electron transport chains to transport more hydrogen ions into the thylakoid space to generate more ATP. This is termed cyclic photophosphorylation because the electrons are recycled. Cyclic photophosphorylation is common in C4 plants, which need more ATP than NADPH.

Dark reactions

The Calvin cycle, also known as the dark reactions, is a series of biochemical reactions that fixes CO2 into G3P sugar molecules and uses the energy and electrons from the ATP and NADPH made in the light reactions. The Calvin cycle takes place in the stroma of the chloroplast. While named "the dark reactions", in most plants, they take place in the light, since the dark reactions are dependent on the products of the light reactions.

Carbon fixation and G3P synthesis

The Calvin cycle starts by using the enzyme RuBisCO to fix CO2 onto five-carbon ribulose bisphosphate (RuBP) molecules. The result is unstable six-carbon molecules that immediately break down into three-carbon molecules called 3-phosphoglyceric acid, or 3-PGA.
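A minimal sketch of this first step, using standard Calvin cycle stoichiometry rather than figures taken from this article: each five-carbon RuBP that accepts a CO2 yields two molecules of the three-carbon 3-PGA via an unstable six-carbon intermediate:

\[ \underbrace{\mathrm{RuBP}}_{\mathrm{C_5}} + \mathrm{CO_2} \xrightarrow{\ \text{RuBisCO}\ } [\mathrm{C_6}\ \text{intermediate}] \longrightarrow 2 \times \underbrace{\text{3-PGA}}_{\mathrm{C_3}} \]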
The ATP and NADPH made in the light reactions are used to convert the 3-PGA into glyceraldehyde-3-phosphate, or G3P sugar molecules. Most of the G3P molecules are recycled back into RuBP using energy from more ATP, but one out of every six produced leaves the cycle—the end product of the dark reactions. Sugars and starches Two glyceraldehyde-3-phosphate molecules can combine to form larger sugar molecules like glucose and fructose. These molecules are processed, and from them, the still larger sucrose, a disaccharide commonly known as table sugar, is made, though this process takes place outside of the chloroplast, in the cytoplasm. Alternatively, glucose monomers in the chloroplast can be linked together to make starch, which accumulates into the starch grains found in the chloroplast. Under conditions such as high atmospheric CO2 concentrations, these starch grains may grow very large, distorting the grana and thylakoids. The starch granules displace the thylakoids, but leave them intact. Waterlogged roots can also cause starch buildup in the chloroplasts, possibly due to less sucrose being exported out of the chloroplast (or more accurately, the plant cell). This depletes a plant's free phosphate supply, which indirectly stimulates chloroplast starch synthesis. While linked to low photosynthesis rates, the starch grains themselves may not necessarily interfere significantly with the efficiency of photosynthesis, and might simply be a side effect of another photosynthesis-depressing factor. Photorespiration Photorespiration can occur when the oxygen concentration is too high. RuBisCO cannot distinguish between oxygen and carbon dioxide very well, so it can accidentally add O2 instead of CO2 to RuBP. This process reduces the efficiency of photosynthesis—it consumes ATP and oxygen, releases CO2, and produces no sugar. It can waste up to half the carbon fixed by the Calvin cycle. Several mechanisms have evolved in different lineages that raise the carbon dioxide concentration relative to oxygen within the chloroplast, increasing the efficiency of photosynthesis. These mechanisms are called carbon dioxide concentrating mechanisms, or CCMs. These include Crassulacean acid metabolism, C4 carbon fixation, and pyrenoids. Chloroplasts in C4 plants are notable as they exhibit a distinct chloroplast dimorphism. pH Because of the H+ gradient across the thylakoid membrane, the interior of the thylakoid is acidic, with a pH around 4, while the stroma is slightly basic, with a pH of around 8. The optimal stroma pH for the Calvin cycle is 8.1, with the reaction nearly stopping when the pH falls below 7.3. CO2 in water can form carbonic acid, which can disturb the pH of isolated chloroplasts, interfering with photosynthesis, even though CO2 is used in photosynthesis. However, chloroplasts in living plant cells are not affected by this as much. Chloroplasts can pump K+ and H+ ions in and out of themselves using a poorly understood light-driven transport system. In the presence of light, the pH of the thylakoid lumen can drop by up to 1.5 pH units, while the pH of the stroma can rise by nearly one pH unit. Amino acid synthesis Chloroplasts alone make almost all of a plant cell's amino acids in their stroma except the sulfur-containing ones like cysteine and methionine. Cysteine is made in the chloroplast (the proplastid too) but it is also synthesized in the cytosol and mitochondria, probably because it has trouble crossing membranes to get to where it is needed. 
The chloroplast is known to make the precursors to methionine, but it is unclear whether the organelle carries out the last leg of the pathway or if it happens in the cytosol. Other nitrogen compounds Chloroplasts make all of a cell's purines and pyrimidines—the nitrogenous bases found in DNA and RNA. They also convert nitrite (NO2−) into ammonia (NH3), which supplies the plant with nitrogen to make its amino acids and nucleotides. Other chemical products The plastid is the site of diverse and complex lipid synthesis in plants. The carbon used to form the majority of the lipid is from acetyl-CoA, which is the decarboxylation product of pyruvate. Pyruvate may enter the plastid from the cytosol by passive diffusion through the membrane after production in glycolysis. Pyruvate is also made in the plastid from phosphoenolpyruvate, a metabolite made in the cytosol from pyruvate or PGA. Acetate in the cytosol is unavailable for lipid biosynthesis in the plastid. The typical length of fatty acids produced in the plastid is 16 or 18 carbons, with 0–3 cis double bonds. The biosynthesis of fatty acids from acetyl-CoA primarily requires two enzymes. Acetyl-CoA carboxylase creates malonyl-CoA, used in both the first step and the extension steps of synthesis. Fatty acid synthase (FAS) is a large complex of enzymes and cofactors including acyl carrier protein (ACP) which holds the acyl chain as it is synthesized. The initiation of synthesis begins with the condensation of malonyl-ACP with acetyl-CoA to produce ketobutyryl-ACP. Two reductions involving the use of NADPH and one dehydration create butyryl-ACP. Extension of the fatty acid comes from repeated cycles of malonyl-ACP condensation, reduction, and dehydration. Other lipids are derived from the methyl-erythritol phosphate (MEP) pathway and consist of gibberellins, sterols, abscisic acid, phytol, and innumerable secondary metabolites. Location Distribution in a plant Not all cells in a multicellular plant contain chloroplasts. All green parts of a plant contain chloroplasts, as the green color comes from the chlorophyll. The plant cells which contain chloroplasts are usually parenchyma cells, though chloroplasts can also be found in collenchyma tissue. A plant cell which contains chloroplasts is known as a chlorenchyma cell. A typical chlorenchyma cell of a land plant contains about 10 to 100 chloroplasts. In some plants such as cacti, chloroplasts are found in the stems, though in most plants, chloroplasts are concentrated in the leaves. One square millimeter of leaf tissue can contain half a million chloroplasts. Within a leaf, chloroplasts are mainly found in the mesophyll layers of a leaf, and the guard cells of stomata. Palisade mesophyll cells can contain 30–70 chloroplasts per cell, while stomatal guard cells contain only around 8–15 per cell, as well as much less chlorophyll. Chloroplasts can also be found in the bundle sheath cells of a leaf, especially in C4 plants, which carry out the Calvin cycle in their bundle sheath cells. They are often absent from the epidermis of a leaf. Cellular location Chloroplast movement The chloroplasts of plant and algal cells can orient themselves to best suit the available light. In low-light conditions, they will spread out in a sheet—maximizing the surface area to absorb light. Under intense light, they will seek shelter by aligning in vertical columns along the plant cell's cell wall or turning sideways so that light strikes them edge-on. This reduces exposure and protects them from photooxidative damage. 
This ability to distribute chloroplasts so that they can take shelter behind each other or spread out may be the reason why land plants evolved to have many small chloroplasts instead of a few big ones. Chloroplast movement is considered one of the most closely regulated stimulus-response systems that can be found in plants. Mitochondria have also been observed to follow chloroplasts as they move. In higher plants, chloroplast movement is mediated by phototropins, blue-light photoreceptors also responsible for plant phototropism. In some algae, mosses, ferns, and flowering plants, chloroplast movement is influenced by red light in addition to blue light, though very long red wavelengths inhibit movement rather than speed it up. Blue light generally causes chloroplasts to seek shelter, while red light draws them out to maximize light absorption. Studies of Vallisneria gigantea, an aquatic flowering plant, have shown that chloroplasts can begin moving within five minutes of light exposure, though they do not initially show any net directionality. They may move along microfilament tracks, and the fact that the microfilament mesh changes shape to form a honeycomb structure surrounding the chloroplasts after they have moved suggests that microfilaments may help to anchor chloroplasts in place. Differentiation, replication, and inheritance Chloroplasts are a special type of plant cell organelle called a plastid, though the two terms are sometimes used interchangeably. There are many other types of plastids, which carry out various functions. All chloroplasts in a plant are descended from undifferentiated proplastids found in the zygote, or fertilized egg. Proplastids are commonly found in an adult plant's apical meristems. Chloroplasts do not normally develop from proplastids in root tip meristems—instead, the formation of starch-storing amyloplasts is more common. In shoots, proplastids from shoot apical meristems can gradually develop into chloroplasts in photosynthetic leaf tissues as the leaf matures, if exposed to the required light. This process involves invaginations of the inner plastid membrane, forming sheets of membrane that project into the internal stroma. These membrane sheets then fold to form thylakoids and grana. If angiosperm shoots are not exposed to the required light for chloroplast formation, proplastids may develop into an etioplast stage before becoming chloroplasts. An etioplast is a plastid that lacks chlorophyll, and has inner membrane invaginations that form a lattice of tubes in its stroma, called a prolamellar body. While etioplasts lack chlorophyll, they contain a store of a yellow chlorophyll precursor. Within a few minutes of light exposure, the prolamellar body begins to reorganize into stacks of thylakoids, and chlorophyll starts to be produced. This process, where the etioplast becomes a chloroplast, takes several hours. Gymnosperms do not require light to form chloroplasts. Light, however, does not guarantee that a proplastid will develop into a chloroplast. Whether a proplastid develops into a chloroplast or some other kind of plastid is mostly controlled by the nucleus and is largely influenced by the kind of cell it resides in. Plastid interconversion Plastid differentiation is not permanent; in fact, many interconversions are possible. Chloroplasts may be converted to chromoplasts, which are pigment-filled plastids responsible for the bright colors seen in flowers and ripe fruit. 
Starch-storing amyloplasts can also be converted to chromoplasts, and it is possible for proplastids to develop straight into chromoplasts. Chromoplasts and amyloplasts can also become chloroplasts, as happens when a carrot or a potato is illuminated. If a plant is injured, or something else causes a plant cell to revert to a meristematic state, chloroplasts and other plastids can turn back into proplastids. The chloroplast, amyloplast, chromoplast, and proplastid states are not absolute—intermediate forms are common. Division Most chloroplasts in a photosynthetic cell do not develop directly from proplastids or etioplasts. In fact, a typical shoot meristematic plant cell contains only 7–20 proplastids. These proplastids differentiate into chloroplasts, which divide to create the 30–70 chloroplasts found in a mature photosynthetic plant cell. If the cell divides, chloroplast division provides the additional chloroplasts to partition between the two daughter cells. In single-celled algae, chloroplast division is the only way new chloroplasts are formed. There is no proplastid differentiation—when an algal cell divides, its chloroplast divides along with it, and each daughter cell receives a mature chloroplast. Almost all chloroplasts in a cell divide, rather than a small group of rapidly dividing chloroplasts. Chloroplasts have no definite S-phase—their DNA replication is not synchronized or limited to that of their host cells. Much of what we know about chloroplast division comes from studying organisms like Arabidopsis and the red alga Cyanidioschyzon merolæ. The division process starts when the proteins FtsZ1 and FtsZ2 assemble into filaments, and with the help of a protein ARC6, form a structure called a Z-ring within the chloroplast's stroma. The Min system manages the placement of the Z-ring, ensuring that the chloroplast is cleaved more or less evenly. The protein MinD prevents FtsZ from linking up and forming filaments. Another protein, ARC3, may also be involved, but it is not very well understood. These proteins are active at the poles of the chloroplast, preventing Z-ring formation there, but near the center of the chloroplast, MinE inhibits them, allowing the Z-ring to form. Next, the two plastid-dividing rings, or PD rings, form. The inner plastid-dividing ring is located on the inner side of the chloroplast's inner membrane, and is formed first. The outer plastid-dividing ring is found wrapped around the outer chloroplast membrane. It consists of filaments about 5 nanometers across, arranged in rows 6.4 nanometers apart, and shrinks to squeeze the chloroplast. This is when chloroplast constriction begins. In a few species like Cyanidioschyzon merolæ, chloroplasts have a third plastid-dividing ring located in the chloroplast's intermembrane space. Late into the constriction phase, dynamin proteins assemble around the outer plastid-dividing ring, helping provide force to squeeze the chloroplast. Meanwhile, the Z-ring and the inner plastid-dividing ring break down. During this stage, the many chloroplast DNA plasmids floating around in the stroma are partitioned and distributed to the two forming daughter chloroplasts. Later, the dynamins migrate under the outer plastid dividing ring, into direct contact with the chloroplast's outer membrane, to cleave the chloroplast into two daughter chloroplasts. A remnant of the outer plastid dividing ring remains floating between the two daughter chloroplasts, and a remnant of the dynamin ring remains attached to one of the daughter chloroplasts. 
Of the five or six rings involved in chloroplast division, only the outer plastid-dividing ring is present for the entire constriction and division phase—while the Z-ring forms first, constriction does not begin until the outer plastid-dividing ring forms. Regulation In species of algae that contain a single chloroplast, regulation of chloroplast division is extremely important to ensure that each daughter cell receives a chloroplast—chloroplasts cannot be made from scratch. In organisms like plants, whose cells contain multiple chloroplasts, coordination is looser and less important. It is likely that chloroplast and cell division are somewhat synchronized, though the mechanisms for it are mostly unknown. Light has been shown to be a requirement for chloroplast division. Chloroplasts can grow and progress through some of the constriction stages under poor-quality green light, but are slow to complete division—they require exposure to bright white light to do so. Spinach leaves grown under green light have been observed to contain many large dumbbell-shaped chloroplasts. Exposure to white light can stimulate these chloroplasts to divide and reduce the population of dumbbell-shaped chloroplasts. Chloroplast inheritance Like mitochondria, chloroplasts are usually inherited from a single parent. Biparental chloroplast inheritance—where plastid genes are inherited from both parent plants—occurs at very low levels in some flowering plants. Many mechanisms prevent biparental chloroplast DNA inheritance, including selective destruction of chloroplasts or their genes within the gamete or zygote, and chloroplasts from one parent being excluded from the embryo. Parental chloroplasts can be sorted so that only one type is present in each offspring. Gymnosperms, such as pine trees, mostly pass on chloroplasts paternally, while flowering plants often inherit chloroplasts maternally. Flowering plants were once thought to only inherit chloroplasts maternally. However, there are now many documented cases of angiosperms inheriting chloroplasts paternally. Angiosperms that pass on chloroplasts maternally have many ways to prevent paternal inheritance. Most of them produce sperm cells that do not contain any plastids. There are many other documented mechanisms that prevent paternal inheritance in these flowering plants, such as different rates of chloroplast replication within the embryo. Among angiosperms, paternal chloroplast inheritance is observed more often in hybrids than in offspring from parents of the same species. This suggests that incompatible hybrid genes might interfere with the mechanisms that prevent paternal inheritance. Transplastomic plants Recently, chloroplasts have attracted the attention of developers of genetically modified crops. Since, in most flowering plants, chloroplasts are not inherited from the male parent, transgenes in these plastids cannot be disseminated by pollen. This makes plastid transformation a valuable tool for the creation and cultivation of genetically modified plants that are biologically contained, thus posing significantly lower environmental risks. This biological containment strategy is therefore suitable for establishing the coexistence of conventional and organic agriculture. While the reliability of this mechanism has not yet been studied for all relevant crop species, recent results in tobacco plants are promising, showing a containment failure rate for transplastomic plants of 3 in 1,000,000. 
Footnotes References External links Chloroplast – Cell Centered Database Co-Extra research on chloroplast transformation NCBI full chloroplast genome Photosynthesis Plastids Endosymbiotic events
Chloroplast
[ "Chemistry", "Biology" ]
20,621
[ "Symbiosis", "Endosymbiotic events", "Photosynthesis", "Plastids", "Biochemistry" ]
6,423
https://en.wikipedia.org/wiki/Calorie
The calorie is a unit of energy that originated from the caloric theory of heat. The large calorie, food calorie, dietary calorie, kilocalorie, or kilogram calorie is defined as the amount of heat needed to raise the temperature of one kilogram of water by one degree Celsius (or one kelvin). The small calorie or gram calorie is defined as the amount of heat needed to cause the same increase in one gram of water. Thus, 1 large calorie is equal to 1,000 small calories. In nutrition and food science, the term calorie and the symbol cal may refer to the large unit or to the small unit in different regions of the world. It is generally used in publications and package labels to express the energy value of foods per serving or per unit weight, recommended dietary caloric intake, metabolic rates, etc. Some authors recommend the spelling Calorie and the symbol Cal (both with a capital C) if the large calorie is meant, to avoid confusion; however, this convention is often ignored. In physics and chemistry, the word calorie and its symbol usually refer to the small unit, the large one being called kilocalorie (kcal). However, the kcal is not officially part of the International System of Units (SI), and is regarded as obsolete, having been replaced in many uses by the SI derived unit of energy, the joule (J), or the kilojoule (kJ) for 1000 joules. The precise equivalence between calories and joules has varied over the years, but in thermochemistry and nutrition it is now generally assumed that one (small) calorie (thermochemical calorie) is equal to exactly 4.184 J, and therefore one kilocalorie (one large calorie) is 4184 J or 4.184 kJ. History The term "calorie" comes from the Latin calor ('heat'). It was first introduced by Nicolas Clément, as a unit of heat energy, in lectures on experimental calorimetry during the years 1819–1824. This was the "large" calorie. The term (written with lowercase "c") entered French and English dictionaries between 1841 and 1867. The same term was used for the "small" unit by Pierre Antoine Favre (chemist) and Johann T. Silbermann (physicist) in 1852. In 1879, Marcellin Berthelot distinguished between gram-calorie and kilogram-calorie, and proposed using "Calorie", with capital "C", for the large unit. This usage was adopted by Wilbur Olin Atwater, a professor at Wesleyan University, in 1887, in an influential article on the energy content of food. The smaller unit was used by U.S. physician Joseph Howard Raymond, in his classic 1894 textbook A Manual of Human Physiology. He proposed calling the "large" unit "kilocalorie", but the term did not catch on until some years later. The small calorie (cal) was recognized as a unit of the CGS system in 1896, alongside the already-existing CGS unit of energy, the erg (first suggested by Clausius in 1864, under the name ergon, and officially adopted in 1882). In 1928, there were already serious complaints about the possible confusion arising from the two main definitions of the calorie and whether the notion of using the capital letter to distinguish them was sound. The joule was the officially adopted SI unit of energy at the ninth General Conference on Weights and Measures in 1948. The calorie was mentioned in the 7th edition of the SI brochure as an example of a non-SI unit. The alternate spelling calory is a less-common, non-standard variant. 
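As a quick illustration of the equivalence stated above (one thermochemical calorie = 4.184 J exactly), the following minimal Python sketch converts between calories, kilocalories, and joules. The constant and function names are illustrative only, not taken from any particular library.

```python
# Conversion factors based on the thermochemical calorie (exactly 4.184 J).
JOULES_PER_SMALL_CALORIE = 4.184    # 1 cal  = 4.184 J
JOULES_PER_LARGE_CALORIE = 4184.0   # 1 kcal = 1,000 cal = 4184 J = 4.184 kJ

def small_calories_to_joules(cal: float) -> float:
    """Convert small (gram) calories to joules."""
    return cal * JOULES_PER_SMALL_CALORIE

def kilocalories_to_kilojoules(kcal: float) -> float:
    """Convert kilocalories (large/food calories) to kilojoules."""
    return kcal * 4.184  # numerically the same factor, kJ per kcal

if __name__ == "__main__":
    print(small_calories_to_joules(1.0))     # 4.184 J
    print(kilocalories_to_kilojoules(2000))  # a 2,000 kcal daily intake is 8368 kJ
```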
Definitions The "small" calorie is broadly defined as the amount of energy needed to increase the temperature of 1 gram of water by 1 °C (or 1 K, which is the same increment, a gradation of one percent of the interval between the melting point and the boiling point of water). The actual amount of energy required to accomplish this temperature increase depends on the atmospheric pressure and the starting temperature; different choices of these parameters have resulted in several different precise definitions of the unit. The two definitions most common in older literature appear to be the 15 °C calorie and the thermochemical calorie. Until 1948, the latter was defined as 4.1833 international joules; the current standard of 4.184 J was chosen to have the new thermochemical calorie represent the same quantity of energy as before. Usage Nutrition In the United States, in a nutritional context, the "large" unit is used almost exclusively. It is generally written "calorie" with lowercase "c" and symbol "cal", even in government publications. The SI unit kilojoule (kJ) may be used instead, in legal or scientific contexts. Most American nutritionists prefer the kilocalorie to the kilojoule, whereas most physiologists prefer to use kilojoules. In the majority of other countries, nutritionists prefer the kilojoule to the kilocalorie. In the European Union, on nutrition facts labels, energy is expressed in both kilojoules and kilocalories, abbreviated as "kJ" and "kcal" respectively. In China, only kilojoules are given. Food energy The unit is most commonly used to express food energy, namely the specific energy (energy per mass) of metabolizing different types of food. For example, fat (triglyceride lipids) contains 9 kilocalories per gram (kcal/g), while carbohydrates (sugar and starch) and protein contain approximately 4 kcal/g. Alcohol in food contains 7 kcal/g. The "large" unit is also used to express recommended nutritional intake or consumption, as in "calories per day". Dieting is the practice of eating food in a regulated way to decrease, maintain, or increase body weight, or to prevent and treat diseases such as diabetes and obesity. As weight loss depends on reducing caloric intake, different kinds of calorie-reduced diets have been shown to be generally effective. Chemistry and physics In other scientific contexts, the term "calorie" and the symbol "cal" almost always refer to the small unit, the "large" unit being generally called "kilocalorie" with symbol "kcal". It is mostly used to express the amount of energy released in a chemical reaction or phase change, typically per mole of substance, as in kilocalories per mole. It is also occasionally used to specify other energy quantities that relate to reaction energy, such as enthalpy of formation and the size of activation barriers. However, it is increasingly being superseded by the SI unit, the joule (J); and metric multiples thereof, such as the kilojoule (kJ). The lingering use in chemistry is largely due to the fact that the energy released by a reaction in aqueous solution, expressed in kilocalories per mole of reagent, is numerically close to the change in the temperature of the solution in kelvins or degrees Celsius divided by the concentration of the reagent in moles per liter. However, this estimate assumes that the volumetric heat capacity of the solution is 1 kcal/(L⋅K), which is not exact even for pure water. 
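The back-of-envelope chemistry estimate described above can be made concrete with a short, hedged sketch. It assumes, as the text notes, a volumetric heat capacity of exactly 1 kcal/(L·K); the numerical inputs below are invented for illustration, not measured values.

```python
def reaction_energy_kcal_per_mol(delta_t_kelvin: float,
                                 concentration_mol_per_l: float,
                                 heat_capacity_kcal_per_l_k: float = 1.0) -> float:
    """Rough calorimetric estimate of the heat released per mole of reagent.

    heat released per litre   = heat_capacity * delta_T   (kcal/L)
    moles of reagent per litre = concentration            (mol/L)
    energy per mole           = heat per litre / moles per litre
    """
    heat_per_litre = heat_capacity_kcal_per_l_k * delta_t_kelvin
    return heat_per_litre / concentration_mol_per_l

# Illustrative values: a 0.5 mol/L reagent solution warming by 6.8 K
# suggests roughly 13.6 kcal/mol (about 57 kJ/mol at 4.184 kJ per kcal).
estimate = reaction_energy_kcal_per_mol(delta_t_kelvin=6.8, concentration_mol_per_l=0.5)
print(round(estimate, 1), "kcal/mol =", round(estimate * 4.184, 1), "kJ/mol")
```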
See also Basal metabolic rate Caloric theory Conversion of units of energy Empty calorie Food energy A calorie is a calorie Nutrition facts label British thermal unit Satiety value References Units of energy Heat transfer Non-SI metric units
Calorie
[ "Physics", "Chemistry", "Mathematics" ]
1,640
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Non-SI metric units", "Quantity", "Units of energy", "Thermodynamics", "Units of measurement" ]
6,445
https://en.wikipedia.org/wiki/Carcinogen
A carcinogen is any agent that promotes the development of cancer. Carcinogens can include synthetic chemicals, naturally occurring substances, physical agents such as ionizing and non-ionizing radiation, and biologic agents such as viruses and bacteria. Most carcinogens act by creating mutations in DNA that disrupt a cell's normal processes for regulating growth, leading to uncontrolled cellular proliferation. This occurs when the cell's DNA repair processes fail to identify DNA damage, allowing the defect to be passed down to daughter cells. The damage accumulates over time. This is typically a multi-step process during which the regulatory mechanisms within the cell are gradually dismantled, allowing for unchecked cellular division. The specific mechanisms of carcinogenic activity are unique to each agent and cell type. Carcinogens can be broadly categorized, however, as activation-dependent and activation-independent, which relate to the agent's ability to engage directly with DNA. Activation-dependent agents are relatively inert in their original form, but are bioactivated in the body into metabolites or intermediaries capable of damaging human DNA. These are also known as "indirect-acting" carcinogens. Examples of activation-dependent carcinogens include polycyclic aromatic hydrocarbons (PAHs), heterocyclic aromatic amines, and mycotoxins. Activation-independent carcinogens, or "direct-acting" carcinogens, are those that are capable of directly damaging DNA without any modification to their molecular structure. These agents typically include electrophilic groups that react readily with the net negative charge of DNA molecules. Examples of activation-independent carcinogens include ultraviolet light, ionizing radiation and alkylating agents. The time from exposure to a carcinogen to the development of cancer is known as the latency period. For most solid tumors in humans, the latency period is between 10 and 40 years depending on cancer type. For blood cancers, the latency period may be as short as two years. Due to prolonged latency periods, identification of carcinogens can be challenging. A number of organizations review and evaluate the cumulative scientific evidence regarding the potential carcinogenicity of specific substances. Foremost among these is the International Agency for Research on Cancer (IARC). IARC routinely publishes monographs in which specific substances are evaluated for their potential carcinogenicity to humans and subsequently categorized into one of four groupings: Group 1: Carcinogenic to humans, Group 2A: Probably carcinogenic to humans, Group 2B: Possibly carcinogenic to humans, and Group 3: Not classifiable as to its carcinogenicity to humans. Other organizations that evaluate the carcinogenicity of substances include the National Toxicology Program of the US Public Health Service, NIOSH, the American Conference of Governmental Industrial Hygienists and others. There are numerous sources of exposures to carcinogens including ultraviolet radiation from the sun, radon gas emitted in residential basements, environmental contaminants such as chlordecone, cigarette smoke and ingestion of some types of foods such as alcohol and processed meats. Occupational exposures represent a major source of carcinogens with an estimated 666,000 annual fatalities worldwide attributable to work-related cancers. According to NIOSH, 3–6% of cancers worldwide are due to occupational exposures. 
Well-established associations between occupational carcinogens and cancers include vinyl chloride and hemangiosarcoma of the liver, benzene and leukemia, aniline dyes and bladder cancer, asbestos and mesothelioma, and polycyclic aromatic hydrocarbons and scrotal cancer among chimney sweeps, to name a few. Radiation Ionizing Radiation CERCLA identifies all radionuclides as carcinogens, although the nature of the emitted radiation (alpha, beta, gamma, or neutron) and the radioactive strength, its consequent capacity to cause ionization in tissues, and the magnitude of radiation exposure, determine the potential hazard. Carcinogenicity of radiation depends on the type of radiation, type of exposure, and penetration. For example, alpha radiation has low penetration and is not a hazard outside the body, but alpha emitters are carcinogenic when inhaled or ingested. For example, Thorotrast, a (incidentally radioactive) suspension previously used as a contrast medium in x-ray diagnostics, is a potent human carcinogen known because of its retention within various organs and persistent emission of alpha particles. Low-level ionizing radiation may induce irreparable DNA damage (leading to replicational and transcriptional errors needed for neoplasia, or may trigger viral interactions), leading to premature aging and cancer. Non-ionizing radiation Not all types of electromagnetic radiation are carcinogenic. Low-energy waves on the electromagnetic spectrum, including radio waves, microwaves, infrared radiation and visible light, are thought not to be, because they have insufficient energy to break chemical bonds. Evidence for carcinogenic effects of non-ionizing radiation is generally inconclusive, though there are some documented cases of radar technicians with prolonged high exposure experiencing significantly higher cancer incidence. Higher-energy radiation, including ultraviolet radiation (present in sunlight), generally is carcinogenic, if received in sufficient doses. For most people, ultraviolet radiation from sunlight is the most common cause of skin cancer. In Australia, where people with pale skin are often exposed to strong sunlight, melanoma is the most common cancer diagnosed in people aged 15–44 years. Substances or foods irradiated with electrons or electromagnetic radiation (such as microwave, X-ray or gamma) are not carcinogenic. In contrast, non-electromagnetic neutron radiation produced inside nuclear reactors can produce secondary radiation through nuclear transmutation. Common carcinogens associated with food Alcohol Alcohol is a carcinogen of the head and neck, esophagus, liver, colon and rectum, and breast. It has a synergistic effect with tobacco smoke in the development of head and neck cancers. In the United States, approximately 6% of cancers and 4% of cancer deaths are attributable to alcohol use. Processed meats Chemicals used in processed and cured meat such as some brands of bacon, sausages and ham may produce carcinogens. For example, nitrites used as food preservatives in cured meat such as bacon have also been noted as being carcinogenic with demographic links, but not causation, to colon cancer. Meats cooked at high temperatures Cooking food at high temperatures, for example grilling or barbecuing meats, may also lead to the formation of minute quantities of many potent carcinogens that are comparable to those found in cigarette smoke (i.e., benzo[a]pyrene). Charring of food looks like coking and tobacco pyrolysis, and produces carcinogens. 
There are several carcinogenic pyrolysis products, such as polynuclear aromatic hydrocarbons, which are converted by human enzymes into epoxides, which attach permanently to DNA. Pre-cooking meats in a microwave oven for 2–3 minutes before grilling shortens the time on the hot pan, and removes heterocyclic amine (HCA) precursors, which can help minimize the formation of these carcinogens. Acrylamide in foods Frying, grilling or broiling food at high temperatures, especially starchy foods, until a toasted crust is formed generates acrylamides. This discovery in 2002 led to international health concerns. Subsequent research has, however, found that it is not likely that the acrylamides in burnt or well-cooked food cause cancer in humans; Cancer Research UK categorizes the idea that burnt food causes cancer as a "myth". Biologic Agents Several biologic agents are known carcinogens. Aflatoxin B1, a toxin produced by the fungus Aspergillus flavus, which is a common contaminant of stored grains and nuts, is a known cause of hepatocellular cancer. The bacterium H. pylori is known to cause stomach cancer and MALT lymphoma. Hepatitis B and C are associated with the development of hepatocellular cancer. HPV is the primary cause of cervical cancer. Cigarette smoke Tobacco smoke contains at least 70 known carcinogens and is implicated in the development of numerous types of cancers including cancers of the lung, larynx, esophagus, stomach, kidney, pancreas, liver, bladder, cervix, colon, rectum and blood. Potent carcinogens found in cigarette smoke include polycyclic aromatic hydrocarbons (PAH, such as benzo(a)pyrene), benzene, and nitrosamine. Occupational carcinogens Given that populations of workers are more likely to have consistent, often high-level exposures to chemicals rarely encountered in normal life, much of the evidence for the carcinogenicity of specific agents is derived from studies of workers. Selected carcinogens Others Gasoline (contains aromatics) Lead and its compounds Alkylating antineoplastic agents (e.g., mechlorethamine) Styrene Other alkylating agents (e.g., dimethyl sulfate) Ultraviolet radiation from the sun and UV lamps Other ionizing radiation (X-rays, gamma rays, etc.) Low refining or unrefined mineral oils Mechanisms of carcinogenicity Carcinogens can be classified as genotoxic or nongenotoxic. Genotoxins cause irreversible genetic damage or mutations by binding to DNA. Genotoxins include chemical agents like N-nitroso-N-methylurea (NMU) or non-chemical agents such as ultraviolet light and ionizing radiation. Certain viruses can also act as carcinogens by interacting with DNA. Nongenotoxins do not directly affect DNA but act in other ways to promote growth. These include hormones and some organic compounds. Classification International Agency for Research on Cancer The International Agency for Research on Cancer (IARC) is an intergovernmental agency established in 1965, which forms part of the World Health Organization of the United Nations. It is based in Lyon, France. Since 1971 it has published a series of Monographs on the Evaluation of Carcinogenic Risks to Humans that have been highly influential in the classification of possible carcinogens. Group 1: the agent (mixture) is carcinogenic to humans. The exposure circumstance entails exposures that are carcinogenic to humans. Group 2A: the agent (mixture) is most likely (product more likely to be) carcinogenic to humans. The exposure circumstance entails exposures that are probably carcinogenic to humans. 
Group 2B: the agent (mixture) is possibly (chance of product being) carcinogenic to humans. The exposure circumstance entails exposures that are possibly carcinogenic to humans. Group 3: the agent (mixture or exposure circumstance) is not classifiable as to its carcinogenicity to humans. Group 4: the agent (mixture) is most likely not carcinogenic to humans. Globally Harmonized System The Globally Harmonized System of Classification and Labelling of Chemicals (GHS) is a United Nations initiative to attempt to harmonize the different systems of assessing chemical risk which currently exist (as of March 2009) around the world. It classifies carcinogens into two categories, of which the first may be divided again into subcategories if so desired by the competent regulatory authority: Category 1: known or presumed to have carcinogenic potential for humans Category 1A: the assessment is based primarily on human evidence Category 1B: the assessment is based primarily on animal evidence Category 2: suspected human carcinogens U.S. National Toxicology Program The National Toxicology Program of the U.S. Department of Health and Human Services is mandated to produce a biennial Report on Carcinogens. As of August 2024, the latest edition was the 15th report (2021). It classifies carcinogens into two groups: Known to be a human carcinogen Reasonably anticipated to be a human carcinogen American Conference of Governmental Industrial Hygienists The American Conference of Governmental Industrial Hygienists (ACGIH) is a private organization best known for its publication of threshold limit values (TLVs) for occupational exposure and monographs on workplace chemical hazards. It assesses carcinogenicity as part of a wider assessment of the occupational hazards of chemicals. Group A1: Confirmed human carcinogen Group A2: Suspected human carcinogen Group A3: Confirmed animal carcinogen with unknown relevance to humans Group A4: Not classifiable as a human carcinogen Group A5: Not suspected as a human carcinogen European Union The European Union classification of carcinogens is contained in the Regulation (EC) No 1272/2008. It consists of three categories: Category 1A: Carcinogenic Category 1B: May cause cancer Category 2: Suspected of causing cancer The former European Union classification of carcinogens was contained in the Dangerous Substances Directive and the Dangerous Preparations Directive. It also consisted of three categories: Category 1: Substances known to be carcinogenic to humans. Category 2: Substances which should be regarded as if they are carcinogenic to humans. Category 3: Substances which cause concern for humans, owing to possible carcinogenic effects but in respect of which the available information is not adequate for making a satisfactory assessment. This assessment scheme is being phased out in favor of the GHS scheme (see above), to which it is very close in category definitions. Safe Work Australia Under a previous name, the NOHSC, in 1999 Safe Work Australia published the Approved Criteria for Classifying Hazardous Substances [NOHSC:1008(1999)]. Section 4.76 of this document outlines the criteria for classifying carcinogens as approved by the Australian government. This classification consists of three categories: Category 1: Substances known to be carcinogenic to humans. Category 2: Substances that should be regarded as if they were carcinogenic to humans. 
Category 3: Substances that have possible carcinogenic effects in humans but about which there is insufficient information to make an assessment. Major carcinogens implicated in the four most common cancers worldwide In this section, the carcinogens implicated as the main causative agents of the four most common cancers worldwide are briefly described. These four cancers are lung, breast, colon, and stomach cancers. Together they account for about 41% of worldwide cancer incidence and 42% of cancer deaths (for more detailed information on the carcinogens implicated in these and other cancers, see references). Lung cancer Lung cancer (pulmonary carcinoma) is the most common cancer in the world, both in terms of cases (1.6 million cases; 12.7% of total cancer cases) and deaths (1.4 million deaths; 18.2% of total cancer deaths). Lung cancer is largely caused by tobacco smoke. Risk estimates for lung cancer in the United States indicate that tobacco smoke is responsible for 90% of lung cancers. Other factors are implicated in lung cancer, and these factors can interact synergistically with smoking so that total attributable risk adds up to more than 100%. These factors include occupational exposure to carcinogens (about 9-15%), radon (10%) and outdoor air pollution (1-2%). Tobacco smoke is a complex mixture of more than 5,300 identified chemicals. The most important carcinogens in tobacco smoke have been determined by a "Margin of Exposure" approach. Using this approach, the most important tumorigenic compounds in tobacco smoke were, in order of importance, acrolein, formaldehyde, acrylonitrile, 1,3-butadiene, cadmium, acetaldehyde, ethylene oxide, and isoprene. Most of these compounds cause DNA damage by forming DNA adducts or by inducing other alterations in DNA. DNA damages are subject to error-prone DNA repair or can cause replication errors. Such errors in repair or replication can result in mutations in tumor suppressor genes or oncogenes leading to cancer. Breast cancer Breast cancer is the second most common cancer [(1.4 million cases, 10.9%), but ranks 5th as cause of death (458,000, 6.1%)]. Increased risk of breast cancer is associated with persistently elevated blood levels of estrogen. Estrogen appears to contribute to breast carcinogenesis by three processes; (1) the metabolism of estrogen to genotoxic, mutagenic carcinogens, (2) the stimulation of tissue growth, and (3) the repression of phase II detoxification enzymes that metabolize ROS leading to increased oxidative DNA damage. The major estrogen in humans, estradiol, can be metabolized to quinone derivatives that form adducts with DNA. These derivatives can cause depurination, the removal of bases from the phosphodiester backbone of DNA, followed by inaccurate repair or replication of the apurinic site leading to mutation and eventually cancer. This genotoxic mechanism may interact in synergy with estrogen receptor-mediated, persistent cell proliferation to ultimately cause breast cancer. Genetic background, dietary practices and environmental factors also likely contribute to the incidence of DNA damage and breast cancer risk. Consumption of alcohol has also been linked to an increased risk for breast cancer. Colon cancer Colorectal cancer is the third most common cancer [1.2 million cases (9.4%), 608,000 deaths (8.0%)]. Tobacco smoke may be responsible for up to 20% of colorectal cancers in the United States. In addition, substantial evidence implicates bile acids as an important factor in colon cancer. 
Twelve studies (summarized in Bernstein et al.) indicate that the bile acids deoxycholic acid (DCA) or lithocholic acid (LCA) induce production of DNA-damaging reactive oxygen species or reactive nitrogen species in human or animal colon cells. Furthermore, 14 studies showed that DCA and LCA induce DNA damage in colon cells. Also 27 studies reported that bile acids cause programmed cell death (apoptosis). Increased apoptosis can result in selective survival of cells that are resistant to induction of apoptosis. Colon cells with reduced ability to undergo apoptosis in response to DNA damage would tend to accumulate mutations, and such cells may give rise to colon cancer. Epidemiologic studies have found that fecal bile acid concentrations are increased in populations with a high incidence of colon cancer. Dietary increases in total fat or saturated fat result in elevated DCA and LCA in feces and elevated exposure of the colon epithelium to these bile acids. When the bile acid DCA was added to the standard diet of wild-type mice invasive colon cancer was induced in 56% of the mice after 8 to 10 months. Overall, the available evidence indicates that DCA and LCA are centrally important DNA-damaging carcinogens in colon cancer. Stomach cancer Stomach cancer is the fourth most common cancer [990,000 cases (7.8%), 738,000 deaths (9.7%)]. Helicobacter pylori infection is the main causative factor in stomach cancer. Chronic gastritis (inflammation) caused by H. pylori is often long-standing if not treated. Infection of gastric epithelial cells with H. pylori results in increased production of reactive oxygen species (ROS). ROS cause oxidative DNA damage including the major base alteration 8-hydroxydeoxyguanosine (8-OHdG). 8-OHdG resulting from ROS is increased in chronic gastritis. The altered DNA base can cause errors during DNA replication that have mutagenic and carcinogenic potential. Thus H. pylori-induced ROS appear to be the major carcinogens in stomach cancer because they cause oxidative DNA damage leading to carcinogenic mutations. Diet is also thought to be a contributing factor in stomach cancer: in Japan, where very salty pickled foods are popular, the incidence of stomach cancer is high. Preserved meat such as bacon, sausages, and ham increases the risk, while a diet rich in fresh fruit, vegetables, peas, beans, grains, nuts, seeds, herbs, and spices will reduce the risk. The risk also increases with age. See also References External links ; Radiation health effects Occupational safety and health
Carcinogen
[ "Chemistry", "Materials_science", "Environmental_science" ]
4,250
[ "Radiation health effects", "Toxicology", "Radiation effects", "Carcinogens", "Radioactivity" ]
6,513
https://en.wikipedia.org/wiki/Client%E2%80%93server%20model
The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may be on the same device. A server host runs one or more server programs, which share their resources with clients. A client usually does not share any of its resources, but it requests content or service from a server. Clients, therefore, initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web. Client and server role The server component provides a function or service to one or many clients, which initiate requests for such services. Servers are classified by the services they provide. For example, a web server serves web pages and a file server serves computer files. A shared resource may be any of the server computer's software and electronic components, from programs and data to processors and storage devices. The sharing of resources of a server constitutes a service. Whether a computer is a client, a server, or both, is determined by the nature of the application that requires the service functions. For example, a single computer can run a web server and file server software at the same time to serve different data to clients making different kinds of requests. The client software can also communicate with server software within the same computer. Communication between servers, such as to synchronize data, is sometimes called inter-server or server-to-server communication. Client and server communication Generally, a service is an abstraction of computer resources and a client does not have to be concerned with how the server performs while fulfilling the request and delivering the response. The client only has to understand the response based on the relevant application protocol, i.e. the content and the formatting of the data for the requested service. Clients and servers exchange messages in a request–response messaging pattern. The client sends a request, and the server returns a response. This exchange of messages is an example of inter-process communication. To communicate, the computers must have a common language, and they must follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communications protocol. All protocols operate in the application layer. The application layer protocol defines the basic patterns of the dialogue. To formalize the data exchange even further, the server may implement an application programming interface (API). The API is an abstraction layer for accessing a service. By restricting communication to a specific content format, it facilitates parsing. By abstracting access, it facilitates cross-platform data exchange. A server may receive requests from many distinct clients in a short period. A computer can only perform a limited number of tasks at any moment, and relies on a scheduling system to prioritize incoming requests from clients to accommodate them. To prevent abuse and maximize availability, the server software may limit the availability to clients. Denial of service attacks are designed to exploit a server's obligation to process requests by overloading it with excessive request rates. 
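As an illustration of the request–response pattern described above, the following minimal Python sketch runs a toy server and a client on the same machine over a TCP socket. The one-off text "protocol" (a request string answered by an echoed response) is invented purely for this example and does not correspond to any standard application protocol.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 5050   # loopback: client and server share one device here

# Server side: create a listening socket, then handle a single request in a thread.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind((HOST, PORT))
server_sock.listen()

def handle_one_request() -> None:
    conn, _addr = server_sock.accept()             # the server passively awaits a request
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"echo: {request}".encode())  # the response returned to the client

threading.Thread(target=handle_one_request, daemon=True).start()

# Client side: initiates the session, sends a request, and waits for the response.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client_sock:
    client_sock.connect((HOST, PORT))
    client_sock.sendall(b"hello server")
    print(client_sock.recv(1024).decode())         # prints "echo: hello server"

server_sock.close()
```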
Encryption should be applied if sensitive information is to be communicated between the client and the server. Example When a bank customer accesses online banking services with a web browser (the client), the client initiates a request to the bank's web server. The customer's login credentials may be stored in a database, and the webserver accesses the database server as a client. An application server interprets the returned data by applying the bank's business logic and provides the output to the webserver. Finally, the webserver returns the result to the client web browser for display. In each step of this sequence of client–server message exchanges, a computer processes a request and returns data. This is the request-response messaging pattern. When all the requests are met, the sequence is complete and the web browser presents the data to the customer. This example illustrates a design pattern applicable to the client–server model: separation of concerns. Server-side Server-side refers to programs and operations that run on the server. This is in contrast to client-side programs and operations which run on the client. (See below) General concepts "Server-side software" refers to a computer application, such as a web server, that runs on remote server hardware, reachable from a user's local computer, smartphone, or other device. Operations may be performed server-side because they require access to information or functionality that is not available on the client, or because performing such operations on the client side would be slow, unreliable, or insecure. Client and server programs may be commonly available ones such as free or commercial web servers and web browsers, communicating with each other using standardized protocols. Or, programmers may write their own server, client, and communications protocol which can only be used with one another. Server-side operations include both those that are carried out in response to client requests, and non-client-oriented operations such as maintenance tasks. Computer security In a computer security context, server-side vulnerabilities or attacks refer to those that occur on a server computer system, rather than on the client side, or in between the two. For example, an attacker might exploit an SQL injection vulnerability in a web application in order to maliciously change or gain unauthorized access to data in the server's database. Alternatively, an attacker might break into a server system using vulnerabilities in the underlying operating system and then be able to access database and other files in the same manner as authorized administrators of the server. Examples In the case of distributed computing projects such as SETI@home and the Great Internet Mersenne Prime Search, while the bulk of the operations occur on the client side, the servers are responsible for coordinating the clients, sending them data to analyze, receiving and storing results, providing reporting functionality to project administrators, etc. In the case of an Internet-dependent user application like Google Earth, while querying and display of map data takes place on the client side, the server is responsible for permanent storage of map data, resolving user queries into map data to be returned to the client, etc. 
In the context of the World Wide Web, commonly encountered server-side computer languages include: C# or Visual Basic in ASP.NET environments Java Perl PHP Python Ruby Node.js Swift However, web applications and services can be implemented in almost any language, as long as they can return data to standards-based web browsers (possibly via intermediary programs) in formats which they can use. Client side Client-side refers to operations that are performed by the client in a computer network. General concepts Typically, a client is a computer application, such as a web browser, that runs on a user's local computer, smartphone, or other device, and connects to a server as necessary. Operations may be performed client-side because they require access to information or functionality that is available on the client but not on the server, because the user needs to observe the operations or provide input, or because the server lacks the processing power to perform the operations in a timely manner for all of the clients it serves. Additionally, if operations can be performed by the client, without sending data over the network, they may take less time, use less bandwidth, and incur a lesser security risk. When the server serves data in a commonly used manner, for example according to standard protocols such as HTTP or FTP, users may have their choice of a number of client programs (e.g. most modern web browsers can request and receive data using both HTTP and FTP). In the case of more specialized applications, programmers may write their own server, client, and communications protocol which can only be used with one another. Programs that run on a user's local computer without ever sending or receiving data over a network are not considered clients, and so the operations of such programs would not be termed client-side operations. Computer security In a computer security context, client-side vulnerabilities or attacks refer to those that occur on the client / user's computer system, rather than on the server side, or in between the two. As an example, if a server contained an encrypted file or message which could only be decrypted using a key housed on the user's computer system, a client-side attack would normally be an attacker's only opportunity to gain access to the decrypted contents. For instance, the attacker might cause malware to be installed on the client system, allowing the attacker to view the user's screen, record the user's keystrokes, and steal copies of the user's encryption keys, etc. Alternatively, an attacker might employ cross-site scripting vulnerabilities to execute malicious code on the client's system without needing to install any permanently resident malware. Examples Distributed computing projects such as SETI@home and the Great Internet Mersenne Prime Search, as well as Internet-dependent applications like Google Earth, rely primarily on client-side operations. They initiate a connection with the server (either in response to a user query, as with Google Earth, or in an automated fashion, as with SETI@home), and request some data. The server selects a data set (a server-side operation) and sends it back to the client. The client then analyzes the data (a client-side operation), and, when the analysis is complete, displays it to the user (as with Google Earth) and/or transmits the results of calculations back to the server (as with SETI@home). 
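The division of labour just described, in which the server hands out data while the client performs the bulk of the computation and reports back, can be sketched roughly as follows. The work-unit format and the "analysis" step are invented placeholders standing in for the network calls a real project such as SETI@home would make; they are not its actual protocol.

```python
from typing import List

def fetch_work_unit() -> List[float]:
    """Server-side role (stubbed here): select and return a data set to analyze.

    In a real system this would be a network request to the project's server,
    which stores the data and coordinates the clients.
    """
    return [0.3, 1.7, 2.9, 0.1, 4.2]       # placeholder samples

def analyze(samples: List[float]) -> float:
    """Client-side role: the bulk of the computation happens locally."""
    return max(samples) - min(samples)      # trivial stand-in for real signal analysis

def submit_result(result: float) -> None:
    """Client-side role: report the finished result back to the server (stubbed)."""
    print(f"submitting result: {result}")

work = fetch_work_unit()   # request data from the server
score = analyze(work)      # heavy lifting on the client
submit_result(score)       # the server stores results and reports to administrators
```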
In the context of the World Wide Web, commonly encountered computer languages which are evaluated or run on the client side include: Cascading Style Sheets (CSS) HTML JavaScript Early history An early form of client–server architecture is remote job entry, dating at least to OS/360 (announced 1964), where the request was to run a job, and the response was the output. While formulating the client–server model in the 1960s and 1970s, computer scientists building ARPANET (at the Stanford Research Institute) used the terms server-host (or serving host) and user-host (or using-host), and these appear in the early documents RFC 5 and RFC 4. This usage was continued at Xerox PARC in the mid-1970s. One context in which researchers used these terms was in the design of a computer network programming language called Decode-Encode Language (DEL). The purpose of this language was to accept commands from one computer (the user-host), which would return status reports to the user as it encoded the commands in network packets. Another DEL-capable computer, the server-host, received the packets, decoded them, and returned formatted data to the user-host. A DEL program on the user-host received the results to present to the user. This is a client–server transaction. Development of DEL was just beginning in 1969, the year that the United States Department of Defense established ARPANET (predecessor of Internet). Client-host and server-host Client-host and server-host have subtly different meanings than client and server. A host is any computer connected to a network. Whereas the words server and client may refer either to a computer or to a computer program, server-host and client-host always refer to computers. The host is a versatile, multifunction computer; clients and servers are just programs that run on a host. In the client–server model, a server is more likely to be devoted to the task of serving. An early use of the word client occurs in "Separating Data from Function in a Distributed File System", a 1978 paper by Xerox PARC computer scientists Howard Sturgis, James Mitchell, and Jay Israel. The authors are careful to define the term for readers, and explain that they use it to distinguish between the user and the user's network node (the client). By 1992, the word server had entered into general parlance. Centralized computing The client-server model does not dictate that server-hosts must have more resources than client-hosts. Rather, it enables any general-purpose computer to extend its capabilities by using the shared resources of other hosts. Centralized computing, however, specifically allocates a large number of resources to a small number of computers. The more computation is offloaded from client-hosts to the central computers, the simpler the client-hosts can be. It relies heavily on network resources (servers and infrastructure) for computation and storage. A diskless node loads even its operating system from the network, and a computer terminal has no operating system at all; it is only an input/output interface to the server. In contrast, a rich client, such as a personal computer, has many resources and does not rely on a server for essential functions. As microcomputers decreased in price and increased in power from the 1980s to the late 1990s, many organizations transitioned computation from centralized servers, such as mainframes and minicomputers, to rich clients. 
This afforded greater, more individualized dominion over computer resources, but complicated information technology management. During the 2000s, web applications matured enough to rival application software developed for a specific microarchitecture. This maturation, more affordable mass storage, and the advent of service-oriented architecture were among the factors that gave rise to the cloud computing trend of the 2010s. Comparison with peer-to-peer architecture In addition to the client-server model, distributed computing applications often use the peer-to-peer (P2P) application architecture. In the client-server model, the server is often designed to operate as a centralized system that serves many clients. The computing power, memory and storage requirements of a server must be scaled appropriately to the expected workload. Load-balancing and failover systems are often employed to scale the server beyond a single physical machine. Load balancing is defined as the methodical and efficient distribution of network or application traffic across multiple servers in a server farm. Each load balancer sits between client devices and backend servers, receiving and then distributing incoming requests to any available server capable of fulfilling them. In a peer-to-peer network, two or more computers (peers) pool their resources and communicate in a decentralized system. Peers are coequal, or equipotent nodes in a non-hierarchical network. Unlike clients in a client-server or client-queue-client network, peers communicate with each other directly. In peer-to-peer networking, an algorithm in the peer-to-peer communications protocol balances load, and even peers with modest resources can help to share the load. If a node becomes unavailable, its shared resources remain available as long as other peers offer it. Ideally, a peer does not need to achieve high availability because other, redundant peers make up for any resource downtime; as the availability and load capacity of peers change, the protocol reroutes requests. Both client-server and master-slave are regarded as sub-categories of distributed peer-to-peer systems. See also Notes Servers (computing) Clients (computing) Inter-process communication Network architecture
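The round-robin dispatch at the heart of the load-balancing description above can be sketched as follows. This is a minimal illustration rather than a production design: the backend names are invented, and real load balancers also account for health checks, sessions, and server capacity.

```python
# Illustrative round-robin load balancing: requests are forwarded to backends in turn.
from itertools import cycle

backends = ["server-a", "server-b", "server-c"]   # servers in the farm (made up)
rotation = cycle(backends)                         # round-robin order

def dispatch(request: str) -> str:
    """Forward an incoming request to the next backend in the rotation."""
    chosen = next(rotation)
    return f"{request} -> {chosen}"

for i in range(5):
    print(dispatch(f"request-{i}"))
# request-0 -> server-a, request-1 -> server-b, request-2 -> server-c,
# request-3 -> server-a, request-4 -> server-b
```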
Client–server model
[ "Engineering" ]
3,143
[ "Network architecture", "Computer networks engineering" ]
6,556
https://en.wikipedia.org/wiki/Coprime%20integers
In number theory, two integers a and b are coprime, relatively prime or mutually prime if the only positive integer that is a divisor of both of them is 1. Consequently, any prime number that divides a does not divide b, and vice versa. This is equivalent to their greatest common divisor (GCD) being 1. One also says a is prime to b or a is coprime with b. The numbers 8 and 9 are coprime, despite the fact that neither—considered individually—is a prime number, since 1 is their only common divisor. On the other hand, 6 and 9 are not coprime, because they are both divisible by 3. The numerator and denominator of a reduced fraction are coprime, by definition. Notation and testing When the integers a and b are coprime, the standard way of expressing this fact in mathematical notation is to indicate that their greatest common divisor is one, by the formula gcd(a, b) = 1 or (a, b) = 1. In their 1989 textbook Concrete Mathematics, Ronald Graham, Donald Knuth, and Oren Patashnik proposed the alternative notation a ⊥ b to indicate that a and b are relatively prime and that the term "prime" be used instead of coprime (as in a is prime to b). A fast way to determine whether two numbers are coprime is given by the Euclidean algorithm and its faster variants such as the binary GCD algorithm or Lehmer's GCD algorithm. The number of integers coprime with a positive integer n, between 1 and n, is given by Euler's totient function, also known as Euler's phi function, φ(n). A set of integers can also be called coprime if its elements share no common positive factor except 1. A stronger condition on a set of integers is pairwise coprime, which means that a and b are coprime for every pair (a, b) of different integers in the set. The set {2, 3, 4} is coprime, but it is not pairwise coprime since 2 and 4 are not relatively prime. Properties The numbers 1 and −1 are the only integers coprime with every integer, and they are the only integers that are coprime with 0. A number of conditions are equivalent to a and b being coprime: No prime number divides both a and b. There exist integers x and y such that ax + by = 1 (see Bézout's identity). The integer b has a multiplicative inverse modulo a, meaning that there exists an integer y such that by ≡ 1 (mod a). In ring-theoretic language, b is a unit in the ring of integers modulo a. Every pair of congruence relations for an unknown integer x, of the form x ≡ k (mod a) and x ≡ m (mod b), has a solution (Chinese remainder theorem); in fact the solutions are described by a single congruence relation modulo ab. The least common multiple of a and b is equal to their product, i.e. lcm(a, b) = ab. As a consequence of the third point, if a and b are coprime and br ≡ bs (mod a), then r ≡ s (mod a). That is, we may "divide by b" when working modulo a. Furthermore, if b1 and b2 are both coprime with a, then so is their product b1b2 (i.e., modulo a it is a product of invertible elements, and therefore invertible); this also follows from the first point by Euclid's lemma, which states that if a prime number p divides a product bc, then p divides at least one of the factors b, c. As a consequence of the first point, if a and b are coprime, then so are any powers a^k and b^m. If a and b are coprime and a divides the product bc, then a divides c. This can be viewed as a generalization of Euclid's lemma. The two integers a and b are coprime if and only if the point with coordinates (a, b) in a Cartesian coordinate system would be "visible" via an unobstructed line of sight from the origin (0, 0), in the sense that there is no point with integer coordinates anywhere on the line segment between the origin and (a, b). In a sense that can be made precise, the probability that two randomly chosen integers are coprime is 6/π², which is about 61% (see Probability of coprimality, below). 
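The Euclidean-algorithm test and Bézout's identity mentioned above can be illustrated with a short Python sketch. The helper names are chosen for this example; math.gcd and the three-argument pow (Python 3.8+) are standard library features.

```python
# Coprimality test via the Euclidean algorithm, plus a Bézout witness.
from math import gcd

def coprime(a: int, b: int) -> bool:
    return gcd(a, b) == 1

def extended_gcd(a: int, b: int):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

print(coprime(8, 9))        # True: 1 is the only common divisor
print(coprime(6, 9))        # False: both are divisible by 3
g, x, y = extended_gcd(8, 9)
print(g, 8 * x + 9 * y)     # 1 1 -> witnesses Bézout's identity
print(pow(8, -1, 9))        # 8: multiplicative inverse of 8 modulo 9 (Python 3.8+)
```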
Two natural numbers a and b are coprime if and only if the numbers 2^a − 1 and 2^b − 1 are coprime. As a generalization of this, following easily from the Euclidean algorithm in base n > 1: gcd(n^a − 1, n^b − 1) = n^gcd(a, b) − 1. Coprimality in sets A set of integers can also be called coprime or setwise coprime if the greatest common divisor of all the elements of the set is 1. For example, the integers 6, 10, 15 are coprime because 1 is the only positive integer that divides all of them. If every pair in a set of integers is coprime, then the set is said to be pairwise coprime (or pairwise relatively prime, mutually coprime or mutually relatively prime). Pairwise coprimality is a stronger condition than setwise coprimality; every pairwise coprime finite set is also setwise coprime, but the reverse is not true. For example, the integers 4, 5, 6 are (setwise) coprime (because the only positive integer dividing all of them is 1), but they are not pairwise coprime (because gcd(4, 6) = 2). The concept of pairwise coprimality is important as a hypothesis in many results in number theory, such as the Chinese remainder theorem. It is possible for an infinite set of integers to be pairwise coprime. Notable examples include the set of all prime numbers, the set of elements in Sylvester's sequence, and the set of all Fermat numbers. Coprimality in ring ideals Two ideals A and B in a commutative ring R are called coprime (or comaximal) if A + B = R. This generalizes Bézout's identity: with this definition, two principal ideals (a) and (b) in the ring of integers are coprime if and only if a and b are coprime. If the ideals A and B of R are coprime, then AB = A ∩ B; furthermore, if C is a third ideal such that A contains BC, then A contains C. The Chinese remainder theorem can be generalized to any commutative ring, using coprime ideals. Probability of coprimality Given two randomly chosen integers a and b, it is reasonable to ask how likely it is that a and b are coprime. In this determination, it is convenient to use the characterization that a and b are coprime if and only if no prime number divides both of them (see Fundamental theorem of arithmetic). Informally, the probability that any number is divisible by a prime (or in fact any integer) p is 1/p; for example, every 7th integer is divisible by 7. Hence the probability that two numbers are both divisible by p is 1/p², and the probability that at least one of them is not is 1 − 1/p². Any finite collection of divisibility events associated to distinct primes is mutually independent. For example, in the case of two events, a number is divisible by primes p and q if and only if it is divisible by pq; the latter event has probability 1/(pq). If one makes the heuristic assumption that such reasoning can be extended to infinitely many divisibility events, one is led to guess that the probability that two numbers are coprime is given by a product over all primes, ∏p (1 − 1/p²) = 1/ζ(2) = 6/π² ≈ 0.6079. Here ζ refers to the Riemann zeta function, the identity relating the product over primes to ζ(2) is an example of an Euler product, and the evaluation of ζ(2) as π²/6 is the Basel problem, solved by Leonhard Euler in 1735. There is no way to choose a positive integer at random so that each positive integer occurs with equal probability, but statements about "randomly chosen integers" such as the ones above can be formalized by using the notion of natural density. For each positive integer N, let P_N be the probability that two randomly chosen numbers in {1, 2, ..., N} are coprime. Although P_N will never equal 6/π² exactly, with work one can show that in the limit as N → ∞, the probability P_N approaches 6/π². 
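The density statement above can be checked numerically. The following sketch counts coprime pairs in {1, ..., N}² and compares the proportion with 6/π²; the cutoff N = 200 is an arbitrary choice made for the demonstration.

```python
# Numerical check: the proportion of coprime pairs (a, b) with 1 <= a, b <= N
# approaches 6/pi^2 as N grows.
from math import gcd, pi

def coprime_fraction(N: int) -> float:
    hits = sum(1 for a in range(1, N + 1)
                 for b in range(1, N + 1) if gcd(a, b) == 1)
    return hits / (N * N)

print(coprime_fraction(200))   # roughly 0.61
print(6 / pi ** 2)             # 0.6079271018540267
```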
More generally, the probability of k randomly chosen integers being setwise coprime is 1/ζ(k). Generating all coprime pairs All pairs of positive coprime numbers (m, n) (with m > n) can be arranged in two disjoint complete ternary trees, one tree starting from (2, 1) (for even–odd and odd–even pairs), and the other tree starting from (3, 1) (for odd–odd pairs). The children of each vertex (m, n) are generated as follows: Branch 1: (2m − n, m) Branch 2: (2m + n, m) Branch 3: (m + 2n, n) This scheme is exhaustive and non-redundant with no invalid members, as shown in the sketch below. This can be proved by remarking that, if (a, b) is a coprime pair with a > b, then if a > 3b then (a, b) is a child of (a − 2b, b) along branch 3; if 2b < a < 3b then (a, b) is a child of (b, a − 2b) along branch 2; if b < a < 2b then (a, b) is a child of (b, 2b − a) along branch 1. In all cases the father is a "smaller" coprime pair (m, n) with m > n. This process of "computing the father" can stop only if either a = 2b or a = 3b. In these cases, coprimality implies that the pair is either (2, 1) or (3, 1). Another (much simpler) way to generate a tree of positive coprime pairs (m, n) (with m > n) is by means of two generators f: (m, n) → (m + n, n) and g: (m, n) → (m + n, m), starting with the root (2, 1). The resulting binary tree, the Calkin–Wilf tree, is exhaustive and non-redundant, which can be seen as follows. Given a coprime pair one recursively applies the inverse of f or the inverse of g depending on which of them yields a positive coprime pair with m > n. Since only one does, the tree is non-redundant. Since by this procedure one is bound to arrive at the root, the tree is exhaustive. Applications In machine design, even, uniform gear wear is achieved by choosing the tooth counts of the two gears meshing together to be relatively prime. When a 1:1 gear ratio is desired, a gear relatively prime to the two equal-size gears may be inserted between them. In pre-computer cryptography, some Vernam cipher machines combined several loops of key tape of different lengths. Many rotor machines combine rotors of different numbers of teeth. Such combinations work best when the entire set of lengths are pairwise coprime. Generalizations This concept can be extended to other algebraic structures than the integers; for example, polynomials whose greatest common divisor is 1 are called coprime polynomials. See also Euclid's orchard Superpartient number Notes References Further reading Number theory
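A short sketch can make the ternary-tree construction above concrete, assuming the two roots (2, 1) and (3, 1) and the three branches stated in the text; the breadth-first traversal and the size cutoff are choices made for this example.

```python
# Generating coprime pairs from the two roots (2, 1) and (3, 1) via the branches
# (m, n) -> (2m - n, m), (2m + n, m), (m + 2n, n).
from math import gcd
from collections import deque

def coprime_pairs(limit: int):
    """Yield coprime pairs (m, n) with m > n, breadth-first, restricted to m <= limit."""
    queue = deque([(2, 1), (3, 1)])
    while queue:
        m, n = queue.popleft()
        if m > limit:
            continue   # children only get larger, so pruning here is safe
        yield m, n
        queue.extend([(2 * m - n, m), (2 * m + n, m), (m + 2 * n, n)])

pairs = list(coprime_pairs(8))
print(pairs)
print(all(gcd(m, n) == 1 for m, n in pairs))   # True: every generated pair is coprime
```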
Coprime integers
[ "Mathematics" ]
2,084
[ "Discrete mathematics", "Number theory" ]
6,596
https://en.wikipedia.org/wiki/Computer%20vision
Computer vision tasks include methods for acquiring, processing, analyzing, and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the form of decisions. "Understanding" in this context signifies the transformation of visual images (the input to the retina) into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. The scientific discipline of computer vision is concerned with the theory behind artificial systems that extract information from images. Image data can take many forms, such as video sequences, views from multiple cameras, multi-dimensional data from a 3D scanner, 3D point clouds from LiDaR sensors, or medical scanning devices. The technological discipline of computer vision seeks to apply its theories and models to the construction of computer vision systems. Subdisciplines of computer vision include scene reconstruction, object detection, event detection, activity recognition, video tracking, object recognition, 3D pose estimation, learning, indexing, motion estimation, visual servoing, 3D scene modeling, and image restoration. Definition Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do. "Computer vision is concerned with the automatic extraction, analysis, and understanding of useful information from a single image or a sequence of images. It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding." As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models for the construction of computer vision systems. Machine vision refers to a systems engineering discipline, especially in the context of factory automation. In more recent times, the terms computer vision and machine vision have converged to a greater degree. History In the late 1960s, computer vision began at universities that were pioneering artificial intelligence. It was meant to mimic the human visual system as a stepping stone to endowing robots with intelligent behavior. In 1966, it was believed that this could be achieved through an undergraduate summer project, by attaching a camera to a computer and having it "describe what it saw". What distinguished computer vision from the prevalent field of digital image processing at that time was a desire to extract three-dimensional structure from images with the goal of achieving full scene understanding. Studies in the 1970s formed the early foundations for many of the computer vision algorithms that exist today, including extraction of edges from images, labeling of lines, non-polyhedral and polyhedral modeling, representation of objects as interconnections of smaller structures, optical flow, and motion estimation. 
The next decade saw studies based on more rigorous mathematical analysis and quantitative aspects of computer vision. These include the concept of scale-space, the inference of shape from various cues such as shading, texture and focus, and contour models known as snakes. Researchers also realized that many of these mathematical concepts could be treated within the same optimization framework as regularization and Markov random fields. By the 1990s, some of the previous research topics became more active than others. Research in projective 3-D reconstructions led to better understanding of camera calibration. With the advent of optimization methods for camera calibration, it was realized that a lot of the ideas were already explored in bundle adjustment theory from the field of photogrammetry. This led to methods for sparse 3-D reconstructions of scenes from multiple images. Progress was made on the dense stereo correspondence problem and further multi-view stereo techniques. At the same time, variations of graph cut were used to solve image segmentation. This decade also marked the first time statistical learning techniques were used in practice to recognize faces in images (see Eigenface). Toward the end of the 1990s, a significant change came about with the increased interaction between the fields of computer graphics and computer vision. This included image-based rendering, image morphing, view interpolation, panoramic image stitching and early light-field rendering. Recent work has seen the resurgence of feature-based methods used in conjunction with machine learning techniques and complex optimization frameworks. The advancement of Deep Learning techniques has brought further life to the field of computer vision. The accuracy of deep learning algorithms on several benchmark computer vision data sets for tasks ranging from classification, segmentation and optical flow has surpassed prior methods. Related fields Solid-state physics Solid-state physics is another field that is closely related to computer vision. Most computer vision systems rely on image sensors, which detect electromagnetic radiation, which is typically in the form of either visible, infrared or ultraviolet light. The sensors are designed using quantum physics. The process by which light interacts with surfaces is explained using physics. Physics explains the behavior of optics which are a core part of most imaging systems. Sophisticated image sensors even require quantum mechanics to provide a complete understanding of the image formation process. Also, various measurement problems in physics can be addressed using computer vision, for example, motion in fluids. Neurobiology Neurobiology has greatly influenced the development of computer vision algorithms. Over the last century, there has been an extensive study of eyes, neurons, and brain structures devoted to the processing of visual stimuli in both humans and various animals. This has led to a coarse yet convoluted description of how natural vision systems operate in order to solve certain vision-related tasks. These results have led to a sub-field within computer vision where artificial systems are designed to mimic the processing and behavior of biological systems at different levels of complexity. Also, some of the learning-based methods developed within computer vision (e.g. neural net and deep learning based image and feature analysis and classification) have their background in neurobiology. 
The Neocognitron, a neural network developed in the 1970s by Kunihiko Fukushima, is an early example of computer vision taking direct inspiration from neurobiology, specifically the primary visual cortex. Some strands of computer vision research are closely related to the study of biological vision—indeed, just as many strands of AI research are closely tied with research into human intelligence and the use of stored knowledge to interpret, integrate, and utilize visual information. The field of biological vision studies and models the physiological processes behind visual perception in humans and other animals. Computer vision, on the other hand, develops and describes the algorithms implemented in software and hardware behind artificial vision systems. An interdisciplinary exchange between biological and computer vision has proven fruitful for both fields. Signal processing Yet another field related to computer vision is signal processing. Many methods for processing one-variable signals, typically temporal signals, can be extended in a natural way to the processing of two-variable signals or multi-variable signals in computer vision. However, because of the specific nature of images, there are many methods developed within computer vision that have no counterpart in the processing of one-variable signals. Together with the multi-dimensionality of the signal, this defines a subfield in signal processing as a part of computer vision. Robotic navigation Robot navigation sometimes deals with autonomous path planning or deliberation for robotic systems to navigate through an environment. A detailed understanding of these environments is required to navigate through them. Information about the environment could be provided by a computer vision system, acting as a vision sensor and providing high-level information about the environment and the robot Visual computing Other fields Besides the above-mentioned views on computer vision, many of the related research topics can also be studied from a purely mathematical point of view. For example, many methods in computer vision are based on statistics, optimization or geometry. Finally, a significant part of the field is devoted to the implementation aspect of computer vision; how existing methods can be realized in various combinations of software and hardware, or how these methods can be modified in order to gain processing speed without losing too much performance. Computer vision is also used in fashion eCommerce, inventory management, patent search, furniture, and the beauty industry. Distinctions The fields most closely related to computer vision are image processing, image analysis and machine vision. There is a significant overlap in the range of techniques and applications that these cover. This implies that the basic techniques that are used and developed in these fields are similar, something which can be interpreted as there is only one field with different names. On the other hand, it appears to be necessary for research groups, scientific journals, conferences, and companies to present or market themselves as belonging specifically to one of these fields and, hence, various characterizations which distinguish each of the fields from the others have been presented. 
In image processing, the input is an image and the output is an image as well, whereas in computer vision, an image or a video is taken as an input and the output could be an enhanced image, an understanding of the content of an image or even behavior of a computer system based on such understanding. Computer graphics produces image data from 3D models, and computer vision often produces 3D models from image data. There is also a trend towards a combination of the two disciplines, e.g., as explored in augmented reality. The following characterizations appear relevant but should not be taken as universally accepted: Image processing and image analysis tend to focus on 2D images, how to transform one image to another, e.g., by pixel-wise operations such as contrast enhancement, local operations such as edge extraction or noise removal, or geometrical transformations such as rotating the image. This characterization implies that image processing/analysis neither requires assumptions nor produces interpretations about the image content. Computer vision includes 3D analysis from 2D images. This analyzes the 3D scene projected onto one or several images, e.g., how to reconstruct structure or other information about the 3D scene from one or several images. Computer vision often relies on more or less complex assumptions about the scene depicted in an image. Machine vision is the process of applying a range of technologies and methods to provide imaging-based automatic inspection, process control, and robot guidance in industrial applications. Machine vision tends to focus on applications, mainly in manufacturing, e.g., vision-based robots and systems for vision-based inspection, measurement, or picking (such as bin picking). This implies that image sensor technologies and control theory often are integrated with the processing of image data to control a robot and that real-time processing is emphasized by means of efficient implementations in hardware and software. It also implies that external conditions such as lighting can be and are often more controlled in machine vision than they are in general computer vision, which can enable the use of different algorithms. There is also a field called imaging which primarily focuses on the process of producing images, but sometimes also deals with the processing and analysis of images. For example, medical imaging includes substantial work on the analysis of image data in medical applications. Progress in convolutional neural networks (CNNs) has improved the accurate detection of disease in medical images, particularly in cardiology, pathology, dermatology, and radiology. Finally, pattern recognition is a field that uses various methods to extract information from signals in general, mainly based on statistical approaches and artificial neural networks. A significant part of this field is devoted to applying these methods to image data. Photogrammetry also overlaps with computer vision, e.g., stereophotogrammetry vs. computer stereo vision. Applications Applications range from tasks such as industrial machine vision systems which, say, inspect bottles speeding by on a production line, to research into artificial intelligence and computers or robots that can comprehend the world around them. The computer vision and machine vision fields have significant overlap. Computer vision covers the core technology of automated image analysis which is used in many fields. 
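Before turning to machine vision, the kind of pixel-level operation attributed to image processing above (for example, edge extraction) can be illustrated concretely. The sketch below applies a Sobel-like 3×3 kernel to a tiny synthetic grayscale image; the image and kernel values are invented for the demonstration.

```python
# Edge extraction as a local, pixel-level operation: a 3x3 convolution.
import numpy as np

image = np.zeros((6, 6), dtype=float)
image[:, 3:] = 1.0                      # left half dark, right half bright

kernel = np.array([[-1.0, 0.0, 1.0],    # horizontal-gradient (Sobel-like) kernel
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

edges = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        patch = image[i:i + 3, j:j + 3]
        edges[i, j] = np.sum(patch * kernel)

print(edges)   # large values mark the vertical boundary between dark and bright
```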
Machine vision usually refers to a process of combining automated image analysis with other methods and technologies to provide automated inspection and robot guidance in industrial applications. In many computer-vision applications, computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common. Examples of applications of computer vision include systems for: Automatic inspection, e.g., in manufacturing applications; Assisting humans in identification tasks, e.g., a species identification system; Controlling processes, e.g., an industrial robot; Detecting events, e.g., for visual surveillance or people counting, e.g., in the restaurant industry; Interaction, e.g., as the input to a device for computer-human interaction; monitoring agricultural crops, e.g. an open-source vision transformers model has been developed to help farmers automatically detect strawberry diseases with 98.4% accuracy. Modeling objects or environments, e.g., medical image analysis or topographical modeling; Navigation, e.g., by an autonomous vehicle or mobile robot; Organizing information, e.g., for indexing databases of images and image sequences. Tracking surfaces or planes in 3D coordinates for allowing Augmented Reality experiences. Medicine One of the most prominent application fields is medical computer vision, or medical image processing, characterized by the extraction of information from image data to diagnose a patient. An example of this is the detection of tumours, arteriosclerosis or other malign changes, and a variety of dental pathologies; measurements of organ dimensions, blood flow, etc. are another example. It also supports medical research by providing new information: e.g., about the structure of the brain or the quality of medical treatments. Applications of computer vision in the medical area also include enhancement of images interpreted by humans—ultrasonic images or X-ray images, for example—to reduce the influence of noise. Machine vision A second application area in computer vision is in industry, sometimes called machine vision, where information is extracted for the purpose of supporting a production process. One example is quality control where details or final products are being automatically inspected in order to find defects. One of the most prevalent fields for such inspection is the Wafer industry in which every single Wafer is being measured and inspected for inaccuracies or defects to prevent a computer chip from coming to market in an unusable manner. Another example is a measurement of the position and orientation of details to be picked up by a robot arm. Machine vision is also heavily used in the agricultural processes to remove undesirable foodstuff from bulk material, a process called optical sorting. Military Military applications are probably one of the largest areas of computer vision. The obvious examples are the detection of enemy soldiers or vehicles and missile guidance. More advanced systems for missile guidance send the missile to an area rather than a specific target, and target selection is made when the missile reaches the area based on locally acquired image data. Modern military concepts, such as "battlefield awareness", imply that various sensors, including image sensors, provide a rich set of information about a combat scene that can be used to support strategic decisions. 
In this case, automatic processing of the data is used to reduce complexity and to fuse information from multiple sensors to increase reliability. Autonomous vehicles One of the newer application areas is autonomous vehicles, which include submersibles, land-based vehicles (small robots with wheels, cars, or trucks), aerial vehicles, and unmanned aerial vehicles (UAV). The level of autonomy ranges from fully autonomous (unmanned) vehicles to vehicles where computer-vision-based systems support a driver or a pilot in various situations. Fully autonomous vehicles typically use computer vision for navigation, e.g., for knowing where they are or mapping their environment (SLAM), for detecting obstacles. It can also be used for detecting certain task-specific events, e.g., a UAV looking for forest fires. Examples of supporting systems are obstacle warning systems in cars, cameras and LiDAR sensors in vehicles, and systems for autonomous landing of aircraft. Several car manufacturers have demonstrated systems for autonomous driving of cars. There are ample examples of military autonomous vehicles ranging from advanced missiles to UAVs for recon missions or missile guidance. Space exploration is already being made with autonomous vehicles using computer vision, e.g., NASA's Curiosity and CNSA's Yutu-2 rover. Tactile feedback Materials such as rubber and silicon are being used to create sensors that allow for applications such as detecting microundulations and calibrating robotic hands. Rubber can be used in order to create a mold that can be placed over a finger, inside of this mold would be multiple strain gauges. The finger mold and sensors could then be placed on top of a small sheet of rubber containing an array of rubber pins. A user can then wear the finger mold and trace a surface. A computer can then read the data from the strain gauges and measure if one or more of the pins are being pushed upward. If a pin is being pushed upward then the computer can recognize this as an imperfection in the surface. This sort of technology is useful in order to receive accurate data on imperfections on a very large surface. Another variation of this finger mold sensor are sensors that contain a camera suspended in silicon. The silicon forms a dome around the outside of the camera and embedded in the silicon are point markers that are equally spaced. These cameras can then be placed on devices such as robotic hands in order to allow the computer to receive highly accurate tactile data. Other application areas include: Support of visual effects creation for cinema and broadcast, e.g., camera tracking (match moving). Surveillance. Driver drowsiness detection Tracking and counting organisms in the biological sciences Typical tasks Each of the application areas described above employ a range of computer vision tasks; more or less well-defined measurement problems or processing problems, which can be solved using a variety of methods. Some examples of typical computer vision tasks are presented below. Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the forms of decisions. Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action. 
This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. Recognition The classical problem in computer vision, image processing, and machine vision is that of determining whether or not the image data contains some specific object, feature, or activity. Different varieties of recognition problem are described in the literature. Object recognition (also called object classification)one or several pre-specified or learned objects or object classes can be recognized, usually together with their 2D positions in the image or 3D poses in the scene. Blippar, Google Goggles, and LikeThat provide stand-alone programs that illustrate this functionality. Identificationan individual instance of an object is recognized. Examples include identification of a specific person's face or fingerprint, identification of handwritten digits, or the identification of a specific vehicle. Detectionthe image data are scanned for specific objects along with their locations. Examples include the detection of an obstacle in the car's field of view and possible abnormal cells or tissues in medical images or the detection of a vehicle in an automatic road toll system. Detection based on relatively simple and fast computations is sometimes used for finding smaller regions of interesting image data which can be further analyzed by more computationally demanding techniques to produce a correct interpretation. Currently, the best algorithms for such tasks are based on convolutional neural networks. An illustration of their capabilities is given by the ImageNet Large Scale Visual Recognition Challenge; this is a benchmark in object classification and detection, with millions of images and 1000 object classes used in the competition. Performance of convolutional neural networks on the ImageNet tests is now close to that of humans. The best algorithms still struggle with objects that are small or thin, such as a small ant on the stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters (an increasingly common phenomenon with modern digital cameras). By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues. For example, they are not good at classifying objects into fine-grained classes, such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this with ease. Several specialized tasks based on recognition exist, such as: Content-based image retrievalfinding all images in a larger set of images which have a specific content. The content can be specified in different ways, for example in terms of similarity relative to a target image (give me all images similar to image X) by utilizing reverse image search techniques, or in terms of high-level search criteria given as text input (give me all images which contain many houses, are taken during winter and have no cars in them). Pose estimationestimating the position or orientation of a specific object relative to the camera. An example application for this technique would be assisting a robot arm in retrieving objects from a conveyor belt in an assembly line situation or picking parts from a bin. 
Optical character recognition (OCR)identifying characters in images of printed or handwritten text, usually with a view to encoding the text in a format more amenable to editing or indexing (e.g. ASCII). A related task is reading of 2D codes such as data matrix and QR codes. Facial recognition a technology that enables the matching of faces in digital images or video frames to a face database, which is now widely used for mobile phone facelock, smart door locking, etc. Emotion recognitiona subset of facial recognition, emotion recognition refers to the process of classifying human emotions. Psychologists caution, however, that internal emotions cannot be reliably detected from faces. Shape Recognition Technology (SRT) in people counter systems differentiating human beings (head and shoulder patterns) from objects. Human activity recognition - deals with recognizing the activity from a series of video frames, such as, if the person is picking up an object or walking. Motion analysis Several tasks relate to motion estimation, where an image sequence is processed to produce an estimate of the velocity either at each points in the image or in the 3D scene or even of the camera that produces the images. Examples of such tasks are: Egomotiondetermining the 3D rigid motion (rotation and translation) of the camera from an image sequence produced by the camera. Trackingfollowing the movements of a (usually) smaller set of interest points or objects (e.g., vehicles, objects, humans or other organisms) in the image sequence. This has vast industry applications as most high-running machinery can be monitored in this way. Optical flowto determine, for each point in the image, how that point is moving relative to the image plane, i.e., its apparent motion. This motion is a result of both how the corresponding 3D point is moving in the scene and how the camera is moving relative to the scene. Scene reconstruction Given one or (typically) more images of a scene, or a video, scene reconstruction aims at computing a 3D model of the scene. In the simplest case, the model can be a set of 3D points. More sophisticated methods produce a complete 3D surface model. The advent of 3D imaging not requiring motion or scanning, and related processing algorithms is enabling rapid advances in this field. Grid-based 3D sensing can be used to acquire 3D images from multiple angles. Algorithms are now available to stitch multiple 3D images together into point clouds and 3D models. Image restoration Image restoration comes into the picture when the original image is degraded or damaged due to some external factors like lens wrong positioning, transmission interference, low lighting or motion blurs, etc., which is referred to as noise. When the images are degraded or damaged, the information to be extracted from them also gets damaged. Therefore, we need to recover or restore the image as it was intended to be. The aim of image restoration is the removal of noise (sensor noise, motion blur, etc.) from images. The simplest possible approach for noise removal is various types of filters, such as low-pass filters or median filters. More sophisticated methods assume a model of how the local image structures look to distinguish them from noise. By first analyzing the image data in terms of the local image structures, such as lines or edges, and then controlling the filtering based on local information from the analysis step, a better level of noise removal is usually obtained compared to the simpler approaches. 
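The simple filtering approach to noise removal mentioned above can be illustrated with a short sketch. The 5×5 image and the single noise spike are invented for the demonstration; a real implementation would typically use a library routine rather than explicit loops.

```python
# A 3x3 median filter removing an isolated "salt" noise pixel.
import numpy as np

noisy = np.full((5, 5), 10.0)
noisy[2, 2] = 255.0                     # isolated noise spike

restored = noisy.copy()
for i in range(1, 4):
    for j in range(1, 4):
        neighbourhood = noisy[i - 1:i + 2, j - 1:j + 2]
        restored[i, j] = np.median(neighbourhood)

print(noisy[2, 2], restored[2, 2])      # 255.0 10.0 -> the spike is removed
```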
An example in this field is inpainting. System methods The organization of a computer vision system is highly application-dependent. Some systems are stand-alone applications that solve a specific measurement or detection problem, while others constitute a sub-system of a larger design which, for example, also contains sub-systems for control of mechanical actuators, planning, information databases, man-machine interfaces, etc. The specific implementation of a computer vision system also depends on whether its functionality is pre-specified or if some part of it can be learned or modified during operation. Many functions are unique to the application. There are, however, typical functions that are found in many computer vision systems. Image acquisition – A digital image is produced by one or several image sensors, which, besides various types of light-sensitive cameras, include range sensors, tomography devices, radar, ultra-sonic cameras, etc. Depending on the type of sensor, the resulting image data is an ordinary 2D image, a 3D volume, or an image sequence. The pixel values typically correspond to light intensity in one or several spectral bands (gray images or colour images) but can also be related to various physical measures, such as depth, absorption or reflectance of sonic or electromagnetic waves, or magnetic resonance imaging. Pre-processing – Before a computer vision method can be applied to image data in order to extract some specific piece of information, it is usually necessary to process the data in order to ensure that it satisfies certain assumptions implied by the method. Examples are: Re-sampling to ensure that the image coordinate system is correct. Noise reduction to ensure that sensor noise does not introduce false information. Contrast enhancement to ensure that relevant information can be detected. Scale space representation to enhance image structures at locally appropriate scales. Feature extraction – Image features at various levels of complexity are extracted from the image data. Typical examples of such features are: Lines, edges and ridges. Localized interest points such as corners, blobs or points. More complex features may be related to texture, shape, or motion. Detection/segmentation – At some point in the processing, a decision is made about which image points or regions of the image are relevant for further processing. Examples are: Selection of a specific set of interest points. Segmentation of one or multiple image regions that contain a specific object of interest. Segmentation of image into nested scene architecture comprising foreground, object groups, single objects or salient object parts (also referred to as spatial-taxon scene hierarchy), while the visual salience is often implemented as spatial and temporal attention. Segmentation or co-segmentation of one or multiple videos into a series of per-frame foreground masks while maintaining its temporal semantic continuity. High-level processing – At this step, the input is typically a small set of data, for example, a set of points or an image region, which is assumed to contain a specific object. The remaining processing deals with, for example: Verification that the data satisfies model-based and application-specific assumptions. Estimation of application-specific parameters, such as object pose or object size. Image recognition – classifying a detected object into different categories. Image registration – comparing and combining two different views of the same object. 
Decision making Making the final decision required for the application, for example: Pass/fail on automatic inspection applications. Match/no-match in recognition applications. Flag for further human review in medical, military, security and recognition applications. Image-understanding systems Image-understanding systems (IUS) include three levels of abstraction as follows: low level includes image primitives such as edges, texture elements, or regions; intermediate level includes boundaries, surfaces and volumes; and high level includes objects, scenes, or events. Many of these requirements are entirely topics for further research. The representational requirements in the designing of IUS for these levels are: representation of prototypical concepts, concept organization, spatial knowledge, temporal knowledge, scaling, and description by comparison and differentiation. While inference refers to the process of deriving new, not explicitly represented facts from currently known facts, control refers to the process that selects which of the many inference, search, and matching techniques should be applied at a particular stage of processing. Inference and control requirements for IUS are: search and hypothesis activation, matching and hypothesis testing, generation and use of expectations, change and focus of attention, certainty and strength of belief, inference and goal satisfaction. Hardware There are many kinds of computer vision systems; however, all of them contain these basic elements: a power source, at least one image acquisition device (camera, ccd, etc.), a processor, and control and communication cables or some kind of wireless interconnection mechanism. In addition, a practical vision system contains software, as well as a display in order to monitor the system. Vision systems for inner spaces, as most industrial ones, contain an illumination system and may be placed in a controlled environment. Furthermore, a completed system includes many accessories, such as camera supports, cables, and connectors. Most computer vision systems use visible-light cameras passively viewing a scene at frame rates of at most 60 frames per second (usually far slower). A few computer vision systems use image-acquisition hardware with active illumination or something other than visible light or both, such as structured-light 3D scanners, thermographic cameras, hyperspectral imagers, radar imaging, lidar scanners, magnetic resonance images, side-scan sonar, synthetic aperture sonar, etc. Such hardware captures "images" that are then processed often using the same computer vision algorithms used to process visible-light images. While traditional broadcast and consumer video systems operate at a rate of 30 frames per second, advances in digital signal processing and consumer graphics hardware has made high-speed image acquisition, processing, and display possible for real-time systems on the order of hundreds to thousands of frames per second. For applications in robotics, fast, real-time video systems are critically important and often can simplify the processing needed for certain algorithms. When combined with a high-speed projector, fast image acquisition allows 3D measurement and feature tracking to be realized. Egocentric vision systems are composed of a wearable camera that automatically take pictures from a first-person perspective. As of 2016, vision processing units are emerging as a new class of processors to complement CPUs and graphics processing units (GPUs) in this role. 
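The typical functions listed above fit together into a processing pipeline, which the following skeleton illustrates. Every function here is a placeholder invented for the sketch; a real system would substitute application-specific acquisition, pre-processing, feature extraction, detection, and decision steps.

```python
# Skeleton of a typical computer vision pipeline:
# acquisition -> pre-processing -> feature extraction -> detection -> decision.
import numpy as np

def acquire_image():
    return np.random.rand(8, 8)                             # stand-in for a camera frame

def preprocess(image):
    return (image - image.mean()) / (image.std() + 1e-9)    # contrast normalisation

def extract_features(image):
    return {"mean": float(image.mean()), "max": float(image.max())}

def detect(features):
    return features["max"] > 2.0                            # toy "object present" test

def decide(detected):
    return "flag for review" if detected else "pass"

frame = acquire_image()
features = extract_features(preprocess(frame))
print(decide(detect(features)))
```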
See also Chessboard detection Computational imaging Computational photography Computer audition Egocentric vision Machine vision glossary Space mapping Teknomo–Fernandez algorithm Vision science Visual agnosia Visual perception Visual system Lists Outline of computer vision List of emerging technologies Outline of artificial intelligence References Further reading External links USC Iris computer vision conference list Computer vision papers on the web – a complete list of papers of the most relevant computer vision conferences. Computer Vision Online – news, source code, datasets and job offers related to computer vision CVonline – Bob Fisher's Compendium of Computer Vision. British Machine Vision Association – supporting computer vision research within the UK via the BMVC and MIUA conferences, Annals of the BMVA (open-source journal), BMVA Summer School and one-day meetings Computer Vision Container, Joe Hoeller GitHub: Widely adopted open-source container for GPU accelerated computer vision applications. Used by researchers, universities, private companies, as well as the U.S. Gov't. Image processing Packaging machinery Articles containing video clips
Computer vision
[ "Engineering" ]
6,599
[ "Artificial intelligence engineering", "Packaging machinery", "Industrial machinery", "Computer vision" ]
6,617
https://en.wikipedia.org/wiki/Compactification%20%28mathematics%29
In mathematics, in general topology, compactification is the process or result of making a topological space into a compact space. A compact space is a space in which every open cover of the space contains a finite subcover. The methods of compactification are various, but each is a way of controlling points from "going off to infinity" by in some way adding "points at infinity" or preventing such an "escape". An example Consider the real line with its ordinary topology. This space is not compact; in a sense, points can go off to infinity to the left or to the right. It is possible to turn the real line into a compact space by adding a single "point at infinity" which we will denote by ∞. The resulting compactification is homeomorphic to a circle in the plane (which, as a closed and bounded subset of the Euclidean plane, is compact). Every sequence that ran off to infinity in the real line will then converge to ∞ in this compactification. The direction in which a number approaches infinity on the number line (either in the − direction or the + direction) is still preserved on the circle: if a number approaches infinity from the − direction on the number line, the corresponding point on the circle approaches ∞ from the left, and if a number approaches infinity from the + direction, the corresponding point approaches ∞ from the right. Intuitively, the process can be pictured as follows: first shrink the real line to the open interval (−π, π) on the x-axis; then bend the ends of this interval upwards (in positive y-direction) and move them towards each other, until you get a circle with one point (the topmost one) missing. This point is our new point ∞ "at infinity"; adding it in completes the compact circle. A bit more formally: we represent a point on the unit circle by its angle, in radians, going from −π to π for simplicity. Identify each such point θ on the circle with the corresponding point on the real line tan(θ/2). This function is undefined at the point π, since tan(π/2) is undefined; we will identify this point with our point ∞. Since tangents and inverse tangents are both continuous, our identification function is a homeomorphism between the real line and the unit circle without ∞. What we have constructed is called the Alexandroff one-point compactification of the real line, discussed in more generality below. It is also possible to compactify the real line by adding two points, +∞ and −∞; this results in the extended real line. Definition An embedding of a topological space X as a dense subset of a compact space is called a compactification of X. It is often useful to embed topological spaces in compact spaces, because of the special properties compact spaces have. Embeddings into compact Hausdorff spaces may be of particular interest. Since every compact Hausdorff space is a Tychonoff space, and every subspace of a Tychonoff space is Tychonoff, we conclude that any space possessing a Hausdorff compactification must be a Tychonoff space. In fact, the converse is also true; being a Tychonoff space is both necessary and sufficient for possessing a Hausdorff compactification. The fact that large and interesting classes of non-compact spaces do in fact have compactifications of particular sorts makes compactification a common technique in topology. 
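Restating the identification from the example above in a single display may help; this is only a reformulation of the map already described, with the angle convention θ ∈ (−π, π] matching the text (the display assumes the amsmath cases environment).

```latex
\hat f \colon S^1 \to \mathbb{R} \cup \{\infty\}, \qquad
\hat f(\theta) =
  \begin{cases}
    \tan(\theta/2), & -\pi < \theta < \pi, \\
    \infty,         & \theta = \pi,
  \end{cases}
\qquad \hat f^{-1}(t) = 2\arctan t \quad (t \in \mathbb{R}).
```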
Alexandroff one-point compactification For any noncompact topological space X the (Alexandroff) one-point compactification αX of X is obtained by adding one extra point ∞ (often called a point at infinity) and defining the open sets of the new space to be the open sets of X together with the sets of the form G ∪ {∞}, where G is an open subset of X such that X \ G is closed and compact. The one-point compactification of X is Hausdorff if and only if X is Hausdorff and locally compact. Stone–Čech compactification Of particular interest are Hausdorff compactifications, i.e., compactifications in which the compact space is Hausdorff. A topological space has a Hausdorff compactification if and only if it is Tychonoff. In this case, there is a unique (up to homeomorphism) "most general" Hausdorff compactification, the Stone–Čech compactification of X, denoted by βX; formally, this exhibits the category of Compact Hausdorff spaces and continuous maps as a reflective subcategory of the category of Tychonoff spaces and continuous maps. "Most general" or formally "reflective" means that the space βX is characterized by the universal property that any continuous function from X to a compact Hausdorff space K can be extended to a continuous function from βX to K in a unique way. More explicitly, βX is a compact Hausdorff space containing X such that the induced topology on X by βX is the same as the given topology on X, and for any continuous map f : X → K, where K is a compact Hausdorff space, there is a unique continuous map g : βX → K for which g restricted to X is identically f. The Stone–Čech compactification can be constructed explicitly as follows: let C be the set of continuous functions from X to the closed interval [0, 1]. Then each point in X can be identified with an evaluation function on C. Thus X can be identified with a subset of [0, 1]^C, the space of all functions from C to [0, 1]. Since the latter is compact by Tychonoff's theorem, the closure of X as a subset of that space will also be compact. This is the Stone–Čech compactification. Spacetime compactification Walter Benz and Isaak Yaglom have shown how stereographic projection onto a single-sheet hyperboloid can be used to provide a compactification for split complex numbers. In fact, the hyperboloid is part of a quadric in real projective four-space. The method is similar to that used to provide a base manifold for group action of the conformal group of spacetime. Projective space Real projective space RPn is a compactification of Euclidean space Rn. For each possible "direction" in which points in Rn can "escape", one new point at infinity is added (but each direction is identified with its opposite). The Alexandroff one-point compactification of R we constructed in the example above is in fact homeomorphic to RP1. Note however that the projective plane RP2 is not the one-point compactification of the plane R2 since more than one point is added. Complex projective space CPn is also a compactification of Cn; the Alexandroff one-point compactification of the plane C is (homeomorphic to) the complex projective line CP1, which in turn can be identified with a sphere, the Riemann sphere. Passing to projective space is a common tool in algebraic geometry because the added points at infinity lead to simpler formulations of many theorems. For example, any two different lines in RP2 intersect in precisely one point, a statement that is not true in R2. More generally, Bézout's theorem, which is fundamental in intersection theory, holds in projective space but not affine space. 
This distinct behavior of intersections in affine space and projective space is reflected in algebraic topology in the cohomology rings – the cohomology of affine space is trivial, while the cohomology of projective space is non-trivial and reflects the key features of intersection theory (dimension and degree of a subvariety, with intersection being Poincaré dual to the cup product). Compactification of moduli spaces generally require allowing certain degeneracies – for example, allowing certain singularities or reducible varieties. This is notably used in the Deligne–Mumford compactification of the moduli space of algebraic curves. Compactification and discrete subgroups of Lie groups In the study of discrete subgroups of Lie groups, the quotient space of cosets is often a candidate for more subtle compactification to preserve structure at a richer level than just topological. For example, modular curves are compactified by the addition of single points for each cusp, making them Riemann surfaces (and so, since they are compact, algebraic curves). Here the cusps are there for a good reason: the curves parametrize a space of lattices, and those lattices can degenerate ('go off to infinity'), often in a number of ways (taking into account some auxiliary structure of level). The cusps stand in for those different 'directions to infinity'. That is all for lattices in the plane. In -dimensional Euclidean space the same questions can be posed, for example about This is harder to compactify. There are a variety of compactifications, such as the Borel–Serre compactification, the reductive Borel–Serre compactification, and the Satake compactifications, that can be formed. Other compactification theories The theories of ends of a space and prime ends. Some 'boundary' theories such as the collaring of an open manifold, Martin boundary, Shilov boundary and Furstenberg boundary. The Bohr compactification of a topological group arises from the consideration of almost periodic functions. The projective line over a ring for a topological ring may compactify it. The Baily–Borel compactification of a quotient of a Hermitian symmetric space. The wonderful compactification of a quotient of algebraic groups. The compactifications that are simultaneously convex subsets in a locally convex space are called convex compactifications, their additional linear structure allowing e.g. for developing a differential calculus and more advanced considerations e.g. in relaxation in variational calculus or optimization theory. See also References
Compactification (mathematics)
[ "Mathematics" ]
2,042
[ "Topology", "Compactification (mathematics)" ]
6,620
https://en.wikipedia.org/wiki/Cotangent%20space
In differential geometry, the cotangent space is a vector space associated with a point on a smooth (or differentiable) manifold ; one can define a cotangent space for every point on a smooth manifold. Typically, the cotangent space, is defined as the dual space of the tangent space at , , although there are more direct definitions (see below). The elements of the cotangent space are called cotangent vectors or tangent covectors. Properties All cotangent spaces at points on a connected manifold have the same dimension, equal to the dimension of the manifold. All the cotangent spaces of a manifold can be "glued together" (i.e. unioned and endowed with a topology) to form a new differentiable manifold of twice the dimension, the cotangent bundle of the manifold. The tangent space and the cotangent space at a point are both real vector spaces of the same dimension and therefore isomorphic to each other via many possible isomorphisms. The introduction of a Riemannian metric or a symplectic form gives rise to a natural isomorphism between the tangent space and the cotangent space at a point, associating to any tangent covector a canonical tangent vector. Formal definitions Definition as linear functionals Let be a smooth manifold and let be a point in . Let be the tangent space at . Then the cotangent space at x is defined as the dual space of Concretely, elements of the cotangent space are linear functionals on . That is, every element is a linear map where is the underlying field of the vector space being considered, for example, the field of real numbers. The elements of are called cotangent vectors. Alternative definition In some cases, one might like to have a direct definition of the cotangent space without reference to the tangent space. Such a definition can be formulated in terms of equivalence classes of smooth functions on . Informally, we will say that two smooth functions f and g are equivalent at a point if they have the same first-order behavior near , analogous to their linear Taylor polynomials; two functions f and g have the same first order behavior near if and only if the derivative of the function f − g vanishes at . The cotangent space will then consist of all the possible first-order behaviors of a function near . Let be a smooth manifold and let x be a point in . Let be the ideal of all functions in vanishing at , and let be the set of functions of the form , where . Then and are both real vector spaces and the cotangent space can be defined as the quotient space by showing that the two spaces are isomorphic to each other. This formulation is analogous to the construction of the cotangent space to define the Zariski tangent space in algebraic geometry. The construction also generalizes to locally ringed spaces. The differential of a function Let be a smooth manifold and let be a smooth function. The differential of at a point is the map where is a tangent vector at , thought of as a derivation. That is is the Lie derivative of in the direction , and one has . Equivalently, we can think of tangent vectors as tangents to curves, and write In either case, is a linear map on and hence it is a tangent covector at . We can then define the differential map at a point as the map which sends to . Properties of the differential map include: is a linear map: for constants and , The differential map provides the link between the two alternate definitions of the cotangent space given above. Since for all there exist such that , we have, i.e. 
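For reference, the two central definitions discussed above can be restated in standard notation; this is a sketch, and the symbols (M, x, T_xM, I_x) are the conventional ones, assumed here rather than taken verbatim from the article.

```latex
% Cotangent space as the dual of the tangent space, and the differential of a
% smooth function f at x, viewed as a linear functional on tangent vectors.
\[
  T_x^{*}M \;:=\; (T_xM)^{*}, \qquad
  \mathrm d f_x \colon T_xM \to \mathbb{R}, \quad
  \mathrm d f_x(v) \;=\; v(f) \quad (v \text{ a derivation at } x).
\]
% Equivalent "first-order behaviour" definition via the ideal I_x of smooth
% functions vanishing at x; the differential realizes the isomorphism.
\[
  T_x^{*}M \;\cong\; I_x / I_x^{2}, \qquad
  f + I_x^{2} \;\longleftrightarrow\; \mathrm d f_x .
\]
```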
All functions in have differential zero, so it follows that for every two functions , , we have . We can now construct an isomorphism between and by sending linear maps to the corresponding cosets . Since there is a unique linear map for a given kernel and slope, this is an isomorphism, establishing the equivalence of the two definitions. The pullback of a smooth map Just as every differentiable map between manifolds induces a linear map (called the pushforward or derivative) between the tangent spaces, every such map induces a linear map (called the pullback) between the cotangent spaces, only this time in the reverse direction: The pullback is naturally defined as the dual (or transpose) of the pushforward. Unraveling the definition, this means the following: where and . Note carefully where everything lives. If we define tangent covectors in terms of equivalence classes of smooth maps vanishing at a point then the definition of the pullback is even more straightforward. Let be a smooth function on vanishing at . Then the pullback of the covector determined by (denoted ) is given by That is, it is the equivalence class of functions on vanishing at determined by . Exterior powers The -th exterior power of the cotangent space, denoted , is another important object in differential and algebraic geometry. Vectors in the -th exterior power, or more precisely sections of the -th exterior power of the cotangent bundle, are called differential -forms. They can be thought of as alternating, multilinear maps on tangent vectors. For this reason, tangent covectors are frequently called one-forms. References Differential topology Tensors
Cotangent space
[ "Mathematics", "Engineering" ]
1,077
[ "Tensors", "Topology", "Differential topology" ]
6,670
https://en.wikipedia.org/wiki/Cement
A cement is a binder, a chemical substance used for construction that sets, hardens, and adheres to other materials to bind them together. Cement is seldom used on its own, but rather to bind sand and gravel (aggregate) together. Cement mixed with fine aggregate produces mortar for masonry, or with sand and gravel, produces concrete. Concrete is the most widely used material in existence and is behind only water as the planet's most-consumed resource. Cements used in construction are usually inorganic, often lime- or calcium silicate-based, and are either hydraulic or less commonly non-hydraulic, depending on the ability of the cement to set in the presence of water (see hydraulic and non-hydraulic lime plaster). Hydraulic cements (e.g., Portland cement) set and become adhesive through a chemical reaction between the dry ingredients and water. The chemical reaction results in mineral hydrates that are not very water-soluble. This allows setting in wet conditions or under water and further protects the hardened material from chemical attack. The chemical process for hydraulic cement was found by ancient Romans who used volcanic ash (pozzolana) with added lime (calcium oxide). Non-hydraulic cement (less common) does not set in wet conditions or under water. Rather, it sets as it dries and reacts with carbon dioxide in the air. It is resistant to attack by chemicals after setting. The word "cement" can be traced back to the Ancient Roman term , used to describe masonry resembling modern concrete that was made from crushed rock with burnt lime as binder. The volcanic ash and pulverized brick supplements that were added to the burnt lime, to obtain a hydraulic binder, were later referred to as , , cäment, and cement. In modern times, organic polymers are sometimes used as cements in concrete. World production of cement is about 4.4 billion tonnes per year (2021, estimation), of which about half is made in China, followed by India and Vietnam. The cement production process is responsible for nearly 8% (2018) of global emissions, which includes heating raw materials in a cement kiln by fuel combustion and release of stored in the calcium carbonate (calcination process). Its hydrated products, such as concrete, gradually reabsorb atmospheric (carbonation process), compensating for approximately 30% of the initial emissions. Chemistry Cement materials can be classified into two distinct categories: hydraulic cements and non-hydraulic cements according to their respective setting and hardening mechanisms. Hydraulic cement setting and hardening involves hydration reactions and therefore requires water, while non-hydraulic cements only react with a gas and can directly set under air. Hydraulic cement By far the most common type of cement is hydraulic cement, which hardens by hydration of the clinker minerals when water is added. Hydraulic cements (such as Portland cement) are made of a mixture of silicates and oxides, the four main mineral phases of the clinker, abbreviated in the cement chemist notation, being: C3S: alite (3CaO·SiO2); C2S: belite (2CaO·SiO2); C3A: tricalcium aluminate (3CaO·Al2O3) (historically, and still occasionally, called celite); C4AF: brownmillerite (4CaO·Al2O3·Fe2O3). The silicates are responsible for the cement's mechanical properties — the tricalcium aluminate and brownmillerite are essential for the formation of the liquid phase during the sintering (firing) process of clinker at high temperature in the kiln. 
The chemistry of these reactions is not completely clear and is still the object of research. First, the limestone (calcium carbonate) is burned to remove its carbon, producing lime (calcium oxide) in what is known as a calcination reaction. This single chemical reaction is a major emitter of global carbon dioxide emissions. CaCO3 -> CaO + CO2 The lime reacts with silicon dioxide to produce dicalcium silicate and tricalcium silicate. 2CaO + SiO2 -> 2CaO.SiO2 3CaO + SiO2 -> 3CaO.SiO2 The lime also reacts with aluminium oxide to form tricalcium aluminate. 3CaO + Al2O3 -> 3CaO.Al2O3 In the last step, calcium oxide, aluminium oxide, and ferric oxide react together to form brownmillerite. 4CaO + Al2O3 + Fe2O3 -> 4CaO.Al2O3.Fe2O3 Non-hydraulic cement A less common form of cement is non-hydraulic cement, such as slaked lime (calcium oxide mixed with water), which hardens by carbonation in contact with carbon dioxide, which is present in the air (~ 412 vol. ppm ≃ 0.04 vol. %). First calcium oxide (lime) is produced from calcium carbonate (limestone or chalk) by calcination at temperatures above 825 °C (1,517 °F) for about 10 hours at atmospheric pressure: CaCO3 -> CaO + CO2 The calcium oxide is then spent (slaked) by mixing it with water to make slaked lime (calcium hydroxide): CaO + H2O -> Ca(OH)2 Once the excess water is completely evaporated (this process is technically called setting), the carbonation starts: Ca(OH)2 + CO2 -> CaCO3 + H2O This reaction is slow, because the partial pressure of carbon dioxide in the air is low (~ 0.4 millibar). The carbonation reaction requires that the dry cement be exposed to air, so the slaked lime is a non-hydraulic cement and cannot be used under water. This process is called the lime cycle. History Perhaps the earliest known occurrence of cement is from twelve million years ago. A deposit of cement was formed after an occurrence of oil shale located adjacent to a bed of limestone burned by natural causes. These ancient deposits were investigated in the 1960s and 1970s. Alternatives to cement used in antiquity Cement, chemically speaking, is a product that includes lime as the primary binding ingredient, but is far from the first material used for cementation. The Babylonians and Assyrians used bitumen (asphalt or pitch) to bind together burnt brick or alabaster slabs. In Ancient Egypt, stone blocks were cemented together with a mortar made of sand and roughly burnt gypsum (CaSO4 · 2H2O), which is plaster of Paris, which often contained calcium carbonate (CaCO3), Ancient Greece and Rome Lime (calcium oxide) was used on Crete and by the Ancient Greeks. There is evidence that the Minoans of Crete used crushed potsherds as an artificial pozzolan for hydraulic cement. Nobody knows who first discovered that a combination of hydrated non-hydraulic lime and a pozzolan produces a hydraulic mixture (see also: Pozzolanic reaction), but such concrete was used by the Greeks, specifically the Ancient Macedonians, and three centuries later on a large scale by Roman engineers. The Greeks used volcanic tuff from the island of Thera as their pozzolan and the Romans used crushed volcanic ash (activated aluminium silicates) with lime. This mixture could set under water, increasing its resistance to corrosion like rust. The material was called pozzolana from the town of Pozzuoli, west of Naples where volcanic ash was extracted. 
In the absence of pozzolanic ash, the Romans used powdered brick or pottery as a substitute and they may have used crushed tiles for this purpose before discovering natural sources near Rome. The huge dome of the Pantheon in Rome and the massive Baths of Caracalla are examples of ancient structures made from these concretes, many of which still stand. The vast system of Roman aqueducts also made extensive use of hydraulic cement. Roman concrete was rarely used on the outside of buildings. The normal technique was to use brick facing material as the formwork for an infill of mortar mixed with an aggregate of broken pieces of stone, brick, potsherds, recycled chunks of concrete, or other building rubble. Mesoamerica Lightweight concrete was designed and used for the construction of structural elements by the pre-Columbian builders who lived in a very advanced civilisation in El Tajin near Mexico City, in Mexico. A detailed study of the composition of the aggregate and binder show that the aggregate was pumice and the binder was a pozzolanic cement made with volcanic ash and lime. Middle Ages Any preservation of this knowledge in literature from the Middle Ages is unknown, but medieval masons and some military engineers actively used hydraulic cement in structures such as canals, fortresses, harbors, and shipbuilding facilities. A mixture of lime mortar and aggregate with brick or stone facing material was used in the Eastern Roman Empire as well as in the West into the Gothic period. The German Rhineland continued to use hydraulic mortar throughout the Middle Ages, having local pozzolana deposits called trass. 16th century Tabby is a building material made from oyster shell lime, sand, and whole oyster shells to form a concrete. The Spanish introduced it to the Americas in the sixteenth century. 18th century The technical knowledge for making hydraulic cement was formalized by French and British engineers in the 18th century. John Smeaton made an important contribution to the development of cements while planning the construction of the third Eddystone Lighthouse (1755–59) in the English Channel now known as Smeaton's Tower. He needed a hydraulic mortar that would set and develop some strength in the twelve-hour period between successive high tides. He performed experiments with combinations of different limestones and additives including trass and pozzolanas and did exhaustive market research on the available hydraulic limes, visiting their production sites, and noted that the "hydraulicity" of the lime was directly related to the clay content of the limestone used to make it. Smeaton was a civil engineer by profession, and took the idea no further. In the South Atlantic seaboard of the United States, tabby relying on the oyster-shell middens of earlier Native American populations was used in house construction from the 1730s to the 1860s. In Britain particularly, good quality building stone became ever more expensive during a period of rapid growth, and it became a common practice to construct prestige buildings from the new industrial bricks, and to finish them with a stucco to imitate stone. Hydraulic limes were favored for this, but the need for a fast set time encouraged the development of new cements. Most famous was Parker's "Roman cement". This was developed by James Parker in the 1780s, and finally patented in 1796. 
It was, in fact, nothing like material used by the Romans, but was a "natural cement" made by burning septaria – nodules that are found in certain clay deposits, and that contain both clay minerals and calcium carbonate. The burnt nodules were ground to a fine powder. This product, made into a mortar with sand, set in 5–15 minutes. The success of "Roman cement" led other manufacturers to develop rival products by burning artificial hydraulic lime cements of clay and chalk. Roman cement quickly became popular but was largely replaced by Portland cement in the 1850s. 19th century Apparently unaware of Smeaton's work, the same principle was identified by Frenchman Louis Vicat in the first decade of the nineteenth century. Vicat went on to devise a method of combining chalk and clay into an intimate mixture, and, burning this, produced an "artificial cement" in 1817 considered the "principal forerunner" of Portland cement and "...Edgar Dobbs of Southwark patented a cement of this kind in 1811." In Russia, Egor Cheliev created a new binder by mixing lime and clay. His results were published in 1822 in his book A Treatise on the Art to Prepare a Good Mortar published in St. Petersburg. A few years later in 1825, he published another book, which described various methods of making cement and concrete, and the benefits of cement in the construction of buildings and embankments. Portland cement, the most common type of cement in general use around the world as a basic ingredient of concrete, mortar, stucco, and non-speciality grout, was developed in England in the mid 19th century, and usually originates from limestone. James Frost produced what he called "British cement" in a similar manner around the same time, but did not obtain a patent until 1822. In 1824, Joseph Aspdin patented a similar material, which he called Portland cement, because the render made from it was in color similar to the prestigious Portland stone quarried on the Isle of Portland, Dorset, England. However, Aspdins' cement was nothing like modern Portland cement but was a first step in its development, called a proto-Portland cement. Joseph Aspdins' son William Aspdin had left his father's company and in his cement manufacturing apparently accidentally produced calcium silicates in the 1840s, a middle step in the development of Portland cement. William Aspdin's innovation was counterintuitive for manufacturers of "artificial cements", because they required more lime in the mix (a problem for his father), a much higher kiln temperature (and therefore more fuel), and the resulting clinker was very hard and rapidly wore down the millstones, which were the only available grinding technology of the time. Manufacturing costs were therefore considerably higher, but the product set reasonably slowly and developed strength quickly, thus opening up a market for use in concrete. The use of concrete in construction grew rapidly from 1850 onward, and was soon the dominant use for cements. Thus Portland cement began its predominant role. Isaac Charles Johnson further refined the production of meso-Portland cement (middle stage of development) and claimed he was the real father of Portland cement. Setting time and "early strength" are important characteristics of cements. Hydraulic limes, "natural" cements, and "artificial" cements all rely on their belite (2 CaO · SiO2, abbreviated as C2S) content for strength development. Belite develops strength slowly. 
Because they were burned at temperatures below , they contained no alite (3 CaO · SiO2, abbreviated as C3S), which is responsible for early strength in modern cements. The first cement to consistently contain alite was made by William Aspdin in the early 1840s: This was what we call today "modern" Portland cement. Because of the air of mystery with which William Aspdin surrounded his product, others (e.g., Vicat and Johnson) have claimed precedence in this invention, but recent analysis of both his concrete and raw cement have shown that William Aspdin's product made at Northfleet, Kent was a true alite-based cement. However, Aspdin's methods were "rule-of-thumb": Vicat is responsible for establishing the chemical basis of these cements, and Johnson established the importance of sintering the mix in the kiln. In the US the first large-scale use of cement was Rosendale cement, a natural cement mined from a massive deposit of dolomite discovered in the early 19th century near Rosendale, New York. Rosendale cement was extremely popular for the foundation of buildings (e.g., Statue of Liberty, Capitol Building, Brooklyn Bridge) and lining water pipes. Sorel cement, or magnesia-based cement, was patented in 1867 by the Frenchman Stanislas Sorel. It was stronger than Portland cement but its poor water resistance (leaching) and corrosive properties (pitting corrosion due to the presence of leachable chloride anions and the low pH (8.5–9.5) of its pore water) limited its use as reinforced concrete for building construction. The next development in the manufacture of Portland cement was the introduction of the rotary kiln. It produced a clinker mixture that was both stronger, because more alite (C3S) is formed at the higher temperature it achieved (1450 °C), and more homogeneous. Because raw material is constantly fed into a rotary kiln, it allowed a continuous manufacturing process to replace lower capacity batch production processes. 20th century Calcium aluminate cements were patented in 1908 in France by Jules Bied for better resistance to sulfates. Also in 1908, Thomas Edison experimented with pre-cast concrete in houses in Union, N.J. In the US, after World War One, the long curing time of at least a month for Rosendale cement made it unpopular for constructing highways and bridges, and many states and construction firms turned to Portland cement. Because of the switch to Portland cement, by the end of the 1920s only one of the 15 Rosendale cement companies had survived. But in the early 1930s, builders discovered that, while Portland cement set faster, it was not as durable, especially for highways—to the point that some states stopped building highways and roads with cement. Bertrain H. Wait, an engineer whose company had helped construct the New York City's Catskill Aqueduct, was impressed with the durability of Rosendale cement, and came up with a blend of both Rosendale and Portland cements that had the good attributes of both. It was highly durable and had a much faster setting time. Wait convinced the New York Commissioner of Highways to construct an experimental section of highway near New Paltz, New York, using one sack of Rosendale to six sacks of Portland cement. It was a success, and for decades the Rosendale-Portland cement blend was used in concrete highway and concrete bridge construction. Cementitious materials have been used as a nuclear waste immobilizing matrix for more than a half-century. 
Technologies of waste cementation have been developed and deployed at industrial scale in many countries. Cementitious wasteforms require a careful selection and design process adapted to each specific type of waste to satisfy the strict waste acceptance criteria for long-term storage and disposal. Types Modern development of hydraulic cement began with the start of the Industrial Revolution (around 1800), driven by three main needs: Hydraulic cement render (stucco) for finishing brick buildings in wet climates Hydraulic mortars for masonry construction of harbor works, etc., in contact with sea water Development of strong concretes Modern cements are often Portland cement or Portland cement blends, but other cement blends are used in some industrial settings. Portland cement Portland cement, a form of hydraulic cement, is by far the most common type of cement in general use around the world. This cement is made by heating limestone (calcium carbonate) with other materials (such as clay) to in a kiln, in a process known as calcination that liberates a molecule of carbon dioxide from the calcium carbonate to form calcium oxide, or quicklime, which then chemically combines with the other materials in the mix to form calcium silicates and other cementitious compounds. The resulting hard substance, called 'clinker', is then ground with a small amount of gypsum () into a powder to make ordinary Portland cement, the most commonly used type of cement (often referred to as OPC). Portland cement is a basic ingredient of concrete, mortar, and most non-specialty grout. The most common use for Portland cement is to make concrete. Portland cement may be grey or white. Portland cement blend Portland cement blends are often available as inter-ground mixtures from cement producers, but similar formulations are often also mixed from the ground components at the concrete mixing plant. Portland blast-furnace slag cement, or blast furnace cement (ASTM C595 and EN 197-1 nomenclature respectively), contains up to 95% ground granulated blast furnace slag, with the rest Portland clinker and a little gypsum. All compositions produce high ultimate strength, but as slag content is increased, early strength is reduced, while sulfate resistance increases and heat evolution diminishes. Used as an economic alternative to Portland sulfate-resisting and low-heat cements. Portland-fly ash cement contains up to 40% fly ash under ASTM standards (ASTM C595), or 35% under EN standards (EN 197–1). The fly ash is pozzolanic, so that ultimate strength is maintained. Because fly ash addition allows a lower concrete water content, early strength can also be maintained. Where good quality cheap fly ash is available, this can be an economic alternative to ordinary Portland cement. Portland pozzolan cement includes fly ash cement, since fly ash is a pozzolan, but also includes cements made from other natural or artificial pozzolans. In countries where volcanic ashes are available (e.g., Italy, Chile, Mexico, the Philippines), these cements are often the most common form in use. The maximum replacement ratios are generally defined as for Portland-fly ash cement. Portland silica fume cement. Addition of silica fume can yield exceptionally high strengths, and cements containing 5–20% silica fume are occasionally produced, with 10% being the maximum allowed addition under EN 197–1. However, silica fume is more usually added to Portland cement at the concrete mixer. 
Masonry cements are used for preparing bricklaying mortars and stuccos, and must not be used in concrete. They are usually complex proprietary formulations containing Portland clinker and a number of other ingredients that may include limestone, hydrated lime, air entrainers, retarders, waterproofers, and coloring agents. They are formulated to yield workable mortars that allow rapid and consistent masonry work. Subtle variations of masonry cement in North America are plastic cements and stucco cements. These are designed to produce a controlled bond with masonry blocks. Expansive cements contain, in addition to Portland clinker, expansive clinkers (usually sulfoaluminate clinkers), and are designed to offset the effects of drying shrinkage normally encountered in hydraulic cements. This cement can make concrete for floor slabs (up to 60 m square) without contraction joints. White blended cements may be made using white clinker (containing little or no iron) and white supplementary materials such as high-purity metakaolin. Colored cements serve decorative purposes. Some standards allow the addition of pigments to produce colored Portland cement. Other standards (e.g., ASTM) do not allow pigments in Portland cement, and colored cements are sold as blended hydraulic cements. Very finely ground cements are cement mixed with sand or with slag or other pozzolan type minerals that are extremely finely ground together. Such cements can have the same physical characteristics as normal cement but with 50% less cement, particularly because there is more surface area for the chemical reaction. Even with intensive grinding they can use up to 50% less energy (and thus less carbon emissions) to fabricate than ordinary Portland cements. Other Pozzolan-lime cements are mixtures of ground pozzolan and lime. These are the cements the Romans used, and are present in surviving Roman structures like the Pantheon in Rome. They develop strength slowly, but their ultimate strength can be very high. The hydration products that produce strength are essentially the same as those in Portland cement. Slag-lime cements—ground granulated blast-furnace slag—are not hydraulic on their own, but are "activated" by addition of alkalis, most economically using lime. They are similar to pozzolan lime cements in their properties. Only granulated slag (i.e., water-quenched, glassy slag) is effective as a cement component. Supersulfated cements contain about 80% ground granulated blast furnace slag, 15% gypsum or anhydrite and a little Portland clinker or lime as an activator. They produce strength by formation of ettringite, with strength growth similar to a slow Portland cement. They exhibit good resistance to aggressive agents, including sulfate. Calcium aluminate cements are hydraulic cements made primarily from limestone and bauxite. The active ingredients are monocalcium aluminate CaAl2O4 (CaO · Al2O3 or CA in cement chemist notation, CCN) and mayenite Ca12Al14O33 (12 CaO · 7 Al2O3, or C12A7 in CCN). Strength forms by hydration to calcium aluminate hydrates. They are well-adapted for use in refractory (high-temperature resistant) concretes, e.g., for furnace linings. Calcium sulfoaluminate cements are made from clinkers that include ye'elimite (Ca4(AlO2)6SO4 or C4A3 in Cement chemist's notation) as a primary phase. They are used in expansive cements, in ultra-high early strength cements, and in "low-energy" cements. 
Hydration produces ettringite, and specialized physical properties (such as expansion or rapid reaction) are obtained by adjustment of the availability of calcium and sulfate ions. Their use as a low-energy alternative to Portland cement has been pioneered in China, where several million tonnes per year are produced. Energy requirements are lower because of the lower kiln temperatures required for reaction, and the lower amount of limestone (which must be endothermically decarbonated) in the mix. In addition, the lower limestone content and lower fuel consumption leads to a emission around half that associated with Portland clinker. However, SO2 emissions are usually significantly higher. "Natural" cements corresponding to certain cements of the pre-Portland era, are produced by burning argillaceous limestones at moderate temperatures. The level of clay components in the limestone (around 30–35%) is such that large amounts of belite (the low-early strength, high-late strength mineral in Portland cement) are formed without the formation of excessive amounts of free lime. As with any natural material, such cements have highly variable properties. Geopolymer cements are made from mixtures of water-soluble alkali metal silicates, and aluminosilicate mineral powders such as fly ash and metakaolin. Polymer cements are made from organic chemicals that polymerise. Producers often use thermoset materials. While they are often significantly more expensive, they can give a water proof material that has useful tensile strength. Sorel cement is a hard, durable cement made by combining magnesium oxide and a magnesium chloride solution Fiber mesh cement or fiber reinforced concrete is cement that is made up of fibrous materials like synthetic fibers, glass fibers, natural fibers, and steel fibers. This type of mesh is distributed evenly throughout the wet concrete. The purpose of fiber mesh is to reduce water loss from the concrete as well as enhance its structural integrity. When used in plasters, fiber mesh increases cohesiveness, tensile strength, impact resistance, and to reduce shrinkage; ultimately, the main purpose of these combined properties is to reduce cracking. Electric cement is proposed to be made by recycling cement from demolition wastes in an electric arc furnace as part of a steelmaking process. The recycled cement is intended to be used to replace part or all of the lime used in steelmaking, resulting in a slag-like material that is similar in mineralogy to Portland cement, eliminating most of the associated carbon emissions. Setting, hardening and curing Cement starts to set when mixed with water, which causes a series of hydration chemical reactions. The constituents slowly hydrate and the mineral hydrates solidify and harden. The interlocking of the hydrates gives cement its strength. Contrary to popular belief, hydraulic cement does not set by drying out — proper curing requires maintaining the appropriate moisture content necessary for the hydration reactions during the setting and the hardening processes. If hydraulic cements dry out during the curing phase, the resulting product can be insufficiently hydrated and significantly weakened. A minimum temperature of 5 °C is recommended, and no more than 30 °C. The concrete at young age must be protected against water evaporation due to direct insolation, elevated temperature, low relative humidity and wind. The interfacial transition zone (ITZ) is a region of the cement paste around the aggregate particles in concrete. 
In the zone, a gradual transition in the microstructural features occurs. This zone can be up to 35 micrometer wide. Other studies have shown that the width can be up to 50 micrometer. The average content of unreacted clinker phase decreases and porosity decreases towards the aggregate surface. Similarly, the content of ettringite increases in ITZ. Safety issues Bags of cement routinely have health and safety warnings printed on them because not only is cement highly alkaline, but the setting process is exothermic. As a result, wet cement is strongly caustic (pH = 13.5) and can easily cause severe skin burns if not promptly washed off with water. Similarly, dry cement powder in contact with mucous membranes can cause severe eye or respiratory irritation. Some trace elements, such as chromium, from impurities naturally present in the raw materials used to produce cement may cause allergic dermatitis. Reducing agents such as ferrous sulfate (FeSO4) are often added to cement to convert the carcinogenic hexavalent chromate (CrO42−) into trivalent chromium (Cr3+), a less toxic chemical species. Cement users need also to wear appropriate gloves and protective clothing. Cement industry in the world In 2010, the world production of hydraulic cement was . The top three producers were China with 1,800, India with 220, and the United States with 63.5 million tonnes for a total of over half the world total by the world's three most populated states. For the world capacity to produce cement in 2010, the situation was similar with the top three states (China, India, and the US) accounting for just under half the world total capacity. Over 2011 and 2012, global consumption continued to climb, rising to 3585 Mt in 2011 and 3736 Mt in 2012, while annual growth rates eased to 8.3% and 4.2%, respectively. China, representing an increasing share of world cement consumption, remains the main engine of global growth. By 2012, Chinese demand was recorded at 2160 Mt, representing 58% of world consumption. Annual growth rates, which reached 16% in 2010, appear to have softened, slowing to 5–6% over 2011 and 2012, as China's economy targets a more sustainable growth rate. Outside of China, worldwide consumption climbed by 4.4% to 1462 Mt in 2010, 5% to 1535 Mt in 2011, and finally 2.7% to 1576 Mt in 2012. Iran is now the 3rd largest cement producer in the world and has increased its output by over 10% from 2008 to 2011. Because of climbing energy costs in Pakistan and other major cement-producing countries, Iran is in a unique position as a trading partner, utilizing its own surplus petroleum to power clinker plants. Now a top producer in the Middle-East, Iran is further increasing its dominant position in local markets and abroad. The performance in North America and Europe over the 2010–12 period contrasted strikingly with that of China, as the global financial crisis evolved into a sovereign debt crisis for many economies in this region and recession. Cement consumption levels for this region fell by 1.9% in 2010 to 445 Mt, recovered by 4.9% in 2011, then dipped again by 1.1% in 2012. The performance in the rest of the world, which includes many emerging economies in Asia, Africa and Latin America and representing some 1020 Mt cement demand in 2010, was positive and more than offset the declines in North America and Europe. Annual consumption growth was recorded at 7.4% in 2010, moderating to 5.1% and 4.3% in 2011 and 2012, respectively. 
As at year-end 2012, the global cement industry consisted of 5673 cement production facilities, including both integrated and grinding, of which 3900 were located in China and 1773 in the rest of the world. Total cement capacity worldwide was recorded at 5245 Mt in 2012, with 2950 Mt located in China and 2295 Mt in the rest of the world. China "For the past 18 years, China consistently has produced more cement than any other country in the world. [...] (However,) China's cement export peaked in 1994 with 11 million tonnes shipped out and has been in steady decline ever since. Only 5.18 million tonnes were exported out of China in 2002. Offered at $34 a ton, Chinese cement is pricing itself out of the market as Thailand is asking as little as $20 for the same quality." In 2006, it was estimated that China manufactured 1.235 billion tonnes of cement, which was 44% of the world total cement production. "Demand for cement in China is expected to advance 5.4% annually and exceed 1 billion tonnes in 2008, driven by slowing but healthy growth in construction expenditures. Cement consumed in China will amount to 44% of global demand, and China will remain the world's largest national consumer of cement by a large margin." In 2010, 3.3 billion tonnes of cement was consumed globally. Of this, China accounted for 1.8 billion tonnes. Environmental impacts Cement manufacture causes environmental impacts at all stages of the process. These include emissions of airborne pollution in the form of dust, gases, noise and vibration when operating machinery and during blasting in quarries, and damage to countryside from quarrying. Equipment to reduce dust emissions during quarrying and manufacture of cement is widely used, and equipment to trap and separate exhaust gases are coming into increased use. Environmental protection also includes the re-integration of quarries into the countryside after they have been closed down by returning them to nature or re-cultivating them. emissions Carbon concentration in cement spans from ≈5% in cement structures to ≈8% in the case of roads in cement. Cement manufacturing releases in the atmosphere both directly when calcium carbonate is heated, producing lime and carbon dioxide, and also indirectly through the use of energy if its production involves the emission of . The cement industry produces about 10% of global human-made emissions, of which 60% is from the chemical process, and 40% from burning fuel. A Chatham House study from 2018 estimates that the 4 billion tonnes of cement produced annually account for 8% of worldwide emissions. Nearly 900 kg of are emitted for every 1000 kg of Portland cement produced. In the European Union, the specific energy consumption for the production of cement clinker has been reduced by approximately 30% since the 1970s. This reduction in primary energy requirements is equivalent to approximately 11 million tonnes of coal per year with corresponding benefits in reduction of emissions. This accounts for approximately 5% of anthropogenic . The majority of carbon dioxide emissions in the manufacture of Portland cement (approximately 60%) are produced from the chemical decomposition of limestone to lime, an ingredient in Portland cement clinker. These emissions may be reduced by lowering the clinker content of cement. They can also be reduced by alternative fabrication methods such as the intergrinding cement with sand or with slag or other pozzolan type minerals to a very fine powder. 
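As a rough check of the split quoted above, the process (calcination) share can be estimated from the stoichiometry of CaCO3 → CaO + CO2. The sketch below is illustrative only: the assumed 65% CaO content of clinker and the molar masses are assumptions for this example, not figures from the article.

```python
# Back-of-the-envelope estimate (not from the article): process CO2 from the
# calcination reaction CaCO3 -> CaO + CO2, assuming clinker is ~65% CaO by
# mass and that essentially all of that CaO comes from limestone.
M_CAO, M_CO2 = 56.08, 44.01            # approximate molar masses, g/mol

def process_co2_per_tonne_clinker(cao_mass_fraction=0.65):
    """kg of CO2 released by decarbonating the limestone behind 1 t of clinker."""
    return 1000.0 * cao_mass_fraction * (M_CO2 / M_CAO)

co2 = process_co2_per_tonne_clinker()
print(f"~{co2:.0f} kg CO2 per tonne of clinker from calcination alone")
# ~510 kg, i.e. roughly the '60% from the chemical process' share of the
# ~900 kg total quoted above; the remainder comes from fuel combustion.
```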
To reduce the transport of heavier raw materials and to minimize the associated costs, it is more economical to build cement plants closer to the limestone quarries than to the consumer centers. Carbon capture and storage is about to be trialed, but its financial viability is uncertain. absorption Hydrated products of Portland cement, such as concrete and mortars, slowly reabsorb atmospheric CO2 gas, which has been released during calcination in a kiln. This natural process, the reverse of calcination, is called carbonation. As it depends on CO2 diffusion into the bulk of concrete, its rate depends on many parameters, such as environmental conditions and surface area exposed to the atmosphere. Carbonation is particularly significant at the later stages of the concrete's life, after demolition and crushing of the debris. It has been estimated that, over the whole life cycle of cement products, nearly 30% of the CO2 generated by cement production can be reabsorbed. The carbonation process is considered a mechanism of concrete degradation. It reduces the pH of the concrete, which promotes corrosion of the reinforcement steel. However, as CaCO3, the product of Ca(OH)2 carbonation, occupies a greater volume, the porosity of the concrete is reduced. This increases the strength and hardness of the concrete. There are proposals to reduce the carbon footprint of hydraulic cement by adopting non-hydraulic cement, lime mortar, for certain applications. It reabsorbs some of the CO2 during hardening, and has a lower energy requirement in production than Portland cement. A few other attempts to increase the absorption of carbon dioxide include cements based on magnesium (Sorel cement). Heavy metal emissions in the air In some circumstances, mainly depending on the origin and the composition of the raw materials used, the high-temperature calcination process of limestone and clay minerals can release into the atmosphere gases and dust rich in volatile heavy metals; thallium, cadmium and mercury are the most toxic. Heavy metals (Tl, Cd, Hg, ...) and also selenium are often found as trace elements in common metal sulfides (pyrite (FeS2), zinc blende (ZnS), galena (PbS), ...) present as secondary minerals in most of the raw materials. Environmental regulations exist in many countries to limit these emissions. As of 2011 in the United States, cement kilns are "legally allowed to pump more toxins into the air than are hazardous-waste incinerators." Heavy metals present in the clinker The presence of heavy metals in the clinker arises both from the natural raw materials and from the use of recycled by-products or alternative fuels. The high pH prevailing in the cement porewater (12.5 < pH < 13.5) limits the mobility of many heavy metals by decreasing their solubility and increasing their sorption onto the cement mineral phases. Nickel, zinc and lead are commonly found in cement in non-negligible concentrations. Chromium may also directly arise as a natural impurity from the raw materials or as secondary contamination from the abrasion of hard chromium steel alloys used in the ball mills when the clinker is ground. As chromate (CrO42−) is toxic and may cause severe skin allergies at trace concentrations, it is sometimes reduced to trivalent Cr(III) by addition of ferrous sulfate (FeSO4). Use of alternative fuels and by-product materials A cement plant consumes 3 to 6 GJ of fuel per tonne of clinker produced, depending on the raw materials and the process used.
Most cement kilns today use coal and petroleum coke as primary fuels, and to a lesser extent natural gas and fuel oil. Selected waste and by-products with recoverable calorific value can be used as fuels in a cement kiln (referred to as co-processing), replacing a portion of conventional fossil fuels, like coal, if they meet strict specifications. Selected waste and by-products containing useful minerals such as calcium, silica, alumina, and iron can be used as raw materials in the kiln, replacing raw materials such as clay, shale, and limestone. Because some materials have both useful mineral content and recoverable calorific value, the distinction between alternative fuels and raw materials is not always clear. For example, sewage sludge has a low but significant calorific value, and burns to give ash containing minerals useful in the clinker matrix. Scrap automobile and truck tires are useful in cement manufacturing as they have high calorific value and the iron embedded in tires is useful as a feed stock. Clinker is manufactured by heating raw materials inside the main burner of a kiln to a temperature of 1,450 °C. The flame reaches temperatures of 1,800 °C. The material remains at 1,200 °C for 12–15 seconds at 1,800 °C or sometimes for 5–8 seconds (also referred to as residence time). These characteristics of a clinker kiln offer numerous benefits and they ensure a complete destruction of organic compounds, a total neutralization of acid gases, sulphur oxides and hydrogen chloride. Furthermore, heavy metal traces are embedded in the clinker structure and no by-products, such as ash or residues, are produced. The EU cement industry already uses more than 40% fuels derived from waste and biomass in supplying the thermal energy to the grey clinker making process. Although the choice for this so-called alternative fuels (AF) is typically cost driven, other factors are becoming more important. Use of alternative fuels provides benefits for both society and the company: -emissions are lower than with fossil fuels, waste can be co-processed in an efficient and sustainable manner and the demand for certain virgin materials can be reduced. Yet there are large differences in the share of alternative fuels used between the European Union (EU) member states. The societal benefits could be improved if more member states increase their alternative fuels share. The Ecofys study assessed the barriers and opportunities for further uptake of alternative fuels in 14 EU member states. The Ecofys study found that local factors constrain the market potential to a much larger extent than the technical and economic feasibility of the cement industry itself. Reduced-footprint cement Growing environmental concerns and the increasing cost of fossil fuels have resulted, in many countries, in a sharp reduction of the resources needed to produce cement, as well as effluents (dust and exhaust gases). Reduced-footprint cement is a cementitious material that meets or exceeds the functional performance capabilities of Portland cement. Various techniques are under development. One is geopolymer cement, which incorporates recycled materials, thereby reducing consumption of raw materials, water, and energy. Another approach is to reduce or eliminate the production and release of damaging pollutants and greenhouse gasses, particularly . Recycling old cement in electric arc furnaces is another approach. 
Also, a team at the University of Edinburgh has developed the 'DUPE' process based on the microbial activity of Sporosarcina pasteurii, a bacterium precipitating calcium carbonate, which, when mixed with sand and urine, can produce mortar blocks with a compressive strength 70% of that of concrete. An overview of climate-friendly methods for cement production can be found here. See also Asphalt concrete Calcium aluminate cements Cement chemist notation Cement render Cenocell Energetically modified cement (EMC) Fly ash Geopolymer cement Portland cement Rosendale cement Sulfate attack in concrete and mortar Sulfur concrete Tiocem List of countries by cement production References Further reading Friedrich W. Locher: Cement : Principles of production and use, Düsseldorf, Germany: Verlag Bau + Technik GmbH, 2006, Javed I. Bhatty, F. MacGregor Miller, Steven H. Kosmatka; editors: Innovations in Portland Cement Manufacturing, SP400, Portland Cement Association, Skokie, Illinois, U.S., 2004, "Why cement emissions matter for climate change" Carbon Brief 2018 External links Building materials Concrete
Cement
[ "Physics", "Engineering" ]
8,991
[ "Structural engineering", "Building engineering", "Construction", "Materials", "Building materials", "Concrete", "Matter", "Architecture" ]
6,759
https://en.wikipedia.org/wiki/Context-free%20grammar
In formal language theory, a context-free grammar (CFG) is a formal grammar whose production rules can be applied to a nonterminal symbol regardless of its context. In particular, in a context-free grammar, each production rule is of the form with a single nonterminal symbol, and a string of terminals and/or nonterminals ( can be empty). Regardless of which symbols surround it, the single nonterminal on the left hand side can always be replaced by on the right hand side. This distinguishes it from a context-sensitive grammar, which can have production rules in the form with a nonterminal symbol and , , and strings of terminal and/or nonterminal symbols. A formal grammar is essentially a set of production rules that describe all possible strings in a given formal language. Production rules are simple replacements. For example, the first rule in the picture, replaces with . There can be multiple replacement rules for a given nonterminal symbol. The language generated by a grammar is the set of all strings of terminal symbols that can be derived, by repeated rule applications, from some particular nonterminal symbol ("start symbol"). Nonterminal symbols are used during the derivation process, but do not appear in its final result string. Languages generated by context-free grammars are known as context-free languages (CFL). Different context-free grammars can generate the same context-free language. It is important to distinguish the properties of the language (intrinsic properties) from the properties of a particular grammar (extrinsic properties). The language equality question (do two given context-free grammars generate the same language?) is undecidable. Context-free grammars arise in linguistics where they are used to describe the structure of sentences and words in a natural language, and they were invented by the linguist Noam Chomsky for this purpose. By contrast, in computer science, as the use of recursively-defined concepts increased, they were used more and more. In an early application, grammars are used to describe the structure of programming languages. In a newer application, they are used in an essential part of the Extensible Markup Language (XML) called the document type definition. In linguistics, some authors use the term phrase structure grammar to refer to context-free grammars, whereby phrase-structure grammars are distinct from dependency grammars. In computer science, a popular notation for context-free grammars is Backus–Naur form, or BNF. Background Since at least the time of the ancient Indian scholar Pāṇini, linguists have described the grammars of languages in terms of their block structure, and described how sentences are recursively built up from smaller phrases, and eventually individual words or word elements. An essential property of these block structures is that logical units never overlap. For example, the sentence: John, whose blue car was in the garage, walked to the grocery store. can be logically parenthesized (with the logical metasymbols [ ]) as follows: [John[, [whose [blue car]] [was [in [the garage]]],]] [walked [to [the [grocery store]]]]. A context-free grammar provides a simple and mathematically precise mechanism for describing the methods by which phrases in some natural language are built from smaller blocks, capturing the "block structure" of sentences in a natural way. Its simplicity makes the formalism amenable to rigorous mathematical study. 
Important features of natural language syntax such as agreement and reference are not part of the context-free grammar, but the basic recursive structure of sentences, the way in which clauses nest inside other clauses, and the way in which lists of adjectives and adverbs are swallowed by nouns and verbs, is described exactly. Context-free grammars are a special form of Semi-Thue systems that in their general form date back to the work of Axel Thue. The formalism of context-free grammars was developed in the mid-1950s by Noam Chomsky, and also their classification as a special type of formal grammar (which he called phrase-structure grammars). Some authors, however, reserve the term for more restricted grammars in the Chomsky hierarchy: context-sensitive grammars or context-free grammars. In a broader sense, phrase structure grammars are also known as constituency grammars. The defining trait of phrase structure grammars is thus their adherence to the constituency relation, as opposed to the dependency relation of dependency grammars. In Chomsky's generative grammar framework, the syntax of natural language was described by context-free rules combined with transformation rules. Block structure was introduced into computer programming languages by the Algol project (1957–1960), which, as a consequence, also featured a context-free grammar to describe the resulting Algol syntax. This became a standard feature of computer languages, and the notation for grammars used in concrete descriptions of computer languages came to be known as Backus–Naur form, after two members of the Algol language design committee. The "block structure" aspect that context-free grammars capture is so fundamental to grammar that the terms syntax and grammar are often identified with context-free grammar rules, especially in computer science. Formal constraints not captured by the grammar are then considered to be part of the "semantics" of the language. Context-free grammars are simple enough to allow the construction of efficient parsing algorithms that, for a given string, determine whether and how it can be generated from the grammar. An Earley parser is an example of such an algorithm, while the widely used LR and LL parsers are simpler algorithms that deal only with more restrictive subsets of context-free grammars. Formal definitions A context-free grammar is defined by the 4-tuple , where is a finite set; each element is called a nonterminal character or a variable. Each variable represents a different type of phrase or clause in the sentence. Variables are also sometimes called syntactic categories. Each variable defines a sub-language of the language defined by . is a finite set of terminals, disjoint from , which make up the actual content of the sentence. The set of terminals is the alphabet of the language defined by the grammar . is a finite relation in , where the asterisk represents the Kleene star operation. The members of are called the (rewrite) rules or productions of the grammar. (also commonly symbolized by a ) is the start variable (or start symbol), used to represent the whole sentence (or program). It must be an element of . Production rule notation A production rule in is formalized mathematically as a pair , where is a nonterminal and is a string of variables and/or terminals; rather than using ordered pair notation, production rules are usually written using an arrow operator with as its left hand side and as its right hand side: . 
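As an illustration of the 4-tuple and the arrow notation just described, the following sketch encodes a small grammar in Python. The variable names and the helper function are hypothetical, and the example grammar, which generates the strings a^n b^n, is chosen only for brevity.

```python
# Hypothetical encoding of the 4-tuple (V, Sigma, R, S); the grammar shown
# generates the strings a^n b^n and is used only for illustration.
V = {"S"}                       # nonterminal (variable) symbols
SIGMA = {"a", "b"}              # terminal symbols, disjoint from V
R = {                           # productions, i.e. S -> a S b | epsilon
    "S": [["a", "S", "b"], []]  # the empty right-hand side stands for epsilon
}
START = "S"

def expand(symbols, depth=3):
    """Enumerate terminal strings derivable with at most `depth` expansions per branch."""
    if all(s in SIGMA for s in symbols):
        yield "".join(symbols)
        return
    if depth == 0:
        return
    i = next(j for j, s in enumerate(symbols) if s in V)   # leftmost variable
    for rhs in R[symbols[i]]:
        yield from expand(symbols[:i] + rhs + symbols[i + 1:], depth - 1)

print(sorted(set(expand([START])), key=len))   # ['', 'ab', 'aabb']
```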
It is allowed for to be the empty string, and in this case it is customary to denote it by ε. The form is called an -production. It is common to list all right-hand sides for the same left-hand side on the same line, using | (the vertical bar) to separate them. Rules and can hence be written as . In this case, and are called the first and second alternative, respectively. Rule application For any strings , we say directly yields , written as , if with and such that and . Thus, is a result of applying the rule to . Repetitive rule application For any strings we say yields or is derived from if there is a positive integer and strings such that . This relation is denoted , or in some textbooks. If , the relation holds. In other words, and are the reflexive transitive closure (allowing a string to yield itself) and the transitive closure (requiring at least one step) of , respectively. Context-free language The language of a grammar is the set of all terminal-symbol strings derivable from the start symbol. A language is said to be a context-free language (CFL), if there exists a CFG , such that . Non-deterministic pushdown automata recognize exactly the context-free languages. Examples Words concatenated with their reverse The grammar , with productions , , , is context-free. It is not proper since it includes an ε-production. A typical derivation in this grammar is . This makes it clear that . The language is context-free; however, it can be proved that it is not regular. If the productions , , are added, a context-free grammar for the set of all palindromes over the alphabet is obtained. Well-formed parentheses The canonical example of a context-free grammar is parenthesis matching, which is representative of the general case. There are two terminal symbols "(" and ")" and one nonterminal symbol S. The production rules are , , The first rule allows the S symbol to multiply; the second rule allows the S symbol to become enclosed by matching parentheses; and the third rule terminates the recursion. Well-formed nested parentheses and square brackets A second canonical example is two different kinds of matching nested parentheses, described by the productions: with terminal symbols [ ] ( ) and nonterminal S. The following sequence can be derived in that grammar: Matching pairs In a context-free grammar, we can pair up characters the way we do with brackets. The simplest example: This grammar generates the language , which is not regular (according to the pumping lemma for regular languages). The special character ε stands for the empty string. By changing the above grammar to we obtain a grammar generating the language instead. This differs only in that it contains the empty string while the original grammar did not. Distinct number of a's and b's A context-free grammar for the language consisting of all strings over {a,b} containing an unequal number of a's and b's: Here, the nonterminal T can generate all strings with more a's than b's, the nonterminal U generates all strings with more b's than a's and the nonterminal V generates all strings with an equal number of a's and b's. Omitting the third alternative in the rules for T and U does not restrict the grammar's language. Second block of b's of double size Another example of a non-regular language is . 
It is context-free as it can be generated by the following context-free grammar: First-order logic formulas The formation rules for the terms and formulas of formal logic fit the definition of context-free grammar, except that the set of symbols may be infinite and there may be more than one start symbol. Examples of languages that are not context free In contrast to well-formed nested parentheses and square brackets in the previous section, there is no context-free grammar for generating all sequences of two different types of parentheses, each separately balanced disregarding the other, where the two types need not nest inside one another, for example: or The fact that this language is not context free can be proven using pumping lemma for context-free languages and a proof by contradiction, observing that all words of the form should belong to the language. This language belongs instead to a more general class and can be described by a conjunctive grammar, which in turn also includes other non-context-free languages, such as the language of all words of the form . Regular grammars Every regular grammar is context-free, but not all context-free grammars are regular. The following context-free grammar, for example, is also regular. The terminals here are and , while the only nonterminal is . The language described is all nonempty strings of s and s that end in . This grammar is regular: no rule has more than one nonterminal in its right-hand side, and each of these nonterminals is at the same end of the right-hand side. Every regular grammar corresponds directly to a nondeterministic finite automaton, so we know that this is a regular language. Using vertical bars, the grammar above can be described more tersely as follows: Derivations and syntax trees A derivation of a string for a grammar is a sequence of grammar rule applications that transform the start symbol into the string. A derivation proves that the string belongs to the grammar's language. A derivation is fully determined by giving, for each step: the rule applied in that step the occurrence of its left-hand side to which it is applied For clarity, the intermediate string is usually given as well. For instance, with the grammar: the string can be derived from the start symbol with the following derivation: (by rule 1. on ) (by rule 1. on the second ) (by rule 2. on the first ) (by rule 2. on the second ) (by rule 3. on the third ) Often, a strategy is followed that deterministically chooses the next nonterminal to rewrite: in a leftmost derivation, it is always the leftmost nonterminal; in a rightmost derivation, it is always the rightmost nonterminal. Given such a strategy, a derivation is completely determined by the sequence of rules applied. For instance, one leftmost derivation of the same string is (by rule 1 on the leftmost ) (by rule 2 on the leftmost ) (by rule 1 on the leftmost ) (by rule 2 on the leftmost ) (by rule 3 on the leftmost ), which can be summarized as rule 1 rule 2 rule 1 rule 2 rule 3. One rightmost derivation is: (by rule 1 on the rightmost ) (by rule 1 on the rightmost ) (by rule 3 on the rightmost ) (by rule 2 on the rightmost ) (by rule 2 on the rightmost ), which can be summarized as rule 1 rule 1 rule 3 rule 2 rule 2. The distinction between leftmost derivation and rightmost derivation is important because in most parsers the transformation of the input is defined by giving a piece of code for every grammar rule that is executed whenever the rule is applied. 
Therefore, it is important to know whether the parser determines a leftmost or a rightmost derivation because this determines the order in which the pieces of code will be executed. See for an example LL parsers and LR parsers. A derivation also imposes in some sense a hierarchical structure on the string that is derived. For example, if the string "1 + 1 + a" is derived according to the leftmost derivation outlined above, the structure of the string would be: where indicates a substring recognized as belonging to . This hierarchy can also be seen as a tree: This tree is called a parse tree or "concrete syntax tree" of the string, by contrast with the abstract syntax tree. In this case the presented leftmost and the rightmost derivations define the same parse tree; however, there is another rightmost derivation of the same string (by rule 1 on the rightmost ) (by rule 3 on the rightmost ) (by rule 1 on the rightmost ) (by rule 2 on the rightmost ) (by rule 2 on the rightmost ), which defines a string with a different structure and a different parse tree: Note however that both parse trees can be obtained by both leftmost and rightmost derivations. For example, the last tree can be obtained with the leftmost derivation as follows: (by rule 1 on the leftmost ) (by rule 1 on the leftmost ) (by rule 2 on the leftmost ) (by rule 2 on the leftmost ) (by rule 3 on the leftmost ), If a string in the language of the grammar has more than one parsing tree, then the grammar is said to be an ambiguous grammar. Such grammars are usually hard to parse because the parser cannot always decide which grammar rule it has to apply. Usually, ambiguity is a feature of the grammar, not the language, and an unambiguous grammar can be found that generates the same context-free language. However, there are certain languages that can only be generated by ambiguous grammars; such languages are called inherently ambiguous languages. Normal forms Every context-free grammar with no ε-production has an equivalent grammar in Chomsky normal form, and a grammar in Greibach normal form. "Equivalent" here means that the two grammars generate the same language. The especially simple form of production rules in Chomsky normal form grammars has both theoretical and practical implications. For instance, given a context-free grammar, one can use the Chomsky normal form to construct a polynomial-time algorithm that decides whether a given string is in the language represented by that grammar or not (the CYK algorithm). Closure properties Context-free languages are closed under the various operations, that is, if the languages K and L are context-free, so is the result of the following operations: union K ∪ L; concatenation K ∘ L; Kleene star L* substitution (in particular homomorphism) inverse homomorphism intersection with a regular language They are not closed under general intersection (hence neither under complementation) and set difference. Decidable problems The following are some decidable problems about context-free grammars. Parsing The parsing problem, checking whether a given word belongs to the language given by a context-free grammar, is decidable, using one of the general-purpose parsing algorithms: CYK algorithm (for grammars in Chomsky normal form) Earley parser GLR parser LL parser (only for the proper subclass of LL(k) grammars) Context-free parsing for Chomsky normal form grammars was shown by Leslie G. 
Valiant to be reducible to Boolean matrix multiplication, thus inheriting its complexity upper bound of O(n2.3728639). Conversely, Lillian Lee has shown O(n3−ε) Boolean matrix multiplication to be reducible to O(n3−3ε) CFG parsing, thus establishing some kind of lower bound for the latter. Reachability, productiveness, nullability A nonterminal symbol is called productive, or generating, if there is a derivation for some string of terminal symbols. is called reachable if there is a derivation for some strings of nonterminal and terminal symbols from the start symbol. is called useless if it is unreachable or unproductive. is called nullable if there is a derivation . A rule is called an ε-production. A derivation is called a cycle. Algorithms are known to eliminate from a given grammar, without changing its generated language, unproductive symbols, unreachable symbols, ε-productions, with one possible exception, and cycles. In particular, an alternative containing a useless nonterminal symbol can be deleted from the right-hand side of a rule. Such rules and alternatives are called useless. In the depicted example grammar, the nonterminal D is unreachable, and E is unproductive, while C → C causes a cycle. Hence, omitting the last three rules does not change the language generated by the grammar, nor does omitting the alternatives "| Cc | Ee" from the right-hand side of the rule for S. A context-free grammar is said to be proper if it has neither useless symbols nor ε-productions nor cycles. Combining the above algorithms, every context-free grammar not generating ε can be transformed into a weakly equivalent proper one. Regularity and LL(k) checks It is decidable whether a given grammar is a regular grammar, as well as whether it is an LL(k) grammar for a given k≥0. If k is not given, the latter problem is undecidable. Given a context-free grammar, it is not decidable whether its language is regular, nor whether it is an LL(k) language for a given k. Emptiness and finiteness There are algorithms to decide whether the language of a given context-free grammar is empty, as well as whether it is finite. Undecidable problems Some questions that are undecidable for wider classes of grammars become decidable for context-free grammars; e.g. the emptiness problem (whether the grammar generates any terminal strings at all), is undecidable for context-sensitive grammars, but decidable for context-free grammars. However, many problems are undecidable even for context-free grammars; the most prominent ones are handled in the following. Universality Given a CFG, does it generate the language of all strings over the alphabet of terminal symbols used in its rules? A reduction can be demonstrated to this problem from the well-known undecidable problem of determining whether a Turing machine accepts a particular input (the halting problem). The reduction uses the concept of a computation history, a string describing an entire computation of a Turing machine. A CFG can be constructed that generates all strings that are not accepting computation histories for a particular Turing machine on a particular input, and thus it will accept all strings only if the machine does not accept that input. Language equality Given two CFGs, do they generate the same language? The undecidability of this problem is a direct consequence of the previous: it is impossible to even decide whether a CFG is equivalent to the trivial CFG defining the language of all strings. 
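The productiveness and emptiness checks just described amount to a simple fixed-point computation: repeatedly mark a nonterminal as productive once one of its alternatives consists only of terminals and already-marked nonterminals; the language is empty exactly when the start symbol is never marked. A minimal sketch in Python follows; the grammar encoding (a dictionary of alternatives) and the function names are illustrative choices.

def productive_nonterminals(nonterminals, productions):
    """Return the set of nonterminals that derive at least one terminal string."""
    productive = set()
    changed = True
    while changed:
        changed = False
        for lhs, alternatives in productions.items():
            if lhs in productive:
                continue
            for rhs in alternatives:
                # Productive if every symbol is a terminal or an already-productive nonterminal.
                if all(s not in nonterminals or s in productive for s in rhs):
                    productive.add(lhs)
                    changed = True
                    break
    return productive

def language_is_empty(nonterminals, productions, start):
    """L(G) is empty if and only if the start symbol is unproductive."""
    return start not in productive_nonterminals(nonterminals, productions)

# Example: S -> A B, A -> a, B -> B b.  B never terminates a derivation, so L(G) is empty.
N = {"S", "A", "B"}
P = {"S": [["A", "B"]], "A": [["a"]], "B": [["B", "b"]]}
print(language_is_empty(N, P, "S"))  # prints: True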
Language inclusion Given two CFGs, can the first one generate all strings that the second one can generate? If this problem was decidable, then language equality could be decided too: two CFGs and generate the same language if is a subset of and is a subset of . Being in a lower or higher level of the Chomsky hierarchy Using Greibach's theorem, it can be shown that the two following problems are undecidable: Given a context-sensitive grammar, does it describe a context-free language? Given a context-free grammar, does it describe a regular language? Grammar ambiguity Given a CFG, is it ambiguous? The undecidability of this problem follows from the fact that if an algorithm to determine ambiguity existed, the Post correspondence problem could be decided, which is known to be undecidable. This may be proved by Ogden's lemma. Language disjointness Given two CFGs, is there any string derivable from both grammars? If this problem was decidable, the undecidable Post correspondence problem (PCP) could be decided, too: given strings over some alphabet , let the grammar consist of the rule ; where denotes the reversed string and does not occur among the ; and let grammar consist of the rule ; Then the PCP instance given by has a solution if and only if and share a derivable string. The left of the string (before the ) will represent the top of the solution for the PCP instance while the right side will be the bottom in reverse. Extensions An obvious way to extend the context-free grammar formalism is to allow nonterminals to have arguments, the values of which are passed along within the rules. This allows natural language features such as agreement and reference, and programming language analogs such as the correct use and definition of identifiers, to be expressed in a natural way. E.g. we can now easily express that in English sentences, the subject and verb must agree in number. In computer science, examples of this approach include affix grammars, attribute grammars, indexed grammars, and Van Wijngaarden two-level grammars. Similar extensions exist in linguistics. An extended context-free grammar (or regular right part grammar) is one in which the right-hand side of the production rules is allowed to be a regular expression over the grammar's terminals and nonterminals. Extended context-free grammars describe exactly the context-free languages. Another extension is to allow additional terminal symbols to appear at the left-hand side of rules, constraining their application. This produces the formalism of context-sensitive grammars. Subclasses There are a number of important subclasses of the context-free grammars: LR(k) grammars (also known as deterministic context-free grammars) allow parsing (string recognition) with deterministic pushdown automata (PDA), but they can only describe deterministic context-free languages. Simple LR, Look-Ahead LR grammars are subclasses that allow further simplification of parsing. SLR and LALR are recognized using the same PDA as LR, but with simpler tables, in most cases. LL(k) and LL(*) grammars allow parsing by direct construction of a leftmost derivation as described above, and describe even fewer languages. Simple grammars are a subclass of the LL(1) grammars mostly interesting for its theoretical property that language equality of simple grammars is decidable, while language inclusion is not. Bracketed grammars have the property that the terminal symbols are divided into left and right bracket pairs that always match up in rules. 
Linear grammars have no rules with more than one nonterminal on the right-hand side. Regular grammars are a subclass of the linear grammars and describe the regular languages, i.e. they correspond to finite automata and regular expressions. LR parsing extends LL parsing to support a larger range of grammars; in turn, generalized LR parsing extends LR parsing to support arbitrary context-free grammars. On LL grammars and LR grammars, it essentially performs LL parsing and LR parsing, respectively, while on nondeterministic grammars, it is as efficient as can be expected. Although GLR parsing was developed in the 1980s, many new language definitions and parser generators continue to be based on LL, LALR or LR parsing up to the present day. Linguistic applications Chomsky initially hoped to overcome the limitations of context-free grammars by adding transformation rules. Such rules are another standard device in traditional linguistics; e.g. passivization in English. Much of generative grammar has been devoted to finding ways of refining the descriptive mechanisms of phrase-structure grammar and transformation rules such that exactly the kinds of things can be expressed that natural language actually allows. Allowing arbitrary transformations does not meet that goal: they are much too powerful, being Turing complete unless significant restrictions are added (e.g. no transformations that introduce and then rewrite symbols in a context-free fashion). Chomsky's general position regarding the non-context-freeness of natural language has held up since then, although his specific examples regarding the inadequacy of context-free grammars in terms of their weak generative capacity were later disproved. Gerald Gazdar and Geoffrey Pullum have argued that despite a few non-context-free constructions in natural language (such as cross-serial dependencies in Swiss German and reduplication in Bambara), the vast majority of forms in natural language are indeed context-free. See also Parsing expression grammar Stochastic context-free grammar Algorithms for context-free grammar generation Pumping lemma for context-free languages References Notes Further reading Chapter 4: Context-Free Grammars, pp. 77–106; Chapter 6: Properties of Context-Free Languages, pp. 125–137. Chapter 2: Context-Free Grammars, pp. 91–122; Section 4.1.2: Decidable problems concerning context-free languages, pp. 156–159; Section 5.1.1: Reductions via computation histories: pp. 176–183. External links Computer programmers may find the stack exchange answer to be useful. CFG Developer created by Christopher Wong at Stanford University in 2014; modified by Kevin Gibbons in 2015. 1956 in computing Compiler construction Formal languages Programming language topics Wikipedia articles with ASCII art
Context-free grammar
[ "Mathematics", "Engineering" ]
5,834
[ "Software engineering", "Formal languages", "Mathematical logic", "Programming language topics" ]
10,888,763
https://en.wikipedia.org/wiki/Hoffmann%20kiln
The Hoffmann kiln is a series of batch process kilns. Hoffmann kilns are the most common kiln used in production of bricks and some other ceramic products. Patented by German Friedrich Hoffmann for brickmaking in 1858, it was later used for lime-burning, and was known as the Hoffmann continuous kiln. Construction and operation A Hoffmann kiln consists of a main fire passage surrounded on each side by several small rooms. Each room contains a pallet of bricks. In the main fire passage there is a fire wagon, that holds a fire that burns continuously. Each room is fired for a specific time, until the bricks are vitrified properly, and thereafter the fire wagon is rolled to the next room to be fired. Each room is connected to the next room by a passageway carrying hot gases from the fire. In this way, the hottest gases are directed into the room that is currently being fired. Then the gases pass into the adjacent room that is scheduled to be fired next. There the gases preheat the brick. As the gases pass through the kiln circuit, they gradually cool as they transfer heat to the brick as it is preheated and dried. This is essentially a counter-current heat exchanger, which makes for a very efficient use of heat and fuel. This efficiency is a principal advantage of the Hoffmann kiln, and is one of the reasons for its original development and continued use throughout history. In addition to the inner opening to the fire passage, each room also has an outside door, through which recently fired brick is removed, and replaced with wet brick to be dried and then fired in the next firing cycle. In a classic Hoffmann kiln, the fire may burn continuously for years, even decades; in Iran, there are kilns that are still active and have been working continuously for 35 years. Any fuel may be used in a Hoffmann kiln, including gasoline, natural gas, heavy petroleum and wood fuel. The dimensions of a typical Hoffmann kiln are completely variable, but in average about 5 m (height) x 15 m (width) x 150 m (length). Hoffmann kiln expansion The first kiln of this class was put into operation on November 22, 1859 in Scholwin (since 1946, Skolwin), near Stettin, which was then part of Prussia. In 1867 there were already 250 of them, most in the Prussian part of Germany, fifty in England and three in France. In Italy, their expansion began in 1870, after being shown at the Paris Exhibition. In September 1870, the first brick factory according to Hoffmann's patent was inaugurated in Australia. The first continuous Hoffmann system kilns installed in Spain would have been in 1880, near Madrid. In 1900 there were already more than 4,000 kilns of this type, distributed throughout Europe, Russia, the Americas, Africa and even the East Indies. In 1904, an oven according to the patent of the British William Sercombe and based on the Hoffmann model began operating in Palmerston North, New Zealand. Hoffman kilns are still in use for brick production in some parts of the world, especially in places where labor costs are low and modern technology is not easily accessible. Historic examples of Hoffmann kilns The Hoffmann kiln is used in almost every country. UK In the British Isles there are only a few Hoffmann kilns remaining, some of which have been preserved. The only ones with a chimney are at Prestongrange Industrial Heritage Museum and Llanymynech Heritage Area. 
The site at Llanymynech, close to Oswestry was used for lime-burning and has recently been partially restored as part of an industrial archaeology conservation project supported by English Heritage and the Heritage Lottery Fund. Two examples in North Yorkshire, the Hoffmann lime-burning kiln at Meal Bank Quarry, Ingleton, and that at the former Craven and Murgatroyd lime works, Langcliffe, are scheduled ancient monuments. There is an intact but abandoned Hoffmann kiln without a chimney present at Minera Limeworks; the site is abandoned but all entrances to the kiln have been grated-off, preventing access. The kiln is in a very poor state of repair, with trees growing out of the walls and the roof. Minera Quarry Trust hopes one day to develop the area into something of a tourist attraction. The Grade II listed Hoffmann brick kiln in Ilkeston, Derbyshire, is also badly neglected, although the recently installed fencing offers some protection for the building and for visitors. At Prestongrange Museum, outside Prestonpans in East Lothian, the Hoffman kiln is still standing and visitors can listen to more about it via a mobile phone tour. There is a nearly complete kiln in Horeb, Carmarthenshire. There is still a working kiln at Kings Dyke in Peterborough, which is the last site of the London Brick Company, owned by Forterra PLC. Australia In Victoria, Australia, at the Brunswick brickworks, there are two surviving kilns converted to residences, and a chimney from a third kiln; there is another in Box Hill, Victoria; also in Melbourne. In Adelaide, South Australia, the last remaining Hoffman kiln in the state is in at the old Hallett Brickworks site in Torrensville. There is one at St Peters in Sydney, New South Wales. In Western Australia, the kiln at the Maylands Brickworks in the Perth suburb of Maylands, which operated from 1927 to 1982 is the only remaining Hoffman kiln in the state. Catalonia Bòbila de Bellamar a Calafell. Other countries There is a complete kiln in the restored Tsalapatas brick Factory in Volos Greece that has been converted to an industrial museum. There are two in New Zealand. Kaohsiung city in Taiwan is also home to a Hoffman kiln, built by the Japanese government in 1899. References External links History of Hoffman Preston Grange tour site Evaluation of Hoffman Kiln Technology RCAHMS Canmore Industrial processes Kilns Lime kilns Firing techniques
Hoffmann kiln
[ "Chemistry", "Engineering" ]
1,235
[ "Chemical equipment", "Lime kilns", "Kilns" ]
10,894,247
https://en.wikipedia.org/wiki/Doublecortin
Neuronal migration protein doublecortin, also known as doublin or lissencephalin-X is a protein that in humans is encoded by the DCX gene. Function Doublecortin (DCX) is a microtubule-associated protein expressed by neuronal precursor cells and immature neurons in embryonic and adult cortical structures. Neuronal precursor cells begin to express DCX while actively dividing, and their neuronal daughter cells continue to express DCX for 2–3 weeks as the cells mature into neurons. Downregulation of DCX begins after 2 weeks, and occurs at the same time that these cells begin to express NeuN, a neuronal marker. Due to the nearly exclusive expression of DCX in developing neurons, this protein has been used increasingly as a marker for neurogenesis. Indeed, levels of DCX expression increase in response to exercise, and that increase occurs in parallel with increased BrdU labeling, which is currently a "gold standard" in measuring neurogenesis. Doublecortin was found to bind to the microtubule cytoskeleton. In vivo and in vitro assays show that Doublecortin stabilizes microtubules and causes bundling. Doublecortin is a basic protein with an iso-electric point of 10 typical of microtubule-binding proteins. Knock out mouse In mice where the Doublecortin gene has been knocked out, cortical layers are still correctly formed. However, the hippocampi of these mice show disorganisation in the CA3 region. The normally single layer of pyramidal cells in mutants is seen as a double layer. These mice also have different behavior than their wild type littermates and are epileptic. Structure The detailed sequence analysis of Doublecortin and Doublecortin-like proteins allowed the identification of a tandem repeat of evolutionarily conserved Doublecortin (DC) domains. These domains are found in the N terminus of proteins and consists of tandemly repeated copies of an around 80 amino acids region. It has been suggested that the first DC domain of Doublecortin binds tubulin and enhances microtubule polymerisation. Doublecortin has been shown to influence the structure of microtubules. Microtubule nucleated in vitro in the presence of Doublecortin have almost exclusively 13 protofilaments, whereas microtubule nucleated without Doublecortin are present in a range of different sizes. Interactions Doublecortin has been shown to interact with PAFAH1B1. Clinical significance Doublecortin is mutated in X-linked lissencephaly and the double cortex syndrome, and the clinical manifestations are sex-linked. In males, X-linked lissencephaly produces a smooth brain due to lack of migration of immature neurons, which normally promote folding of the brain surface. Double cortex syndrome is characterized by abnormal migration of neural tissue during development which results in two bands of misplaced neurons within the subcortical white, generating two cortices, giving the name to the syndrome; this finding generally occurs in females. The mutation was discovered by Joseph Gleeson and Christopher A. Walsh in Boston. At least 49 disease-causing mutations in this gene have been discovered. See also Lissencephaly References Further reading External links GeneReviews/NCBI/NIH/UW entry on DCX-Related Disorders OMIM entries on DCX-Related Disorders Protein families Proteins
Doublecortin
[ "Chemistry", "Biology" ]
708
[ "Biomolecules by chemical classification", "Protein classification", "Proteins", "Molecular biology", "Protein families" ]
14,526,771
https://en.wikipedia.org/wiki/Standards%20of%20Fundamental%20Astronomy
The Standards of Fundamental Astronomy (SOFA) software libraries are a collection of subroutines that implement official International Astronomical Union (IAU) algorithms for astronomical computations. As of February 2009 they are available in both Fortran and C source code format. Capabilities The subroutines in the libraries cover the following areas: Calendars Time scales Earth's rotation and sidereal time Ephemerides (limited precision) Precession, nutation, polar motion Proper motion Star catalog conversions Astrometric transformations Galactic Coordinates Licensing As of the February 2009 release, SOFA licensing changed to allow use for any purpose, provided certain requirements are met. Previously, commercial usage was specifically excluded and required written agreement of the SOFA board. See also Naval Observatory Vector Astrometry Subroutines References External links SOFA Home Page Scholarpedia overview of SOFA International Astronomical Union and Working group "Standards of Fundamental Astronomy" Celestial mechanics Astronomical coordinate systems Numerical software Astronomy software
Standards of Fundamental Astronomy
[ "Physics", "Astronomy", "Mathematics" ]
189
[ "Classical mechanics stubs", "Works about astronomy", "Classical mechanics", "Astrophysics", "Astronomy stubs", "Astronomical coordinate systems", "Astrophysics stubs", "Astronomy software", "Numerical software", "Coordinate systems", "Celestial mechanics", "Mathematical software" ]
14,530,447
https://en.wikipedia.org/wiki/Infrared%20sensing%20in%20snakes
The ability to sense infrared thermal radiation evolved independently in three different groups of snakes, consisting of the families of Boidae (boas), Pythonidae (pythons), and the subfamily Crotalinae (pit vipers). What is commonly called a pit organ allows these animals to essentially "see" radiant heat at wavelengths between 5 and 30 μm. The more advanced infrared sense of pit vipers allows these animals to strike prey accurately even in the absence of light, and detect warm objects from several meters away. It was previously thought that the organs evolved primarily as prey detectors, but recent evidence suggests that it may also be used in thermoregulation and predator detection, making it a more general-purpose sensory organ than was supposed. Phylogeny and evolution The facial pit underwent parallel evolution in pitvipers and some boas and pythons. It evolved once in pitvipers and multiple times in boas and pythons. The electrophysiology of the structure is similar between the two lineages, but they differ in gross structural anatomy. Most superficially, pitvipers possess one large pit organ on either side of the head, between the eye and the nostril (loreal pits), while boas and pythons have three or more smaller pits lining the upper and sometimes the lower lip, in or between the scales (labial pits). Those of the pitvipers are the more advanced, having a suspended sensory membrane as opposed to a simple pit structure. Anatomy In pit vipers, the heat pit consists of a deep pocket in the rostrum with a membrane stretched across it. Behind the membrane, an air-filled chamber provides air contact on either side of the membrane. The pit membrane is highly vascular and heavily innervated with numerous heat-sensitive receptors formed from terminal masses of the trigeminal nerve (terminal nerve masses, or TNMs). The receptors are therefore not discrete cells, but a part of the trigeminal nerve itself. The labial pit found in boas and pythons lacks the suspended membrane and consists more simply of a pit lined with a membrane that is similarly innervated and vascular, though the morphology of the vasculature differs between these snakes and crotalines. The purpose of the vasculature, in addition to providing oxygen to the receptor terminals, is to rapidly cool the receptors to their thermo-neutral state after being heated by thermal radiation from a stimulus. Were it not for this vasculature, the receptor would remain in a warm state after being exposed to a warm stimulus, and would present the animal with afterimages even after the stimulus was removed. Neuroanatomy In all cases, the facial pit is innervated by the trigeminal nerve. In crotalines, information from the pit organ is relayed to the nucleus reticularus caloris in the medulla via the lateral descending trigeminal tract. From there, it is relayed to the contralateral optic tectum. In boas and pythons, information from the labial pit is sent directly to the contralateral optic tectum via the lateral descending trigeminal tract, bypassing the nucleus reticularus caloris. It is the optic tectum of the brain which eventually processes these infrared cues. This portion of the brain receives other sensory information as well, most notably optic stimulation, but also motor, proprioceptive and auditory. Some neurons in the tectum respond to visual or infrared stimulation alone; others respond more strongly to combined visual and infrared stimulation, and still others respond only to a combination of visual and infrared. 
Some neurons appear to be tuned to detect movement in one direction. It has been found that the snake's visual and infrared maps of the world are overlaid in the optic tectum. This combined information is relayed via the tectum to the forebrain. The nerve fibers in the pit organ are constantly firing at a very low rate. Objects that are within a neutral temperature range do not change the rate of firing; the neutral range is determined by the average thermal radiation of all objects in the receptive field of the organ. The thermal radiation above a given threshold causes an increase in the temperature of the nerve fiber, resulting in stimulation of the nerve and subsequent firing, with increased temperature resulting in increased firing rate. The sensitivity of the nerve fibers is estimated to be <0.001 °C. The pit organ will adapt to a repeated stimulus; if an adapted stimulus is removed, there will be a fluctuation in the opposite direction. For example, if a warm object is placed in front of the snake, the organ will increase in firing rate at first, but after a while will adapt to the warm object and the firing rate of the nerves in the pit organ will return to normal. If that warm object is then removed, the pit organ will now register the space that it used to occupy as being colder, and as such the firing rate will be depressed until it adapts to the removal of the object. The latency period of adaptation is approximately 50 to 150 ms. The facial pit actually visualizes thermal radiation using the same optical principles as a pinhole camera, wherein the location of a source of thermal radiation is determined by the location of the radiation on the membrane of the heat pit. However, studies that have visualized the thermal images seen by the facial pit using computer analysis have suggested that the resolution is extremely poor. The size of the opening of the pit results in poor resolution of small, warm objects, and coupled with the pit's small size and subsequent poor heat conduction, the image produced is of extremely low resolution and contrast. It is known that some focusing and sharpening of the image occurs in the lateral descending trigeminal tract, and it is possible that the visual and infrared integration that occurs in the tectum is also used to help sharpen the image. Molecular mechanism In spite of its detection of infrared light, the infrared detection mechanism is not similar to photoreceptors - while photoreceptors detect light via photochemical reactions, the protein in the pits of snakes is a type of transient receptor potential channel, TRPA1 which is a temperature sensitive ion channel. It senses infrared signals through a mechanism involving warming of the pit organ, rather than chemical reaction to light. In structure and function it resembles a biological version of warmth-sensing instrument called a bolometer. This is consistent with the very thin pit membrane, which would allow incoming infrared radiation to quickly and precisely warm a given ion channel and trigger a nerve impulse, as well as the vascularization of the pit membrane in order to rapidly cool the ion channel back to its original temperature state. While the molecular precursors of this mechanism are found in other snakes, the protein is both expressed to a much lower degree and much less sensitive to heat. Behavioral and ecological implications Infrared sensing snakes use pit organs extensively to detect and target warm-blooded prey such as rodents and birds. 
Blind or blindfolded rattlesnakes can strike prey accurately in the complete absence of visible light, though it does not appear that they assess prey animals based on their body temperature. In addition, snakes may deliberately choose ambush sites that facilitate infrared detection of prey. It was previously assumed that the organ evolved specifically for prey capture. However, recent evidence suggests that the pit organ is also used for thermoregulation. In an experiment that tested snakes' abilities to locate a cool thermal refuge in an uncomfortably hot maze, all pit vipers were able to locate the refuge quickly and easily, while true vipers were unable to do so. This finding suggests that the pit vipers were using their pit organs to aid in thermoregulatory decisions. It is also possible that the organ even evolved as a defensive adaptation rather than a predatory one, or that multiple pressures have contributed to the organ's development. The use of the heat pit to direct thermoregulation or other behaviors in pythons and boas has not yet been determined. See also Crotalinae Infrared sensing in vampire bats Neuroethology Thermoception References External links Physorg article on Infrared vision in snakes Infrared vision in snakes summary article (archived 7/15/2013) Electromagnetic radiation Ethology Heat transfer Senses Snake anatomy Snakes
Infrared sensing in snakes
[ "Physics", "Chemistry", "Biology" ]
1,700
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Behavior", "Electromagnetic radiation", "Behavioural sciences", "Radiation", "Thermodynamics", "Ethology" ]
14,534,013
https://en.wikipedia.org/wiki/Sir%20William%20Dunn%20Institute%20of%20Biochemistry
The Sir William Dunn Institute of Biochemistry at Cambridge University was a research institute endowed from the estate of Sir William Dunn, which was the origin of the Cambridge Department of Biochemistry. Created for Frederick Gowland Hopkins on the recommendation of Walter Morley Fletcher, it opened in 1924 and spurred the growth of Hopkins's school of biochemistry. Hopkins's school dominated the discipline of biochemistry from the 1920s through the interwar years and was the source of many leaders of the next generation of biochemists, and the Dunn bequest inaugurated a period of rapid expansion for biochemistry. Origin of the Institute In 1918, a trustee of the estate of Sir William Dunn approached a Cambridge biologist, William Bate Hardy, about the possibility of putting some of Dunn's estate toward biomedical science research. Hardy referred the trustee (Charles D. Seligman) to Walter Morley Fletcher, the secretary of the Medical Research Council. The Dunn estate, like much of the philanthropy world, was beginning to look more to "preventive" philanthropy (as opposed to direct aid to the needy) by sponsoring research institutions that could address social ills. Between 1919 and 1925, Fletcher convinced the Dunn trustees to put nearly half a million pounds toward biomedical research. Fletcher was a long-time friend and institutional ally of Frederick Gowland Hopkins, a pioneering biochemist who was trying to establish "general biochemistry" as a field distinct from either medical physiology or organic chemistry, more a part of biology than medicine. Fletcher lobbied for the Dunn estate to fund Hopkins's proposal, among the over 500 funding proposals submitted. By late 1919, Fletcher was negotiating for a considerable endowment that would allow Hopkins to create an institute solely devoted to biochemistry. The approval of this endowment, ultimately about 210,000 pounds, reversed the declining fortunes of Hopkins's research group, which had been suffering from lack of available academic positions, research space, and able students since World War I. With funding in the works, Hopkins group expanded from 10 researchers in 1920 to 59 in 1925; in 1922 they began using endowment funds and in 1924 the Dunn Institute of Biochemistry opened. Hopkins became the first Sir William Dunn Professor of Biochemistry and head of the new University of Cambridge Department of Biochemistry, and he appointed researchers in a range of specialized fields covering the whole of what he considered the proper, broad domain of biochemistry. Hopkins's school of biochemistry Hopkins's school, housed in the Dunn Institute, was both productive and influential. Between World War I and World War II, 40% of the papers in the Biochemical Journal were authored by Hopkins and other Cambridge biochemists. Hopkins's program of "general biochemistry" was unique in having a stable institutional base (unlike in Germany, where there were only a scattered handful of biochemistry professorships) but not being dependent on a medical school (unlike the biochemistry and physiological chemistry departments in the United States). The Dunn Institute under Hopkins had another unusual feature for the time: Hopkins did not discriminate against hiring Jewish scientists, unlike the large majority of American, British and German universities and medical schools. This may have helped Hopkins assemble such a strong group of researchers, since talented Jewish biochemists had few other options. 
See also Dunn Human Nutrition Unit, another beneficiary of Dunn's will Notes References Kohler, Robert E. "Walter Fletcher, F. G. Hopkins, and the Dunn Institute of Biochemistry: A Case Study in the Patronage of Science". Isis, Vol. 69, No. 3 (1978), pp. 330–355. Kohler, Robert E. From medical chemistry to biochemistry: The making of a biomedical discipline. Cambridge, England: Cambridge University Press, 1982. Kornberg, Arthur. For the Love of Enzymes: The Odyssey of a Biochemist. Cambridge, Massachusetts: Harvard University Press, 1989. Biochemistry research institutes Biological research institutes in the United Kingdom Biochemistry, Sir William Dunn Institute of 1924 establishments in England
Sir William Dunn Institute of Biochemistry
[ "Chemistry" ]
791
[ "Biochemistry research institutes", "Biochemistry organizations" ]
9,377,615
https://en.wikipedia.org/wiki/Chromosome%20combing
Chromosome combing (also known as molecular combing or DNA combing) is a technique used to produce an array of uniformly stretched DNA that is then highly suitable for nucleic acid hybridization studies such as fluorescent in situ hybridisation (FISH) which benefit from the uniformity of stretching, the easy access to the hybridisation target sequences, and the resolution offered by the large distance between two probes, which is due to the stretching of the DNA by a factor of 1.5 times the crystallographic length of DNA. DNA in solution (i.e. with a randomly-coiled structure) is stretched by retracting the meniscus of the solution at a constant rate (typically 300 μm/s). The ends of DNA strands, which are thought to be frayed (i.e. open and exposing polar groups) bind to ionisable groups coating a silanized glass plate at a pH below the pKa of the ionizable groups (ensuring they are charged enough to interact with the ends of DNA). The rest of the DNA, which is mostly dsDNA, cannot form these interactions (aside from a few "touch down" segments along the length of the DNA strand) so is available for hybridisation to probes. As the meniscus retracts, surface retention creates a force that acts on DNA to retain it in the liquid phase; however this force is inferior to the strength of the DNA's attachment; the result is that the DNA is stretched as it enters the air phase; as the force acts in the locality of the air/liquid phase, it is invariant to different lengths or conformations of the DNA in solution, so DNA of any length will be stretched the same as the meniscus retracts. As this stretching is constant along the length of a DNA, distance along the strand can be related to base content; 1 μm is approximately equivalent to 2 kb. DNA regions of interest are observed by hybridising them with probes labelled by haptens like biotin; this can then be bound by one or more layers of fluorochrome-associated ligands (such as immunofluorescence antibodies). Multicolour tagging is also possible. This has several potential uses, typically as a high-resolution physical mapping technique (e.g. for positional cloning), an example of which was the correct mapping of 200 kb of the CAPN3 gene region, or the mapping of non-overlapping sequences (since the distance between two probes can be accurately measured). It is therefore useful for finding exons, microdeletions, amplifications, or rearrangements. Before the combing improvement, FISH was too low-res to be of use in this case. With this technique, the resolution of FISH is theoretically limited only by the resolution of the epifluorescence microscope; in practice, resolutions of around 2 μm are obtained, for DNA molecules usually 200–600 kb long (though combing-FISH has been used with some success on molecules in excess of 1 Mb long), and there may be room for improvement through optimisation. Since DNA analyses using this technique are single-molecule, genomes from different cells can be compared to find anomalies, with implications for diagnosis of cancer and other genetic alterations. Chromosome combing is also used to study DNA replication, a highly regulated process that is reliant on a specific program of temporal and spatial distribution of activation of origins of replication. Each origin occupies a distinct genetic locus and must fire only once per cell cycle. Chromosome combing allows a genome-wide view of the firing of origins and propagation of replication forks. 
As no assumptions are made about the sequence of the origins, this technique is particularly useful for mapping origins in eukaryotes, which are not thought to have precisely defined initiation sequences. Strategies involving combing recently replicated DNA typically involve incorporating modified nucleotides (such as BrdU, bromodeoxyuridine) into the nascent DNA, then fluorescently detecting it. As replication forks spread bidirectionally from origins of replication at (approximately) equal speeds, then origin position can be inferred. Replacing the modified nucleotide pool with a different type of modified nucleotide after a certain amount of time allows development of a time-resolved picture of the firing of sites, and the kinetics of replication forks. Pause sites can be identified, merged replication forks resolved, and the frequency of origin firings in different time periods to be studied. Firing frequencies have shown in in vitro studies of Xenopus laevis egg extract to increase as S phase progresses. In another study on Epstein-Barr virus episomes, hybridised probes were used to visualise the regional distribution of firing events; a particular zone showed preference for firing, whilst a few pause sites were also inferred. Chromosome combing is performed by the company Genomic Vision, based in Paris. References Biochemistry Genetics techniques
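Because combed DNA is stretched at an essentially constant factor, the conversion quoted above (roughly 2 kb per μm) lets physical measurements on the fibre be read directly as genomic distances. A trivial illustration in Python; the measured value is invented for the example.

KB_PER_MICROMETRE = 2.0  # approximate conversion for uniformly combed DNA

def micrometres_to_kilobases(distance_um):
    """Convert a distance measured along a combed fibre into an estimated length in kb."""
    return distance_um * KB_PER_MICROMETRE

# Hypothetical gap measured between two hybridised probes:
print(micrometres_to_kilobases(35.0))  # about 70 kb of sequence separates the probes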
Chromosome combing
[ "Chemistry", "Engineering", "Biology" ]
1,022
[ "Genetics techniques", "Biochemistry", "Genetic engineering", "nan" ]
9,377,661
https://en.wikipedia.org/wiki/Support%20function
In mathematics, the support function hA of a non-empty closed convex set A in describes the (signed) distances of supporting hyperplanes of A from the origin. The support function is a convex function on . Any non-empty closed convex set A is uniquely determined by hA. Furthermore, the support function, as a function of the set A, is compatible with many natural geometric operations, like scaling, translation, rotation and Minkowski addition. Due to these properties, the support function is one of the most central basic concepts in convex geometry. Definition The support function of a non-empty closed convex set A in is given by ; see . Its interpretation is most intuitive when x is a unit vector: by definition, A is contained in the closed half space and there is at least one point of A in the boundary of this half space. The hyperplane H(x) is therefore called a supporting hyperplane with exterior (or outer) unit normal vector x. The word exterior is important here, as the orientation of x plays a role, the set H(x) is in general different from H(−x). Now hA(x) is the (signed) distance of H(x) from the origin. Examples The support function of a singleton A = {a} is . The support function of the Euclidean unit ball is where is the 2-norm. If A is a line segment through the origin with endpoints −a and a, then . Properties As a function of x The support function of a compact nonempty convex set is real valued and continuous, but if the set is closed and unbounded, its support function is extended real valued (it takes the value ). As any nonempty closed convex set is the intersection of its supporting half spaces, the function hA determines A uniquely. This can be used to describe certain geometric properties of convex sets analytically. For instance, a set A is point symmetric with respect to the origin if and only if hA is an even function. In general, the support function is not differentiable. However, directional derivatives exist and yield support functions of support sets. If A is compact and convex, and hA'(u;x) denotes the directional derivative of hA at u ≠ 0 in direction x, we have Here H(u) is the supporting hyperplane of A with exterior normal vector u, defined above. If A ∩ H(u) is a singleton {y}, say, it follows that the support function is differentiable at u and its gradient coincides with y. Conversely, if hA is differentiable at u, then A ∩ H(u) is a singleton. Hence hA is differentiable at all points u ≠ 0 if and only if A is strictly convex (the boundary of A does not contain any line segments). More generally, when is convex and closed then for any , where denotes the set of subgradients of at . It follows directly from its definition that the support function is positive homogeneous: and subadditive: It follows that hA is a convex function. It is crucial in convex geometry that these properties characterize support functions: Any positive homogeneous, convex, real valued function on is the support function of a nonempty compact convex set. Several proofs are known, one is using the fact that the Legendre transform of a positive homogeneous, convex, real valued function is the (convex) indicator function of a compact convex set. Many authors restrict the support function to the Euclidean unit sphere and consider it as a function on Sn-1. The homogeneity property shows that this restriction determines the support function on , as defined above. 
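In standard notation, the definition and the examples above read as follows; the angle brackets for the Euclidean inner product and the symbol B for the unit ball are notational choices made here.

\[ h_A(x) \;=\; \sup_{a \in A} \langle a, x \rangle, \qquad x \in \mathbb{R}^n, \]
\[ h_{\{a\}}(x) = \langle a, x \rangle, \qquad h_{B}(x) = \lVert x \rVert_2 \ \ (B \text{ the Euclidean unit ball}), \qquad h_{[-a,a]}(x) = \lvert \langle a, x \rangle \rvert, \]
\[ h_A(\lambda x) = \lambda\, h_A(x) \ \ (\lambda \ge 0), \qquad h_A(x+y) \le h_A(x) + h_A(y). \]

When A is compact the supremum is attained, so it may equivalently be written as a maximum.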
As a function of A The support functions of a dilated or translated set are closely related to the original set A: and The latter generalises to where A + B denotes the Minkowski sum: The Hausdorff distance of two nonempty compact convex sets A and B can be expressed in terms of support functions, where, on the right hand side, the uniform norm on the unit sphere is used. The properties of the support function as a function of the set A are sometimes summarized in saying that :A h A maps the family of non-empty compact convex sets to the cone of all real-valued continuous functions on the sphere whose positive homogeneous extension is convex. Abusing terminology slightly, is sometimes called linear, as it respects Minkowski addition, although it is not defined on a linear space, but rather on an (abstract) convex cone of nonempty compact convex sets. The mapping is an isometry between this cone, endowed with the Hausdorff metric, and a subcone of the family of continuous functions on Sn-1 with the uniform norm. Variants In contrast to the above, support functions are sometimes defined on the boundary of A rather than on Sn-1, under the assumption that there exists a unique exterior unit normal at each boundary point. Convexity is not needed for the definition. For an oriented regular surface, M, with a unit normal vector, N, defined everywhere on its surface, the support function is then defined by . In other words, for any , this support function gives the signed distance of the unique hyperplane that touches M in x. See also Barrier cone Supporting functional References Convex geometry Types of functions
Support function
[ "Mathematics" ]
1,081
[ "Mathematical objects", "Functions and mappings", "Types of functions", "Mathematical relations" ]
9,379,243
https://en.wikipedia.org/wiki/Bitfrost
Bitfrost is the security design specification for the OLPC XO, a low cost laptop intended for children in developing countries and developed by the One Laptop Per Child (OLPC) project. Bitfrost's main architect is Ivan Krstić. The first public specification was made available in February 2007. Bitfrost architecture Passwords No passwords are required to access or use the computer. System of rights Every program, when first installed, requests certain bundles of rights, for instance "accessing the camera", or "accessing the internet". The system keeps track of these rights, and the program is later executed in an environment which makes only the requested resources available. The implementation is not specified by Bitfrost, but dynamic creation of security contexts is required. The first implementation was based on vserver, the second and current implementation is based on user IDs and group IDs (/etc/password is edited when an activity is started), and a future implementation might involve SE Linux or some other technology. By default, the system denies certain combinations of rights; for instance, a program would not be granted both the right to access the camera and to access the internet. Anybody can write and distribute programs that request allowable right combinations. Programs that require normally unapproved right combinations need a cryptographic signature by some authority. The laptop's user can use the built-in security panel to grant additional rights to any application. Modifying the system The users can modify the laptop's operating system, a special version of Fedora Linux running the new Sugar graphical user interface and operating on top of Open Firmware. The original system remains available in the background and can be restored. By acquiring a developer key from a central location, a user may even modify the background copy of the system and many aspects of the BIOS. Such a developer key is only given out after a waiting period (so that theft of the machine can be reported in time) and is only valid for one particular machine. Theft-prevention leases The laptops request a new "lease" from a central network server once a day. These leases come with an expiry time (typically a month), and the laptop stops functioning if all its leases have expired. Leases can also be given out from local school servers or via a portable USB device. Laptops that have been registered as stolen cannot acquire a new lease. The deploying country decides whether this lease system is used and sets the lease expiry time. Microphone and camera The laptop's built-in camera and microphone are hard-wired to LEDs, so that the user always knows when they are operating. This cannot be switched off by software. Privacy concerns Len Sassaman, a computer security researcher at the Catholic University of Leuven in Belgium and his colleague Meredith Patterson at the University of Iowa in Iowa City claim that the Bitfrost system has inadvertently become a possible tool for unscrupulous governments or government agencies to definitively trace the source of digital information and communications that originated on the laptops. This is a potentially serious issue as many of the countries which have the laptops have governments with questionable human rights records. Notes The specification itself mentions that the name "Bitfrost" is a play on the Norse mythology concept of Bifröst, the bridge between the world of mortals and the realm of Gods. 
According to the Prose Edda, the bridge was built to be strong, yet it will eventually be broken; the bridge is an early recognition of the idea that there's no such thing as a perfect security system. See also CapDesk References External links Ivan Krstić's homepage OLPC Wiki: Bitfrost Bitfrost specification, version Draft-19 - release 1, 7 February 2007 High Security for $100 Laptop, Wired News, 7 February 2007 Making antivirus software obsolete - Technology Review magazine recognized Ivan Krstić, Bitfrost's main architect, as one of the world's top innovators under the age of 35 (Krstić was 21 at the time of publication) for his work on the system. One Laptop per Child Cryptographic software
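The rights system described above can be pictured with a small sketch. The following Python fragment is purely illustrative: the right names, the denied-combination list and the function are hypothetical, and do not reflect the identifiers or logic of the actual Bitfrost implementation.

# Hypothetical right names for illustration only; Bitfrost's real identifiers differ.
DENIED_COMBINATIONS = [
    {"camera", "network"},        # e.g. camera access together with internet access
    {"microphone", "network"},
]

def request_allowed(requested_rights, signed_by_authority=False):
    """Grant a bundle unless it contains a denied combination and lacks a signature."""
    if signed_by_authority:
        return True  # cryptographically signed programs may exceed the defaults
    return not any(combo <= set(requested_rights) for combo in DENIED_COMBINATIONS)

print(request_allowed({"camera"}))                                        # True
print(request_allowed({"camera", "network"}))                             # False without a signature
print(request_allowed({"camera", "network"}, signed_by_authority=True))   # True

In the real system the user can additionally grant extra rights from the built-in security panel, which this sketch does not model.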
Bitfrost
[ "Mathematics" ]
848
[ "Cryptographic software", "Mathematical software" ]
9,383,513
https://en.wikipedia.org/wiki/Pauli%20equation
In quantum mechanics, the Pauli equation or Schrödinger–Pauli equation is the formulation of the Schrödinger equation for spin-1/2 particles, which takes into account the interaction of the particle's spin with an external electromagnetic field. It is the non-relativistic limit of the Dirac equation and can be used where particles are moving at speeds much less than the speed of light, so that relativistic effects can be neglected. It was formulated by Wolfgang Pauli in 1927. In its linearized form it is known as Lévy-Leblond equation. Equation For a particle of mass and electric charge , in an electromagnetic field described by the magnetic vector potential and the electric scalar potential , the Pauli equation reads: Here are the Pauli operators collected into a vector for convenience, and is the momentum operator in position representation. The state of the system, (written in Dirac notation), can be considered as a two-component spinor wavefunction, or a column vector (after choice of basis): . The Hamiltonian operator is a 2 × 2 matrix because of the Pauli operators. Substitution into the Schrödinger equation gives the Pauli equation. This Hamiltonian is similar to the classical Hamiltonian for a charged particle interacting with an electromagnetic field. See Lorentz force for details of this classical case. The kinetic energy term for a free particle in the absence of an electromagnetic field is just where is the kinetic momentum, while in the presence of an electromagnetic field it involves the minimal coupling , where now is the kinetic momentum and is the canonical momentum. The Pauli operators can be removed from the kinetic energy term using the Pauli vector identity: Note that unlike a vector, the differential operator has non-zero cross product with itself. This can be seen by considering the cross product applied to a scalar function : where is the magnetic field. For the full Pauli equation, one then obtains for which only a few analytic results are known, e.g., in the context of Landau quantization with homogenous magnetic fields or for an idealized, Coulomb-like, inhomogeneous magnetic field. Weak magnetic fields For the case of where the magnetic field is constant and homogenous, one may expand using the symmetric gauge , where is the position operator and A is now an operator. We obtain where is the particle angular momentum operator and we neglected terms in the magnetic field squared . Therefore, we obtain where is the spin of the particle. The factor 2 in front of the spin is known as the Dirac g-factor. The term in , is of the form which is the usual interaction between a magnetic moment and a magnetic field, like in the Zeeman effect. For an electron of charge in an isotropic constant magnetic field, one can further reduce the equation using the total angular momentum and Wigner-Eckart theorem. Thus we find where is the Bohr magneton and is the magnetic quantum number related to . The term is known as the Landé g-factor, and is given here by where is the orbital quantum number related to and is the total orbital quantum number related to . From Dirac equation The Pauli equation can be inferred from the non-relativistic limit of the Dirac equation, which is the relativistic quantum equation of motion for spin-1/2 particles. Derivation The Dirac equation can be written as: where and are two-component spinor, forming a bispinor. 
Using the following ansatz: with two new spinors , the equation becomes In the non-relativistic limit, and the kinetic and electrostatic energies are small with respect to the rest energy , leading to the Lévy-Leblond equation. Thus Inserted into the upper component of the Dirac equation, we find the Pauli equation (general form): From a Foldy–Wouthuysen transformation The rigorous derivation of the Pauli equation follows from the Dirac equation in an external field by performing a Foldy–Wouthuysen transformation, considering terms up to order . Similarly, higher-order corrections to the Pauli equation can be determined, giving rise to spin–orbit and Darwin interaction terms, when expanding up to order instead. Pauli coupling Pauli's equation is derived by requiring minimal coupling, which provides a g-factor g = 2. Most elementary particles have anomalous g-factors, different from 2. In the domain of relativistic quantum field theory, one defines a non-minimal coupling, sometimes called Pauli coupling, in order to add an anomalous factor where is the four-momentum operator, is the electromagnetic four-potential, is proportional to the anomalous magnetic dipole moment, is the electromagnetic tensor, and are the Lorentzian spin matrices and the commutator of the gamma matrices . In the context of non-relativistic quantum mechanics, instead of working with the Schrödinger equation, Pauli coupling is equivalent to using the Pauli equation (or postulating Zeeman energy) for an arbitrary g-factor. See also Semiclassical physics Atomic, molecular, and optical physics Group contraction Gordon decomposition Footnotes References Books Eponymous equations of physics Quantum mechanics
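The displayed formulas above did not survive extraction. For reference, a standard statement of the equations discussed, reconstructed from the surrounding definitions (the notation — charge q, mass m, potentials A and phi — is the usual one and is not recovered verbatim from the original):
\[
i\hbar\,\frac{\partial}{\partial t}\,|\psi\rangle
  = \left[\frac{1}{2m}\bigl(\boldsymbol{\sigma}\cdot(\hat{\mathbf p}-q\mathbf A)\bigr)^{2}+q\phi\right]|\psi\rangle ,
\]
and, after applying the Pauli vector identity \((\boldsymbol\sigma\cdot\mathbf a)(\boldsymbol\sigma\cdot\mathbf b)=\mathbf a\cdot\mathbf b + i\,\boldsymbol\sigma\cdot(\mathbf a\times\mathbf b)\),
\[
i\hbar\,\frac{\partial}{\partial t}\,|\psi\rangle
  = \left[\frac{(\hat{\mathbf p}-q\mathbf A)^{2}}{2m}-\frac{q\hbar}{2m}\,\boldsymbol\sigma\cdot\mathbf B+q\phi\right]|\psi\rangle .
\]
In the weak, homogeneous-field limit (symmetric gauge, terms in \(\mathbf B^{2}\) neglected) this reduces to
\[
i\hbar\,\frac{\partial}{\partial t}\,|\psi\rangle
  = \left[\frac{\hat{\mathbf p}^{2}}{2m}-\frac{q}{2m}\,(\hat{\mathbf L}+2\hat{\mathbf S})\cdot\mathbf B+q\phi\right]|\psi\rangle ,
\]
exhibiting the Dirac g-factor of 2 mentioned in the text.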
Pauli equation
[ "Physics" ]
1,075
[ "Quantum mechanics", "Theoretical physics", "Eponymous equations of physics", "Equations of physics" ]
9,384,649
https://en.wikipedia.org/wiki/Europe%20PubMed%20Central
Europe PubMed Central (Europe PMC) is an open-access repository that contains millions of biomedical research works. It was known as UK PubMed Central until 1 November 2012. Service Europe PMC provides free access to more than 9.3 million full-text biomedical and life sciences research articles and over 43.3 million citations. Europe PMC contains some citation information and includes text mining based marked up text that links to external molecular and medical datasets. The Europe PMC funders group requires that articles describing the results of biomedical and life sciences research they have supported be made freely available in Europe PMC within 6 months of publication to maximise the impact of the work that they fund. The Grant Lookup facility allows users to search for information in a wide variety of different ways on over 101,900 grants awarded by the Europe PMC funders. Most content is mirrored from PubMed Central, which manages the deposit of entire books and journals. Additionally, Europe PMC offers a manuscript submission system, Europe PMC plus, which allows scientists to self-deposit their peer-reviewed research articles for inclusion in the Europe PMC collection. Organisation The Europe PMC project was originally launched in 2007 as the first 'mirror' site to PMC, which aims to provide international preservation of the open and free-access biomedical and life sciences literature. It forms part of a network of PMC International (PMCI) repositories that includes PubMed Central Canada. Europe PMC is not an exact "mirror" of the PMC database but has developed some different features. On 15 February 2013 CiteXplore was subsumed under Europe PubMed Central. The resource is managed and developed by the European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI), on behalf of an alliance of 27 biomedical and life sciences research funders, led by the Wellcome Trust. Europe PMC is supported by 27 organisations: Academy of Medical Sciences, Action on Hearing Loss, Alzheimer's Society, Arthritis Research UK, Austrian Science Fund (FWF), the Biotechnology and Biological Sciences Research Council, Blood Cancer UK, Breast Cancer Now, the British Heart Foundation, Cancer Research UK, the Chief Scientist Office of the Scottish Executive Health Department, Diabetes UK, the Department of Health, the Dunhill Medical Trust, the European Research Council, Marie Curie, the Medical Research Council, the Motor Neurone Disease Association, the Multiple Sclerosis Society, the Myrovlytis Trust, the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs), Parkinson's UK, Prostate Cancer UK, Telethon Italy, the Wellcome Trust, the World Health Organization and Worldwide Cancer Research (formerly Association for International Cancer Research). See also List of academic databases and search engines MEDLINE PubMed Central Hyper Articles en Ligne Isidore (platform) References External links Fact-sheet Internet properties established in 2007 Bibliographic databases and indexes Biological databases Databases in Europe Full-text scholarly online databases Information technology organisations based in the United Kingdom Medical databases Medical research organizations Medical search engines Open-access archives Science and technology in Cambridgeshire South Cambridgeshire District
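The article does not describe programmatic access, but Europe PMC also publishes a RESTful web service. The following is a minimal sketch assuming the publicly documented search endpoint at www.ebi.ac.uk/europepmc/webservices/rest/search; the endpoint, parameter names and response layout are assumptions here, not details taken from the text above.

import requests  # third-party HTTP client

# Query Europe PMC's REST search service (assumed endpoint) for open-access
# records matching a term, and print basic citation data for each hit.
BASE = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def search_europe_pmc(term, page_size=5):
    params = {
        "query": f"{term} AND OPEN_ACCESS:y",  # assumed query syntax for open-access filtering
        "format": "json",
        "pageSize": page_size,
    }
    resp = requests.get(BASE, params=params, timeout=30)
    resp.raise_for_status()
    hits = resp.json().get("resultList", {}).get("result", [])
    for hit in hits:
        print(hit.get("id"), "-", hit.get("title"))

if __name__ == "__main__":
    search_europe_pmc("CRISPR")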
Europe PubMed Central
[ "Biology" ]
653
[ "Bioinformatics", "Biological databases" ]
9,386,543
https://en.wikipedia.org/wiki/Landau%E2%80%93Pomeranchuk%E2%80%93Migdal%20effect
In high-energy physics, the Landau–Pomeranchuk–Migdal effect, also known as the Landau–Pomeranchuk effect and the Pomeranchuk effect, or simply LPM effect, is a reduction of the bremsstrahlung and pair production cross sections at high energies or high matter densities. It is named in honor of Lev Landau, Isaak Pomeranchuk and Arkady Migdal. Overview A high-energy particle undergoing multiple soft scatterings from a medium will experience interference effects between adjacent scattering sites. By the uncertainty principle, as the longitudinal momentum transfer becomes small the particle's wavelength increases; if the wavelength becomes longer than the mean free path in the medium (the average distance between scattering sites), the scatterings can no longer be treated as independent events. This is the LPM effect. The Bethe–Heitler spectrum for radiation induced by multiple scattering assumes that the scatterings are independent; the quantum interference between successive scatterings caused by the LPM effect suppresses the radiation spectrum relative to the Bethe–Heitler prediction. The suppression occurs in different parts of the emission spectrum: in quantum electrodynamics (QED) small photon energies are suppressed, while in quantum chromodynamics (QCD) large gluon energies are suppressed. In QED the rescattering of the high-energy electron dominates the process; in QCD the emitted gluons carry color charge and also interact with the medium, and since the gluons are soft their rescattering provides the dominant modification to the spectrum. Lev Landau and Isaak Pomeranchuk showed that the formulas for bremsstrahlung and pair creation in matter which had been formulated by Hans Bethe and Walter Heitler (the Bethe–Heitler formula) were inapplicable at high energy or high matter density. The effect of multiple Coulomb scattering by neighboring atoms reduces the cross sections for pair production and bremsstrahlung. Arkady Migdal developed a formula applicable at high energies or high matter densities which accounted for these effects. In 1994 a team of physicists at SLAC National Accelerator Laboratory experimentally confirmed the Landau–Pomeranchuk–Migdal effect. References Bibliography Scattering theory Lev Landau
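One common way to make the "wavelength longer than the mean free path" criterion quantitative is through the formation (coherence) length of the emitted photon; the expression below is the standard non-LPM formation length and is offered as an illustrative estimate, not as text recovered from the article:
\[
\ell_{f0} \;=\; \frac{2\hbar c\,E\,(E-k)}{(m_e c^{2})^{2}\,k},
\]
where \(E\) is the electron energy and \(k\) the photon energy. Suppression of the Bethe–Heitler spectrum sets in when multiple Coulomb scattering deflects the electron by more than the characteristic emission angle \(\sim m_e c^{2}/E\) within \(\ell_{f0}\). Since \(\ell_{f0}\) grows as \(k\) decreases, the condition is met first for soft photons, consistent with the QED behaviour described above.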
Landau–Pomeranchuk–Migdal effect
[ "Physics", "Chemistry" ]
466
[ "Particle physics stubs", "Scattering", "Scattering theory", "Particle physics" ]
9,387,033
https://en.wikipedia.org/wiki/Compound%20management
Compound management in the field of drug discovery refers to the systematic collection, storage, retrieval, and quality control of small molecule chemical compounds used in high-throughput screening and other research activities to identify hits that can be developed into candidate drugs. Drug discovery depends on methods by which many different chemicals are assayed for their activity. These chemicals are stored as physical quantities in a chemical library or libraries which are often assembled from both outside vendors and internal chemical synthesis efforts. These chemical libraries are used in high-throughput screening in the drug discovery hit to lead process. The chemical libraries in larger pharmaceutical companies are a critical part of the discovery process. These chemicals are stored in environmentally controlled conditions in small or large containers, often labeled with codes that pass back into a database. Each chemical in the storage bank must be monitored for shelf life, quantity, purity and other parameters, and its banked location. In some companies, the compounds can also include biological compounds, such as purified proteins or nucleic acids. The management of these chemical libraries, including renewal of outdated chemicals, databases containing the information, robotics often involved in fetching chemicals, and quality control of the storage environment is called Compound Management or Compound Control. Compound Management is often a significant expense, as well as career for one or more individuals who manage a chemical library at a research site. There are many books and journal articles devoted entirely or in part to compound management. It has become a critical technological component for high-throughput screening and chemical genomics. There are great challenges to be faced in the necessity of compound management, which are being surmounted by concerted efforts in the public and private domain. In 2008, authors at the National Institutes of Health's Chemical Genomics Center have released a paper showing the necessity of a highly automated, reliable and parallel compound management platform, in order to serve over 200,000 different compounds. In short, Compound Management requires inventory control of small molecules and biologics needed for assays and experiments, especially in high-throughput screening. It utilizes knowledge of chemistry, robotics, biology, and database management. The manager must also be acutely aware of safety standards in the handling and storing of radioactive, volatile, flammable and unstable compounds. Often, in large pharmaceutical companies, the chemical and biological compounds contained in compound libraries can number in the millions, making compound management and compound control important contributors to research and drug discovery. Outsourcing Because of the significant expenses and infrastructure required for accurate compound management (space requirements, robotics, IT support, analytical support, etc.) many companies choose to outsource this function to a company that specializes in this arena. It is important to work with a company that has significant experience in compound management due to the complexity of tracking not only inventory data, but also compound location, storage conditions, and compound integrity. 
This experience is also of paramount importance in knowing how to deal appropriately with the wide array of materials handled, including solids, liquids, volatile materials, sticky solids, oils, and gums, as well as hazardous, flammable, hygroscopic and toxic compounds. Customers can specify not only the quantity of material but also the exact vial and cap or plate for their specific application. The service provides enormous time savings, as researchers do not spend their valuable time weighing hundreds of compounds or getting them into the correct format for their assay. It also dramatically reduces disposal costs, since the exact amount of material required can be ordered rather than needing to order e.g. 100 g of material when only 0.1 g is needed for the experiment. The high-throughput analytical chemistry component of the provider allows rapid validation that compounds are the correct material at the desired purity. While controlled storage conditions minimize degradation, customers may use this service to validate that the material they originally sent to the outsourcing partner was correct and pure. Subsequently, the service allows re-evaluation of compounds that may have decomposed during long-term storage. The purification services complement the analytical services by allowing cost-effective, environmentally friendly recovery of partially degraded reactive intermediates and HTS compounds at a fraction of the cost of synthesizing or purchasing these materials. Conferences There are several conferences related to compound management. The best known is Compound Management & Integrity, although many chemistry and pharmaceutical conferences include talks or specific sections on the topic. References Drug discovery
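As a concrete illustration of the inventory-tracking problem described above, the record below sketches the kind of fields a compound-management database must carry per banked sample — identifiers, location, quantity, purity and storage/expiry data. The field names and thresholds are invented for the example and do not come from any particular system.

from dataclasses import dataclass
from datetime import date

@dataclass
class CompoundSample:
    """One banked sample in a hypothetical compound-management inventory."""
    barcode: str           # code on the container, linking back to the database
    structure_id: str      # registry identifier of the chemical entity
    location: str          # freezer / rack / position of the container
    amount_mg: float       # remaining quantity
    purity_pct: float      # last measured purity (e.g. by LC-MS)
    storage_temp_c: float  # controlled storage temperature
    expiry: date           # shelf-life limit; triggers re-analysis or renewal

    def needs_requalification(self, today: date) -> bool:
        # Flag samples whose shelf life has lapsed or whose purity has dropped
        # below a working threshold (threshold chosen arbitrarily here).
        return today > self.expiry or self.purity_pct < 90.0

# Example: flag an aged sample for quality control before it is plated for screening.
sample = CompoundSample("BC-000123", "REG-4567", "Freezer 2 / Rack B / A07",
                        amount_mg=4.2, purity_pct=87.5,
                        storage_temp_c=-20.0, expiry=date(2023, 6, 30))
print(sample.needs_requalification(date(2024, 1, 15)))  # True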
Compound management
[ "Chemistry", "Biology" ]
880
[ "Life sciences industry", "Medicinal chemistry", "Drug discovery" ]
9,387,775
https://en.wikipedia.org/wiki/K%C3%A1rm%C3%A1n%E2%80%93Howarth%20equation
In isotropic turbulence the Kármán–Howarth equation (after Theodore von Kármán and Leslie Howarth 1938), which is derived from the Navier–Stokes equations, is used to describe the evolution of non-dimensional longitudinal autocorrelation. Mathematical description Consider a two-point velocity correlation tensor for homogeneous turbulence For isotropic turbulence, this correlation tensor can be expressed in terms of two scalar functions, using the invariant theory of full rotation group, first derived by Howard P. Robertson in 1940, where is the root mean square turbulent velocity and are turbulent velocity in all three directions. Here, is the longitudinal correlation and is the lateral correlation of velocity at two different points. From continuity equation, we have Thus uniquely determines the two-point correlation function. Theodore von Kármán and Leslie Howarth derived the evolution equation for from Navier–Stokes equation as where uniquely determines the triple correlation tensor Loitsianskii's invariant L.G. Loitsianskii derived an integral invariant for the decay of the turbulence by taking the fourth moment of the Kármán–Howarth equation in 1939, i.e., If decays faster than as and also in this limit, if we assume that vanishes, we have the quantity, which is invariant. Lev Landau and Evgeny Lifshitz showed that this invariant is equivalent to conservation of angular momentum. However, Ian Proudman and W.H. Reid showed that this invariant does not hold always since is not in general zero, at least, in the initial period of the decay. In 1967, Philip Saffman showed that this integral depends on the initial conditions and the integral can diverge under certain conditions. Decay of turbulence For the viscosity dominated flows, during the decay of turbulence, the Kármán–Howarth equation reduces to a heat equation once the triple correlation tensor is neglected, i.e., With suitable boundary conditions, the solution to above equation is given by so that, See also Kármán–Howarth–Monin equation (Andrei Monin's anisotropic generalization of the Kármán–Howarth relation) Batchelor–Chandrasekhar equation (homogeneous axisymmetric turbulence) Corrsin equation (Kármán–Howarth relation for scalar transport equation) Chandrasekhar invariant (density fluctuation invariant in isotropic homogeneous turbulence) References Equations of fluid dynamics Fluid dynamics Turbulence
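The displayed equations above were lost in extraction; the following restates the main results in one common textbook notation (sign conventions for the triple correlation vary between sources, so treat this as a reconstruction rather than a verbatim recovery):
\[
\frac{\partial}{\partial t}\bigl(u'^{2} f\bigr)
  = u'^{3}\left(\frac{\partial h}{\partial r}+\frac{4h}{r}\right)
  + 2\nu\,u'^{2}\left(\frac{\partial^{2} f}{\partial r^{2}}+\frac{4}{r}\,\frac{\partial f}{\partial r}\right),
\]
with \(f(r,t)\) the longitudinal correlation, \(h(r,t)\) the longitudinal triple correlation and \(u'\) the root-mean-square velocity. Loitsianskii's integral is
\[
\Lambda = u'^{2}\int_{0}^{\infty} r^{4}\, f(r,t)\,\mathrm{d}r,
\]
and in the final, viscosity-dominated period of decay (triple correlation neglected) the equation reduces to a heat-type equation whose self-similar solution is
\[
f(r,t) = \exp\!\left(-\frac{r^{2}}{8\nu t}\right).
\]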
Kármán–Howarth equation
[ "Physics", "Chemistry", "Engineering" ]
491
[ "Equations of fluid dynamics", "Turbulence", "Equations of physics", "Chemical engineering", "Piping", "Fluid dynamics" ]
9,387,972
https://en.wikipedia.org/wiki/PH-sensitive%20polymers
pH sensitive or pH responsive polymers are materials which will respond to the changes in the pH of the surrounding medium by varying their dimensions. Materials may swell, collapse, or change depending on the pH of their environment. This behavior is exhibited due to the presence of certain functional groups in the polymer chain. pH-sensitive materials can be either acidic or basic, responding to either basic or acidic pH values. These polymers can be designed with many different architectures for different applications. Key uses of pH sensitive polymers are controlled drug delivery systems, biomimetics, micromechanical systems, separation processes, and surface functionalization. Types pH sensitive polymers can be broken into two categories: those with acidic groups (such as -COOH and -SO3H) and those with basic groups (-NH2). The mechanism of response is the same for both, only the stimulus varies. The general form of the polymer is a backbone with functional "pendant groups" that hang off of it. When these functional groups become ionized in certain pH levels, they acquire a charge (+/-). Repulsions between like charges cause the polymers to change shape. Polyacids Polyacids, also known as anionic polymers, are polymers that have acidic groups. Examples of acidic functional groups include carboxylic acids (-COOH), sulfonic acids (-SO3H), phosphonic acids, and boronic acids. Polyacids accept protons at low pH values. At higher pH values, they deprotonate and become negatively charged. The negative charges create a repulsion that causes the polymer to swell. This swelling behavior is observed when the pH is greater than the pKa of the polymer. Examples include polymethyl methacrylate polymers (pharmacologyonline 1 (2011)152-164) and cellulose acetate phthalate. Polybases Polybases are the basic equivalent of polyacids and are also known as cationic polymers. They accept protons at low pH like polyacids do, but they then become positively charged. In contrast, at higher pH values they are neutral. Swelling behavior is seen when the pH is less than the pKa of the polymer. Natural polymers Although many sources talk about synthetic pH sensitive polymers, natural polymers can also display pH-responsive behavior. Examples include chitosan, hyaluronic acid, alginic acid and dextran. Chitosan, a frequently used example, is cationic. Since DNA is negatively charged, DNA could be attached to chitosan as a way to deliver genes to cells. Alginic acid, on the other hand, is anionic. It is often evaluated as a calcium-salt for drug delivery applications(International journal of biological macromolecules 75 (2015) 409-17) . Natural polymers have appeal because they display good biocompatibility, which makes them useful for biomedical applications. However, a disadvantage to natural polymers is that researchers can have more control over the structure of synthetic polymers and so can design those polymers for specific applications. Multi-stimuli polymers Polymers can be designed to respond to more than one external stimulus, such as pH and temperature. Often, these polymers are structured as a copolymer where each polymer displays one type of response. Structure pH sensitive polymers have been created with linear block copolymer, star, branched, dendrimer, brush, and comb architectures. Polymers of different architectures will self-assemble into different structures. This self-assembly can occur due to the nature of the polymer and the solvent, or due to a change in pH. 
pH changes can also cause the larger structure to swell or deswell. For example, block copolymers often form micelles, as will star polymers and branched polymers. However, star and branched polymers can form rod or worm-shaped micelles rather than the typical spheres. Brush polymers are usually used for modifying surfaces since their structure doesn’t allow them to form a larger structure like a micelle. Response to change in pH Often, the response to different pH values is swelling or deswelling. For example, polyacids release protons to become negatively charged at high pH. Since polymer chains are often in close proximity to other parts of the same chain or to other chains, like-charged parts of the polymer repel each other. This repulsion leads to a swelling of the polymer. Polymers can also form micelles (spheres) in response to a change in pH. This behavior can occur with linear block copolymers. If the different blocks of the copolymer have different properties, they can form micelles with one type of block on the inside and one type on the outside. For example, in water the hydrophobic blocks of a copolymer could end up on the inside of a micelle, with hydrophilic blocks on the outside. Additionally, a change in pH could cause micelles to swap their inner and outer molecules depending on the properties of the polymers involved. Responses other than simply swelling and deswelling with a change in pH are possible as well. Researchers have created polymers that undergo a sol-gel transition (from a solution to a gel) with a change in pH, but which also change from being a stiff gel to a soft gel for certain pH values. Synthesis pH sensitive polymers can be synthesized using several common polymerization methods. Functional groups may need to be protected so that they do not react depending on the type of polymerization. The masking can be removed after polymerization so that they regain their pH-sensitive functionality. Living polymerization is often used for making pH sensitive polymers because molecular weight distribution of the final polymers can be controlled. Examples include group transfer polymerization (GTP), atom transfer radical polymerization (ATRP), and reversible addition-fragmentation chain transfer (RAFT). Graft copolymers are a popular type to synthesize because their structure is a backbone with branches. The composition of the branches can be changed to achieve different properties. Hydrogels can be produced using emulsion polymerization. Characterization Contact angle Several methods can be used to measure the contact angle of a water drop on the surface of a polymer. The contact angle value is used to quantify wettability or hydrophobicity of the polymer. Degree of swelling Equal to (swollen weight-deswelled weight)/deswelled weight *100% and determined by massing polymers before and after swelling. This indicates how much the polymer swelled upon a change in pH. pH critical point The pH at which a significant structural change in how the molecules are arranged is observed. This structural change does not involve breaking bonds, but rather a change in conformation. For example, a swelling/deswelling transition would constitute a reversible conformational change. The value of the pH critical point can be determined by examining swelling percentage as a function of pH. Researchers aim to design molecules that transition at a pH that matters for the given application. 
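The two quantities just defined lend themselves to a short worked example. The sketch below computes the degree of swelling from the formula given above, and also estimates the ionized fraction of an acidic pendant group from the Henderson–Hasselbalch relation (standard acid–base chemistry, added here for illustration and not stated in the article); all numerical values are made up.

def degree_of_swelling(swollen_g: float, deswollen_g: float) -> float:
    """Degree of swelling (%) = (swollen - deswollen) / deswollen * 100, as defined above."""
    return (swollen_g - deswollen_g) / deswollen_g * 100.0

def ionized_fraction_acid(pH: float, pKa: float) -> float:
    """Fraction of acidic groups deprotonated at a given pH (Henderson–Hasselbalch)."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# Hypothetical polyacid hydrogel with pendant-group pKa ~ 5.5 (illustrative value):
for pH in (3.0, 5.5, 8.0):
    print(pH, round(ionized_fraction_acid(pH, pKa=5.5), 2))
# Below the pKa few groups are charged (little swelling); above it most are,
# consistent with the swelling behaviour described for polyacids.

print(degree_of_swelling(swollen_g=2.6, deswollen_g=1.0))  # 160.0 % swelling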
Surface changes Confocal microscopy, scanning electron microscopy, Raman spectroscopy, and atomic force microscopy are all used to determine how the surface of a polymer changes in response to pH. Applications Purification and separation pH sensitive polymers have been considered for use in membranes. A change in pH could change the ability of the polymer to let ions through, allowing it to act as a filter. Surface modification pH sensitive polymers have been used to modify the surfaces of materials. For example, they can be used to change the wettability of a surface. Biomedical use pH sensitive polymers have been used for drug delivery. For example, they can be used to release insulin in specific quantities. References Smart materials
PH-sensitive polymers
[ "Materials_science", "Engineering" ]
1,582
[ "Smart materials", "Materials science" ]
2,190,732
https://en.wikipedia.org/wiki/Tree%20%28descriptive%20set%20theory%29
In descriptive set theory, a tree on a set is a collection of finite sequences of elements of such that every prefix of a sequence in the collection also belongs to the collection. Definitions Trees The collection of all finite sequences of elements of a set is denoted . With this notation, a tree is a nonempty subset of , such that if is a sequence of length in , and if , then the shortened sequence also belongs to . In particular, choosing shows that the empty sequence belongs to every tree. Branches and bodies A branch through a tree is an infinite sequence of elements of , each of whose finite prefixes belongs to . The set of all branches through is denoted and called the body of the tree . A tree that has no branches is called wellfounded; a tree with at least one branch is illfounded. By Kőnig's lemma, a tree on a finite set with an infinite number of sequences must necessarily be illfounded. Terminal nodes A finite sequence that belongs to a tree is called a terminal node if it is not a prefix of a longer sequence in . Equivalently, is terminal if there is no element of such that that . A tree that does not have any terminal nodes is called pruned. Relation to other types of trees In graph theory, a rooted tree is a directed graph in which every vertex except for a special root vertex has exactly one outgoing edge, and in which the path formed by following these edges from any vertex eventually leads to the root vertex. If is a tree in the descriptive set theory sense, then it corresponds to a graph with one vertex for each sequence in , and an outgoing edge from each nonempty sequence that connects it to the shorter sequence formed by removing its last element. This graph is a tree in the graph-theoretic sense. The root of the tree is the empty sequence. In order theory, a different notion of a tree is used: an order-theoretic tree is a partially ordered set with one minimal element in which each element has a well-ordered set of predecessors. Every tree in descriptive set theory is also an order-theoretic tree, using a partial ordering in which two sequences and are ordered by if and only if is a proper prefix of . The empty sequence is the unique minimal element, and each element has a finite and well-ordered set of predecessors (the set of all of its prefixes). An order-theoretic tree may be represented by an isomorphic tree of sequences if and only if each of its elements has finite height (that is, a finite set of predecessors). Topology The set of infinite sequences over (denoted as ) may be given the product topology, treating X as a discrete space. In this topology, every closed subset of is of the form for some pruned tree . Namely, let consist of the set of finite prefixes of the infinite sequences in . Conversely, the body of every tree forms a closed set in this topology. Frequently trees on Cartesian products are considered. In this case, by convention, we consider only the subset of the product space, , containing only sequences whose even elements come from and odd elements come from (e.g., ). Elements in this subspace are identified in the natural way with a subset of the product of two spaces of sequences, (the subset for which the length of the first sequence is equal to or 1 more than the length of the second sequence). In this way we may identify with for over the product space. We may then form the projection of , . 
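Since a tree here is just a prefix-closed set of finite sequences, the definitions above translate directly into code. The sketch below is illustrative only (finite examples cannot exhibit infinite branches); it checks prefix-closure and lists terminal nodes for a small tree on X = {0, 1}.

def is_tree(S):
    """A nonempty set of tuples is a tree iff it is closed under taking prefixes."""
    return bool(S) and all(s[:k] in S for s in S for k in range(len(s)))

def terminal_nodes(S):
    """Sequences in S that are not a proper prefix of any longer sequence in S."""
    return {s for s in S if not any(len(t) > len(s) and t[:len(s)] == s for t in S)}

# A small tree on X = {0, 1}; () is the empty sequence, which every tree contains.
T = {(), (0,), (1,), (0, 0), (0, 1), (0, 1, 1)}

print(is_tree(T))                 # True: every prefix of a member is a member
print(sorted(terminal_nodes(T)))  # [(0, 0), (0, 1, 1), (1,)]
# A pruned tree has no terminal nodes; any finite tree such as T necessarily has
# them, so pruned (and illfounded) trees are intrinsically infinite objects.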
See also Laver tree, a type of tree used in set theory as part of a notion of forcing References Descriptive set theory Trees (set theory) Determinacy
Tree (descriptive set theory)
[ "Mathematics" ]
754
[ "Game theory", "Determinacy" ]
2,190,765
https://en.wikipedia.org/wiki/Internal%20set
In mathematical logic, in particular in model theory and nonstandard analysis, an internal set is a set that is a member of a model. The concept of internal sets is a tool in formulating the transfer principle, which concerns the logical relation between the properties of the real numbers R, and the properties of a larger field denoted *R called the hyperreal numbers. The field *R includes, in particular, infinitesimal ("infinitely small") numbers, providing a rigorous mathematical justification for their use. Roughly speaking, the idea is to express analysis over R in a suitable language of mathematical logic, and then point out that this language applies equally well to *R. This turns out to be possible because at the set-theoretic level, the propositions in such a language are interpreted to apply only to internal sets rather than to all sets (note that the term "language" is used in a loose sense in the above). Edward Nelson's internal set theory is an axiomatic approach to nonstandard analysis (see also Palmgren at constructive nonstandard analysis). Conventional infinitary accounts of nonstandard analysis also use the concept of internal sets. Internal sets in the ultrapower construction Relative to the ultrapower construction of the hyperreal numbers as equivalence classes of sequences of reals, an internal subset [An] of *R is one defined by a sequence of real sets , where a hyperreal is said to belong to the set if and only if the set of indices n such that , is a member of the ultrafilter used in the construction of *R. More generally, an internal entity is a member of the natural extension of a real entity. Thus, every element of *R is internal; a subset of *R is internal if and only if it is a member of the natural extension of the power set of R; etc. Internal subsets of the reals Every internal subset of *R that is a subset of (the embedded copy of) R is necessarily finite (see Theorem 3.9.1 Goldblatt, 1998). In other words, every internal infinite subset of the hyperreals necessarily contains nonstandard elements. See also Standard part function Superstructure (mathematics) References Goldblatt, Robert. Lectures on the hyperreals. An introduction to nonstandard analysis. Graduate Texts in Mathematics, 188. Springer-Verlag, New York, 1998. Nonstandard analysis
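The membership condition described in the ultrapower paragraph is easier to read with the symbols restored; in the usual notation (reconstructed here, not recovered verbatim from the article) it is
\[
[a_n] \in [A_n] \quad\Longleftrightarrow\quad \{\, n \in \mathbb N : a_n \in A_n \,\} \in \mathcal U ,
\]
where \([a_n]\) is the hyperreal represented by the sequence \((a_n)\), \([A_n]\) the internal set represented by the sequence of real sets \((A_n)\), and \(\mathcal U\) the ultrafilter used in the construction of \({}^{*}\mathbb R\).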
Internal set
[ "Mathematics" ]
498
[ "Mathematical objects", "Infinity", "Nonstandard analysis", "Mathematics of infinitesimals", "Model theory" ]
2,191,091
https://en.wikipedia.org/wiki/Post-perovskite
Post-perovskite (pPv) is a high-pressure phase of magnesium silicate (MgSiO3). It is composed of the prime oxide constituents of the Earth's rocky mantle (MgO and SiO2), and its pressure and temperature for stability imply that it is likely to occur in portions of the lowermost few hundred km of Earth's mantle. The post-perovskite phase has implications for the D′′ layer, which influences the convective mixing in the mantle responsible for plate tectonics. Post-perovskite has the same crystal structure as the synthetic solid compound CaIrO3, and is often referred to as the "CaIrO3-type phase of MgSiO3" in the literature. The crystal system of post-perovskite is orthorhombic, its space group is Cmcm, and its structure is a stacked SiO6-octahedral sheet along the b axis. The name "post-perovskite" derives from silicate perovskite, the stable phase of MgSiO3 throughout most of Earth's mantle, which has the perovskite structure. The prefix "post-" refers to the fact that it occurs after perovskite structured MgSiO3 as pressure increases (and historically, the progression of high pressure mineral physics). At upper mantle pressures, nearest Earth's surface, MgSiO3 persists as the silicate mineral enstatite, a pyroxene rock forming mineral found in igneous and metamorphic rocks of the crust. History The CaIrO3-type phase of MgSiO3 phase was discovered in 2004 using the laser-heated diamond anvil cell (LHDAC) technique by a group at the Tokyo Institute of Technology and, independently, by researchers from the Swiss Federal Institute of Technology (ETH Zurich) and Japan Agency for Marine-Earth Science and Technology who used a combination of quantum-mechanical simulations and LHDAC experiments. The TIT group's paper appeared in the journal Science. The ETH/JAM-EST collaborative paper and TIT group's second paper appeared two months later in the journal Nature. This simultaneous discovery was preceded by S. Ono's experimental discovery of a similar phase, possessing exactly the same structure, in Fe2O3. Importance in Earth's mantle Post-perovskite phase is stable above 120 GPa at 2500 K, and exhibits a positive Clapeyron slope such that the transformation pressure increases with temperature. Because these conditions correspond to a depth of about 2600 km and the D" seismic discontinuity occurs at similar depths, the perovskite to post-perovskite phase change is considered to be the origin of such seismic discontinuities in this region. Post-perovskite also holds great promise for mapping experimentally determined information regarding the temperatures and pressures of its transformation into direct information regarding temperature variations in the D" layer once the seismic discontinuities attributed to this transformation have been sufficiently mapped out. Such information can be used, for example, to: 1) better constrain the amount of heat leaving Earth's core 2) determine whether or not subducted slabs of oceanic lithosphere reach the base of the mantle 3) help delineate the degree of chemical heterogeneity in the lower mantle 4) find out whether or not the lowermost mantle is unstable to convective instabilities that result in upwelling hot thermal plumes of rock which rise up and possibly trace out volcanic hot spot tracks at Earth's surface. 
For these reasons the finding of the MgSiO3-post-perovskite phase transition is considered by many geophysicists to be the most important discovery in deep Earth science in several decades, and was only made possible by the concerted efforts of mineral physics scientists around the world as they sought to increase the range and quality of LHDAC experiments and as ab initio calculations attained predictive power. Physical properties The sheet structure of post-perovskite makes the compressibility of the b axis higher than that of the a or c axis. This anisotropy may yield the morphology of a platy crystal habit parallel to the (010) plane; the seismic anisotropy observed in the D" region might qualitatively (but not quantitatively) be explained by this characteristic. Theory predicted the (110) slip associated with particularly favorable stacking faults and confirmed by later experiments. Some theorists predicted other slip systems, which await experimental confirmation. In 2005 and 2006 Ono and Oganov published two papers predicting that post-perovskite should have high electrical conductivity, perhaps two orders of magnitude higher than perovskite's conductivity. In 2008 Hirose's group published an experimental report confirming this prediction. A highly conductive post-perovskite layer provides an explanation for the observed decadal variations of the length of day. Chemical properties Another potentially important effect that needs to be better characterized for the post-perovskite phase transition is the influence of other chemical components that are known to be present to some degree in Earth's lowermost mantle. The phase transition pressure (characterized by a two-phase loop in this system), was initially thought to decrease as the FeO content increases, but some recent experiments suggest the opposite. However, it is possible that the effect of Fe2O3 is more relevant as most of iron in post-perovskite is likely to be trivalent (ferric). Such components as Al2O3 or the more oxidized Fe2O3 also affect the phase transition pressure, and might have strong mutual interactions with one another. The influence of variable chemistry present in the Earth's lowermost mantle upon the post-perovskite phase transition raises the issue of both thermal and chemical modulation of its possible appearance (along with any associated discontinuities) in the D" layer. Summary Experimental and theoretical work on the perovskite/post-perovskite phase transition continues, while many important features of this phase transition remain ill-constrained. For example, the Clapeyron slope (characterized by the Clausius–Clapeyron relation) describing the increase in the pressure of the phase transition with increasing temperature is known to be relatively high in comparison to other solid-solid phase transitions in the Earth's mantle, however, the experimentally determined value varies from about 5 MPa/K to as high as 13 MPa/K. Ab initio calculations give a tighter range, between 7.5 MPa/K and 9.6 MPa/K, and are probably the most reliable estimates available today. The difference between experimental estimates arises primarily because different materials were used as pressure standards in Diamond Anvil Cell experiments. 
A well-characterized equation of state for the pressure standard, when combined with high energy synchrotron generated X-ray diffraction patterns of the pressure standard (which is mixed in with the experimental sample material), yields information on the pressure-temperature conditions of the experiment. However, as these extreme pressures and temperatures have not been sufficiently explored in experiments, the equations of state for many popular pressure standards are not yet well characterized and often yield different results. Another source of uncertainty in LHDAC experiments is the measurement of temperature from a sample's thermal radiation, which is required to obtain the pressure from the equation of state of the pressure standard. In laser-heated experiments at such high pressures (over 1 million atmospheres), the samples are necessarily small and numerous approximations (e.g., gray body) are required to obtain estimates of the temperature. See also Ferropericlase References External links A synthesis on the discovery of post-perovskite and its geological implications (in French) Petrology Silicate minerals High pressure science Perovskites
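To see why the Clapeyron slope matters for mapping seismic discontinuity topography into temperature, a back-of-the-envelope conversion helps. The sketch below uses the ab initio slope range quoted above together with an assumed lowermost-mantle pressure gradient (density about 5.5 g/cm3 and g about 10 m/s2, i.e. roughly 55 MPa per km — an assumption made for illustration, not a value from the article) to translate a lateral temperature anomaly into a shift in transition depth.

# Rough conversion: how far does the perovskite -> post-perovskite boundary move
# if the D'' region is locally hotter by delta_T? (Illustrative numbers only.)
RHO = 5500.0        # kg/m^3, assumed lowermost-mantle density
G = 10.0            # m/s^2, approximate gravitational acceleration
DP_DZ = RHO * G     # Pa per metre of depth, ~55 MPa/km

def depth_shift_km(clapeyron_mpa_per_k: float, delta_T: float) -> float:
    """Depth change (km) of the phase boundary for a temperature anomaly delta_T (K)."""
    dP = clapeyron_mpa_per_k * 1e6 * delta_T   # pressure shift in Pa
    return dP / DP_DZ / 1000.0                 # convert to kilometres

for slope in (7.5, 9.6):                       # ab initio range quoted in the text
    print(slope, "MPa/K ->", round(depth_shift_km(slope, delta_T=500.0), 1), "km per 500 K")
# A positive slope means the boundary sits deeper where the mantle is hotter,
# which is what lets mapped discontinuity topography constrain D'' temperatures.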
Post-perovskite
[ "Physics" ]
1,631
[ "High pressure science", "Applied and interdisciplinary physics" ]
2,191,185
https://en.wikipedia.org/wiki/Neutron%20cross%20section
In nuclear physics, the concept of a neutron cross section is used to express the likelihood of interaction between an incident neutron and a target nucleus. The neutron cross section σ can be defined as the area in cm2 for which the number of neutron-nuclei reactions taking place is equal to the product of the number of incident neutrons that would pass through the area and the number of target nuclei. In conjunction with the neutron flux, it enables the calculation of the reaction rate, for example to derive the thermal power of a nuclear power plant. The standard unit for measuring the cross section is the barn, which is equal to 10−28 m2 or 10−24 cm2. The larger the neutron cross section, the more likely a neutron will react with the nucleus. An isotope (or nuclide) can be classified according to its neutron cross section and how it reacts to an incident neutron. Nuclides that tend to absorb a neutron and either decay or keep the neutron in its nucleus are neutron absorbers and will have a capture cross section for that reaction. Isotopes that undergo fission are fissionable fuels and have a corresponding fission cross section. The remaining isotopes will simply scatter the neutron, and have a scatter cross section. Some isotopes, like uranium-238, have nonzero cross sections of all three. Isotopes which have a large scatter cross section and a low mass are good neutron moderators (see chart below). Nuclides which have a large absorption cross section are neutron poisons if they are neither fissile nor undergo decay. A poison that is purposely inserted into a nuclear reactor for controlling its reactivity in the long term and improve its shutdown margin is called a burnable poison. Parameters of interest The neutron cross section, and therefore the probability of an neutron-nucleus interaction, depends on: the target type (hydrogen, uranium...), the type of nuclear reaction (scattering, fission...). the incident particle energy, also called speed or temperature (thermal, fast...), and, to a lesser extent, of: its relative angle between the incident neutron and the target nuclide, the target nuclide temperature. Target type dependence The neutron cross section is defined for a given type of target particle. For example, the capture cross section of deuterium 2H is much smaller than that of common hydrogen 1H. This is the reason why some reactors use heavy water (in which most of the hydrogen is deuterium) instead of ordinary light water as moderator: fewer neutrons are lost by capture inside the medium, hence enabling the use of natural uranium instead of enriched uranium. This is the principle of a CANDU reactor. Type of reaction dependence The likelihood of interaction between an incident neutron and a target nuclide, independent of the type of reaction, is expressed with the help of the total cross section σT. However, it may be useful to know if the incoming particle bounces off the target (and therefore continue travelling after the interaction) or disappears after the reaction. For that reason, the scattering and absorption cross sections σS and σA are defined and the total cross section is simply the sum of the two partial cross sections: Absorption cross section If the neutron is absorbed when approaching the nuclide, the atomic nucleus moves up on the table of isotopes by one position. For instance, 235U becomes 236*U with the * indicating the nucleus is highly energized. This energy has to be released and the release can take place through any of several mechanisms. 
The simplest way for the release to occur is for the neutron to be ejected by the nucleus. If the neutron is emitted immediately, it acts the same as in other scattering events. The nucleus may emit gamma radiation. The nucleus may β− decay, where a neutron is converted into a proton, an electron and an electron-type antineutrino (the antiparticle of the neutrino) About 81% of the 236*U nuclei are so energized that they undergo fission, releasing the energy as kinetic motion of the fission fragments, also emitting between one and five free neutrons. Nuclei that undergo fission as their predominant decay method after neutron capture include 233U, 235U, 237U, 239Pu, 241Pu. Nuclei that predominantly absorb neutrons and then emit beta particle radiation lead to these isotopes, e.g., 232Th absorbs a neutron and becomes 233*Th, which beta decays to become 233Pa, which in turn beta decays to become 233U. Isotopes that undergo beta decay transmute from one element to another element. Those that undergo gamma or X-ray emission do not cause a change in element or isotope. Scattering cross-section The scattering cross-section can be further subdivided into coherent scattering and incoherent scattering, which is caused by the spin dependence of the scattering cross-section and, for a natural sample, presence of different isotopes of the same element in the sample. Because neutrons interact with the nuclear potential, the scattering cross-section varies for different isotopes of the element in question. A very prominent example is hydrogen and its isotope deuterium. The total cross-section for hydrogen is over 10 times that of deuterium, mostly due to the large incoherent scattering length of hydrogen. Some metals are rather transparent to neutrons, aluminum and zirconium being the two best examples of this. Incident particle energy dependence For a given target and reaction, the cross section is strongly dependent on the neutron speed. In the extreme case, the cross section can be, at low energies, either zero (the energy for which the cross section becomes significant is called threshold energy) or much larger than at high energies. Therefore, a cross section should be defined either at a given energy or should be averaged in an energy range (or group). As an example, the plot on the right shows that the fission cross section of uranium-235 is low at high neutron energies but becomes higher at low energies. Such physical constraints explain why most operational nuclear reactors use a neutron moderator to reduce the energy of the neutron and thus increase the probability of fission which is essential to produce energy and sustain the chain reaction. A simple estimation of energy dependence of any kind of cross section is provided by the Ramsauer model, which is based on the idea that the effective size of a neutron is proportional to the breadth of the probability density function of where the neutron is likely to be, which itself is proportional to the neutron's thermal de Broglie wavelength. Taking as the effective radius of the neutron, we can estimate the area of the circle in which neutrons hit the nuclei of effective radius as While the assumptions of this model are naive, it explains at least qualitatively the typical measured energy dependence of the neutron absorption cross section. For neutrons of wavelength much larger than typical radius of atomic nuclei (1–10 fm, E = 10–1000 keV) can be neglected. 
For these low energy neutrons (such as thermal neutrons) the cross section is inversely proportional to neutron velocity. This explains the advantage of using a neutron moderator in fission nuclear reactors. On the other hand, for very high energy neutrons (over 1 MeV), can be neglected, and the neutron cross section is approximately constant, determined just by the cross section of atomic nuclei. However, this simple model does not take into account so called neutron resonances, which strongly modify the neutron cross section in the energy range of 1 eV–10 keV, nor the threshold energy of some nuclear reactions. Target temperature dependence Cross sections are usually measured at 20 °C. To account for the dependence with temperature of the medium (viz. the target), the following formula is used: where σ is the cross section at temperature T, and σ0 the cross section at temperature T0 (T and T0 in kelvins). The energy is defined at the most likely energy and velocity of the neutron. The neutron population consists of a Maxwellian distribution, and hence the mean energy and velocity will be higher. Consequently, also a Maxwellian correction-term √π has to be included when calculating the cross-section Equation 38. Doppler broadening The Doppler broadening of neutron resonances is a very important phenomenon and improves nuclear reactor stability. The prompt temperature coefficient of most thermal reactors is negative, owing to the nuclear Doppler effect. Nuclei are located in atoms which are themselves in continual motion owing to their thermal energy (temperature). As a result of these thermal motions, neutrons impinging on a target appears to the nuclei in the target to have a continuous spread in energy. This, in turn, has an effect on the observed shape of resonance. The resonance becomes shorter and wider than when the nuclei are at rest. Although the shape of resonances changes with temperature, the total area under the resonance remains essentially constant. But this does not imply constant neutron absorption. Despite the constant area under resonance a resonance integral, which determines the absorption, increases with increasing target temperature. This, of course, decreases coefficient k (negative reactivity is inserted). Link to reaction rate and interpretation Imagine a spherical target (shown as the dashed grey and red circle in the figure) and a beam of particles (in blue) "flying" at speed v (vector in blue) in the direction of the target. We want to know how many particles impact it during time interval dt. To achieve it, the particles have to be in the green cylinder in the figure (volume V). The base of the cylinder is the geometrical cross section of the target perpendicular to the beam (surface σ in red) and its height the length travelled by the particles during dt (length v dt): Noting n the number of particles per unit volume, there are n V particles in the volume V, which will, per definition of V, undergo a reaction. Noting r the reaction rate onto one target, it gives: It follows directly from the definition of the neutron flux : Assuming that there is not one but N targets per unit volume, the reaction rate R per unit volume is: Knowing that the typical nuclear radius r is of the order of 10−12 cm, the expected nuclear cross section is of the order of π r2 or roughly 10−24 cm2 (thus justifying the definition of the barn). However, if measured experimentally ( σ = R / (Φ N) ), the experimental cross sections vary enormously. 
As an example, for slow neutrons absorbed by the (n, γ) reaction the cross section in some cases (xenon-135) is as much as 2,650,000 barns, while the cross sections for transmutations by gamma-ray absorption are in the neighborhood of 0.001 barn ( has more examples). The so-called nuclear cross section is consequently a purely conceptual quantity representing how big the nucleus should be to be consistent with this simple mechanical model. Continuous versus average cross section Cross sections depend strongly on the incoming particle speed. In the case of a beam with multiple particle speeds, the reaction rate R is integrated over the whole range of energy: Where σ(E) is the continuous cross section, Φ(E) the differential flux and N the target atom density. In order to obtain a formulation equivalent to the mono energetic case, an average cross section is defined: Where is the integral flux. Using the definition of the integral flux Φ and the average cross section σ, the same formulation as before is found: Microscopic versus macroscopic cross section Up to now, the cross section referred to in this article corresponds to the microscopic cross section σ. However, it is possible to define the macroscopic cross section Σ which corresponds to the total "equivalent area" of all target particles per unit volume: where N is the atomic density of the target. Therefore, since the cross section can be expressed in cm2 and the density in cm−3, the macroscopic cross section is usually expressed in cm−1. Using the equation derived above, the reaction rate R can be derived using only the neutron flux Φ and the macroscopic cross section Σ: Mean free path The mean free path λ of a random particle is the average length between two interactions. The total length L that non perturbed particles travel during a time interval dt in a volume dV is simply the product of the length l covered by each particle during this time with the number of particles N in this volume: Noting v the speed of the particles and n is the number of particles per unit volume: It follows: Using the definition of the neutron flux Φ It follows: This average length L is however valid only for unperturbed particles. To account for the interactions, L is divided by the total number of reactions R to obtain the average length between each collision λ: From : It follows: where λ is the mean free path and Σ is the macroscopic cross section. Within stars Because 8Li and 12Be form natural stopping points on the table of isotopes for hydrogen fusion, it is believed that all of the higher elements are formed in very hot stars where higher orders of fusion predominate. A star like the Sun produces energy by the fusion of simple 1H into 4He through a series of reactions. It is believed that when the inner core exhausts its 1H fuel, the Sun will contract, slightly increasing its core temperature until 4He can fuse and become the main fuel supply. Pure 4He fusion leads to 8Be, which decays back to 2 4He; therefore the 4He must fuse with isotopes either more or less massive than itself to result in an energy producing reaction. When 4He fuses with 2H or 3H, it forms stable isotopes 6Li and 7Li respectively. The higher order isotopes between 8Li and 12C are synthesized by similar reactions between hydrogen, helium, and lithium isotopes. Typical cross sections Some cross sections that are of importance in a nuclear reactor are given in the following table. The thermal cross-section is averaged using a Maxwellian spectrum. 
The fast cross section is averaged using the uranium-235 fission spectrum. The cross sections were taken from the JEFF-3.1.1 library using JANIS software. * negligible, less than 0.1% of the total cross section and below the Bragg scattering cutoff External links XSPlot an online nuclear cross section plotter Neutron scattering lengths and cross-sections Periodic Table of Elements: Sorted by Cross Section (Thermal Neutron Capture) References cross section Nuclear physics
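Tying the definitions together numerically: the sketch below computes a macroscopic cross section, a mean free path, and the 1/v temperature scaling (sigma(T) = sigma0 * sqrt(T0/T)) that the temperature-dependence section's missing formula presumably expresses. The xenon-135 cross section of 2,650,000 barns is quoted in the text; the nuclide density used with it is an invented, illustrative figure, not data from the article.

import math

BARN_CM2 = 1.0e-24  # 1 barn = 1e-24 cm^2

def macroscopic_cross_section(sigma_barn: float, n_per_cm3: float) -> float:
    """Sigma = N * sigma, in cm^-1 (total 'equivalent area' per unit volume)."""
    return n_per_cm3 * sigma_barn * BARN_CM2

def mean_free_path_cm(sigma_barn: float, n_per_cm3: float) -> float:
    """lambda = 1 / Sigma: average distance a neutron travels between interactions."""
    return 1.0 / macroscopic_cross_section(sigma_barn, n_per_cm3)

def sigma_at_temperature(sigma0_barn: float, T_kelvin: float, T0_kelvin: float = 293.0) -> float:
    """1/v scaling of an absorption cross section: sigma(T) = sigma0 * sqrt(T0/T)."""
    return sigma0_barn * math.sqrt(T0_kelvin / T_kelvin)

# Xenon-135 thermal capture, sigma ~ 2.65e6 barn (value from the text);
# assume an illustrative nuclide density of 1e15 atoms/cm^3 for the poison:
print(mean_free_path_cm(2.65e6, 1.0e15))    # ~377 cm between captures at that dilution
print(sigma_at_temperature(2.65e6, 600.0))  # cross section shrinks as the medium heats up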
Neutron cross section
[ "Physics" ]
2,970
[ "Nuclear physics" ]
2,191,513
https://en.wikipedia.org/wiki/Chinese%20number%20gestures
Chinese number gestures are a method to signify the natural numbers one through ten using one hand. This method may have been developed to bridge the many varieties of Chinese—for example, the numbers 4 () and 10 () are hard to distinguish in some dialects. Some suggest that it was also used by business people during bargaining (i.e., to convey a bid by feeling the hand gesture in a sleeve) when they wish for more privacy in a public place. These gestures are fully integrated into Chinese Sign Language. Methods While the five digits on one hand can easily express the numbers one through five, six through ten have special signs that can be used in commerce or day-to-day communication. The gestures are rough representations of the Chinese numeral characters they represent. The system varies in practice, especially for the representation of "7" to "10". Two of the systems are listed below: Six (六) The little finger and thumb are extended (the extended thumb indicating one set of 5); the other fingers are closed, sometimes with the palm facing the signer. Seven (七) Northern China: The fingertips are all touching, pointed upwards, or just the fingertips of the thumb and first two fingers (the most common method); another method is similar to the eight (described below) except that the little finger is also extended. Northern China: The index finger and middle finger point outward, with the thumb extended upwards (the extended thumb indicating one set of 5), sometimes with the palm facing the observer. Coastal southern China: The index finger points down with the thumb extended, mimicking the shape of a "7". Eight (八) Northern China: The thumb and index finger make an "L" and the other fingers are closed, with the palm facing the observer. Northern China: The index finger and middle finger point down and with the fingertips optionally touching a horizontal surface, making the Chinese number 8 ("八"). Coastal southern China: The thumb, index finger, and middle finger are extended. Nine (九) Mainland China: The index finger makes a hook and the other fingers are closed, sometimes with the palm facing the signer. Taiwan: Four of the five digits of the hand are extended, the exception being the little finger. Hong Kong: Both methods are used. Ten (十) The fist is closed with the palm facing the signer, or the middle finger crosses an extended index finger, facing the observer. Some Chinese distinguish between zero and ten by having the thumb closed or open, respectively. The arms are raised and the index fingers of both hands are crossed in a "十" (making the Chinese number ten) with the palms facing in opposite directions, optionally with the hands placed in front of the signer's face. Use of the signs corresponds to the use of numbers in the Chinese language. For instance, the sign for five just as easily means fifty. A two followed by a six, using a single hand only, could mean 260 or 2600 etc. besides twenty-six. These signs also commonly refer to days of the week, starting from Monday, as well as months of the year, whose names in Chinese are enumerations. In different regions signs for numbers vary significantly. One may interpret the "8" sign as a "7". The "index finger-hook" symbol for 9, also means "death" in other contexts. The numbers zero through five are simpler: Zero (〇) Northern China: The fist is closed. This may be interpreted as 10 depending on the situation, though some Chinese distinguish between zero and ten by having the thumb closed or open, respectively. 
Coastal southern China: The thumb and index finger make a circle, with the other three fingers closed. One (一) The index finger is extended. Two (二) The index and middle fingers are extended. Three (三) The thumb and index finger are closed and the other three fingers are extended, or the thumb holds the little finger down in the palm and the middle three fingers are extended. Four (四) The thumb is held in the palm and the four fingers are extended. Five (五) All five digits are extended, or only the thumb is extended (either upwards or outwards) with the palm facing the signer. Counting with fingers is often different from expressing a specific number with a finger gesture. When counting, the palm can face either its owner or the audience, depending on the purpose. Before counting, all fingers are closed; counting starts by extending the thumb as the first, then the index finger as the second, until all fingers are extended as the fifth; counting can then be continued by folding the fingers in the same sequence, from thumb through little finger, for the sixth through the tenth. The same method is repeated for counting larger numbers. One can also start counting with all fingers extended. Some believe that for formal scenarios such as giving a speech or presentation, counting with the palm facing the audience and starting with all fingers extended is more polite, since the gesture of folding the fingers represents bowing. When playing drinking finger games (划拳, 猜拳), slightly different sets of finger gestures for the numbers are used. One of them is: Zero (〇) The fist is closed. One (一) The thumb is extended with all other fingers folded toward the palm. Two (二) The thumb and index finger make an "L", other fingers closed. Three (三) With the last two fingers closed and the remaining fingers (the thumb and the first two fingers) extended, or with the index finger and thumb closed and the last three fingers extended. Four (四) The thumb is held in the palm with the four fingers extended. Five (五) All five digits are extended. Gallery From 1 to 5 From 6 to 10 in North China From 6 to 10 in coastal South China From 6 to 10 in Taiwan The digit 0 The gesture of the digit 0 is used for showing numbers like 20, 30, 40, etc., where the left hand shows the tens digit and the right hand shows the digit 0. See also Chinese numerals Finger-counting Finger binary Hand signaling (open outcry) Nonverbal communication Numbers in Chinese culture Sign language References Finger-counting Chinese language Chinese culture Numerals
Chinese number gestures
[ "Mathematics" ]
1,307
[ "Numeral systems", "Numerals", "Finger-counting" ]
2,191,629
https://en.wikipedia.org/wiki/Atomix%20%28video%20game%29
Atomix is a puzzle video game developed by Günter Krämer (as "Softtouch") and published by Thalion Software, released for the Amiga and other personal computers in late 1990. The object of the game is to assemble molecules from compound atoms by moving the atoms on a two-dimensional playfield. Atomix was received positively; reviewers noted the game's addictiveness and enjoyable gameplay, though criticized its repetitiveness. Gameplay Atomix takes place on a playfield consisting of a number of walls, with the atoms scattered throughout. The player is tasked with assembling a molecule from the atoms. The atoms must be arranged to exactly match the molecule displayed on the left side of the screen. The player can choose an atom and move it in any of the four cardinal directions. A moved atom keeps sliding in one direction until it hits a wall or another atom. Solving the puzzles requires strategic planning in moving the atoms, and on later levels with little free space, even finding room for the completed molecule can be a problem. Once the molecule is assembled, the player is given a score; the faster the puzzle was completed, the higher the score. Each puzzle must be completed within a time limit. A portion of the player's score can be spent to restart a failed puzzle. The entire game consists of 30 puzzles of increasing difficulty. In addition, after every five puzzles, there is a bonus level where the player must move laboratory flasks filled with various amounts of liquid to arrange them from empty to full. The game also offers a two-player mode, where two players work on the same puzzle; they take turns, each lasting up to thirty seconds. Development Amiga Format reviewed a pre-release version in its May 1990 issue. It was almost a complete version of the game although it lacked sound. Initially the game was released for Amiga, Atari ST and the IBM PC; as of May 1990, the C64 version was not yet planned, and was only released a few months later. A ZX Spectrum version was also planned. It was to be distributed by U.S. Gold, but was never released. The game was published for Enterprise 128 in 2006, and this version was written by Zoltán Povázsay from Hungary. A clone for the Atari Jaguar called Atomic was released in 2006, written by Sébastien Briais (AKA Seb from the Removers). A second version called Atomic Reloaded was released in 2009. Reception Atomix received warm reactions from reviewers. They stated that it was highly enjoyable and addictive despite its high difficulty level. Reviewers also pointed out the possible educational application of the game. However, certain reviewers criticized the game for its repetitiveness and stated that it lacked replayability. Some reviewers also wrote about the game's unoriginality, noting similarities to the earlier games Xor and Leonardo. Graphics were generally considered adequate, though not spectacular; Zzap!64 called them "a bit dull and repetitive" and "simplistic, but slick and effective", while CU Amiga remarked that despite their simplicity, they "create a nice, tidy display". The soundtrack was found enjoyable, though the Commodore Format reviewer considered it annoyingly repetitive. Atomix has been the subject of scientific research in computational complexity theory. Like Sokoban, when generalized to puzzles of arbitrary sizes, the problem of determining whether an Atomix puzzle has a solution is PSPACE-complete. Some heuristic approaches have been considered. 
Legacy Several open source clones of Atomix exist: Atomiks, GNOME Atomix, KAtomic and WAtomic. References Notes Bibliography 1990 video games Amiga games Atari ST games Cancelled ZX Spectrum games Commodore 64 games DOS games Multiplayer and single-player video games PSPACE-complete problems Puzzle video games Thalion Software games Video games developed in Germany
Atomix (video game)
[ "Mathematics" ]
777
[ "PSPACE-complete problems", "Mathematical problems", "Computational problems" ]
2,193,888
https://en.wikipedia.org/wiki/Force-free%20magnetic%20field
In plasma physics, a force-free magnetic field is a magnetic field in which the Lorentz force is equal to zero and the magnetic pressure greatly exceeds the plasma pressure such that non-magnetic forces can be neglected. For a force-free field, the electric current density is either zero or parallel to the magnetic field. Definition When a magnetic field is approximated as force-free, all non-magnetic forces are neglected and the Lorentz force vanishes. For non-magnetic forces to be neglected, it is assumed that the ratio of the plasma pressure to the magnetic pressure—the plasma β—is much less than one, i.e., . With this assumption, magnetic pressure dominates over plasma pressure such that the latter can be ignored. It is also assumed that the magnetic pressure dominates over other non-magnetic forces, such as gravity, so that these forces can similarly be ignored. In SI units, the Lorentz force condition for a static magnetic field can be expressed as where is the current density and is the vacuum permeability. Alternatively, this can be written as These conditions are fulfilled when the current vanishes or is parallel to the magnetic field. Zero current density If the current density is identically zero, then the magnetic field is the gradient of a magnetic scalar potential : The substitution of this into results in Laplace's equation, which can often be readily solved, depending on the precise boundary conditions. In this case, the field is referred to as a potential field or vacuum magnetic field. Nonzero current density If the current density is not zero, then it must be parallel to the magnetic field, i.e., where is a scalar function known as the force-free parameter or force-free function. This implies that The force-free parameter can be a function of position but must be constant along field lines. Linear force-free field When the force-free parameter is constant everywhere, the field is called a linear force-free field (LFFF). A constant allows for the derivation of a vector Helmholtz equation by taking the curl of the nonzero current density equations above. Nonlinear force-free field When the force-free parameter depends on position, the field is called a nonlinear force-free field (NLFFF). In this case, the equations do not possess a general solution, and usually must be solved numerically. Physical examples In the Sun's upper chromosphere and lower corona, the plasma β can locally be of order 0.01 or lower allowing for the magnetic field to be approximated as force-free. See also Woltjer's theorem Chandrasekhar–Kendall function Magnetic helicity References Further reading Plasma theory and modeling
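The displayed equations in this article were lost in extraction. As a hedged reconstruction in standard notation (the symbols B, J, mu_0, psi and alpha below are my own choices, matched to the quantities named in the prose), the conditions read:

```latex
% Force-free condition: the Lorentz force density vanishes
\mathbf{J}\times\mathbf{B} = \mathbf{0},
\qquad \mu_0 \mathbf{J} = \nabla\times\mathbf{B}
\;\;\Longrightarrow\;\;
(\nabla\times\mathbf{B})\times\mathbf{B} = \mathbf{0}

% Zero current density: B is the gradient of a scalar potential, which is harmonic
\mathbf{J} = \mathbf{0} \;\Rightarrow\; \mathbf{B} = \nabla\psi,
\qquad \nabla\cdot\mathbf{B} = 0 \;\Rightarrow\; \nabla^{2}\psi = 0

% Nonzero current density: J parallel to B via the force-free parameter alpha
\nabla\times\mathbf{B} = \alpha\,\mathbf{B},
\qquad \mathbf{B}\cdot\nabla\alpha = 0

% Linear force-free field (alpha constant): taking the curl gives a vector Helmholtz equation
\nabla^{2}\mathbf{B} + \alpha^{2}\mathbf{B} = \mathbf{0}
```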
Force-free magnetic field
[ "Physics" ]
553
[ "Plasma theory and modeling", "Plasma physics" ]
2,194,893
https://en.wikipedia.org/wiki/Uranium%E2%80%93uranium%20dating
Uranium–uranium dating is a radiometric dating technique which compares two isotopes of uranium (U) in a sample: uranium-234 (234U) and uranium-238 (238U). It is one of several radiometric dating techniques exploiting the uranium radioactive decay series, in which 238U undergoes 14 alpha and beta decay events on the way to the stable isotope 206Pb. Other dating techniques using this decay series include uranium–thorium dating and uranium–lead dating. Uranium series 238U, with a half-life of about 4.5 billion years, decays towards 234U by first emitting an alpha particle to become thorium-234 (234Th), which is comparatively unstable with a half-life of just 24 days. 234Th then decays through beta particle emission to protactinium-234 (234Pa). This decays with a half-life of 6.7 hours, again through emission of a beta particle, to 234U. This isotope has a half-life of about 245,000 years. The next decay product, thorium-230 (230Th), has a half-life of about 75,000 years and is used in the uranium-thorium technique. Although analytically simpler, in practice 234U/238U requires knowledge of the ratio at the time the material under study was formed and is generally used only for samples older than the ca. 450,000-year upper limit of the 230Th/238U technique. For those materials (principally marine carbonates) for which these conditions apply, it remains a superior technique. Unlike other radiometric dating techniques, those using the uranium decay series (except for those using the stable final isotopes 206Pb and 207Pb) compare the ratios of two radioactive unstable isotopes. This complicates calculations as both the parent and daughter isotopes decay over time into other isotopes. In theory, the 234U/238U technique can be useful in dating samples between about 10,000 and 2 million years Before Present (BP), or up to about eight times the half-life of 234U. As such, it provides a useful bridge in radiometric dating techniques between the ranges of 230Th/238U (accurate up to ca. 450,000 years) and U–Pb dating (accurate up to the age of the solar system, but problematic on samples younger than about 2 million years). See also Carbon dating Chronological dating References Radiometric dating Uranium
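As an illustrative sketch only (not part of the source article): in a closed system the initial excess of 234U activity over secular equilibrium decays away with the 234U half-life, which is the basis of the technique. The function and variable names below are my own, and a real application needs a known or assumed initial ratio (for example from seawater) plus corrections the article does not discuss.

```python
import math

HALF_LIFE_U234_YEARS = 245_000                      # 234U half-life quoted in the article
LAMBDA_U234 = math.log(2) / HALF_LIFE_U234_YEARS    # decay constant (1/years)

def u234_u238_age(initial_activity_ratio: float, measured_activity_ratio: float) -> float:
    """Closed-system age (years) from initial and measured (234U/238U) activity ratios.

    The excess of the activity ratio over 1 decays exponentially with the 234U
    decay constant, since 238U decay is negligible on these timescales.
    """
    initial_excess = initial_activity_ratio - 1.0
    measured_excess = measured_activity_ratio - 1.0
    return math.log(initial_excess / measured_excess) / LAMBDA_U234

# Example: a seawater-like initial ratio of ~1.15 measured today at 1.05
print(round(u234_u238_age(1.15, 1.05)))  # roughly 388,000 years
```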
Uranium–uranium dating
[ "Chemistry" ]
505
[ "Radiometric dating", "Radioactivity" ]
2,194,910
https://en.wikipedia.org/wiki/4-Aminopyridine
4-Aminopyridine (4-AP) is an organic compound with the chemical formula . It is one of the three isomeric aminopyridines. It is used as a research tool in characterizing subtypes of the potassium channel. It has also been used as a drug, to manage some of the symptoms of multiple sclerosis, and is indicated for symptomatic improvement of walking in adults with several variations of the disease. It was undergoing Phase III clinical trials , and the U.S. Food and Drug Administration (FDA) approved the compound on January 22, 2010. Fampridine is also marketed as Ampyra (pronounced "am-PEER-ah," according to the maker's website) in the United States by Acorda Therapeutics and as Fampyra in the European Union, Canada, and Australia. In Canada, the medication has been approved for use by Health Canada since February 10, 2012. Applications In the laboratory, 4-AP is a useful pharmacological tool in studying various potassium conductances in physiology and biophysics. It is a relatively selective blocker of members of Kv1 (Shaker, KCNA) family of voltage-activated K+ channels. However, 4-AP has been shown to potentiate voltage-gated Ca2+ channel currents independent of effects on voltage-activated K+ channels. Convulsant activity 4-Aminopyridine is a potent convulsant and is used to generate seizures in animal models for the evaluation of antiseizure agents. Vertebrate pesticide 4-Aminopyridine is also used under the trade name Avitrol as 0.5% or 1% in bird control bait. It causes convulsions and, infrequently, death, depending on dosage. The manufacturer says the proper dose should cause epileptic-like convulsions which cause the poisoned birds to emit distress calls resulting in the flock leaving the site; if the dose was sub-lethal, the birds will recover after 4 or more hours without long-term ill effect. The amount of bait should be limited so that relatively few birds are poisoned, causing the remainder of the flock to be frightened away with a minimum of mortality. A lethal dose will usually cause death within an hour. The use of 4-aminopyridine in bird control has been criticized by the Humane Society of the United States. Medical use Fampridine has been used clinically in Lambert–Eaton myasthenic syndrome and multiple sclerosis. It acts by blocking voltage-gated potassium channels, prolonging action potentials and thereby increasing neurotransmitter release at the neuromuscular junction. The drug has been shown to reverse saxitoxin and tetrodotoxin toxicity in tissue and animal experiments. In calcium entry blocker overdose in humans, 4-aminopyridine can increase the cytosolic Ca2+ concentration very efficiently independent of the calcium channels. Multiple sclerosis Fampridine has been shown to improve visual function and motor skills and relieve fatigue in patients with multiple sclerosis (MS). However, the effect of the drug is strongly established for walking capacity only. Common side effects include dizziness, nervousness and nausea, and the incidence of adverse effects was shown to be less than 5% in all studies. 4-AP works as a potassium channel blocker. Strong potassium currents decrease action potential duration and amplitude, which increases the probability of conduction failure − a well documented characteristic of demyelinated axons. Potassium channel blockade has the effect of increasing axonal action potential propagation and improving the probability of synaptic vesicle release. 
A study has shown that 4-AP is a potent calcium channel activator and can improve synaptic and neuromuscular function by directly acting on the calcium channel beta subunit. MS patients treated with 4-AP exhibited a response rate of 29.5% to 80%. A long-term study (32 months) indicated that 80-90% of patients who initially responded to 4-AP exhibited long-term benefits. Although improving symptoms, 4-AP does not inhibit progression of MS. Another study, conducted in Brazil, showed that treatment based on fampridine was considered efficient in 70% of the patients. Spinal cord injury Spinal cord injury patients have also seen improvement with 4-AP therapy. These improvements include sensory, motor and pulmonary function, with a decrease in spasticity and pain. Tetrodotoxin poisoning Clinical studies have shown that 4-AP is capable of reversing the effects of tetrodotoxin poisoning in animals, however, its effectiveness as an antidote in humans has not yet been determined. Overdose Case reports have shown that overdoses with 4-AP can lead to paresthesias, seizures, and atrial fibrillation. Contraindications 4-aminopyridine is excreted by the kidneys. 4-AP should not be given to people with significant kidney disease (e.g., acute kidney injury or advanced chronic kidney disease) due to the higher risk of seizures with increased circulating levels of 4-AP. Branding The drug was originally intended, by Acorda Therapeutics, to have the brand name Amaya, however the name was changed to Ampyra to avoid potential confusion with other marketed pharmaceuticals. Four of Acorda's patents pertaining to Ampyra were invalidated in 2017 by the United States District Court for the District of Delaware and a fifth patent expired in 2018. Since then, generic alternatives have been developed for the U.S. market. The drug is marketed by Biogen Idec in Canada as Fampyra and as Dalstep in India by Sun Pharma. Research Parkinson's disease Dalfampridine completed Phase II clinical trials for Parkinson's disease in July 2014. See also 4-Dimethylaminopyridine, a popular laboratory reagent, is prepared directly from pyridine instead of via methylating this compound. Pyridine 4-Pyridylnicotinamide, useful as a ligand in coordination chemistry, is prepared by the reaction of this compound with nicotinoyl chloride. References Potassium channel blockers Orphan drugs Avicides X 4-Aminopyridines 4-Pyridyl compounds
4-Aminopyridine
[ "Chemistry", "Biology" ]
1,304
[ "Highly-toxic chemical substances", "Harmful chemical substances", "Biocides", "Avicides" ]
2,195,185
https://en.wikipedia.org/wiki/ATM%20serine/threonine%20kinase
ATM serine/threonine kinase or Ataxia-telangiectasia mutated, symbol ATM, is a serine/threonine protein kinase that is recruited and activated by DNA double-strand breaks (canonical pathway), oxidative stress, topoisomerase cleavage complexes, splicing intermediates, R-loops and in some cases by single-strand DNA breaks. It phosphorylates several key proteins that initiate activation of the DNA damage checkpoint, leading to cell cycle arrest, DNA repair or apoptosis. Several of these targets, including p53, CHK2, BRCA1, NBS1 and H2AX, are tumor suppressors. In 1995, the gene was discovered by Yosef Shiloh, who named its product ATM since he found that its mutations are responsible for the disorder ataxia–telangiectasia. In 1998, the Shiloh and Kastan laboratories independently showed that ATM is a protein kinase whose activity is enhanced by DNA damage. Throughout the cell cycle DNA is monitored for damage. Damage results from errors during replication, by-products of metabolism, general toxic drugs or ionizing radiation. The cell cycle has different DNA damage checkpoints, which inhibit the next or maintain the current cell cycle step. There are two main checkpoints, the G1/S and the G2/M, during the cell cycle, which preserve correct progression. ATM plays a role in cell cycle delay after DNA damage, especially after double-strand breaks (DSBs). ATM is recruited to sites of double strand breaks by DSB sensor proteins, such as the MRN complex. After being recruited, it phosphorylates NBS1, along with other DSB repair proteins. These modified mediator proteins then amplify the DNA damage signal, and transduce the signals to downstream effectors such as CHK2 and p53. Structure The ATM gene codes for a 350 kDa protein consisting of 3056 amino acids. ATM belongs to the superfamily of phosphatidylinositol 3-kinase-related kinases (PIKKs). The PIKK superfamily comprises six Ser/Thr-protein kinases that show a sequence similarity to phosphatidylinositol 3-kinases (PI3Ks). This protein kinase family includes ATR (ATM- and RAD3-related), DNA-PKcs (DNA-dependent protein kinase catalytic subunit) and mTOR (mammalian target of rapamycin). Characteristic of ATM are five domains. These are, from N-terminus to C-terminus, the HEAT repeat domain, the FRAP-ATM-TRRAP (FAT) domain, the kinase domain (KD), the PIKK-regulatory domain (PRD) and the FAT-C-terminal (FATC) domain. The HEAT repeats directly bind to the C-terminus of NBS1. The FAT domain interacts with ATM's kinase domain to stabilize the C-terminus region of ATM itself. The KD domain carries out the kinase activity, while the PRD and the FATC domain regulate it. The structure of ATM has been solved in several publications using cryo-EM. In the inactive form, the protein forms a homodimer. In the canonical pathway, ATM is activated by the MRN complex and autophosphorylation, forming active monomers capable of phosphorylating several hundred downstream targets. In the non-canonical pathway, e.g. through stimulation by oxidative stress, the dimer can be activated by the formation of disulfide bonds. The entire N-terminal domain, together with the FAT domain, adopts an α-helical structure, which was initially predicted by sequence analysis. This α-helical structure forms a tertiary structure, which has a curved, tubular shape present for example in the Huntingtin protein, which also contains HEAT repeats. FATC is the C-terminal domain with a length of about 30 amino acids. It is highly conserved and consists of an α-helix. 
Function A complex of the three proteins MRE11, RAD50 and NBS1 (XRS2 in yeast), called the MRN complex in humans, recruits ATM to double strand breaks (DSBs) and holds the two ends together. ATM directly interacts with the NBS1 subunit and phosphorylates the histone variant H2AX on Ser139. This phosphorylation generates binding sites for adaptor proteins with a BRCT domain. These adaptor proteins then recruit different factors including the effector protein kinase CHK2 and the tumor suppressor p53. The ATM-mediated DNA damage response consists of a rapid and a delayed response. The effector kinase CHK2 is phosphorylated and thereby activated by ATM. Activated CHK2 phosphorylates phosphatase CDC25A, which is degraded thereupon and can no longer dephosphorylate CDK1-cyclin B, resulting in cell-cycle arrest. If the DSB can not be repaired during this rapid response, ATM additionally phosphorylates MDM2 and p53 at Ser15. p53 is also phosphorylated by the effector kinase CHK2. These phosphorylation events lead to stabilization and activation of p53 and subsequent transcription of numerous p53 target genes including CDK inhibitor p21 which lead to long-term cell-cycle arrest or even apoptosis. The protein kinase ATM may also be involved in mitochondrial homeostasis, as a regulator of mitochondrial autophagy (mitophagy) whereby old, dysfunctional mitochondria are removed. Increased ATM activity also occurs in viral infection where ATM is activated early during dengue virus infection as part of autophagy induction and ER stress response. Regulation A functional MRN complex is required for ATM activation after DSBs. The complex functions upstream of ATM in mammalian cells and induces conformational changes that facilitate an increase in the affinity of ATM towards its substrates, such as CHK2 and p53. Inactive ATM is present in the cells without DSBs as dimers or multimers. Upon DNA damage, ATM autophosphorylates on residue Ser1981. This phosphorylation provokes dissociation of ATM dimers, which is followed by the release of active ATM monomers. Further autophosphorylation (of residues Ser367 and Ser1893) is required for normal activity of the ATM kinase. Activation of ATM by the MRN complex is preceded by at least two steps, i.e. recruitment of ATM to DSB ends by the mediator of DNA damage checkpoint protein 1 (MDC1) which binds to MRE11, and the subsequent stimulation of kinase activity with the NBS1 C-terminus. The three domains FAT, PRD and FATC are all involved in regulating the activity of the KD kinase domain. The FAT domain interacts with ATM's KD domain to stabilize the C-terminus region of ATM itself. The FATC domain is critical for kinase activity and highly sensitive to mutagenesis. It mediates protein-protein interaction for example with the histone acetyltransferase TIP60 (HIV-1 Tat interacting protein 60 kDa), which acetylates ATM on residue Lys3016. The acetylation occurs in the C-terminal half of the PRD domain and is required for ATM kinase activation and for its conversion into monomers. While deletion of the entire PRD domain abolishes the kinase activity of ATM, specific small deletions show no effect. Germline mutations and cancer risk People who carry a heterozygous ATM mutation have increased risk of mainly pancreatic cancer, prostate cancer, stomach cancer and invasive ductal carcinoma of the breast. 
Homozygous ATM mutation confers the disease ataxia–telangiectasia (AT), a rare human disease characterized by cerebellar degeneration, extreme cellular sensitivity to radiation and a predisposition to cancer. All AT patients contain mutations in the ATM gene. Most other AT-like disorders are defective in genes encoding the MRN protein complex. One feature of the ATM protein is its rapid increase in kinase activity immediately following double-strand break formation. The phenotypic manifestation of AT is due to the broad range of substrates for the ATM kinase, involving DNA repair, apoptosis, G1/S, intra-S checkpoint and G2/M checkpoints, gene regulation, translation initiation, and telomere maintenance. Therefore, a defect in ATM has severe consequences in repairing certain types of damage to DNA, and cancer may result from improper repair. AT patients have an increased risk for breast cancer that has been ascribed to ATM's interaction and phosphorylation of BRCA1 and its associated proteins following DNA damage. Somatic mutations in sporadic cancers Mutations in the ATM gene are found at relatively low frequencies in sporadic cancers. According to COSMIC, the Catalogue Of Somatic Mutations In Cancer, the frequencies with which heterozygous mutations in ATM are found in common cancers include 0.7% in 713 ovarian cancers, 0.9% in central nervous system cancers, 1.9% in 1,120 breast cancers, 2.1% in 847 kidney cancers, 4.6% in colon cancers, 7.2% among 1,040 lung cancers and 11.1% in 1790 hematopoietic and lymphoid tissue cancers. Certain kinds of leukemias and lymphomas, including mantle cell lymphoma, T-ALL, atypical B cell chronic lymphocytic leukemia, and T-PLL are also associated with ATM defects. A comprehensive literature search on ATM deficiency in pancreatic cancer, that captured 5,234 patients, estimated that the total prevalence of germline or somatic ATM mutations in pancreatic cancer was 6.4%. ATM mutations may serve as predictive biomarkers of response for certain therapies, since preclinical studies have found that ATM deficiency can sensitise some cancer types to ATR inhibition. Frequent epigenetic deficiencies of ATM in cancers ATM is one of the DNA repair genes frequently hypermethylated in its promoter region in various cancers (see table of such genes in Cancer epigenetics). The promoter methylation of ATM causes reduced protein or mRNA expression of ATM. More than 73% of brain tumors were found to be methylated in the ATM gene promoter and there was strong inverse correlation between ATM promoter methylation and its protein expression (p < 0.001). The ATM gene promoter was observed to be hypermethylated in 53% of small (impalpable) breast cancers and was hypermethylated in 78% of stage II or greater breast cancers with a highly significant correlation (P = 0.0006) between reduced ATM mRNA abundance and aberrant methylation of the ATM gene promoter. In non-small cell lung cancer (NSCLC), the ATM promoter methylation status of paired tumors and surrounding histologically uninvolved lung tissue was found to be 69% and 59%, respectively. However, in more advanced NSCLC the frequency of ATM promoter methylation was lower at 22%. The finding of ATM promoter methylation in surrounding histologically uninvolved lung tissue suggests that ATM deficiency may be present early in a field defect leading to progression to NSCLC. In squamous cell carcinoma of the head and neck, 42% of tumors displayed ATM promoter methylation. 
DNA damage appears to be the primary underlying cause of cancer, and deficiencies in DNA repair likely underlie many forms of cancer. If DNA repair is deficient, DNA damage tends to accumulate. Such excess DNA damage may increase mutational errors during DNA replication due to error-prone translesion synthesis. Excess DNA damage may also increase epigenetic alterations due to errors during DNA repair. Such mutations and epigenetic alterations may give rise to cancer. The frequent epigenetic deficiency of ATM in a number of cancers likely contributed to the progression of those cancers. Meiosis ATM functions during meiotic prophase. The wild-type ATM gene is expressed at a four-fold increased level in human testes compared to somatic cells (such as skin fibroblasts). In both mice and humans, ATM deficiency results in female and male infertility. Deficient ATM expression causes severe meiotic disruption during prophase I. In addition, impaired ATM-mediated DNA DSB repair has been identified as a likely cause of aging of mouse and human oocytes. Expression of the ATM gene, as well as other key DSB repair genes, declines with age in mouse and human oocytes and this decline is paralleled by an increase of DSBs in primordial follicles. These findings indicate that ATM-mediated homologous recombinational repair is a crucial function of meiosis. Inhibitors Several ATM kinase inhibitors are currently known, some of which are already in clinical trials. One of the first discovered ATM inhibitors is caffeine with an IC50 of 0.2 mM and only a low selectivity within the PIKK family. Wortmannin is an irreversible inhibitor of ATM with no selectivity over other related PIKK and PI3K kinases. The most important group of inhibitors are compounds based on the 3-methyl-1,3-dihydro-2H-imidazo[4,5-c]quinolin-2-one scaffold. The first important representative is the inhibitor is Dactolisib (NVP-BEZ235), which was first published by Novartis as a selective mTOR/PI3K inhibitor. It was later shown to also inhibit other PIKK kinases such as ATM, DNA-PK and ATR. Various optimisation efforts by AstraZeneca (AZD0156, AZD1390), Merck (M4076) and Dimitrov et al. have led to highly active ATM inhibitors with greater potency. Interactions Ataxia telangiectasia mutated has been shown to interact with: Abl gene, BRCA1, Bloom syndrome protein, DNA-PKcs, FANCD2, MRE11A, Nibrin, P53, RAD17, RAD51, RBBP8, RHEB, RRM2B, SMC1A TERF1, and TP53BP1. Tefu The Tefu protein of Drosophila melanogaster is a structural and functional homolog of the human ATM protein. Tefu, like ATM, is required for DNA repair and normal levels of meiotic recombination in oocytes. See also Ataxia telangiectasia Ataxia telangiectasia and Rad3 related References Further reading External links https://web.archive.org/web/20060107000211/http://www.hprd.org/protein/06347 Drosophila telomere fusion - The Interactive Fly GeneReviews/NCBI/NIH/UW entry on Ataxia telangiectasia OMIM entries on Ataxia telangiectasia Proteins EC 2.7.11
ATM serine/threonine kinase
[ "Chemistry" ]
3,196
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
2,195,233
https://en.wikipedia.org/wiki/Patern%C3%B2%E2%80%93B%C3%BCchi%20reaction
The Paternò–Büchi reaction, named after Emanuele Paternò and George Büchi, who established its basic utility and form, is a photochemical reaction, specifically a [2+2] photocycloaddition, which forms four-membered oxetane rings from an excited carbonyl reacting with an alkene. With the substrates benzaldehyde and 2-methyl-2-butene, the reaction product is a mixture of structural isomers. Another substrate set is benzaldehyde and furan, or heteroaromatic ketones and fluorinated alkenes. An alternative strategy for the above reaction is called the Transposed Paternò−Büchi reaction. See also Aza Paternò−Büchi reaction - the aza-equivalent of the Paternò–Büchi reaction Enone–alkene cycloadditions - photochemical reaction of an enone with an alkene to give a cyclobutane ring unit References Photochemistry Organic reactions Name reactions Oxygen heterocycle forming reactions Coupling reactions
Paternò–Büchi reaction
[ "Chemistry" ]
221
[ "Coupling reactions", "Organic reactions", "Name reactions", "nan", "Ring forming reactions" ]
13,384,253
https://en.wikipedia.org/wiki/HAZMAT%20Class%201%20Explosives
Hazmat Class 1 covers explosive materials: any substance or article, including a device, that is designed to function by explosion, or that, by chemical reaction within itself, is able to function in a similar manner even if not designed to function by explosion. Class 1 consists of six 'divisions', which describe the potential hazard posed by the explosive. The division number is the second number after the decimal point on a placard. The classification has an additional layer of categorization known as 'compatibility groups', which assigns explosives in the same division to one of 13 groups, each identified by a letter, used to keep incompatible explosives separated from each other. This letter also appears on the placard, following the number. The movement of class 1 materials is tightly regulated, especially for divisions 1.1 and 1.2, which represent some of the most dangerous explosives, with the greatest potential for destruction and loss of life. Regulations in the United States require that drivers have and follow a pre-prepared route and not park the vehicle within a specified distance of bridges, tunnels, a fire, or crowded places. The vehicle must be attended to by its driver at all times while it is parked. Drivers are also required to carry the following paperwork and keep it in an accessible, easy-to-locate place: written emergency instructions, a written route plan, and a copy of Federal Motor Carrier Safety Regulations, Part 397 - Transport of Hazardous Materials; driving and parking rules. Some tunnels and bridges severely restrict or completely forbid vehicles carrying Class 1 cargoes. Divisions Placards Compatibility table Transportation segregation table Compatibility group table See also Dangerous goods Explosive Notes References Hazardous materials
HAZMAT Class 1 Explosives
[ "Physics", "Chemistry", "Technology" ]
330
[ "Materials", "Hazardous materials", "Matter" ]
13,384,613
https://en.wikipedia.org/wiki/Ultrasound%20research%20interface
An ultrasound research interface (URI) is a software tool loaded onto a diagnostic clinical ultrasound device which provides functionality beyond typical clinical modes of operation. A normal clinical ultrasound user only has access to the ultrasound data in its final processed form, typically a B-Mode image, in DICOM format. For reasons of device usability they also have limited access to the processing parameters that can be modified. A URI allows a researcher to achieve different results by either acquiring the image at various intervals through the processing chain, or changing the processing parameters. Typical B-mode receive processing chain A typical digital ultrasound processing chain for B-Mode imaging may look as follows: Multiple analog signals are acquired from the ultrasound transducer (the transmitter/receiver applied to the patient) Analog signals may pass through one or more analog notch filters and a variable-gain amplifier (VCA) Multiple analog-to-digital converters convert the analog radio frequency (RF) signal to a digital RF signal sampled at a predetermined rate (typical ranges are from 20MHz to 160MHz) and at a predetermined number of bits (typical ranges are from 10 bits to 16 bits) Beamforming is applied to individual RF signals by applying time delays and summations as a function of time and transformed into a single RF signal The RF signal is run through one or more digital FIR or IIR filters to extract the most interesting parts of the signal given the clinical operation The filtered RF signal runs through an envelope detector and is log compressed into a grayscale format Multiple signals processed in this way are lined up together and interpolated and rasterized into a readable image. Data access A URI may provide data access at many different stages of the processing chain, these include: Pre-beamformed digital RF data from individual channels Beamformed RF data Envelope detected data Interpolated image data Where many diagnostic ultrasound devices have Doppler imaging modes for measuring blood flow, the URI may also provide access to Doppler related signal data, which can include: Demodulated (I/Q) data FFT spectral data Autocorrelated velocity color Doppler data Tools A URI may include many different tools for enabling the researcher to make better use of the device and the data captured, some of these tools include: Custom MATLAB programs for reading and processing signal and image data Software Development Kits (SDKs) for communicating with the URI, signal processing and other specialized modes of operation available on the URI References Ultrasound Medical ultrasonography Medical physics
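To make the receive chain concrete, here is a toy, purely illustrative Python sketch of the filtering, envelope-detection and log-compression steps applied to a synthetic beamformed RF line. It is not vendor URI code; the sampling rate, centre frequency, filter band and dynamic range are arbitrary assumptions within the ranges quoted above.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 40e6          # assumed RF sampling rate, 40 MHz (within the quoted 20-160 MHz range)
f0 = 5e6           # assumed transducer centre frequency
t = np.arange(0, 50e-6, 1 / fs)

# Synthetic beamformed RF line: a few echoes of a 5 MHz pulse plus noise
rf = np.zeros_like(t)
for depth_s, amp in [(10e-6, 1.0), (25e-6, 0.3), (40e-6, 0.05)]:
    pulse = np.exp(-((t - depth_s) ** 2) / (2 * (0.5e-6) ** 2))
    rf += amp * pulse * np.cos(2 * np.pi * f0 * (t - depth_s))
rf += 0.005 * np.random.randn(t.size)

# Band-pass filter around the transducer band (the digital filtering stage)
b, a = butter(4, [2e6 / (fs / 2), 8e6 / (fs / 2)], btype="band")
rf_filtered = filtfilt(b, a, rf)

# Envelope detection via the analytic signal, then log compression to grayscale
envelope = np.abs(hilbert(rf_filtered))
dynamic_range_db = 60
b_mode_db = 20 * np.log10(envelope / envelope.max() + 1e-12)
b_mode_db = np.clip(b_mode_db, -dynamic_range_db, 0)
gray = np.uint8(255 * (b_mode_db + dynamic_range_db) / dynamic_range_db)

print(gray.min(), gray.max())
```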
Ultrasound research interface
[ "Physics" ]
520
[ "Applied and interdisciplinary physics", "Medical physics" ]
13,387,024
https://en.wikipedia.org/wiki/Self-consolidating%20concrete
Self-consolidating concrete or self-compacting concrete (SCC) is a concrete mix which has a low yield stress, high deformability, good segregation resistance (prevents separation of particles in the mix), and moderate viscosity (necessary to ensure uniform suspension of solid particles during transportation, placement (without external compaction), and thereafter until the concrete sets). In everyday terms, when poured, SCC is an extremely fluid mix with the following distinctive practical features – it flows very easily within and around the formwork, can flow through obstructions and around corners ("passing ability"), is close to self-leveling (although not actually self-levelling), does not require vibration or tamping after pouring, and follows the shape and surface texture of a mold (or form) very closely once set. As a result, pouring SCC is also much less labor-intensive compared to standard concrete mixes. Once poured, SCC is usually similar to standard concrete in terms of its setting and curing time (gaining strength), and strength. SCC does not use a high proportion of water to become fluid – in fact SCC may contain less water than standard concretes. Instead, SCC gains its fluid properties from an unusually high proportion of fine aggregate, such as sand (typically 50%), combined with superplasticizers (additives that ensure particles disperse and do not settle in the fluid mix) and viscosity-enhancing admixtures (VEA). Ordinarily, concrete is a dense, viscous material when mixed, and when used in construction, requires the use of vibration or other techniques (known as compaction) to remove air bubbles (cavitation), and honeycomb-like holes, especially at the surfaces, where air has been trapped during pouring. This kind of air content (unlike that in aerated concrete) is not desired and weakens the concrete if left. However it is laborious and takes time to remove by vibration, and improper or inadequate vibration can lead to undetected problems later. Additionally some complex forms cannot easily be vibrated. Self-consolidating concrete is designed to avoid this problem, and not require compaction, therefore reducing labor, time, and a possible source of technical and quality control issues. SCC was conceptualized in 1986 by Prof. Okamura at Kochi University, Japan, at a time when skilled labor was in limited supply, causing difficulties in concrete-related industries. The first generation of SCC used in North America was characterized by the use of relatively high content of binder as well as high dosages of chemicals admixtures, usually superplasticizer to enhance flowability and stability. Such high-performance concrete had been used mostly in repair applications and for casting concrete in restricted areas. The first generation of SCC was therefore characterized and specified for specialized applications. SCC can be used for casting heavily reinforced sections, places where there can be no access to vibrators for compaction and in complex shapes of formwork which may otherwise be impossible to cast, giving a far superior surface than conventional concrete. The relatively high cost of material used in such concrete continues to hinder its widespread use in various segments of the construction industry, including commercial construction, however the productivity economics take over in achieving favorable performance benefits and works out to be economical in pre-cast industry. 
The incorporation of powder, including supplementary cementitious materials and filler, can increase the volume of the paste, hence enhancing deformability, and can also increase the cohesiveness of the paste and stability of the concrete. The reduction in cement content and increase in packing density of materials finer than 80 μm, such as fly ash, can reduce the water–cement ratio and the high-range water reducer (HRWR) demand. The reduction in free water can reduce the concentration of viscosity-enhancing admixture (VEA) necessary to ensure proper stability during casting and thereafter until the onset of hardening. It has been demonstrated that a total fine aggregate content ("fines", usually sand) of about 50% of total aggregate is appropriate in an SCC mix. There are many studies on different types of SCC which address its fresh properties, strength, durability and microstructural properties. Types of self-consolidating concrete include low-fines SCC (LF-SCC) and semi-flowable SCC (SF-SCC). SCC can be produced using various industrial wastes as cement-replacing materials, and such mixes can be used for pavement construction [2–6]. Overview SCC is measured using the flow table test (slump-flow test) rather than the usual concrete slump test, as it is too fluid to keep its shape when the cone is removed. A typical SCC mix will have a slump-flow of around 500–700 mm. SCC is weakened, not strengthened, by vibration. As vibration is not needed for compacting the mix, all that it achieves is to separate and segregate it. See also Concrete slump test Flow table test References 2. Low-fines self-consolidating concrete using rice husk ash for road pavement: An environment-friendly and sustainable approach. https://doi.org/10.1016/j.conbuildmat.2022.130036 3. Kannur, B., Chore, H.S. Utilization of sugarcane bagasse ash as cement-replacing materials for concrete pavement: an overview. Innov. Infrastruct. Solut. 6, 184 (2021). https://doi.org/10.1007/s41062-021-00539-4 4. Kannur, B., Chore, H.S. Strength and durability study of low-fines self-consolidating concrete as a pavement material using fly ash and bagasse ash. https://doi.org/10.1080/19648189.2022.2140207 5. Kannur, B., Chore, H.S. Semi-flowable self-consolidating concrete using industrial wastes for construction of rigid pavements in India: An overview. https://doi.org/10.1016/j.jtte.2023.01.001 6. Kannur, B., Chore, H.S. Assessing Semiflowable Self-Consolidating Concrete with Sugarcane Bagasse Ash for Application in Rigid Pavement. Journal of Materials in Civil Engineering 35 (10), 04023358, 2023. https://doi.org/10.1061/JMCEE7.MTENG-16355 External links Proportioning of self-compacting concrete – the UCL method – paper summarizing common mixes, uses, choices of additives, properties, and extensive information on SCCs. Working With SCC Needn’t Be Hit or Miss – precast concrete makers' experience with SCC / what to do and not do. Concrete
Self-consolidating concrete
[ "Engineering" ]
1,497
[ "Structural engineering", "Concrete" ]
8,721,698
https://en.wikipedia.org/wiki/Resolvent%20set
In linear algebra and operator theory, the resolvent set of a linear operator is a set of complex numbers for which the operator is in some sense "well-behaved". The resolvent set plays an important role in the resolvent formalism. Definitions Let X be a Banach space and let be a linear operator with domain . Let id denote the identity operator on X. For any , let A complex number is said to be a regular value if the following three statements are true: is injective, that is, the corestriction of to its image has an inverse called the resolvent; is a bounded linear operator; is defined on a dense subspace of X, that is, has dense range. The resolvent set of L is the set of all regular values of L: The spectrum is the complement of the resolvent set and subject to a mutually singular spectral decomposition into the point spectrum (when condition 1 fails), the continuous spectrum (when condition 2 fails) and the residual spectrum (when condition 3 fails). If is a closed operator, then so is each , and condition 3 may be replaced by requiring that be surjective. Properties The resolvent set of a bounded linear operator L is an open set. More generally, the resolvent set of a densely defined closed unbounded operator is an open set. Notes References (See section 8.3) External links See also Resolvent formalism Spectrum (functional analysis) Decomposition of spectrum (functional analysis) Linear algebra Operator theory
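The formulas stripped from the definition can be restated; as a sketch in standard notation (the operator L, spectral parameter lambda, resolvent set rho(L) and spectrum sigma(L) are my symbol choices, consistent with the prose):

```latex
L_{\lambda} = L - \lambda\,\mathrm{id}, \qquad \lambda \in \mathbb{C}

R(\lambda, L) = (L - \lambda\,\mathrm{id})^{-1}
\quad \text{(the resolvent, defined on } \operatorname{ran}(L - \lambda\,\mathrm{id})\text{)}

\rho(L) = \{\lambda \in \mathbb{C} : \lambda \text{ is a regular value of } L\},
\qquad
\sigma(L) = \mathbb{C} \setminus \rho(L)
```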
Resolvent set
[ "Mathematics" ]
307
[ "Linear algebra", "Algebra" ]
8,722,051
https://en.wikipedia.org/wiki/Laplacian%20smoothing
Laplacian smoothing is an algorithm to smooth a polygonal mesh. For each vertex in a mesh, a new position is chosen based on local information (such as the position of neighbours) and the vertex is moved there. In the case that a mesh is topologically a rectangular grid (that is, each internal vertex is connected to four neighbours) then this operation produces the Laplacian of the mesh. More formally, the smoothing operation may be described per-vertex as: Where is the number of adjacent vertices to node , is the position of the -th adjacent vertex and is the new position for node . See also Tutte embedding, an embedding of a planar mesh in which each vertex is already at the average of its neighbours' positions References Mesh generation Geometry processing
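The per-vertex formula lost in extraction is the arithmetic mean of the neighbouring positions. A minimal, illustrative Python sketch (function and argument names are my own; the mesh is represented simply as a vertex array plus an adjacency list):

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, iterations=1):
    """Uniform Laplacian smoothing.

    vertices  : (V, d) array of vertex positions
    neighbors : neighbors[i] lists the indices of vertices adjacent to vertex i
    Each pass moves every vertex with at least one neighbour to the arithmetic
    mean of its neighbours' current positions: x_i_new = (1/N_i) * sum_j x_j.
    """
    positions = np.array(vertices, dtype=float)
    for _ in range(iterations):
        updated = positions.copy()
        for i, adj in enumerate(neighbors):
            if adj:  # isolated vertices (no neighbours) are left in place
                updated[i] = positions[adj].mean(axis=0)
        positions = updated
    return positions

# Toy example: a jittered chain of four vertices relaxes toward a straight line
verts = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, -0.4], [3.0, 0.0]])
adjacency = [[1], [0, 2], [1, 3], [2]]
print(laplacian_smooth(verts, adjacency, iterations=3))
```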
Laplacian smoothing
[ "Physics", "Mathematics" ]
161
[ "Mesh generation", "Tessellation", "Geometry", "Geometry stubs", "Symmetry" ]
8,723,207
https://en.wikipedia.org/wiki/Stokesian%20dynamics
Stokesian dynamics is a solution technique for the Langevin equation, which is the relevant form of Newton's 2nd law for a Brownian particle. The method treats the suspended particles in a discrete sense while the continuum approximation remains valid for the surrounding fluid, i.e., the suspended particles are generally assumed to be significantly larger than the molecules of the solvent. The particles then interact through hydrodynamic forces transmitted via the continuum fluid, and when the particle Reynolds number is small, these forces are determined through the linear Stokes equations (hence the name of the method). In addition, the method can also resolve non-hydrodynamic forces, such as Brownian forces, arising from the fluctuating motion of the fluid, and interparticle or external forces. Stokesian Dynamics can thus be applied to a variety of problems, including sedimentation, diffusion and rheology, and it aims to provide the same level of understanding for multiphase particulate systems as molecular dynamics does for statistical properties of matter. For rigid particles of radius suspended in an incompressible Newtonian fluid of viscosity and density , the motion of the fluid is governed by the Navier–Stokes equations, while the motion of the particles is described by the coupled equation of motion: In the above equation is the particle translational/rotational velocity vector of dimension 6N. is the hydrodynamic force, i.e., force exerted by the fluid on the particle due to relative motion between them. is the stochastic Brownian force due to thermal motion of fluid particles. is the deterministic nonhydrodynamic force, which may be almost any form of interparticle or external force, e.g. electrostatic repulsion between like charged particles. Brownian dynamics is one of the popular techniques of solving the Langevin equation, but the hydrodynamic interaction in Brownian dynamics is highly simplified and normally includes only the isolated body resistance. On the other hand, Stokesian dynamics includes the many body hydrodynamic interactions. Hydrodynamic interaction is very important for non-equilibrium suspensions, like a sheared suspension, where it plays a vital role in its microstructure and hence its properties. Stokesian dynamics is used primarily for non-equilibrium suspensions where it has been shown to provide results which agree with experiments. Hydrodynamic interaction When the motion on the particle scale is such that the particle Reynolds number is small, the hydrodynamic force exerted on the particles in a suspension undergoing a bulk linear shear flow is: Here, is the velocity of the bulk shear flow evaluated at the particle center, is the symmetric part of the velocity-gradient tensor; and are the configuration-dependent resistance matrices that give the hydrodynamic force/torque on the particles due to their motion relative to the fluid () and due to the imposed shear flow (). Note that the subscripts on the matrices indicate the coupling between kinematic () and dynamic () quantities. One of the key features of Stokesian dynamics is its handling of the hydrodynamic interactions, which is fairly accurate without being computationally inhibitive (like boundary integral methods) for a large number of particles. Classical Stokesian dynamics requires operations where N is the number of particles in the system (usually a periodic box). 
Recent advances have reduced the computational cost to about Brownian force The stochastic or Brownian force arises from the thermal fluctuations in the fluid and is characterized by: The angle brackets denote an ensemble average, is the Boltzmann constant, is the absolute temperature and is the delta function. The amplitude of the correlation between the Brownian forces at time and at time results from the fluctuation-dissipation theorem for the N-body system. See also Immersed boundary methods Stochastic Eulerian Lagrangian methods References Statistical mechanics Equations Fluid mechanics
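The displayed relations referred to in the two preceding passages did not survive extraction. The following is a hedged restatement in the notation usual for Stokesian dynamics (the symbols are mine, chosen to match the quantities described in the prose):

```latex
% N-body Langevin equation for the 6N-vector U of particle translational/rotational velocities
\mathbf{m}\,\frac{d\mathbf{U}}{dt} = \mathbf{F}^{H} + \mathbf{F}^{B} + \mathbf{F}^{P}

% Hydrodynamic force in a bulk linear shear flow, via configuration-dependent resistance matrices
\mathbf{F}^{H} = -\mathbf{R}_{FU}\,(\mathbf{U} - \mathbf{u}^{\infty}) + \mathbf{R}_{FE}:\mathbf{E}^{\infty}

% Statistics of the Brownian force (fluctuation-dissipation theorem)
\langle \mathbf{F}^{B} \rangle = \mathbf{0},
\qquad
\langle \mathbf{F}^{B}(0)\,\mathbf{F}^{B}(t) \rangle = 2\,k_{B}T\,\mathbf{R}_{FU}\,\delta(t)
```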
Stokesian dynamics
[ "Physics", "Mathematics", "Engineering" ]
802
[ "Mathematical objects", "Equations", "Civil engineering", "Statistical mechanics", "Fluid mechanics" ]
16,056,559
https://en.wikipedia.org/wiki/Christie%20G.%20Enke
Christie G. Enke is a United States academic chemist who made pioneering contributions to the field of analytical chemistry. Life and career Chris Enke was born in Minneapolis, Minnesota, on July 8, 1933. His parents were Alvin Enke and Mae Nichols. He graduated from Central High School in Minneapolis in 1951. He received a BA degree from Principia College in 1955 and a PhD from the University of Illinois in 1959. His thesis research, concerning the anodic formation of surface oxide films on platinum electrodes, was performed under the guidance of Herbert Laitinen. While at Illinois, he also worked with Howard Malmstadt to introduce a graduate lab and lecture course in the electronics of laboratory instrumentation. He is now Professor Emeritus of Chemistry at the University of New Mexico and Michigan State University. Prior to his move to the University of New Mexico in 1994, he was an instructor and assistant professor at Princeton (1959–1966), then an associate professor and professor at Michigan State University. Education 1955 B.S. Principia College 1959 M.S. University of Illinois 1959 PhD University of Illinois Research and teaching Electroanalytical chemistry: Enke's early research in electrochemistry centered on high-speed charge transfer kinetic studies. He also pioneered the use of operational amplifiers in electroanalytical instrumentation and later, computer control. He is co-inventor of the bipolar pulse method for measuring electrolytic conductance. Teaching electronics to scientists: Howard Malmstadt and Enke wrote the pioneering work, Electronics for Scientists. Then Malmstadt, Stan Crouch, and Enke wrote eight more texts and lab books in the electronics of laboratory instrumentation. This same team developed and presented the hands-on ACS short course, Electronics for Laboratory Instrumentation beginning in 1979. Enke also wrote an introductory analytical chemistry text called The Art and Science of Chemical Analysis. Mass spectrometry: Enke, his graduate student, Rick Yost, and a colleague, James Morrison, discovered low-energy collisional ion fragmentation in 1979. Collisional dissociation in an RF-only quadrupole mass filter between two quadrupole mass analyzers resulted in the first triple quadrupole mass spectrometer. Its low cost and unit resolution ushered in the technique now known as tandem mass spectrometry. Enke continued research in mass spectrometry, including developing a distributed microprocessor control system for the triple-quadrupole, a fast integrating detector system for time-of-flight mass spectrometry, development of a tandem time-of-flight instrument with photofragmentation of ions, the equilibrium partition theory of electrospray ionization, and the invention of distance-of-flight mass spectrometry. Comprehensive analysis of complex mixtures: With Luc Nagels, Enke discovered that the concentrations of components in many natural complex mixtures have a log-normal distribution. With this information, one can learn the number and concentrations of components that are below the detection limit. Awards 1974 American Chemical Society Award for Chemical Instrumentation 1981 Fellow, American Association for the Advancement of Science 1989 American Chemical Society Award for Computers in Chemistry 1992 Michigan State University Distinguished Faculty Award 1993 Distinguished Contribution in Mass Spectrometry Award (shared with Richard Yost) 2003 J. 
Calvin Giddings Award for Excellence in Education from Analytical Division of the American Chemical Society 2011 American Chemical Society Award in Analytical Chemistry 2011 Fellow, American Chemical Society 2014 Distinguished Service in the Advancement of Analytical Chemistry Award from the Analytical Division of the American Chemical Society 2015 Eastern Analytical Symposium Award for Outstanding Achievements in the Fields of Analytical Chemistry Service Chair-elect, Chair and Past Chair, Analytical Division, American Chemical Society, 2004-2008 V.P. for Programs, President, Past President, American Society for Mass Spectrometry, 1992-1998 Program Chairman, Chairman, Div. of Computers in Chem., American Chemical Society, 1981-1985 Editorial Advisory Board, Analytical Chemistry, 1972-1974 Chair, Physical Electrochemistry Div., The Electrochemical. Soc.1963-1971 References 21st-century American chemists Mass spectrometrists 1933 births Living people Scientists from Illinois University of New Mexico faculty Principia College alumni University of Illinois alumni Place of birth missing (living people) Scientists from Minneapolis
Christie G. Enke
[ "Physics", "Chemistry" ]
874
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
16,058,443
https://en.wikipedia.org/wiki/Energy%20Resources%20Aotearoa
Energy Resources Aotearoa, formerly known as Petroleum Exploration and Production Association of New Zealand (PEPANZ) until March 2021, is an incorporated society based in Wellington which represents the wider energy resources sector, including the upstream oil and gas sector in New Zealand. They work with central and local government, stakeholders and the wider public. As part of this they hold events, publish educational booklets, make numerous submissions and run the social media campaign Energy Voices to promote use of natural gas. Members Full members include: Beach Energy New Zealand Oil and Gas OMV New Zealand Limited Todd Energy Associate members include service providers to the oil and gas industry in New Zealand (such as contractors, legal firms, engineers). Climate change Energy Resources Aotearoa says it supports the transition to lower emissions. As PEPANZ, the organisation was criticised for advocating increased use of fossil fuels, such as oil and natural gas. See also Energy in New Zealand Oil and gas industry in New Zealand References External links Oil and gas companies of New Zealand Petroleum organizations 1972 establishments in New Zealand
Energy Resources Aotearoa
[ "Chemistry", "Engineering" ]
215
[ "Petroleum", "Petroleum organizations", "Energy organizations" ]
16,059,132
https://en.wikipedia.org/wiki/Hilbert%20C%2A-module
Hilbert C*-modules are mathematical objects that generalise the notion of Hilbert spaces (which are themselves generalisations of Euclidean space), in that they endow a linear space with an "inner product" that takes values in a C*-algebra. They were first introduced in the work of Irving Kaplansky in 1953, which developed the theory for commutative, unital algebras (though Kaplansky observed that the assumption of a unit element was not "vital"). In the 1970s the theory was extended to non-commutative C*-algebras independently by William Lindall Paschke and Marc Rieffel, the latter in a paper that used Hilbert C*-modules to construct a theory of induced representations of C*-algebras. Hilbert C*-modules are crucial to Kasparov's formulation of KK-theory, and provide the right framework to extend the notion of Morita equivalence to C*-algebras. They can be viewed as the generalization of vector bundles to noncommutative C*-algebras and as such play an important role in noncommutative geometry, notably in C*-algebraic quantum group theory, and groupoid C*-algebras. Definitions Inner-product C*-modules Let be a C*-algebra (not assumed to be commutative or unital), its involution denoted by . An inner-product -module (or pre-Hilbert -module) is a complex linear space equipped with a compatible right -module structure, together with a map that satisfies the following properties: For all , , in , and , in : (i.e. the inner product is -linear in its second argument). For all , in , and in : For all , in : from which it follows that the inner product is conjugate linear in its first argument (i.e. it is a sesquilinear form). For all in : in the sense of being a positive element of A, and (An element of a C*-algebra is said to be positive if it is self-adjoint with non-negative spectrum.) Hilbert C*-modules An analogue to the Cauchy–Schwarz inequality holds for an inner-product -module : for , in . On the pre-Hilbert module , define a norm by The norm-completion of , still denoted by , is said to be a Hilbert -module or a Hilbert C*-module over the C*-algebra . The Cauchy–Schwarz inequality implies the inner product is jointly continuous in norm and can therefore be extended to the completion. The action of on is continuous: for all in Similarly, if is an approximate unit for (a net of self-adjoint elements of for which and tend to for each in ), then for in Whence it follows that is dense in , and when is unital. Let then the closure of is a two-sided ideal in . Two-sided ideals are C*-subalgebras and therefore possess approximate units. One can verify that is dense in . In the case when is dense in , is said to be full. This does not generally hold. Examples Hilbert spaces Since the complex numbers are a C*-algebra with an involution given by complex conjugation, a complex Hilbert space is a Hilbert -module under scalar multiplication by complex numbers and its inner product. Vector bundles If is a locally compact Hausdorff space and a vector bundle over with projection a Hermitian metric , then the space of continuous sections of is a Hilbert -module. Given sections of and the right action is defined by and the inner product is given by The converse holds as well: Every countably generated Hilbert C*-module over a commutative unital C*-algebra is isomorphic to the space of sections vanishing at infinity of a continuous field of Hilbert spaces over . C*-algebras Any C*-algebra is a Hilbert -module with the action given by right multiplication in and the inner product . 
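The inequality, the norm and the inner product of this example lost their formulas in extraction; a hedged reconstruction in standard notation (module E over the C*-algebra A, notation mine) is:

```latex
% Cauchy-Schwarz analogue and the norm on a pre-Hilbert A-module E
\langle x, y\rangle^{*}\langle x, y\rangle \;\le\; \bigl\|\langle x, x\rangle\bigr\|\,\langle y, y\rangle
\quad (x, y \in E),
\qquad
\|x\| \;=\; \bigl\|\langle x, x\rangle\bigr\|^{1/2}

% The example above: A as a Hilbert A-module under right multiplication
\langle a, b\rangle \;=\; a^{*}b,
\qquad
\|a\|_{\text{module}} \;=\; \|a^{*}a\|^{1/2} \;=\; \|a\| \quad \text{(by the C*-identity)}
```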
By the C*-identity, the Hilbert module norm coincides with C*-norm on . The (algebraic) direct sum of copies of can be made into a Hilbert -module by defining If is a projection in the C*-algebra , then is also a Hilbert -module with the same inner product as the direct sum. The standard Hilbert module One may also consider the following subspace of elements in the countable direct product of Endowed with the obvious inner product (analogous to that of ), the resulting Hilbert -module is called the standard Hilbert module over . The fact that there is a unique separable Hilbert space has a generalization to Hilbert modules in the form of the Kasparov stabilization theorem, which states that if is a countably generated Hilbert -module, there is an isometric isomorphism Maps between Hilbert modules Let and be two Hilbert modules over the same C*-algebra . These are then Banach spaces, so it is possible to speak of the Banach space of bounded linear maps , normed by the operator norm. The adjointable and compact adjointable operators are subspaces of this Banach space defined using the inner product structures on and . In the special case where is these reduce to bounded and compact operators on Hilbert spaces respectively. Adjointable maps A map (not necessarily linear) is defined to be adjointable if there is another map , known as the adjoint of , such that for every and , Both and are then automatically linear and also -module maps. The closed graph theorem can be used to show that they are also bounded. Analogously to the adjoint of operators on Hilbert spaces, is unique (if it exists) and itself adjointable with adjoint . If is a second adjointable map, is adjointable with adjoint . The adjointable operators form a subspace of , which is complete in the operator norm. In the case , the space of adjointable operators from to itself is denoted , and is a C*-algebra. Compact adjointable maps Given and , the map is defined, analogously to the rank one operators of Hilbert spaces, to be This is adjointable with adjoint . The compact adjointable operators are defined to be the closed span of in . As with the bounded operators, is denoted . This is a (closed, two-sided) ideal of . C*-correspondences If and are C*-algebras, an C*-correspondence is a Hilbert -module equipped with a left action of by adjointable maps that is faithful. (NB: Some authors require the left action to be non-degenerate instead.) These objects are used in the formulation of Morita equivalence for C*-algebras, see applications in the construction of Toeplitz and Cuntz-Pimsner algebras, and can be employed to put the structure of a bicategory on the collection of C*-algebras. Tensor products and the bicategory of correspondences If is an and a correspondence, the algebraic tensor product of and as vector spaces inherits left and right - and -module structures respectively. It can also be endowed with the -valued sesquilinear form defined on pure tensors by This is positive semidefinite, and the Hausdorff completion of in the resulting seminorm is denoted . The left- and right-actions of and extend to make this an correspondence. The collection of C*-algebras can then be endowed with the structure of a bicategory, with C*-algebras as objects, correspondences as arrows , and isomorphisms of correspondences (bijective module maps that preserve inner products) as 2-arrows. 
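In symbols, the defining relations described in the sections above are usually written as follows; the notation here (E for the module, A for the algebra, and θ for the "rank-one" operators) is chosen for this summary rather than taken from a particular source, but the relations themselves are the standard ones.

```latex
% Standard relations for an inner-product A-module E (notation chosen for this summary)
\[
\langle x,\; y\,a \rangle = \langle x, y\rangle\, a, \qquad
\langle x, y\rangle^{*} = \langle y, x\rangle, \qquad
\langle x, x\rangle \ge 0 \ \text{in } A, \qquad
\langle x, x\rangle = 0 \iff x = 0,
\]
\[
\langle x, y\rangle^{*}\,\langle x, y\rangle \;\le\; \lVert \langle x, x\rangle \rVert \,\langle y, y\rangle
\quad (\text{Cauchy--Schwarz}), \qquad
\lVert x \rVert := \lVert \langle x, x\rangle \rVert^{1/2},
\]
\[
\theta_{x,y}(z) := x\,\langle y, z\rangle, \qquad
\theta_{x,y}^{\;*} = \theta_{y,x}, \qquad
K(E) := \overline{\operatorname{span}}\,\{\theta_{x,y} : x, y \in E\}.
\]
```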
Toeplitz algebra of a correspondence Given a C*-algebra , and an correspondence , its Toeplitz algebra is defined as the universal algebra for Toeplitz representations (defined below). The classical Toeplitz algebra can be recovered as a special case, and the Cuntz-Pimsner algebras are defined as particular quotients of Toeplitz algebras. In particular, graph algebras , crossed products by , and the Cuntz algebras are all quotients of specific Toeplitz algebras. Toeplitz representations A Toeplitz representation of in a C*-algebra is a pair of a linear map and a homomorphism such that is "isometric": for all , resembles a bimodule map: and for and . Toeplitz algebra The Toeplitz algebra is the universal Toeplitz representation. That is, there is a Toeplitz representation of in such that if is any Toeplitz representation of (in an arbitrary algebra ) there is a unique *-homomorphism such that and . Examples If is taken to be the algebra of complex numbers, and the vector space , endowed with the natural -bimodule structure, the corresponding Toeplitz algebra is the universal algebra generated by isometries with mutually orthogonal range projections. In particular, is the universal algebra generated by a single isometry, which is the classical Toeplitz algebra. See also Operator algebra Notes References External links Hilbert C*-Modules Home Page, a literature list C*-algebras Operator theory Theoretical physics
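In symbols, with ψ a linear map on the correspondence E and π a *-homomorphism of A (notation chosen for this summary), the two conditions defining a Toeplitz representation described above read:

```latex
\[
\psi(\xi)^{*}\,\psi(\eta) \;=\; \pi\big(\langle \xi, \eta\rangle\big)
\qquad \text{(the ``isometric'' condition)},
\]
\[
\psi(a\cdot\xi) \;=\; \pi(a)\,\psi(\xi), \qquad
\psi(\xi\cdot a) \;=\; \psi(\xi)\,\pi(a),
\qquad a \in A,\ \ \xi, \eta \in E
\qquad \text{(the bimodule-map condition)}.
\]
```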
Hilbert C*-module
[ "Physics" ]
1,933
[ "Theoretical physics" ]
16,061,714
https://en.wikipedia.org/wiki/Catastrophin
Catastrophin (Catastrophe-related protein) is a term used to describe proteins that are associated with the disassembly of microtubules. Catastrophins affect microtubule shortening, a process known as microtubule catastrophe. Microtubule dynamics Microtubules are polymers of tubulin subunits arranged in cylindrical tubes. The subunit is made up of alpha and beta tubulin. GTP binds to alpha tubulin irreversibly. Beta tubulin binds GTP and hydrolyzes it to GDP. It is the GDP bound to beta-tubulin that regulates the growth or disassembly of the microtubule. However, this GDP can be displaced by GTP. Beta-tubulin bound to GTP is described as having a GTP-cap that enables stable growth. Microtubules exist in either a stable or an unstable state. The unstable form of a microtubule is often found in cells that are undergoing rapid changes, such as mitosis. The unstable form exists in a state of dynamic instability in which the filaments grow and shrink seemingly at random. A mechanistic understanding of what causes microtubules to shrink is still being developed. Model of catastrophe One model proposes that loss of the GTP-cap causes the GDP-containing protofilaments to shrink. Based on this GTP-cap model, catastrophe happens randomly. The model proposes that an increase in microtubule growth will correlate with a decrease in random catastrophe frequency, and vice versa. The discovery of microtubule-associated proteins that change the rate of catastrophe while not affecting the rate of microtubule growth challenges this model of stochastic growth and shrinkage. Increases Oncoprotein 18/Stathmin has been shown to increase the frequency of catastrophe. Oncoprotein 18 (Op18) is a cytosolic protein found in abundance at both benign and malignant tumor sites; through the complex timing of phosphorylation, this biomolecule regulates the depolymerization of microtubules. It has four phosphorylation sites, characterized by serine residues and associated with cyclin-dependent protein kinases (CDKs): Ser16, Ser25, Ser38 and Ser63. Two different models are in contention regarding the destabilization of microtubules by Op18: the inhibition of tubulin dimer formation, or a catastrophe phenomenon. The Kinesin-related protein XKCM1 stimulates catastrophes in Xenopus microtubules. The Kinesin-Related Protein 13 MCAK increases the frequency of catastrophe without affecting the promotion of microtubule growth. Decreases Doublecortin (DCX) shows an ability to inhibit catastrophe without affecting the microtubule growth rate. Xenopus Microtubule Protein 215 (XMAP215) has been implicated in inhibiting catastrophe. Mechanism Some catastrophins affect catastrophe by binding to the ends of microtubules and promoting the dissociation of tubulin dimers. Different mathematical models of microtubule development are being developed to take into account in vitro and in vivo observations. Meanwhile, new in vitro models of microtubule polymerization dynamics, in which catastrophins take part, are being tested to emulate the in vivo behavior of microtubules. See also Microtubule-associated protein Kinesin References Motor proteins
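The two-state picture of dynamic instability described above (persistent growth interrupted by stochastic catastrophes, i.e. switches to rapid shortening) is often illustrated with a simple Monte Carlo model. The sketch below is illustrative only: the rate constants are arbitrary placeholder values rather than measured parameters, and the model ignores GTP-cap structure, rescue-frequency modulation, and catastrophin binding kinetics.

```python
import random

# Minimal two-state sketch of microtubule dynamic instability.
# All numbers are arbitrary placeholders for illustration, not measured values.
V_GROW = 2.0     # growth speed, um/min
V_SHRINK = 30.0  # shortening speed, um/min
F_CAT = 0.5      # catastrophe frequency, events/min (raised by "catastrophins")
F_RES = 0.1      # rescue frequency, events/min
DT = 0.01        # time step, min

def simulate(t_total=60.0):
    length, growing = 0.0, True
    for _ in range(int(t_total / DT)):
        if growing:
            length += V_GROW * DT
            if random.random() < F_CAT * DT:   # stochastic catastrophe
                growing = False
        else:
            length = max(0.0, length - V_SHRINK * DT)
            if length == 0.0 or random.random() < F_RES * DT:
                growing = True                  # rescue, or regrowth from the seed
    return length

if __name__ == "__main__":
    print("final length (um):", round(simulate(), 2))
```

Raising F_CAT while leaving V_GROW fixed shortens the filaments without changing how fast they grow, which is exactly the behaviour of the catastrophe-promoting proteins that challenges the purely stochastic GTP-cap picture.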
Catastrophin
[ "Chemistry" ]
705
[ "Molecular machines", "Motor proteins" ]
16,065,393
https://en.wikipedia.org/wiki/Green%20nanotechnology
Green nanotechnology refers to the use of nanotechnology to enhance the environmental sustainability of processes producing negative externalities. It also refers to the use of the products of nanotechnology to enhance sustainability. It includes making green nano-products and using nano-products in support of sustainability. The word GREEN in the name Green Nanotechnology has dual meaning. On one hand it describes the environment friendly technologies utilized to synthesize particles in nano scale; on the other hand it refers to the nanoparticles synthesis mediated by extracts of chlorophyllus plants. Green nanotechnology has been described as the development of clean technologies, "to minimize potential environmental and human health risks associated with the manufacture and use of nanotechnology products. It also encourages replacement of existing products with new nano-products that are more environmentally friendly throughout their lifecycle." Aim Green nanotechnology has two goals: producing nanomaterials and products without harming the environment or human health, and producing nano-products that provide solutions to environmental problems. It uses existing principles of green chemistry and green engineering to make nanomaterials and nano-products without toxic ingredients, at low temperatures using less energy and renewable inputs wherever possible, and using lifecycle thinking in all design and engineering stages. In addition to making nanomaterials and products with less impact to the environment, green nanotechnology also means using nanotechnology to make current manufacturing processes for non-nano materials and products more environmentally friendly. For example, nanoscale membranes can help separate desired chemical reaction products from waste materials from plants. Nanoscale catalysts can make chemical reactions more efficient and less wasteful. Sensors at the nanoscale can form a part of process control systems, working with nano-enabled information systems. Using alternative energy systems, made possible by nanotechnology, is another way to "green" manufacturing processes. The second goal of green nanotechnology involves developing products that benefit the environment either directly or indirectly. Nanomaterials or products directly can clean hazardous waste sites, desalinate water, treat pollutants, or sense and monitor environmental pollutants. Indirectly, lightweight nanocomposites for automobiles and other means of transportation could save fuel and reduce materials used for production; nanotechnology-enabled fuel cells and light-emitting diodes (LEDs) could reduce pollution from energy generation and help conserve fossil fuels; self-cleaning nanoscale surface coatings could reduce or eliminate many cleaning chemicals used in regular maintenance routines; and enhanced battery life could lead to less material use and less waste. Green Nanotechnology takes a broad systems view of nanomaterials and products, ensuring that unforeseen consequences are minimized and that impacts are anticipated throughout the full life cycle. Current research Solar cells Research is underway to use nanomaterials for purposes including more efficient solar cells, practical fuel cells, and environmentally friendly batteries. The most advanced nanotechnology projects related to energy are: storage, conversion, manufacturing improvements by reducing materials and process rates, energy saving (by better thermal insulation for example), and enhanced renewable energy sources. 
One major project that is being worked on is the development of nanotechnology in solar cells. Solar cells are more efficient as they get tinier and solar energy is a renewable resource. The price per watt of solar energy is lower than one dollar. Research is ongoing to use nanowires and other nanostructured materials with the hope of to create cheaper and more efficient solar cells than are possible with conventional planar silicon solar cells. Another example is the use of fuel cells powered by hydrogen, potentially using a catalyst consisting of carbon supported noble metal particles with diameters of 1–5 nm. Materials with small nanosized pores may be suitable for hydrogen storage. Nanotechnology may also find applications in batteries, where the use of nanomaterials may enable batteries with higher energy content or supercapacitors with a higher rate of recharging. Nanotechnology is already used to provide improved performance coatings for photovoltaic (PV) and solar thermal panels. Hydrophobic and self-cleaning properties combine to create more efficient solar panels, especially during inclement weather. PV covered with nanotechnology coatings are said to stay cleaner for longer to ensure maximum energy efficiency is maintained. Nanoremediation and water treatment Nanotechnology offers the potential of novel nanomaterials for the treatment of surface water, groundwater, wastewater, and other environmental materials contaminated by toxic metal ions, organic and inorganic solutes, and microorganisms. Due to their unique activity toward recalcitrant contaminants, many nanomaterials are under active research and development for use in the treatment of water and contaminated sites. The present market of nanotech-based technologies applied in water treatment consists of reverse osmosis(RO), nanofiltration, ultrafiltration membranes. Indeed, among emerging products one can name nanofiber filters, carbon nanotubes and various nanoparticles. Nanotechnology is expected to deal more efficiently with contaminants which convectional water treatment systems struggle to treat, including bacteria, viruses and heavy metals. This efficiency generally stems from the very high specific surface area of nanomaterials, which increases dissolution, reactivity and sorption of contaminants. Environmental remediation Nanoremediation is the use of nanoparticles for environmental remediation. Nanoremediation has been most widely used for groundwater treatment, with additional extensive research in wastewater treatment. Nanoremediation has also been tested for soil and sediment cleanup. Even more preliminary research is exploring the use of nanoparticles to remove toxic materials from gases. Some nanoremediation methods, particularly the use of nano zerovalent iron for groundwater cleanup, have been deployed at full-scale cleanup sites. Nanoremediation is an emerging industry; by 2009, nanoremediation technologies had been documented in at least 44 cleanup sites around the world, predominantly in the United States. During nanoremediation, a nanoparticle agent must be brought into contact with the target contaminant under conditions that allow a detoxifying or immobilizing reaction. This process typically involves a pump-and-treat process or in situ application. Other methods remain in research phases. Scientists have been researching the capabilities of buckminsterfullerene in controlling pollution, as it may be able to control certain chemical reactions. 
Buckminsterfullerene has been demonstrated as having the ability of inducing the protection of reactive oxygen species and causing lipid peroxidation. This material may allow for hydrogen fuel to be more accessible to consumers. Water cleaning technology In 2017 the RingwooditE Co Ltd was formed in order to explore Thermonuclear Trap Technology (TTT) for the purpose of cleaning all sources of water from pollution and toxic contents. This patented nanotechnology uses a high pressure and temperature chamber to separate isotopes that should by nature not be in drinking water to pure drinking water, as to the by the WHO´s established classification. This method has been developed by among others, by professor Vladimir Afanasiew, at the Moscow Nuclear Institution. This technology is targeted to clean Sea, river, lake and landfill waste waters. It even removes radioactive isotopes from the sea water, after Nuclear Power Stations catastrophes and cooling water plant towers. By this technology pharmaca rests are being removed as well as narcotics and tranquilizers. Bottom layers and sides at lake and rivers can be returned, after being cleaned. Machinery used for this purpose are much similar to those of deep sea mining. Removed waste items are being sorted by the process, and can be re used as raw material for other industrial production. Water filtration Nanofiltration is a relatively recent membrane filtration process used most often with low total dissolved solids water such as surface water and fresh groundwater, with the purpose of softening (polyvalent cation removal) and removal of disinfection by-product precursors such as natural organic matter and synthetic organic matter. Nanofiltration is also becoming more widely used in food processing applications such as dairy, for simultaneous concentration and partial (monovalent ion) demineralisation. Nanofiltration is a membrane filtration based method that uses nanometer sized cylindrical through-pores that pass through the membrane at a 90°. Nanofiltration membranes have pore sizes from 1-10 Angstrom, smaller than that used in microfiltration and ultrafiltration, but just larger than that in reverse osmosis. Membranes used are predominantly created from polymer thin films. Materials that are commonly used include polyethylene terephthalate or metals such as aluminum. Pore dimensions are controlled by pH, temperature and time during development with pore densities ranging from 1 to 106 pores per cm2. Membranes made from polyethylene terephthalate and other similar materials, are referred to as "track-etch" membranes, named after the way the pores on the membranes are made. "Tracking" involves bombarding the polymer thin film with high energy particles. This results in making tracks that are chemically developed into the membrane, or "etched" into the membrane, which are the pores. Membranes created from metal such as alumina membranes, are made by electrochemically growing a thin layer of aluminum oxide from aluminum in an acidic medium. Some water-treatment devices incorporating nanotechnology are already on the market, with more in development. Low-cost nanostructured separation membranes methods have been shown to be effective in producing potable water in a recent study. Nanotech to disinfect water Nanotechnology provides an alternative solution to clean germs in water, a problem that has been getting worse due to the population explosion, growing need for clean water and the emergence of additional pollutants. 
One of the alternatives offered is antimicrobial nanotechnology: several nanomaterials have shown strong antimicrobial properties through diverse mechanisms, such as photocatalytic production of reactive oxygen species that damage cell components and viruses. There is also the case of synthetically fabricated nanometallic particles that produce an antimicrobial action called oligodynamic disinfection, which can inactivate microorganisms at low concentrations. Commercial purification systems based on titanium oxide photocatalysis also currently exist, and studies show that this technology can achieve complete inactivation of fecal coliforms in 15 minutes once activated by sunlight. Four classes of nanomaterials are employed for water treatment: dendrimers, zeolites, carbonaceous nanomaterials, and metal-containing nanoparticles. Reducing metals (e.g. silver, copper, titanium, and cobalt) to the nanoscale brings benefits such as better contact efficiency, greater surface area, and better elution properties. Medicinal values Plants have long been known to possess various phytochemicals (secondary metabolites) which help to protect them; since time immemorial these phytochemicals have been used by humans for their medicinal needs. Microbes are developing resistance against multiple synthetic drugs, leading to the emergence of MDR (multi-drug-resistant) strains that pose a challenge to modern drug therapy. To overcome this challenge, nanoparticles synthesized using extracts of plants and plant parts have emerged as a promising approach. Many workers have reported that nanoparticles synthesized using plant extracts exhibit enhanced medicinal properties compared to the extract(s) alone. Cleaning up oil spills The U.S. Environmental Protection Agency (EPA) documents more than ten thousand oil spills per year. Conventionally, biological, dispersing, and gelling agents are deployed to remedy oil spills. Although these methods have been used for decades, none of them can recover the lost oil. Nanowires, however, can not only swiftly clean up oil spills but also recover much of the oil. These nanowires form a mesh that absorbs up to twenty times its weight in hydrophobic liquids while rejecting water with its water-repelling coating. Since the potassium manganese oxide is very stable even at high temperatures, the oil can be boiled off the nanowires and both the oil and the nanowires can then be reused. In 2005, Hurricane Katrina damaged or destroyed more than thirty oil platforms and nine refineries. The Interface Science Corporation successfully launched a new oil remediation and recovery application, which used the water-repelling nanowires to clean up the oil spilled by the damaged oil platforms and refineries. Removing plastics from oceans One innovation of green nanotechnology currently under development is nanomachines modeled after Ideonella sakaiensis, a bacterium bioengineered to consume plastics. These nanomachines are able to decompose plastics dozens of times faster than the bioengineered bacteria, not only because of their increased surface area but also because the energy released from decomposing the plastic is used to fuel the nanomachines. Air pollution control In addition to water treatment and environmental remediation, nanotechnology is currently improving air quality. 
Nanoparticles can be engineered to catalyze, or hasten, reactions that transform environmentally pernicious gases into harmless ones. For example, many industrial factories that produce large amounts of harmful gases employ a type of nanofiber catalyst made of magnesium oxide (MgO) to purify dangerous organic substances in the smoke. Although chemical catalysts already exist for the gaseous vapors from cars, nanotechnology has a greater chance of reacting with the harmful substances in the vapors. This greater probability comes from the fact that nanomaterials can interact with more particles because of their greater surface area. Nanotechnology has been used to remediate air pollution, including car exhaust pollution and potentially greenhouse gases, due to its high surface area. Based on research published in Environmental Science and Pollution Research International, nanotechnology can specifically help to treat carbon-based nanoparticles, greenhouse gases, and volatile organic compounds. There is also work being done to develop antibacterial nanoparticles, metal oxide nanoparticles, and amendment agents for phytoremediation processes. Nanotechnology may also make it possible to prevent air pollution in the first place because of its extremely small scale. Nanotechnology has been accepted as a tool in many industrial and domestic fields such as gas monitoring systems, fire and toxic gas detectors, ventilation control, and breath alcohol detectors. Other sources state that nanotechnology has the potential to improve the pollutant sensing and detection methods that already exist. The ability to detect pollutants and sense unwanted materials is heightened by the large surface area of nanomaterials and their high surface energy. The World Health Organization declared in 2014 that air contamination caused around 7 million deaths in 2012; this technology could be an essential asset in addressing that problem. The three ways that nanotechnology is being used to treat air pollution are nano-adsorptive materials, degradation by nanocatalysis, and filtration/separation by nanofilters. Nanoscale adsorbents address many air pollution difficulties: their structure permits strong interaction with organic compounds as well as increased selectivity and stability in maximum adsorption capacity. Other advantages include high electrical and thermal conductivities, high strength, and high hardness. Pollutants that can be targeted by nanomaterials include NOx, CO2, NH3, N2, VOCs, isopropyl alcohol vapor, CH3OH (methanol) vapor, N2O, and H2S. Carbon nanotubes remove particles in several ways. One method is by passing the gas through the nanotubes, where the molecules are oxidized; the molecules are then adsorbed onto a nitrate species. Carbon nanotubes with amine groups provide numerous chemical sites for carbon dioxide adsorption in the low temperature range of 20–100 °C. Van der Waals forces and π–π interactions are also used to pull molecules onto surface functional groups. Fullerene can be used to remove carbon dioxide pollution due to its high adsorption capacity. Graphene nanotubes have functional groups that adsorb gases. There are many nanocatalysts that can be used for air pollution reduction and air quality improvement; these materials include TiO2, vanadium, platinum, palladium, rhodium, and silver. 
Catalytic industrial emission reduction, car exhaust reduction, and air purification are just some of the major thrusts that these nanomaterials are being utilized within. Certain applications are not widely spread, but other are more popular. Indoor air pollution is barely on the market yet, but it is being developed more efficiently due to complications with health effects. Car exhaust emission reduction is widely used in diesel fueled automobiles currently being one of the more popular applications. Industrial emission reduction is also widely used. It is n integral method specifically at coal fired power plants as well as refineries. These methods are analyzed and reviewed using SEM imaging to ensure its usefulness and accuracy. Additionally, research is currently being conducted to find out if nanoparticles can be engineered to separate car exhaust from methane or carbon dioxide, which has been known to damage the Earth's ozone layer. In fact, John Zhu, a professor at the University of Queensland, is exploring the creation of a carbon nanotube(CNT) which can trap greenhouse gases hundreds of times more efficiently than current methods can. Nanotechnology for sensors Perpetual exposure to heavy metal pollution and particulate matter will lead to health concerns such as lung cancer, heart conditions, and even motor neuron diseases. However, humanity's ability to shield themselves from these health problems can be improved by accurate and swift nanocontact-sensors able to detect pollutants at the atomic level. These nanocontact sensors do not require much energy to detect metal ions or radioactive elements. Additionally, they can be made in automatic mode so that they can be readably used at any given moment. Additionally, these nanocontact sensors are energy and cost effective since they are composed with conventional microelectronic manufacturing equipment using electrochemical techniques. Some examples of nano-based monitoring include: Functionalized nanoparticles able to form anionic oxidants bonding thereby allowing the detection of carcinogenic substances at very low concentrations. Polymer nanospheres have been developed to measure organic contaminates in very low concentrations "Peptide nanoelectrodes have been employed based on the concept of thermocouple. In a 'nano-distance separation gap, a peptide molecule is placed to form a molecular junction. When a specific metal ion is bound to the gap; the electrical current will result conductance in a unique value. Hence the metal ion will be easily detected." Composite electrodes, a mixture of nanotubes and copper, have been created to detect substances such as organophosphorus pesticides, carbohydrates and other woods pathogenic substances in low concentrations. Concerns Although green nanotechnology poses many advantages over traditional methods, there is still much debate about the concerns brought about by nanotechnology. For example, since the nanoparticles are small enough to be absorbed into skin and/or inhaled, countries are mandating that additional research revolving around the impact of nanotechnology on organisms be heavily studied. In fact, the field of eco-nanotoxicology was founded solely to study the effect of nanotechnology on earth and all of its organisms. At the moment, scientists are unsure of what will happen when nanoparticles seep into soil and water, but organizations, such as NanoImpactNet, have set out to study these effects. 
See also Bioremediation Clean technology Environmental microbiology Green chemistry Industrial microbiology LifeSaver bottle NBI Knowledgebase Tata Swach References Further reading Evaluation of 'green' nanotechnology requires a full life cycle assessment Nano Flakes May Revolutionize Solar Cells External links Safer Nanomaterials and Nanomanufacturing Initiative Clean Tech Law & Business Project on Emerging Nanotechnologies Nanotechnology Lab National Nanotechnology Initiative The Berkeley Nanosciences and Nanoengineering Institute Nanotechnology: Green Manufacturing Nanotechnology Now "Can nanotechnology be green?" Folia Water – The Safe Water Book, containing 26 nanosilver-impregnated filter papers for water purification. Nanotechnology and the environment
Green nanotechnology
[ "Materials_science" ]
4,262
[ "Nanotechnology", "Nanotechnology and the environment" ]
16,065,426
https://en.wikipedia.org/wiki/Roe%20solver
The Roe approximate Riemann solver, devised by Phil Roe, is an approximate Riemann solver based on the Godunov scheme and involves finding an estimate for the intercell numerical flux or Godunov flux at the interface between two computational cells and , on some discretised space-time computational domain. Roe scheme Quasi-linear hyperbolic system A non-linear system of hyperbolic partial differential equations representing a set of conservation laws in one spatial dimension can be written in the form Applying the chain rule to the second term we get the quasi-linear hyperbolic system where is the Jacobian matrix of the flux vector . Roe matrix The Roe method consists of finding a matrix that is assumed constant between two cells. The Riemann problem can then be solved as a truly linear hyperbolic system at each cell interface. The Roe matrix must obey the following conditions: Diagonalizable with real eigenvalues: ensures that the new linear system is truly hyperbolic. Consistency with the exact jacobian: when we demand that Conserving: Phil Roe introduced a method of parameter vectors to find such a matrix for some systems of conservation laws. Intercell flux Once the Roe matrix corresponding to the interface between two cells is found, the intercell flux is given by solving the quasi-linear system as a truly linear system. See also Riemann solver References Further reading Toro, E. F. (1999), Riemann Solvers and Numerical Methods for Fluid Dynamics, Springer-Verlag. Numerical differential equations Conservation equations
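In one common notation (symbols chosen here; compare, for example, the treatment in Toro's book cited above), the quasi-linear form, the Roe-matrix conditions and the resulting intercell flux are:

```latex
\[
\partial_t \mathbf{U} + \partial_x \mathbf{F}(\mathbf{U}) = 0
\;\;\Longrightarrow\;\;
\partial_t \mathbf{U} + A(\mathbf{U})\,\partial_x \mathbf{U} = 0,
\qquad A = \frac{\partial \mathbf{F}}{\partial \mathbf{U}},
\]
\[
\tilde A(\mathbf{U}_L,\mathbf{U}_R)\,(\mathbf{U}_R-\mathbf{U}_L) = \mathbf{F}(\mathbf{U}_R)-\mathbf{F}(\mathbf{U}_L),
\qquad
\tilde A(\mathbf{U},\mathbf{U}) = A(\mathbf{U}),
\]
\[
\mathbf{F}_{i+1/2} = \tfrac{1}{2}\big(\mathbf{F}_L + \mathbf{F}_R\big)
- \tfrac{1}{2} \sum_{k} \lvert\tilde\lambda_k\rvert\, \tilde\alpha_k\, \tilde{\mathbf{r}}_k ,
\]
```

where the tilded quantities are the eigenvalues and right eigenvectors of the Roe matrix, and the coefficients expand the jump across the interface, U_R − U_L = Σ_k α̃_k r̃_k.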
Roe solver
[ "Physics", "Mathematics" ]
309
[ "Conservation laws", "Mathematical objects", "Equations", "Conservation equations", "Symmetry", "Physics theorems" ]
3,028,181
https://en.wikipedia.org/wiki/Hasse%E2%80%93Witt%20matrix
In mathematics, the Hasse–Witt matrix H of a non-singular algebraic curve C over a finite field F is the matrix of the Frobenius mapping (p-th power mapping where F has q elements, q a power of the prime number p) with respect to a basis for the differentials of the first kind. It is a g × g matrix where C has genus g. The rank of the Hasse–Witt matrix is the Hasse or Hasse–Witt invariant. Approach to the definition This definition, as given in the introduction, is natural in classical terms, and is due to Helmut Hasse and Ernst Witt (1936). It provides a solution to the question of the p-rank of the Jacobian variety J of C; the p-rank is bounded by the rank of H, specifically it is the rank of the Frobenius mapping composed with itself g times. It is also a definition that is in principle algorithmic. There has been substantial recent interest in this as of practical application to cryptography, in the case of C a hyperelliptic curve. The curve C is superspecial if H = 0. That definition needs a couple of caveats, at least. Firstly, there is a convention about Frobenius mappings, and under the modern understanding what is required for H is the transpose of Frobenius (see arithmetic and geometric Frobenius for more discussion). Secondly, the Frobenius mapping is not F-linear; it is linear over the prime field Z/pZ in F. Therefore the matrix can be written down but does not represent a linear mapping in the straightforward sense. Cohomology The interpretation for sheaf cohomology is this: the p-power map acts on H1(C,OC), or in other words the first cohomology of C with coefficients in its structure sheaf. This is now called the Cartier–Manin operator (sometimes just Cartier operator), for Pierre Cartier and Yuri Manin. The connection with the Hasse–Witt definition is by means of Serre duality, which for a curve relates that group to H0(C, ΩC) where ΩC = Ω1C is the sheaf of Kähler differentials on C. Abelian varieties and their p-rank The p-rank of an abelian variety A over a field K of characteristic p is the integer k for which the kernel A[p] of multiplication by p has pk points. It may take any value from 0 to d, the dimension of A; by contrast for any other prime number l there are l2d points in A[l]. The reason that the p-rank is lower is that multiplication by p on A is an inseparable isogeny: the differential is p which is 0 in K. By looking at the kernel as a group scheme one can get the more complete structure (reference David Mumford Abelian Varieties pp. 146–7); but if for example one looks at reduction mod p of a division equation, the number of solutions must drop. The rank of the Cartier–Manin operator, or Hasse–Witt matrix, therefore gives an upper bound for the p-rank. The p-rank is the rank of the Frobenius operator composed with itself g times. In the original paper of Hasse and Witt the problem is phrased in terms intrinsic to C, not relying on J. It is there a question of classifying the possible Artin–Schreier extensions of the function field F(C) (the analogue in this case of Kummer theory). Case of genus 1 The case of elliptic curves was worked out by Hasse in 1934. Since the genus is 1, the only possibilities for the matrix H are: H is zero, Hasse invariant 0, p-rank 0, the supersingular case; or H non-zero, Hasse invariant 1, p-rank 1, the ordinary case. Here there is a congruence formula saying that H is congruent modulo p to the number N of points on C over F, at least when q = p. 
Because of Hasse's theorem on elliptic curves, knowing N modulo p determines N for p ≥ 5. This connection with local zeta-functions has been investigated in depth. For a plane curve defined by a cubic f(X,Y,Z) = 0, the Hasse invariant is zero if and only if the coefficient of (XYZ)^(p−1) in f^(p−1) is zero. Notes References (English translation of a Russian original) Algebraic curves Finite fields Matrices Complex manifolds
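The genus-1 criterion just quoted (Hasse invariant zero exactly when the coefficient of (XYZ)^(p−1) in f^(p−1) vanishes) is easy to check by computer. The sketch below uses SymPy; the particular curve, the prime, and all variable names are this example's own choices and are not taken from the article.

```python
from sympy import symbols, Poly, GF

X, Y, Z = symbols("X Y Z")
p = 3                                   # an odd prime (placeholder choice)
f = Y**2 * Z - X**3 - X * Z**2          # projective cubic for y^2 = x^3 + x

# Expand f^(p-1) over F_p and read off the coefficient of (XYZ)^(p-1).
fp = Poly(f**(p - 1), X, Y, Z, domain=GF(p))
c = fp.coeff_monomial(X**(p - 1) * Y**(p - 1) * Z**(p - 1))

# c == 0  -> Hasse invariant 0 (supersingular); c != 0 -> Hasse invariant 1 (ordinary).
print(c)
```

For this particular curve and p = 3 the coefficient vanishes, consistent with y² = x³ + x being supersingular for primes p ≡ 3 (mod 4); replacing p by 5 gives a nonzero coefficient (the ordinary case).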
Hasse–Witt matrix
[ "Mathematics" ]
963
[ "Matrices (mathematics)", "Mathematical objects" ]
3,028,413
https://en.wikipedia.org/wiki/Hasse%20invariant%20of%20a%20quadratic%20form
In mathematics, the Hasse invariant (or Hasse–Witt invariant) of a quadratic form Q over a field K takes values in the Brauer group Br(K). The name "Hasse–Witt" comes from Helmut Hasse and Ernst Witt. The quadratic form Q may be taken as a diagonal form Σ aixi2. Its invariant is then defined as the product of the classes in the Brauer group of all the quaternion algebras (ai, aj) for i < j. This is independent of the diagonal form chosen to compute it. It may also be viewed as the second Stiefel–Whitney class of Q. Symbols The invariant may be computed for a specific symbol φ taking values in the group C2 = {±1}. In the context of quadratic forms over a local field, the Hasse invariant may be defined using the Hilbert symbol, the unique symbol taking values in C2. The invariants of a quadratic forms over a local field are precisely the dimension, discriminant and Hasse invariant. For quadratic forms over a number field, there is a Hasse invariant ±1 for every finite place. The invariants of a form over a number field are precisely the dimension, discriminant, all local Hasse invariants and the signatures coming from real embeddings. See also Hasse–Minkowski theorem References Quadratic forms
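In symbols, the definition described above for a diagonalised form is usually written:

```latex
\[
Q \;\simeq\; a_1 x_1^{2} + a_2 x_2^{2} + \cdots + a_n x_n^{2}
\qquad\Longrightarrow\qquad
\varepsilon(Q) \;=\; \prod_{i<j} (a_i, a_j),
\]
```

where (a_i, a_j) denotes the class of the quaternion algebra (over a local field, equivalently the Hilbert symbol) attached to the pair a_i, a_j. Conventions vary between authors (some take the product over i ≤ j), so the formula should be read against whichever reference is in use.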
Hasse invariant of a quadratic form
[ "Mathematics" ]
293
[ "Quadratic forms", "Number theory" ]
3,028,468
https://en.wikipedia.org/wiki/Trimethylolpropane%20triacrylate
Trimethylolpropane triacrylate (TMPTA) is a trifunctional acrylate ester monomer derived from trimethylolpropane, used in the manufacture of plastics, adhesives, acrylic glue, anaerobic sealants, and ink. It is useful for its low volatility and fast cure response. It has the properties of weather, chemical and water resistance, as well as good abrasion resistance. End products include alkyd coatings, compact discs, hardwood floors, concrete and cementitious applications, dental composites, photolithography, letterpress, screen printing, elastomers, automobile headlamps, acrylics and plastic components for the medical industry. Other uses As the molecule has acrylic functionality, it is capable of undergoing the Michael reaction with an amine. This allows its use in epoxy chemistry, where it speeds up the cure time considerably. See also Pentaerythritol tetraacrylate 1,6-Hexanediol diacrylate References TRIMETHYLOLPROPANE TRIACRYLATE at chemicalland21.com Trimethylolpropane Triacrylate at OSHA Trimethylolpropane triacrylate CAS Number: 15625-89-5 at ntp.niehs.nih.gov Acrylate esters Monomers
Trimethylolpropane triacrylate
[ "Chemistry", "Materials_science" ]
297
[ "Monomers", "Polymer chemistry" ]
3,031,555
https://en.wikipedia.org/wiki/Goldberger%E2%80%93Wise%20mechanism
In particle physics, the Goldberger–Wise mechanism is a popular mechanism that determines the size of the fifth dimension in Randall–Sundrum models. The mechanism uses a scalar field that propagates throughout the five-dimensional bulk. On each of the branes that end the fifth dimension (frequently referred to as the Planck brane and TeV brane, respectively) there is a potential for this scalar field. The minima for the potentials on the Planck brane and TeV brane are different and causes the vacuum expectation value of the scalar field to change throughout the fifth dimension. This configuration generates a potential for the radion causing it to have a vacuum expectation value and a mass. With reasonable values for the scalar potential, the size of the extra dimension is large enough to solve the hierarchy problem. References Physics beyond the Standard Model
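A commonly quoted way to see why this stabilizes a large hierarchy is the leading-order location of the radion potential's minimum. In one common convention, with v_h and v_v the boundary values of the bulk scalar on the Planck and TeV branes, m its bulk mass, k the AdS curvature scale and r_c the radius of the fifth dimension (the precise prefactor depends on conventions and on the small-backreaction approximation):

```latex
\[
k\,\pi\, r_c \;\simeq\; \frac{4k^{2}}{m^{2}}\,\ln\!\left(\frac{v_h}{v_v}\right),
\]
```

so a modest ratio of boundary values together with a bulk mass somewhat below the curvature scale already gives k r_c of order 10, the size needed for the warp factor to generate the TeV/Planck hierarchy.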
Goldberger–Wise mechanism
[ "Physics" ]
175
[ "Particle physics stubs", "Unsolved problems in physics", "Particle physics", "Physics beyond the Standard Model" ]
3,031,585
https://en.wikipedia.org/wiki/Fuzzy%20sphere
In mathematics, the fuzzy sphere is one of the simplest and most canonical examples of non-commutative geometry. Ordinarily, the functions defined on a sphere form a commuting algebra. A fuzzy sphere differs from an ordinary sphere because the algebra of functions on it is not commutative. It is generated by spherical harmonics whose spin l is at most equal to some j. The terms in the product of two spherical harmonics that involve spherical harmonics with spin exceeding j are simply omitted in the product. This truncation replaces an infinite-dimensional commutative algebra by a -dimensional non-commutative algebra. The simplest way to see this sphere is to realize this truncated algebra of functions as a matrix algebra on some finite-dimensional vector space. Take the three j-dimensional square matrices that form a basis for the j dimensional irreducible representation of the Lie algebra su(2). They satisfy the relations , where is the totally antisymmetric symbol with , and generate via the matrix product the algebra of j dimensional matrices. The value of the su(2) Casimir operator in this representation is where is the j-dimensional identity matrix. Thus, if we define the 'coordinates' where r is the radius of the sphere and k is a parameter, related to r and j by , then the above equation concerning the Casimir operator can be rewritten as , which is the usual relation for the coordinates on a sphere of radius r embedded in three dimensional space. One can define an integral on this space, by where F is the matrix corresponding to the function f. For example, the integral of unity, which gives the surface of the sphere in the commutative case is here equal to which converges to the value of the surface of the sphere if one takes j to infinity. Notes Jens Hoppe, "Membranes and Matrix Models", lectures presented during the summer school on ‘Quantum Field Theory – from a Hamiltonian Point of View’, August 2–9, 2000, John Madore, An introduction to Noncommutative Differential Geometry and its Physical Applications, London Mathematical Society Lecture Note Series. 257, Cambridge University Press 2002 References J. Hoppe, Quantum Theory of a Massless Relativistic Surface and a Two dimensional Bound State Problem. PhD thesis, Massachusetts Institute of Technology, 1982. Mathematical quantization Noncommutative geometry
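In one standard presentation (the symbols below are chosen for this summary), the relations described above take the following form for the j-dimensional representation:

```latex
\[
[J_a, J_b] = i\,\varepsilon_{abc}\,J_c, \qquad
J_1^{2} + J_2^{2} + J_3^{2} = \frac{j^{2}-1}{4}\,\mathbf{1}_j ,
\]
\[
x_a := k\,J_a, \qquad
[x_a, x_b] = i\,k\,\varepsilon_{abc}\,x_c, \qquad
x_1^{2} + x_2^{2} + x_3^{2} = r^{2}\,\mathbf{1}_j
\quad\text{with}\quad k = \frac{2r}{\sqrt{j^{2}-1}} ,
\]
\[
\int_{S^{2}_{j}} f \;\propto\; \frac{r^{2}}{j}\,\operatorname{Tr} F .
\]
```

Normalization conventions for k and for the integral differ between authors; with the convention used in the article, the integral of unity recovers the surface area of the ordinary sphere only in the limit j → ∞.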
Fuzzy sphere
[ "Physics" ]
483
[ "Mathematical quantization", "Quantum mechanics" ]
5,534,333
https://en.wikipedia.org/wiki/Gauss%27s%20principle%20of%20least%20constraint
The principle of least constraint is one variational formulation of classical mechanics enunciated by Carl Friedrich Gauss in 1829, equivalent to all other formulations of analytical mechanics. Intuitively, it says that the acceleration of a constrained physical system will be as similar as possible to that of the corresponding unconstrained system. Statement The principle of least constraint is a least squares principle stating that the true accelerations of a mechanical system of masses is the minimum of the quantity where the jth particle has mass , position vector , and applied non-constraint force acting on the mass. The notation indicates time derivative of a vector function , i.e. position. The corresponding accelerations satisfy the imposed constraints, which in general depends on the current state of the system, . It is recalled the fact that due to active and reactive (constraint) forces being applied, with resultant , a system will experience an acceleration . Connections to other formulations Gauss's principle is equivalent to D'Alembert's principle. The principle of least constraint is qualitatively similar to Hamilton's principle, which states that the true path taken by a mechanical system is an extremum of the action. However, Gauss's principle is a true (local) minimal principle, whereas the other is an extremal principle. Hertz's principle of least curvature Hertz's principle of least curvature is a special case of Gauss's principle, restricted by the three conditions that there are no externally applied forces, no interactions (which can usually be expressed as a potential energy), and all masses are equal. Without loss of generality, the masses may be set equal to one. Under these conditions, Gauss's minimized quantity can be written The kinetic energy is also conserved under these conditions Since the line element in the -dimensional space of the coordinates is defined the conservation of energy may also be written Dividing by yields another minimal quantity Since is the local curvature of the trajectory in the -dimensional space of the coordinates, minimization of is equivalent to finding the trajectory of least curvature (a geodesic) that is consistent with the constraints. Hertz's principle is also a special case of Jacobi's formulation of the least-action principle. Philosophy Hertz designed the principle to eliminate the concept of force and dynamics, so that physics would consist exclusively of kinematics, of material points in constrained motion. He was critical of the "logical obscurity" surrounding the idea of force.I would mention the experience that it is exceedingly difficult to expound to thoughtful hearers that very introduction to mechanics without being occasionally embarrassed, without feeling tempted now and again to apologize, without wishing to get as quickly as possible over the rudiments, and on to examples which speak for themselves. I fancy that Newton himself must have felt this embarrassment...To replace the concept of force, he proposed that the acceleration of visible masses are to be accounted for, not by force, but by geometric constraints on the visible masses, and their geometric linkages to invisible masses. In this, he understood himself as continuing the tradition of Cartesian mechanical philosophy, such as Boltzmann's explaining of heat by atomic motion, and Maxwell's explaining of electromagnetism by ether motion. 
Even though both atoms and the ether were not observable except via their effects, they were successful in explaining apparently non-mechanical phenomena mechanically. In trying to explain away "mechanical force", Hertz was "mechanizing classical mechanics". See also Appell's equation of motion Literature References External links A modern discussion and proof of Gauss's principle Gauss principle in the Encyclopedia of Mathematics Hertz principle in the Encyclopedia of Mathematics Classical mechanics
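In standard notation (symbols chosen for this summary), the quantity minimized in the Statement section above, and its specialization in Hertz's case, are:

```latex
\[
Z \;=\; \sum_{j} m_j \left\lVert \ddot{\mathbf r}_j - \frac{\mathbf F_j}{m_j} \right\rVert^{2}
\;\longrightarrow\; \min
\quad\text{over accelerations compatible with the constraints},
\]
\[
\text{Hertz's case } (\mathbf F_j = \mathbf 0,\ m_j = 1):\qquad
Z = \sum_{j} \lVert \ddot{\mathbf r}_j \rVert^{2}
= \left(\frac{ds}{dt}\right)^{\!4} \sum_{j} \left\lVert \frac{d^{2}\mathbf r_j}{ds^{2}} \right\rVert^{2},
\qquad ds^{2} = \sum_{j} \lVert d\mathbf r_j \rVert^{2},
\]
```

so, with ds/dt constant by energy conservation, minimizing Z amounts to minimizing the squared curvature of the trajectory in configuration space, which is the geodesic condition quoted above.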
Gauss's principle of least constraint
[ "Physics" ]
763
[ "Mechanics", "Classical mechanics" ]
5,534,425
https://en.wikipedia.org/wiki/Transverse%20mass
The transverse mass is a useful quantity to define for use in particle physics as it is invariant under Lorentz boost along the z direction. In natural units, it is: where the z-direction is along the beam pipe and so and are the momentum perpendicular to the beam pipe and is the (invariant) mass. This definition of the transverse mass is used in conjunction with the definition of the (directed) transverse energy with the transverse momentum vector . It is easy to see that for vanishing mass () the three quantities are the same: . The transverse mass is used together with the rapidity, transverse momentum and polar angle in the parameterization of the four-momentum of a single particle: Using these definitions (in particular for ) gives for the mass of a two particle system: Looking at the transverse projection of this system (by setting ) gives: These are also the definitions that are used by the software package ROOT, which is commonly used in high energy physics. Transverse mass in two-particle systems Hadron collider physicists use another definition of transverse mass (and transverse energy), in the case of a decay into two particles. This is often used when one particle cannot be detected directly but is only indicated by missing transverse energy. In that case, the total energy is unknown and the above definition cannot be used. where is the transverse energy of each daughter, a positive quantity defined using its true invariant mass as: , which is coincidentally the definition of the transverse mass for a single particle given above. Using these two definitions, one also gets the form: (but with slightly different definitions for !) For massless daughters, where , we again have , and the transverse mass of the two particle system becomes: where is the angle between the daughters in the transverse plane. The distribution of has an end-point at the invariant mass of the system with . This has been used to determine the mass at the Tevatron. References - See sections 38.5.2 () and 38.6.1 () for definitions of transverse mass. - See sections 43.5.2 () and 43.6.1 () for definitions of transverse mass. Particle physics Kinematics Special relativity
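In the usual conventions (natural units, beam along the z axis), the quantities referred to in the text are:

```latex
\[
m_T^{2} = m^{2} + p_x^{2} + p_y^{2} = E^{2} - p_z^{2},
\qquad
E_T = \sqrt{m^{2} + p_T^{2}},
\qquad
p^{\mu} = \big(m_T \cosh y,\; p_T\cos\phi,\; p_T\sin\phi,\; m_T \sinh y\big),
\]
\[
M_T^{2} \;=\; \big(E_{T,1} + E_{T,2}\big)^{2} - \big\lVert \mathbf p_{T,1} + \mathbf p_{T,2} \big\rVert^{2}
\;\xrightarrow{\;m_1 = m_2 = 0\;}\;
2\, p_{T,1}\, p_{T,2}\,\big(1 - \cos\Delta\phi\big),
\]
```

where y is the rapidity, φ the azimuthal angle, and Δφ the azimuthal opening angle between the two daughters in the transverse plane.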
Transverse mass
[ "Physics", "Technology" ]
446
[ "Machines", "Kinematics", "Physical phenomena", "Classical mechanics", "Physical systems", "Special relativity", "Motion (physics)", "Mechanics", "Particle physics", "Theory of relativity", "Particle physics stubs" ]
5,534,558
https://en.wikipedia.org/wiki/Design%20for%20assembly
Design for assembly (DFA) is a process by which products are designed with ease of assembly in mind. If a product contains fewer parts it will take less time to assemble, thereby reducing assembly costs. In addition, if the parts are provided with features which make it easier to grasp, move, orient and insert them, this will also reduce assembly time and assembly costs. The reduction of the number of parts in an assembly has the added benefit of generally reducing the total cost of parts in the assembly. This is usually where the major cost benefits of the application of design for assembly occur. Approaches Design for assembly can take different forms. In the 1960s and 1970s various rules and recommendations were proposed in order to help designers consider assembly problems during the design process. Many of these rules and recommendations were presented together with practical examples showing how assembly difficulty could be improved. However, it was not until the 1970s that numerical evaluation methods were developed to allow design for assembly studies to be carried out on existing and proposed designs. The first evaluation method was developed at Hitachi and was called the Assembly Evaluation Method (AEM). This method is based on the principle of "one motion for one part." For more complicated motions, a point-loss standard is used and the ease of assembly of the whole product is evaluated by subtracting points lost. The method was originally developed in order to rate assemblies for ease of automatic assembly. Starting in 1977, Geoff Boothroyd, supported by an NSF grant at the University of Massachusetts Amherst, developed the Design for Assembly method (DFA), which could be used to estimate the time for manual assembly of a product and the cost of assembling the product on an automatic assembly machine. Recognizing that the most important factor in reducing assembly costs was the minimization of the number of separate parts in a product, he introduced three simple criteria which could be used to determine theoretically whether any of the parts in the product could be eliminated or combined with other parts. These criteria, together with tables relating assembly time to various design factors influencing part grasping, orientation and insertion, could be used to estimate total assembly time and to rate the quality of a product design from an assembly viewpoint. For automatic assembly, tables of factors could be used to estimate the cost of automatic feeding and orienting and automatic insertion of the parts on an assembly machine. In the 1980s and 1990s, variations of the AEM and DFA methods have been proposed, namely: the GE Hitachi method which is based on the AEM and DFA; the Lucas method, the Westinghouse method and several others which were based on the original DFA method. All methods are now referred to as design for assembly methods. Implementation Most products are assembled manually and the original DFA method for manual assembly is the most widely used method and has had the greatest industrial impact throughout the world. The DFA method, like the AEM method, was originally made available in the form of a handbook where the user would enter data on worksheets to obtain a rating for the ease of assembly of a product. Starting in 1981, Geoffrey Boothroyd and Peter Dewhurst developed a computerized version of the DFA method which allowed its implementation in a broad range of companies. 
For this work they were presented with many awards including the National Medal of Technology. There are many published examples of significant savings obtained through the application of DFA. For example, in 1981, Sidney Liebson, manager of manufacturing engineering for Xerox, estimated that his company would save hundreds of millions of dollars through the application of DFA. In 1988, Ford Motor Company credited the software with overall savings approaching $1 billion. In many companies DFA is a corporate requirement and DFA software is continually being adopted by companies attempting to obtain greater control over their manufacturing costs. There are many key principles in design for assembly. Notable examples Two notable examples of good design for assembly are the Sony Walkman and the Swatch watch. Both were designed for fully automated assembly. The Walkman line was designed for "vertical assembly", in which parts are inserted in straight-down moves only. The Sony SMART assembly system, used to assemble Walkman-type products, is a robotic system for assembling small devices designed for vertical assembly. The IBM Proprinter used design for automated assembly (DFAA) rules. These DFAA rules help design a product that can be assembled automatically by robots, but they are useful even with products assembled by manual assembly. See also Design for inspection Design for manufacturability Design for X Design for verification DFMA Notes Further information For more information on Design for Assembly and the subject of Design for Manufacture and Assembly see: Boothroyd, G. "Assembly Automation and Product Design, 2nd Edition", Taylor and Francis, Boca Raton, Florida, 2005. Boothroyd, G., Dewhurst, P. and Knight, W., "Product Design for Manufacture and Assembly, 2nd Edition", Marcel Dekker, New York, 2002. External links "Successful Design for Assembly" - February 26, 2007 article from Assembly Magazine Product development Design Design for X
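The numerical evaluation idea described above is often summarized with the Boothroyd–Dewhurst "DFA index" (design efficiency), which compares a theoretical minimum part count against the estimated assembly time. The sketch below is a simplified illustration; the 3-second "ideal" time per part is the commonly cited convention, and the part data are invented placeholders rather than values from any handbook.

```python
# Simplified DFA (design-efficiency) calculation, Boothroyd-Dewhurst style.
# Handling/insertion times below are invented placeholders, not handbook values.

IDEAL_TIME_PER_PART = 3.0  # seconds; the conventional "ideal" assembly time per part

parts = [
    # (name, count, handling_s, insertion_s, theoretically_necessary)
    ("base",   1, 1.5, 1.5, True),
    ("gasket", 1, 2.3, 5.0, False),   # candidate for elimination or combination
    ("cover",  1, 1.8, 2.6, True),
    ("screw",  4, 2.5, 6.0, False),   # fasteners rarely pass the minimum-part criteria
]

total_time = sum(count * (handle + insert) for _, count, handle, insert, _ in parts)
n_min = sum(count for _, count, _, _, necessary in parts if necessary)

dfa_index = IDEAL_TIME_PER_PART * n_min / total_time
print(f"estimated assembly time: {total_time:.1f} s")
print(f"theoretical minimum part count: {n_min}")
print(f"DFA index (design efficiency): {dfa_index:.2f}")
```

A low index flags a design with many parts, or parts that are slow to grasp, orient and insert, which is exactly the signal the DFA worksheets were built to produce.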
Design for assembly
[ "Engineering" ]
1,043
[ "Design", "Design for X" ]
5,534,701
https://en.wikipedia.org/wiki/Nuclear%20power%20by%20country
Nuclear power plants operate in 32 countries and generate about a tenth of the world's electricity. Most are in Europe, North America and East Asia. The United States is the largest producer of nuclear power, while France has the largest share of electricity generated by nuclear power, at about 70%. Some countries operated nuclear reactors in the past but have no operating nuclear power plants at present. Among them, Italy closed all of its nuclear stations by 1990 and nuclear power has since been discontinued because of the 1987 referendums. Kazakhstan phased out nuclear power in 1999 but is planning to reintroduce it possibly by 2035 under referendum. Germany operated nuclear plants since 1960 until the completion of its phaseout policy in 2023. Austria (Zwentendorf Nuclear Power Plant) and the Philippines (Bataan Nuclear Power Plant) never started to use their first nuclear plants that were completely built. Sweden and Belgium originally had phase-out policies however they have now moved away from their original plans. The Philippines relaunched their nuclear programme on February 28, 2022 and may try to operate the 1984 mothballed Bataan Plant. As of 2020, Poland was in advanced planning phase for 1.5 GW and planned to have up to 9 GW by 2040. Hong Kong has no nuclear power plants within its boundary, but imports 80% of the electricity generated from Daya Bay Nuclear Power Station located across the border, in which the power company of the territory holds stake. In 2021, Iraq declared it was planning to build 8 nuclear reactors by 2030 to supply up to 25% electric power in a grid that was suffering from shortages. Overview Of the 32 countries in which nuclear power plants operate, only France, Slovakia, Ukraine and Belgium use them as the source for a majority of the country's electricity supply as of 2021. Other countries have significant amounts of nuclear power generation capacity. By far the largest nuclear electricity producers are the United States with 779,186 GWh of nuclear electricity in 2023, followed by China with 406,484 GWh. As of the end of 2023, 418 reactors with a net capacity of 371,540 MWe were operational, and 59 reactors with net capacity of 61,637 MWe were under construction. Of the reactors under construction, 25 reactors with 26,301 MWe were in China and 7 reactors with a capacity of 5,398 MWe were in India. See also List of commercial nuclear reactors List of nuclear power stations Nuclear energy policy by country List of nuclear power accidents by country List of countries by uranium reserves World Nuclear Industry Status Report Notes References External links World Nuclear Generation and Capacity Nuclear technology
Nuclear power by country
[ "Physics" ]
539
[ "Nuclear technology", "Nuclear physics" ]
5,534,844
https://en.wikipedia.org/wiki/Markov%20strategy
In game theory, a Markov strategy is one that depends only on state variables that summarize the history of the game in one way or another. For instance, a state variable can be the current play in a repeated game, or it can be any interpretation of a recent sequence of play. A profile of Markov strategies is a Markov perfect equilibrium if it is a Nash equilibrium in every state of the game. The concept is named after Andrey Markov, because such strategies condition behavior only on the current state, in the spirit of the Markov property. References Game theory
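The idea can be illustrated with a small sketch. The following Python snippet, which is purely illustrative and not drawn from any cited source, implements a Markov strategy for a repeated prisoner's dilemma in which the state summarizing the history is just the opponent's most recent move; the function names, state labels and number of rounds are arbitrary assumptions.

from typing import List, Tuple

def markov_strategy(state: str) -> str:
    # The strategy depends only on the current state (the opponent's last move),
    # not on the full history of play: cooperate if the state is "C", else defect.
    return "C" if state == "C" else "D"

def play_repeated_game(rounds: int = 5) -> List[Tuple[str, str]]:
    history: List[Tuple[str, str]] = []
    state_1, state_2 = "C", "C"            # each player's state: opponent's last move
    for _ in range(rounds):
        move_1 = markov_strategy(state_1)  # player 1 reacts only to its state variable
        move_2 = markov_strategy(state_2)  # player 2 likewise
        history.append((move_1, move_2))
        state_1, state_2 = move_2, move_1  # next state = opponent's current move
    return history

if __name__ == "__main__":
    print(play_repeated_game())            # both players cooperate every round

Because each player's choice is a function of the state alone, any richer description of past play (for example, the entire sequence of earlier moves) is irrelevant to the decision, which is exactly what distinguishes a Markov strategy from a general history-dependent strategy.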
Markov strategy
[ "Mathematics" ]
101
[ "Game theory", "Strategy (game theory)" ]
5,535,224
https://en.wikipedia.org/wiki/Weigh%20lock
A weigh lock is a specialized canal lock designed to determine the weight of barges in order to assess toll payments based upon the weight and value of the cargo carried. This requires that the unladen weight of the barge be known. A barge to be weighed was brought into a supporting cradle connected by levers to a weighing mechanism. The water was then drained and the scale balance adjusted to determine the barge's gross weight. Subtracting the tare weight (the weight of the barge when empty) gave the cargo weight. The earliest weigh locks instead measured the water displaced from the lock, collecting it in a separate measuring chamber after the barge had entered; since a floating barge displaces its own weight of water, this likewise gives the gross weight. This method also requires that the unladen weight of the barge be known. See also Weighbridge, a device for weighing trucks and railcars. References External links Erie Canal — The Weigh Lock Locks (water navigation) Weighing instruments
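The toll assessment described above reduces to simple arithmetic: the cargo weight is the measured gross weight minus the known tare weight, and in the displacement method the gross weight follows from the volume of water displaced. The Python sketch below is illustrative only; the barge figures and function names are invented, and fresh water is assumed.

WATER_DENSITY_KG_PER_M3 = 1000.0  # approximate density of fresh canal water

def cargo_weight_from_scale(gross_kg: float, tare_kg: float) -> float:
    # Cradle-and-lever method: cargo = gross weight minus empty-barge (tare) weight.
    return gross_kg - tare_kg

def gross_weight_from_displacement(displaced_volume_m3: float) -> float:
    # Displacement method: the weight of water collected in the measuring chamber
    # equals the gross weight of the floating barge.
    return displaced_volume_m3 * WATER_DENSITY_KG_PER_M3

if __name__ == "__main__":
    # Hypothetical barge: 30 t empty, 95 t gross on the weighing cradle.
    print(cargo_weight_from_scale(95_000, 30_000))        # 65000.0 kg of cargo
    # The same barge by displacement: 95 m^3 of water collected.
    gross = gross_weight_from_displacement(95.0)          # 95000.0 kg gross
    print(cargo_weight_from_scale(gross, 30_000))         # 65000.0 kg of cargo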
Weigh lock
[ "Physics", "Technology", "Engineering" ]
189
[ "Weighing instruments", "Mass", "Matter", "Measuring instruments" ]
5,536,298
https://en.wikipedia.org/wiki/Blood%E2%80%93air%20barrier
The blood–air barrier or air–blood barrier (alveolar–capillary barrier or membrane) exists in the gas-exchanging region of the lungs. It prevents air bubbles from forming in the blood and blood from entering the alveoli. It is formed by the type I pneumocytes of the alveolar wall, the endothelial cells of the capillaries and the basement membrane between them. The barrier is permeable to molecular oxygen, carbon dioxide, carbon monoxide and many other gases. Structure This blood–air barrier is extremely thin (approximately 600 nm–2 μm; in some places merely 200 nm) to allow sufficient oxygen diffusion, yet it is extremely strong. This strength comes from the type IV collagen between the endothelial and epithelial cells. Damage can occur to this barrier at a pressure difference of around . Clinical significance Failure of the barrier may occur in a pulmonary barotrauma. This can result from several possible causes, including blast injury, swimming-induced pulmonary edema, and breathing gas entrapment or retention in the lung during depressurization, which can occur during ascent from underwater diving or loss of pressure from a pressurized vehicle, habitat or pressure suit. Possible consequences of rupture of the blood–air barrier include arterial gas embolism and hemoptysis. See also References External links – "Mammal, lung vasculature (EM, High)" Respiratory system Underwater diving physiology
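The importance of the barrier's thinness can be illustrated with Fick's law of diffusion, under which the rate of gas transfer across a membrane grows with surface area and partial-pressure difference and falls with membrane thickness. The Python sketch below is a generic physics illustration, not a statement about measured lung values; the parameter numbers and the lumped constant d are assumptions.

def diffusion_flux(area: float, thickness: float, delta_p: float, d: float) -> float:
    # Fick's-law style estimate: transfer rate ~ d * area * pressure difference / thickness.
    # Units are deliberately left arbitrary; only the scaling behaviour matters here.
    return d * area * delta_p / thickness

if __name__ == "__main__":
    # Halving the barrier thickness doubles the transfer rate, all else being equal.
    thin = diffusion_flux(area=70.0, thickness=0.6e-6, delta_p=8.0, d=1.0)
    thick = diffusion_flux(area=70.0, thickness=1.2e-6, delta_p=8.0, d=1.0)
    print(thin / thick)  # ~2.0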
Blood–air barrier
[ "Biology" ]
309
[ "Organ systems", "Respiratory system" ]
5,537,399
https://en.wikipedia.org/wiki/Protein%20filament
In biology, a protein filament is a long chain of protein monomers, such as those found in hair, muscle, or flagella. Protein filaments assemble to make the cytoskeleton of the cell. They are often bundled together to provide support, strength, and rigidity to the cell. When the filaments are packed together, they form three different cellular structures. The three major classes of protein filaments that make up the cytoskeleton are actin filaments (microfilaments), microtubules and intermediate filaments.
Cellular types Microfilaments Compared to the other parts of the cytoskeleton, microfilaments are the thinnest filaments, with a diameter of approximately 7 nm. Microfilaments are the part of the cytoskeleton composed of the protein actin. Two strands of actin intertwined together form a filamentous structure that allows for the movement of motor proteins. Actin can occur either as monomeric G-actin or as filamentous F-actin. Microfilaments are important for the overall organization of the plasma membrane. Actin filaments are both helical and flexible; they are composed of many actin monomers chained together, which adds to their flexibility. They are found in several places in the body, including the microvilli, contractile rings, stress fibers, and the cellular cortex, among other locations. In a contractile ring, actin helps with cell division, while in the cellular cortex it helps with the structural integrity of the cell. Microfilament Polymerization Microfilament polymerization is divided into three steps. Nucleation is the first step, and it is the rate-limiting and slowest step of the process. Elongation is the next step, the rapid addition of actin monomers at both the plus and minus ends of the microfilament. The final step is the steady state, in which the addition of monomers equals the loss of monomers, so the microfilament no longer grows; the monomer concentration at which this balance occurs is known as the critical concentration of actin (a toy kinetic sketch of this balance appears after this article's text). Several toxins are known to limit the polymerization of actin. Cytochalasin binds to the actin polymer so that it can no longer bind incoming actin monomers; subunits already in the polymer continue to leave the microfilament, causing net depolymerization. Phalloidin binds to actin and locks the filament in place; monomers neither add to nor leave the polymer, which stabilizes the molecule. Latrunculin is similar to cytochalasin in effect, but it binds to actin monomers, preventing them from adding onto the actin polymer; this causes net depolymerization of actin filaments in the cell. Actin-Based Motor Protein: Myosin Several different proteins interact with actin in the body; one of the best-known motor proteins is myosin. Myosin binds to actin filaments and moves along them. This movement of myosin along the microfilament can drive muscle contraction, membrane association, endocytosis, and organelle transport. In striated muscle, the arrangement of actin and myosin in the sarcomere is described in terms of three bands and one disk. The A band is the region occupied by the myosin thick filaments, where actin and myosin overlap during contraction. The I band contains only actin thin filaments and narrows during contraction. The H zone is the space between opposing actin filaments, containing only myosin, and it shrinks when the muscle contracts. The Z disk marks the end of each side of the sarcomere, a structural unit of a myofibril, and anchors the actin filaments. Proteins Limiting Microfilaments Microfilaments can be limited or regulated by several proteins. Tropomodulin caps the ends of actin filaments, stabilizing the structure. Nebulin binds along the sides of actin and can prevent the attachment of myosin, stabilizing the actin and limiting muscle contraction. Titin binds to myosin rather than to the actin microfilament and helps stabilize the contraction and the myosin-actin structure.
Microtubules Microtubules are the largest type of filament in the cytoskeleton, with a diameter of about 25 nm. A single microtubule consists of 13 linear protofilaments. Unlike microfilaments, microtubules are composed of the protein tubulin. Tubulin occurs as dimers, named "αβ-tubulin" or "tubulin dimers", which polymerize to form the microtubules. Microtubules are structurally classified into three main groups: singlets, doublets, and triplets. Singlets are microtubule structures found in the cytoplasm. Doublets are found in cilia and flagella. Triplets are found in basal bodies and centrioles. There are two main populations of microtubules: unstable, short-lived microtubules that assemble and disassemble rapidly, and stable, long-lived microtubules that remain polymerized for longer periods of time and can be found in flagella, red blood cells, and nerve cells. Microtubules play a significant role in the organization of organelles and vesicles, the beating of cilia and flagella, nerve and red blood cell structure, and the alignment and separation of chromosomes during mitosis and meiosis. Orientation in Cells During interphase, microtubules tend to be oriented the same way: their minus end lies close to the nucleus of the cell, while their plus end points away from the cell body. The basal body found within the cell helps the microtubules orient in this fashion. Mitotic cells show a similar orientation: the plus end points away from the cell while the minus end points toward the microtubule organizing center (MTOC). The plus end of these microtubules attaches to the kinetochore on the chromosome, allowing for cell division when applicable. Nerve cells differ from these two patterns. In an axon, microtubules are arranged with their minus end toward the cell body and their plus end away from it. In dendrites, however, microtubules can have either orientation: some have their plus end toward the cell body and their minus end pointing away from it. Drugs Disrupting Microtubules Colchicine is an example of a drug used as a microtubule inhibitor. It binds to the tubulin dimer at the interface of α- and β-tubulin. At low concentrations this can stabilize microtubules by suppressing their dynamics, but at high concentrations it can lead to depolymerization of microtubules. Taxol is another drug, often used to help treat breast cancer by targeting microtubules. Taxol binds along the microtubule and stabilizes it, which can disrupt cell division. Role in Cellular Division There are three main types of microtubules involved in cell division. Astral microtubules extend out of the centrosome toward the cell cortex. They can connect to the plasma membrane via cortical landmark deposits, which are determined by polarity cues, growth and differentiation factors, or adhesion contacts. Polar microtubules extend toward the middle of the cell and overlap at the equator where the cell is dividing. Kinetochore microtubules extend and bind to the kinetochores on the chromosomes, assisting in the division of the cell; these microtubules attach to the kinetochore at their plus end. NDC80 is a protein found at this binding point that helps stabilize the interaction during cell division. During cell division, the overall microtubule length does not change; instead, a treadmilling effect is produced that can cause the separation of the chromosomes.
Intermediate filaments Intermediate filaments are part of the cytoskeletal structure found in most eukaryotic cells. An example of an intermediate filament is a neurofilament, which provides support for the structure of the axon and is a major part of its cytoskeleton. Intermediate filaments have an average diameter of 10 nm, which is smaller than that of microtubules but larger than that of microfilaments. These 10 nm filaments are built from fibrous polypeptide chains belonging to the intermediate filament protein family. Unlike microtubules and microfilaments, intermediate filaments are not directly involved in cell movement. Intermediate filaments can play a role in cell communication in a process known as crosstalk. This crosstalk has the potential to help with mechanosensing, which can protect the cell during migration within the body. They can also help link actin and microtubules to the rest of the cytoskeleton, contributing to the eventual movement and division of cells. Lastly, intermediate filaments can help regulate vascular permeability by organizing continuous adherens junctions through plectin cross-linking. Classification of Intermediate Filaments Intermediate filaments are composed of several different proteins, unlike microfilaments and microtubules, which are composed primarily of actin and tubulin. These proteins have been classified into six major categories based on their shared characteristics. Type 1 and type 2 intermediate filaments are composed of keratins and are mainly found in epithelial cells. Type 3 intermediate filaments contain vimentin and can be found in a variety of cells, including smooth muscle cells, fibroblasts, and white blood cells. Type 4 intermediate filaments are the neurofilaments found in neurons; they can be found in many motor axons, supporting these cells. Type 5 intermediate filaments are composed of the nuclear lamins found in the nuclear envelope of many eukaryotic cells, where they assemble an orthogonal network at the nuclear membrane. Type 6 intermediate filaments contain nestin, which is found in stem cells of the central nervous system. References Protein structure
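Referring back to the Microfilament Polymerization section above, the steady state and critical concentration can be expressed with a toy rate balance: monomers add to a filament end at a rate proportional to their concentration and leave at a roughly constant rate, so net growth is zero when the two balance. The Python sketch below uses arbitrary rate constants chosen purely for illustration, not measured values for actin.

K_ON = 10.0   # assumed on-rate, subunits per (micromolar * second)
K_OFF = 1.0   # assumed off-rate, subunits per second

def net_growth_rate(monomer_conc_um: float) -> float:
    # Net subunits added per second at a filament end in this toy model.
    return K_ON * monomer_conc_um - K_OFF

critical_concentration = K_OFF / K_ON  # 0.1 micromolar in this illustration

if __name__ == "__main__":
    for c in (0.05, critical_concentration, 0.5):
        print(c, net_growth_rate(c))
    # Below the critical concentration the filament shrinks, above it the filament
    # grows, and exactly at the critical concentration its length holds steady.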
Protein filament
[ "Chemistry" ]
2,284
[ "Protein structure", "Structural biology" ]
5,538,293
https://en.wikipedia.org/wiki/Antioxidant%20effect%20of%20polyphenols%20and%20natural%20phenols
A polyphenol antioxidant is a hypothesized type of antioxidant studied in vitro. Numbering over 4,000 distinct chemical structures, mostly from plants, such polyphenols have not been demonstrated to be antioxidants in vivo. In vitro, at high experimental doses, polyphenols may affect cell-to-cell signaling, receptor sensitivity, inflammatory enzyme activity or gene regulation. None of these hypothetical effects has been confirmed in humans by high-quality clinical research. Sources of polyphenols The main source of polyphenols is dietary, since they are found in a wide array of phytochemical-bearing foods. For example, honey; most legumes; fruits such as apples, blackberries, blueberries, cantaloupe, pomegranate, cherries, cranberries, grapes, pears, plums, raspberries, aronia berries, and strawberries (berries in general have high polyphenol content); and vegetables such as broccoli, cabbage, celery, onion and parsley are rich in polyphenols. Red wine, chocolate, black tea, white tea, green tea, olive oil and many grains are also sources. Ingestion of polyphenols occurs by consuming a wide array of plant foods. Biochemical theory The regulation theory considers the ability of polyphenols to scavenge free radicals and to up-regulate certain metal chelation reactions. Various reactive oxygen species, such as singlet oxygen, peroxynitrite and hydrogen peroxide, must be continually removed from cells to maintain healthy metabolic function. Diminishing the concentrations of reactive oxygen species can have several benefits possibly associated with ion transport systems, and so may affect redox signaling. There is no substantial evidence, however, that dietary polyphenols have an antioxidant effect in vivo. In food systems deteriorated by peroxyl radicals (R•), the "deactivation" of oxidant species by polyphenolic antioxidants (PhOH) is based on the donation of hydrogen, which interrupts chain reactions: R• + PhOH → R-H + PhO• Phenoxyl radicals (PhO•) generated in this reaction may be stabilized through resonance and/or intramolecular hydrogen bonding, as proposed for quercetin, or may combine to yield dimerisation products, thus terminating the chain reaction: PhO• + PhO• → PhO-OPh Potential biological consequences Dietary polyphenols have been evaluated for biological activity in vitro, but there is no evidence from high-quality clinical research that they have effects in vivo. Preliminary research has been conducted, and regulatory status was reviewed in 2009 by the U.S. Food and Drug Administration (FDA), with no recommended intake values established, indicating an absence of proof of nutritional value. Other possible effects may result from consumption of foods rich in polyphenols, but these are not yet scientifically proved in humans; accordingly, health claims on food labels are not allowed by the FDA. Difficulty in analyzing effects of specific chemicals It is difficult to evaluate the physiological effects of specific natural phenolic antioxidants, since such a large number of individual compounds may occur even in a single food and their fate in vivo cannot be measured. Other, more detailed chemical research has elucidated the difficulty of isolating individual phenolics. Because significant variation in phenolic content occurs among various brands of tea, there are possible inconsistencies among epidemiological studies implying beneficial health effects of phenolic antioxidants in green tea blends.
The Oxygen Radical Absorbance Capacity (ORAC) test is a laboratory indicator of antioxidant potential in foods and dietary supplements. However, ORAC results cannot be confirmed to be physiologically applicable and have been designated as unreliable. Practical aspects of dietary polyphenols There is debate regarding the total body absorption of dietary polyphenolic compounds. While some studies indicate potential health effects of certain specific polyphenols, most demonstrate low bioavailability and rapid excretion of polyphenols, indicating that any roles in vivo would be limited to small concentrations. More research is needed to understand the interactions between the variety of these chemicals acting in concert within the human body. Topical application of polyphenols There is no substantial evidence that reactive oxygen species play a role in the process of skin aging. The skin is exposed to various exogenous sources of oxidative stress, including ultraviolet radiation, whose spectral components may be responsible for the extrinsic type of skin aging, sometimes termed photoaging. Controlled long-term studies on the efficacy of low-molecular-weight antioxidants in the prevention or treatment of skin aging in humans are absent. Combination of antioxidants in vitro Experiments on linoleic acid subjected to 2,2′-azobis(2-amidinopropane) dihydrochloride-induced oxidation with different combinations of phenolics show that binary mixtures can lead either to a synergistic effect or to an antagonistic effect. Antioxidant levels of purified anthocyanin extracts were much higher than expected from the anthocyanin content, indicating a synergistic effect of anthocyanin mixtures. Antioxidant capacity tests Oxygen radical absorbance capacity (ORAC) Ferricyanide reducing power 2,2-diphenyl-1-picrylhydrazyl radical scavenging activity See also List of phytochemicals in food List of antioxidants in food Health effects of polyphenols Free-radical theory Nitric oxide Resveratrol Astaxanthin References Angiology Chemopreventive agents Antioxidants
Antioxidant effect of polyphenols and natural phenols
[ "Chemistry" ]
1,203
[ "Pharmacology", "Chemopreventive agents" ]
5,539,109
https://en.wikipedia.org/wiki/Miroslav%20Fiedler
Miroslav Fiedler (7 April 1926 – 20 November 2015) was a Czech mathematician known for his contributions to linear algebra, graph theory and algebraic graph theory. His article "Algebraic Connectivity of Graphs", published in the Czechoslovak Mathematical Journal in 1973, established the use of the eigenvalues of the Laplacian matrix of a graph as tools for measuring connectivity in algebraic graph theory. Fiedler is honored in the names of the Fiedler eigenvalue (the second-smallest eigenvalue of the graph Laplacian) and its associated Fiedler eigenvector, the quantities that characterize algebraic connectivity. Since Fiedler's original contribution, this structure has become essential to large areas of research in network theory, flocking, distributed control, clustering, multi-robot applications and image segmentation. References External links Home page at the Academy of Sciences of the Czech Republic. 1926 births 2015 deaths Mathematicians from Prague Czech mathematicians Graph theorists Recipients of Medal of Merit (Czech Republic) Combinatorialists Charles University alumni
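As a small numerical illustration of the quantities named after Fiedler, the Python snippet below builds the Laplacian matrix of an arbitrary five-vertex path graph and extracts its second-smallest eigenvalue (the Fiedler eigenvalue, equal to the algebraic connectivity) and the corresponding Fiedler eigenvector using NumPy; the example graph is an assumption chosen for brevity, not one taken from Fiedler's paper.

import numpy as np

# Adjacency matrix of a path graph on five vertices: 0-1-2-3-4.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian L = D - A
eigenvalues, eigenvectors = np.linalg.eigh(L)  # ascending order for symmetric matrices

fiedler_value = eigenvalues[1]                 # second-smallest eigenvalue
fiedler_vector = eigenvectors[:, 1]            # its associated eigenvector

print(round(float(fiedler_value), 4))          # positive because the path is connected
print(fiedler_vector)                          # entries vary monotonically along the path

The sign pattern of the Fiedler eigenvector is what spectral partitioning methods use to split a graph into two well-connected halves, which is one reason the quantity appears so widely in clustering and image segmentation.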
Miroslav Fiedler
[ "Mathematics" ]
220
[ "Graph theory", "Combinatorics", "Combinatorialists", "Mathematical relations", "Graph theorists" ]